| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
7,375,698 | https://en.wikipedia.org/wiki/Pythagorean%20cup | A Pythagorean Cup (also known as a Pythagoras Cup, Greedy Cup, Cup of Justice, Anti Greedy Goblet or Tantalus Cup) is a practical joke device in the form of a drinking cup, credited to Pythagoras of Samos. When it is filled beyond a certain point, a siphoning effect causes the cup to drain its entire contents through the base. The cup has been used to make statements about greed.
Origin
Pythagorean siphons were originally introduced by Pythagoras in the 6th century BC.
Form and function
A Pythagorean cup looks like a normal drinking cup, except that the bowl has a central column in it, giving it a shape like a bundt pan. The central column of the bowl is positioned directly over the stem of the cup and over a hole at the bottom of the stem. A small open pipe runs from this hole almost to the top of the central column, where there is an open chamber. The chamber is connected by a second pipe to the bottom of the central column, where a hole in the column exposes the pipe to (the contents of) the bowl of the cup.
When the cup is filled, liquid rises through the second pipe up to the chamber at the top of the central column, following Pascal's principle of communicating vessels. As long as the level of the liquid does not rise beyond the level of the chamber, the cup functions as normal. If the level rises further, however, the liquid spills through the chamber into the first pipe and out of the bottom. Gravity then creates a siphon through the central column, causing the entire contents of the cup to be emptied through the hole at the bottom of the stem.
Mechanics
A Pythagorean siphon is composed of four chambers, with one chamber in the center through which liquid can escape. As liquid fills the chambers, the pressure acting on the liquid is the same in each, so the liquid level in each chamber remains the same. Once the liquid reaches the top of the Pythagorean siphon, it begins to escape through the central chamber as gravity takes hold.
As this happens, the liquid from the two chambers on either side of the central chamber forms a seal above the central chamber due to the surface tension of the liquid. Because of this seal, air cannot escape through the central chamber, so the weight of the water in the central chamber forces all the remaining liquid in every chamber to pour out of the Pythagorean siphon.
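Once the siphon is primed, the drain rate can be estimated with elementary fluid mechanics. The sketch below is illustrative only: it assumes Torricelli's law and a simple cylindrical bowl, and all dimensions are invented rather than measured from a real cup.

```python
import math

# Estimate how fast a Pythagorean cup drains once the siphon has started,
# assuming Torricelli's law v = sqrt(2*g*h). Dimensions are illustrative.

g = 9.81                 # gravitational acceleration, m/s^2
h = 0.08                 # liquid height above the drain hole, m
cup_diameter = 0.07      # inner diameter of the bowl, m
hole_diameter = 0.004    # diameter of the siphon outlet, m

cup_area = math.pi * (cup_diameter / 2) ** 2
hole_area = math.pi * (hole_diameter / 2) ** 2

v = math.sqrt(2 * g * h)            # initial outflow speed, m/s
flow_rate = hole_area * v           # initial volumetric flow, m^3/s

# Time to empty a cylindrical bowl through a small hole (integrated
# Torricelli's law): t = (A_cup / A_hole) * sqrt(2*h/g)
t_empty = (cup_area / hole_area) * math.sqrt(2 * h / g)

print(f"initial outflow speed: {v:.2f} m/s")
print(f"initial flow rate: {flow_rate * 1e6:.1f} mL/s")
print(f"approximate time to empty: {t_empty:.0f} s")
```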
Applications
Toilets
Most modern American flush toilets operate on the same principle: when the water level in the bowl rises high enough, a siphon is created, emptying the bowl.
Washing machines
The fabric softener tray in a top-load washing machine uses a Pythagorean siphon to distribute fabric softener, diluted with water, across the clothing in the washing machine. Before starting the machine, the user pours fabric softener into the loading tray, keeping it below the maximum fill line. This line marks the level above which the siphon would start and all of the softener would drain out of the tray. As long as the softener stays below the line, it remains in the tray, because liquid has not yet begun to flow through the central chamber.
When the machine is ready to dispense the softener, it adds water to the loading tray so that the liquid rises above the maximum fill line. This starts the Pythagorean siphon: the mixture begins to pour through the central chamber, and the surface tension of the liquid forms a seal across the chambers. Cut off from the outside air by this seal, the diluted softener is driven by its own weight through the siphon and empties directly into the tub of the washing machine.
See also
Dribble glass
Fuddling cup
Heron's fountain
List of practical joke topics
Puzzle jug
Qiqi, a Chinese cup with a similar function.
Soxhlet extractor, which uses the same mechanism.
References
External links
James Stanley — Towards a Better Pythagorean Cup
A 2014 design for 3D printing your own Pythagoras Cup
The history of Pythagorean Cup
The Fascinating Story of the Pythagorean Cup: A Cup of Justice and Mystery
Ancient inventions
Drinkware
Fluid dynamics
Practical joke devices
Samos
Wine accessories | Pythagorean cup | [
"Chemistry",
"Engineering"
] | 934 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
7,376,681 | https://en.wikipedia.org/wiki/Phene | A phene is an individual genetically determined characteristic or trait which can be possessed by an organism, such as eye colour, height, behavior, tooth shape or any other observable characteristic.
Phene - phenotype - phenome distinction
The term 'phene' was evidently coined as an obvious parallel construct to 'gene': phene is to phenotype as gene is to genotype, and similarly phene is to phenome as gene is to genome. More specifically, a phene is an abstract concept describing a particular characteristic which can be possessed by an organism, whereas a phenotype refers to the collection of phenes possessed by a particular organism, and a phenome refers to the entire set of phenes that exist within an organism or species.
Genome-wide association studies use "phenes" or "traits" (symptoms) to distinguish groups in the human population. These groups are then employed to identify associations with genetic alleles that are more common in the symptomatic group than in the asymptomatic control group. Allen et al. report that, with respect to schizophrenia, "research in molecular genetics has focused on detecting multiple genes of small effect". This indicates the importance of discovering individual traits or "phenes" that are governed by single genes. Schizophrenia or bipolar disorder may be described as a phenotype, but how many individual traits or "phenes" contribute to these phenotypes? Very large genome-wide association studies have not found many significant gene linkages. On the contrary, the results of these studies implicate a large number of gene alleles that have a very small effect (phene).
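To make the idea of an allele–phene association concrete, the sketch below runs a single-marker case–control test on invented allele counts. It is a hypothetical illustration only; the chi-squared test shown is just one of several statistics used in real genome-wide studies.

```python
from scipy.stats import chi2_contingency

# Hypothetical allele counts for one variant, comparing the symptomatic
# ("phene"-positive) group with asymptomatic controls.
#            allele A  allele a
cases    = [412, 588]   # 1000 chromosomes from the symptomatic group
controls = [355, 645]   # 1000 chromosomes from the control group

chi2, p_value, dof, expected = chi2_contingency([cases, controls])
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4g}")

# A genome-wide study repeats a test like this at hundreds of thousands of
# markers and therefore applies a much stricter significance threshold.
```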
It is important to note that the word phenotype was originally used to refer both to the trait/character itself (e.g. the blue-eyes phenotype) and to the set of traits/characteristics possessed by the organism (e.g. Clair's eye-colour phenotype is blue). While this definition is still used in many places, the lack of distinction can make in-depth explanations confusing, and thus use of the term phene becomes necessary. Indeed, it is extremely difficult to determine precisely what the fundamental building blocks of a phenome are. Since the term "phenotype" has been used to describe traits, syndromes and population characteristics, it is not helpful in the collective search for specific traits that could be a consequence of a single gene or gene–environment interaction. Phene has emerged as a candidate building block for the phenome.
Phene - gene distinction
Genes give rise to phenes. Genes are the biochemical instructions encoding what an organism can be, while phenes are what the organism is. In general it takes a combination of particular genes, environmental influences and random variation to give rise to any one phene in an organism. Both phenes and genes are subject to evolution. However, if one defines "genes" as "DNA sequences encoding polypeptides", they are not directly accessible to natural selection; the associated phenes are. Note that some, e.g. Richard Dawkins, have used a wider definition of "gene" than the one used in genetics on occasion, extending it to any DNA sequence with a function.
Due to the distinct chemical and physical properties of the nucleotides in the DNA and some mutations being "silent" (that is, not altering gene expression), the DNA primary sequence may also be a phene. For example, A-T and C-G base pairs are differently resistant to heat (see also DNA-DNA hybridization). In a thermophilic microorganism, "silent" mutations may have an effect on DNA stability and thus survival. While being subject to evolution, natural selection affects the primary sequence directly in this case, with or without it being expressed.
Consider, for example, a mutation that makes a zygote abort development as a young embryo. This mutation, obviously, will not spread, as it is quickly fatal. It is not the mutated nucleotide that is selected against, but the fact that due to this mutation, the phene (a key enzyme or developmental factor for example) does not get expressed.
Compare a (fictional) kind of mutation that breaks the DNA strand in a crucial position and defies all attempts to repair it, leading to cell death. Here, the mutated and unmutated DNA sequences would be phenes themselves; it is the changed primary sequence itself which by failing would cause death, not the corresponding polypeptide.
See also Dawkins's concept of the extended phenotype.
Origin
The term has been widely adopted by the academic community and appears in scientific literature. A quick keyword search of titles and abstracts containing "phene" at PubMed returns many articles.
It is a valuable concept in the genomic era where "phenes" or "traits" (symptoms) are used to distinguish groups with genetic disorders.
Usages
"Phene" is used as to refer to relevant phenotypic traits in the OMIA (Online Mendelian Inheritance in Animals) database. One of the objectives of the OMIA is to match genotypes to phenotypes. Lenffer et al. (2006) describe the OMIA as a "comparative biology resource" "(The) OMIA is a comprehensive resource of phenotypic information on heritable animal traits and genes in a strongly comparative context, relating traits to genes where possible. OMIA is modelled on and is complementary to Online Mendelian Inheritance in Man (OMIM)." The term "phene" is equated with "trait".
See also
Phenotype
Phenotypic trait
Online Mendelian Inheritance in Man (a database of human phenes)
References
Science Of Biogenetics : Can two Brown eyed parents have blue eyed child
Genetics
1920s neologisms
Species | Phene | [
"Biology"
] | 1,200 | [
"Genetics"
] |
7,376,733 | https://en.wikipedia.org/wiki/Electron%20pair | In chemistry, an electron pair or Lewis pair consists of two electrons that occupy the same molecular orbital but have opposite spins. Gilbert N. Lewis introduced the concepts of both the electron pair and the covalent bond in a landmark paper he published in 1916.
Because electrons are fermions, the Pauli exclusion principle forbids these particles from having all the same quantum numbers. Therefore, for two electrons to occupy the same orbital, and thereby have the same orbital quantum number, they must have different spin quantum numbers. This also limits the number of electrons in the same orbital to two.
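A toy enumeration (not from the original text) makes the counting argument explicit: once the orbital quantum numbers are fixed, only the spin quantum number can differ, and it has just two allowed values.

```python
# Enumerate the quantum-number tuples available to electrons sharing one
# orbital (same n, l, m_l).  Only the spin projection m_s can differ, and it
# takes just two values, so an orbital holds at most one electron pair.

n, l, m_l = 1, 0, 0                      # e.g. the 1s orbital
allowed_spins = (+0.5, -0.5)

states = [(n, l, m_l, m_s) for m_s in allowed_spins]
print(states)          # two distinct sets of quantum numbers
print(len(states))     # 2
```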
The pairing of spins is often energetically favorable, and electron pairs therefore play a large role in chemistry. They can form a chemical bond between two atoms, or they can occur as a lone pair of valence electrons. They also fill the core levels of an atom.
Because the spins are paired, the magnetic moments of the electrons cancel one another, and the pair's contribution to magnetic properties is generally diamagnetic.
Although a strong tendency to pair off electrons can be observed in chemistry, it is also possible for electrons to occur as unpaired electrons.
In the case of metallic bonding, the magnetic moments also compensate to a large extent, but the bonding is more communal, so that individual pairs of electrons cannot be distinguished and it is better to consider the electrons as a collective 'sea'.
See also
Electron pair production
Frustrated Lewis pair
Jemmis mno rules
Lewis acids and bases
Nucleophile
Polyhedral skeletal electron pair theory
References
Quantum chemistry
Chemical bonding
Molecular physics | Electron pair | [
"Physics",
"Chemistry",
"Materials_science"
] | 315 | [
"Quantum chemistry",
"Molecular physics",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Condensed matter physics",
"nan",
"Atomic",
"Chemical bonding",
" and optical physics"
] |
7,376,767 | https://en.wikipedia.org/wiki/Transponder%20%28aeronautics%29 | A transponder (short for transmitter-responder and sometimes abbreviated to XPDR, XPNDR, TPDR or TP) is an electronic device that produces a response when it receives a radio-frequency interrogation. Aircraft have transponders to assist in identifying them on air traffic control radar. Collision avoidance systems have been developed to use transponder transmissions as a means of detecting aircraft at risk of colliding with each other.
Air traffic control units use the term "squawk" when they are assigning an aircraft a transponder code, e.g., "Squawk 7421". Squawk thus can be said to mean "select transponder code" or "squawking xxxx" to mean "I have selected transponder code xxxx".
The transponder receives interrogation from the secondary surveillance radar on 1030 MHz and replies on 1090 MHz.
Secondary surveillance radar
Secondary surveillance radar (SSR) is referred to as "secondary", to distinguish it from the "primary radar" that works by passively reflecting a radio signal off the skin of the aircraft. Primary radar determines range and bearing to a target with reasonably high fidelity, but it cannot determine target elevation (altitude) reliably except at close range. SSR uses an active transponder (beacon) to transmit a response to an interrogation by a secondary radar. This response most often includes the aircraft's pressure altitude and a 4-digit octal identifier.
Operation
A pilot may be requested to squawk a given code by an air traffic controller, via the radio, using a phrase such as "Cessna 123AB, squawk 0363". The pilot then selects the 0363 code on their transponder and the track on the air traffic controller's radar screen will become correctly associated with their identity.
Because primary radar generally gives bearing and range position information, but lacks altitude information, mode C and mode S transponders also report pressure altitude. Mode C altitude information conventionally comes from the pilot's altimeter, and is transmitted using a modified Gray code, called a Gillham code. Where the pilot's altimeter does not contain a suitable altitude encoder, a blind encoder (which does not directly display altitude) is connected to the transponder. Around busy airspace there is often a regulatory requirement that all aircraft be equipped with altitude-reporting mode C or mode S transponders. In the United States, this is known as a Mode C veil. Mode S transponders are compatible with transmitting the mode C signal, and have the capability to report in increments; they receive information from a GPS receiver and also transmit location and speed. Without the pressure altitude reporting, the air traffic controller has no display of accurate altitude information, and must rely on the altitude reported by the pilot via radio. Similarly, the traffic collision avoidance system (TCAS) installed on some aircraft needs the altitude information supplied by transponder signals.
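For illustration only, the sketch below shows a standard reflected-binary Gray code in Python. The actual Gillham code used for Mode C altitude reporting differs from this in bit ordering and framing, so treat it as a simplified analogue rather than the avionics encoding itself.

```python
def to_gray(n: int) -> int:
    """Convert a non-negative integer to standard reflected-binary Gray code."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Convert a standard Gray code value back to an ordinary binary integer."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent values differ in exactly one bit, which is why Gray-type codes suit
# altitude encoders: a single mis-read bit near a transition cannot produce a
# wildly wrong altitude.
for value in range(8):
    print(value, format(to_gray(value), "03b"), from_gray(to_gray(value)))
```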
IDENT
All mode A, C, and S transponders include an "IDENT" switch which activates a special thirteenth bit on the mode A reply known as IDENT, short for "identify". When ground-based radar equipment receives the IDENT bit, it results in the aircraft's blip "blossoming" on the radar scope. This is often used by the controller to locate the aircraft amongst others by requesting the ident function from the pilot, e.g., "Cessna 123AB, squawk 0363 and ident".
Ident can also be used in case of a reported or suspected radio failure to determine if the failure is only one way and whether the pilot can still transmit or receive, but not both, e.g., "Cessna 123AB, if you read, squawk ident".
Transponder codes
Transponder codes are four-digit numbers transmitted by an aircraft transponder in response to a secondary surveillance radar interrogation signal to assist air traffic controllers with traffic separation. A discrete transponder code (often called a squawk code) is assigned by air traffic controllers to identify an aircraft uniquely in a flight information region (FIR). This allows easy identification of aircraft on radar.
Codes are made of four octal digits; the dials on a transponder read from zero to seven, inclusive. Four octal digits can represent up to 4096 different codes, which is why such transponders are sometimes described as "4096 code transponders."
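The arithmetic behind the "4096 codes" figure can be shown in a couple of lines. The sketch below uses the example code quoted earlier in this article and does not attempt to reproduce the pulse layout of an actual Mode A reply.

```python
# A squawk code is four octal digits, i.e. 12 bits of information.
squawk = "7421"                      # example assignment from the article
assert all(d in "01234567" for d in squawk)

value = int(squawk, 8)               # interpret the four digits as octal
print(value)                         # 3857
print(format(value, "012b"))         # the 12 bits conveyed by the code
print(8 ** 4)                        # 4096 possible codes
```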
The use of the word "squawk" comes from the system's origin in the World War II identification friend or foe (IFF) system, which was code-named "Parrot".
Codes assigned by air traffic control
Some codes can be selected by the pilot if and when the situation requires or allows it, without permission from air traffic control (ATC). Such codes are referred to as "conspicuity codes" in the UK. Other codes are generally assigned by ATC units.
For flights on instrument flight rules (IFR), the squawk code is typically assigned as part of the departure clearance and stays the same throughout the flight.
Flights on visual flight rules (VFR), when in uncontrolled airspace, will "squawk VFR" (1200 in the United States and Canada, 7000 in Europe). Upon contact with an ATC unit, they will be told to squawk a certain code. When changing frequency, for instance because the VFR flight leaves controlled airspace or changes to another ATC unit, the VFR flight will be told to "squawk VFR" again.
In order to avoid confusion over assigned squawk codes, ATC units will typically be allocated blocks of squawk codes, not overlapping with the blocks of nearby ATC units, to assign at their discretion.
Not all ATC units will use radar to identify aircraft, but they assign squawk codes nevertheless. As an example, London Information—the flight information service station that covers the southern half of the UK—does not have access to radar images, but does assign squawk code 1177 to all aircraft that receive a flight information service (FIS) from them. This tells other radar-equipped ATC units that a specific aircraft is listening on the London Information radio frequency, in case they need to contact that aircraft.
Emergency codes
The following codes are applicable worldwide.
See List of transponder codes for list of country-specific and historic allocations.
Transponder-related incidents
Aeroméxico Flight 498 — August 31, 1986 (one of the aircraft equipped with a Mode A, but not Mode C, transponder)
Iran Air Flight 655 — July 3, 1988 (incorrect interpretation of transponder code, a factor in mistaken identity and shoot-down)
Proteus Airlines Flight 706 — July 30, 1998 (mid-air collision; one of the aircraft had its transponder switched off)
Korean Air Flight 085 — September 11, 2001 (suspected hijack involving the transponder code, false alarm)
Gol Transportes Aéreos Flight 1907 — September 29, 2006 (midair collision; one of the aircraft had its transponder accidentally switched off)
See also
Aviation transponder interrogation modes
References
Air traffic control
Avionics
Encodings | Transponder (aeronautics) | [
"Technology"
] | 1,516 | [
"Avionics",
"Aircraft instruments"
] |
7,377,445 | https://en.wikipedia.org/wiki/List%20of%20saturated%20fatty%20acids | Saturated fatty acids are fatty acids that make up saturated fats.
See also
List of unsaturated fatty acids
Carboxylic acid
List of carboxylic acids
Dicarboxylic acid
Saturated Fatty Acids
Alkanoic acids | List of saturated fatty acids | [
"Chemistry"
] | 50 | [
"nan"
] |
7,377,916 | https://en.wikipedia.org/wiki/Heterotachy | Heterotachy refers to variations in lineage-specific evolutionary rates over time. In the field of molecular evolution, the principle of heterotachy states that the substitution rate of sites in a gene can change through time. It has been proposed that the positions that show switches in substitution rate over time (that is, heterotachous sites) are good indicators of functional divergence. However, it appears that heterotachy is a much more general process, since most variable sites of homologous proteins with no evidence of functional shift are heterotachous.
The covarion hypothesis is a specific form of heterotachy. Some studies have proposed functional divergence models that are also heterotachous. Additionally, some mixture models that do not explicitly account for rate-shift, but site-partitions evolving at different relative substitution rates across lineages are mathematically heterotachous.
Failure to take heterotachy into account in phylogenetic reconstructions may lead to incorrect phylogenetic trees. Thus Zhong et al. (2011) say that heterotachy is one of the reasons for variability in reconstructions of the origin of gnetophytes.
References
Molecular evolution | Heterotachy | [
"Chemistry",
"Biology"
] | 247 | [
"Evolutionary processes",
"Molecular evolution",
"Molecular biology"
] |
7,378,144 | https://en.wikipedia.org/wiki/Minor-planet%20designation | A formal minor-planet designation is, in its final form, a number–name combination given to a minor planet (asteroid, centaur, trans-Neptunian object and dwarf planet but not comet). Such designation always features a leading number (catalog or IAU number) assigned to a body once its orbital path is sufficiently secured (so-called "numbering"). The formal designation is based on the minor planet's provisional designation, which was previously assigned automatically when it had been observed for the first time. Later on, the provisional part of the formal designation may be replaced with a name (so-called "naming"). Both formal and provisional designations are overseen by the Minor Planet Center (MPC), a branch of the International Astronomical Union.
Currently, a number is assigned only after the orbit has been secured by four well-observed oppositions. For unusual objects, such as near-Earth asteroids, numbering might already occur after three, maybe even only two, oppositions. Among more than half a million minor planets that received a number, only about 20 thousand (or 4%) have received a name. In addition, approximately 700,000 minor planets have not been numbered, as of November 2023.
The convention for satellites of minor planets, such as the formal designation (87) Sylvia I Romulus for the asteroid moon Romulus, is an extension of the Roman numeral convention that had been used, on and off, for the moons of the planets since Galileo's time. Comets are also managed by the MPC, but use a different cataloguing system.
Syntax
A formal designation consists of two parts: a catalog number, historically assigned in approximate order of discovery, and either a name, typically assigned by the discoverer, or, the minor planet's provisional designation.
The permanent syntax is:
for unnamed minor planets: (number) Provisional designation
for named minor planets: (number) Name; with or without parentheses
For example, the unnamed minor planet has its number always written in parentheses, while for named minor planets such as (274301) Wikipedia, the parentheses may be dropped as in 274301 Wikipedia. Parentheses are now often omitted in prominent databases such as the JPL Small-Body Database.
Since minor-planet designations change over time, different versions may be used in astronomy journals. When the main-belt asteroid 274301 Wikipedia was discovered in August 2008, it was provisionally designated , before it received a number and was then written as . On 27 January 2013, it was named Wikipedia after being published in the Minor Planet Circulars.
According to the preference of the astronomer and publishing date of the journal, 274301 Wikipedia may be referred to as , or simply as . In practice, for any reasonably well-known object the number is mostly a catalogue entry, and the name or provisional designation is generally used in place of the formal designation. So Pluto is rarely written as 134340 Pluto, and is more commonly used than the longer version .
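As an informal illustration of the number–name syntax described above, the following sketch parses formal designations with a regular expression. It is not a Minor Planet Center tool, and it handles only the named form, with or without parentheses; provisional designations would need a separate pattern.

```python
import re

# Matches designations such as "(274301) Wikipedia", "274301 Wikipedia"
# and "134340 Pluto": an optional-parenthesis catalog number, then a name.
PATTERN = re.compile(r"^\(?(\d+)\)?\s+(.+)$")

for designation in ["(274301) Wikipedia", "274301 Wikipedia", "134340 Pluto"]:
    match = PATTERN.match(designation)
    number, name = int(match.group(1)), match.group(2)
    print(f"catalog number: {number:>6}   name: {name}")
```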
History
By 1851 there were 15 known asteroids, all but one with their own symbol. The symbols grew increasingly complex as the number of objects grew, and, as they had to be drawn by hand, astronomers found some of them difficult. This difficulty was addressed by Benjamin Apthorp Gould in 1851, who suggested numbering asteroids in their order of discovery, and placing this number in a circle as the symbol for the asteroid, such as ④ for the fourth asteroid, Vesta. This practice was soon coupled with the name itself into an official number–name designation, "④ Vesta", as the number of minor planets increased. By the late 1850s, the circle had been simplified to parentheses, "(4)" and "(4) Vesta", which was easier to typeset. Other punctuation such as "4) Vesta" and "4, Vesta" was also used, but had more or less completely died out by 1949.
The major exception to the convention that the number tracks the order of discovery or determination of orbit is the case of Pluto. Since Pluto was initially classified as a planet, it was not given a number until a 2006 redefinition of "planet" that excluded it. At that point, Pluto was given the formal designation (134340) Pluto.
See also
List of minor planets, see index
Astronomical naming conventions
Meanings of minor-planet names
Name conflicts with minor planets
References
External links
Minor Planet Names: Alphabetical List
IAU FAQ on minor planets
MPC explanation of provisional designations
Dr. James Hilton, When Did the Asteroids Become Minor Planets?
Minor planets
Astronomical nomenclature | Minor-planet designation | [
"Astronomy"
] | 939 | [
"Astronomical nomenclature",
"Astronomical objects"
] |
7,378,151 | https://en.wikipedia.org/wiki/World%20Ocean%20Circulation%20Experiment | The World Ocean Circulation Experiment (WOCE) was a component of the international World Climate Research Program, and aimed to establish the role of the World Ocean in the Earth's climate system. WOCE's field phase ran between 1990 and 1998, and was followed by an analysis and modeling phase that ran until 2002. When the WOCE was conceived, there were three main motivations for its creation. The first of these is the inadequate coverage of the World Ocean, specifically in the Southern Hemisphere. Data was also much more sparse during the winter months than the summer months, and there was—and still is to some extent—a critical need for data covering all seasons. Secondly, the data that did exist was not initially collected for studying ocean circulation and was not well suited for model comparison. Lastly, there were concerns involving the accuracy and reliability of some measurements. The WOCE was meant to address these problems by providing new data collected in ways designed to "meet the needs of global circulation models for climate prediction."
Goals
Two major goals were set for the campaign.
1. Develop ocean models that can be used in climate models and collect the data necessary for testing them
Specifically, understand:
Large scale fluxes of heat and fresh water
Dynamical balance of World Ocean circulation
Components of ocean variability on months to years
The rates and nature of formation, ventilation and circulation of water masses that influence the climate system on time scales from ten to one hundred years
In order to achieve Goal 1, the WCRP outlined and established Core Projects that would receive priority. The first of these was the "Global Description" project, which was meant to obtain data on the circulation of heat, fresh water and chemicals, as well as the statistics of eddies. The second project—"Southern Ocean"—placed particular emphasis on studying the Antarctic Circumpolar Current and the Southern Ocean’s interaction with the World Ocean. The third and final Core Project serving goal one was the "Gyre Dynamics Experiment." The second and third of these focuses are designed specifically to address the ocean’s role in decadal climate changes. Initial planning of the WOCE states that achievement of Goal 1 would involve "strong interaction between modeling and field activities," which are described further below.
2. Find the representativeness of the dataset for long-term behavior and find methods for determining long-term changes in ocean currents
Specifically:
Determine the representativeness of specific WOCE data sets
Identify those oceanographic parameters, indices and fields that are essential for continuing measurements in a climate observing system on decadal time scales
Develop cost effective techniques suitable for deployment in an ongoing climate observing system
Modeling
Models in WOCE were used both for experimental design and for data analysis. Models constrained by data can incorporate various properties, including thermal wind balance, maintenance of the barotropic vorticity budget, and conservation of heat, fresh water, or mass. Measurements useful for constraining these properties include heat, fresh water or tracer concentration; currents; surface fluxes of heat and fresh water; and sea surface elevation.
Both inverse modeling and data assimilation were employed during WOCE. Inverse modeling fits the data using a numerical least-squares or maximum-likelihood procedure. Data assimilation instead compares data with an initial integration of a model; the model is then advanced in time, newly arriving data are incorporated, and the process repeats.
The success of these methods requires sufficient data to fully constrain the model, hence the need for a comprehensive field program.
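A schematic of the least-squares idea behind inverse modeling is shown below. The "model" is a toy linear relation and the observations are synthetic, so this illustrates only the fitting step, not an ocean model.

```python
import numpy as np

# Toy inverse problem: fit the parameters of a simple linear "model"
# y = a*x + b to noisy synthetic observations in the least-squares sense.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)              # e.g. stations along a section
true_a, true_b = 2.0, -1.0
y_obs = true_a * x + true_b + rng.normal(scale=0.5, size=x.size)

# Design matrix and least-squares solution: the parameters that minimize
# the misfit between model and observations.
G = np.column_stack([x, np.ones_like(x)])
params, *_ = np.linalg.lstsq(G, y_obs, rcond=None)
print("estimated parameters:", params)       # close to (2.0, -1.0)

# Data assimilation differs in that the model is stepped forward in time and
# repeatedly corrected toward newly arriving observations.
```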
Field Program
Goals for the WOCE Field Program were as follows.
The experiment will be global in nature and the major observational components will be deployed in all oceans.
The requirement of simultaneity of measurements will be imposed only where essential.
The flexibility inherent in the existing arrangements for cooperative research in the worldwide oceanographic (and meteorological) community will be exploited as far as possible.
Major elements of the WOCE Field Program
Satellite Altimetry: plans built around the availability of ERS–1 and ERS–2 (European) and TOPEX/POSEIDON (US/French) to study fields of surface forcing and oceanic surface topography
Hydrography: high-quality conductivity-temperature-pressure profilers as well as free-fall instruments to provide a climatological temperature–salinity database
Geochemical Tracers: using chemical information (such as radioactive decay and atmospheric history) of passive compounds to study the formation rates and transport of water masses on climatological timescales
Ocean Surface Fluxes: using in-situ and satellite measurements to quantify fluxes of heat, water and momentum (necessary for modeling thermohaline and wind-driven circulation)
Satellite Winds: using surface buoys, Voluntary Observing Ships (VOS) and satellite microwave scatterometer systems to measure the surface wind field
Surface Meteorological Observations from VOS: improvement of sampling and accuracy in surface meteorological measurements, as well as increasing area coverage
Upper Ocean Observations from Merchant Ships-of-Opportunity: expendable bathythermograph (XBT) sampling lines to study changes in heat content of the upper ocean
In-Situ Sea Level Measurements: upgrading and installing new sea-level gauges to calibrate altimetry measurements
Drifting Buoys and Floats: surface drifting buoys provide measurements such as sea level pressure, sea-surface temperature, humidity, precipitation, surface salinity, and near-surface and mid-depth currents
Moored instrumentation: provides detailed temporal information at a number of sites and depths
Resulting Conclusions
This list, though not comprehensive, outlines a sampling of the most highly cited articles and books resulting from the WOCE.
Ocean Circulation and Climate, Observing and Modelling the Global Ocean, 1st Edition, Eds. Gerold Siedler, John Gould & John Church, Academic Press, 736pp. (International Geophysics Series 77) 2001
Revisiting the South Pacific subtropical circulation: A synthesis of World Ocean Circulation Experiment observations along 32°S, S. E. Wijffels, J. M. Toole, R. Davis, Journal of Geophysical Research, September 2012
See also
Geochemical Ocean Sections Study (GEOSECS)
Global Ocean Data Analysis Project (GLODAP)
World Ocean Atlas (WOA)
External links
Access page to the WOCE data legacy at the National Oceanographic Data Center (US)
Electronic Atlas of WOCE Data at the Alfred Wegener Institute for Polar and Marine Research, Bremerhaven
at the National Oceanography Centre, Southampton (UK)
Searchable set of WOCE data, archived in the information system PANGAEA
WOCE observations 1990–1998; a summary of the WOCE global data resource, WOCE International Project Office, WOCE Report No. 179/02., Southampton, UK. (pdf 18.9 MB)
References
Oceanography
Physical oceanography
World Ocean | World Ocean Circulation Experiment | [
"Physics",
"Environmental_science"
] | 1,366 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
7,378,394 | https://en.wikipedia.org/wiki/Bilingual%20sign | A bilingual sign (or, by extension, a multilingual sign) is the representation on a panel (sign, usually a traffic sign, a safety sign, an informational sign) of texts in more than one language. The use of bilingual signs is usually reserved for situations where there is legally administered bilingualism (in bilingual regions or at national borders) or where there is a relevant tourist or commercial interest (airports, train stations, ports, border checkpoints, tourist attractions, international itineraries, international institutions, etc.). However, more informal uses of bilingual signs are often found on businesses in areas where there is a high degree of bilingualism, such as tourist venues, ethnic enclaves and historic neighborhoods. In addition, some signs feature synchronic digraphia, the use of multiple writing systems for a single language.
Bilingual signs are widely used in regions whose native languages do not use the Latin alphabet (although some countries like Spain or Poland use multilingual signs); such signs generally include transliteration of toponyms and optional translation of complementary texts (often into English). Beyond bilingualism, there is a general tendency toward the substitution of internationally standardized symbols and pictograms for text.
Around the world
The use of bilingual signs has expanded remarkably in recent years. The increase in bilingual signage has been paralleled by increases in international travel and a greater sensitivity to the needs of ethnic and linguistic minorities.
Europe
Bilingual signs arose in places like Belgium where, because of the cohabitation of Dutch-speaking and French-speaking communities (especially in the central part of the country near Brussels), bilingualism signaled a simple willingness to accommodate all citizens equally. As a result, all street signs in the Brussels-Capital Region are bilingual in Dutch and French.
Switzerland has several cantons (Bern, Fribourg, Valais and Graubünden) and towns (e.g. Biel/Bienne, Murten, Fribourg, Siders and Disentis/Mustér), where two, or in one case (Graubünden) even three languages have official status and therefore the signs are multilingual. With Biel/Bienne, both the German and the French name of the town are always officially written with the compound name; and similarly with Disentis/Mustér (German/Romansh).
Another example is the German-speaking South Tyrol, which was annexed to Italy during World War I and eventually became the focus of assimilation policies. In observance of international treaties, Italy was eventually compelled to acknowledge and accommodate its German-speaking citizens through the use of bilingual signs. The situation of the Slovene minority living in the Trieste, Gorizia and Udine provinces is very different as only in recent years have the bilingual signs become visible and only in smaller villages. In the French-speaking Aosta Valley, official road and direction signs are usually in both languages, Italian and French.
In Greece, virtually all signs are bilingual, with the Greek text in yellow and the English in white. If a sign is in Greek only, an equivalent sign in English will often be situated nearby.
In Spain, bilingual signs in the local language and Spanish appear irregularly in the autonomous communities of Galicia, Basque Country, Navarre, Catalonia, Valencian Community and the Balearic Islands.
Bilingual signs are also used in the Republic of Ireland, with all roads, towns, important buildings etc. named in both the Irish and English languages. The Irish appears on the top of the sign (usually in italic text) with the English underneath. The exception to this is in Gaeltacht regions, where only Irish language signage tends to be used.
In Germany, the first bilingual German–Sorbian road and street signs, as well as city-limit signs and train station signs, were introduced in the 1950s in Lusatia. After reunification, at least bilingual city-limit signs were also adopted in some regions where Danish or Frisian are spoken. In Brandenburg and Saxony, German and Sorbian place names nowadays have to be shown at the same size, with the German names on top.
In Finland, multilingual signs appeared at the end of the 19th century. The signs were in the official languages Swedish, Finnish and, during that period, also Russian. After the independence of Finland, the signs became bilingual Finnish–Swedish in the official bilingual areas of the country and bilingual Finnish–Sami in the northern parts.
Bilingual signs are used in the United Kingdom. In Wales, Welsh and English are official languages and most road signs are bilingual. Until 2016, each local authority decided which language was shown first; since 2016, new signage features Welsh first. In Scotland, Scottish Gaelic is increasingly visible on road signs, not only in the north-west and on the islands, but also on main primary routes. Railway station signs and signs on public buildings such as the Scottish Parliament are increasingly bilingual. In Northern Ireland, some signs in Irish and/or Ulster Scots are found. In Cornwall, some signs such as street names are found in English and Cornish; and similarly in the Isle of Man in English and Manx Gaelic.
In parts of Slovenia, where languages other than Slovene are official (Italian in parts of Slovenian Istria and Hungarian in parts of Prekmurje), the law requires all official signs (including road signs) to be in both official languages. This regulation is not always strictly enforced, but nevertheless all road signs in these areas are bilingual.
In many regions of Poland bilingual signs are used: Polish and Ruthenian in Lemkivshchyna, Polish and German in Upper Silesia, Polish and Lithuanian in Puńsk commune and Polish and Kashubian in Pomerania.
European airports have signs that are generally bilingual with the local language and English, although there are significant variations between countries. In multilingual countries such as Belgium and Switzerland, airports generally have signs in three or four languages. Some airports, such as Amsterdam Airport Schiphol, are used primarily by international travellers, and choose to use monolingual English signs, even though they are located in a country whose native language is not English.
North America
The Government of Canada and the Province of New Brunswick are officially bilingual in English and French, so all signs issued or regulated by those governments are bilingual regardless of where they are located. Provincial road signs are also bilingual in French-designated areas of Manitoba and Ontario. Each local authority decides which language is shown first. In Ottawa, the national capital, the municipal government is officially bilingual so all municipal traffic signs and road markers are bilingual. Since airports are regulated by the federal government, most airports in Canada have bilingual signs in English and French.
In the Province of Nova Scotia, particularly on Cape Breton island, a number of place-name signs are bilingual in English and Scottish Gaelic.
Although Nunavut, an Inuit territory, is officially multi-lingual in English, French, Inuktitut and Inuinnaqtun, municipal road signs have remained in English only, other than stop signs. Some other road signs in various parts of Canada include other indigenous languages, such as the English/Squamish road sign in British Columbia shown here.
Quebec is officially monolingual in French, and the use of other languages is restricted under the Charter of the French Language. Commercial signs in Quebec are permitted to include text in languages other than French as long as French is "markedly predominant".
At places near the U.S.–Mexico border, some signs are bilingual in English and Spanish, and some signs near the U.S.–Canada border are bilingual in English and French. Additionally, large urban centers such as New York City, Chicago and others have bilingual and multilingual signage at major destinations. There are a few English and Russian bilingual signs in western Alaska. In Texas, some signs are required to be in English and Spanish. In Texas areas where there are large numbers of Spanish speakers, many official signs as well as unofficial signs (e.g. stores, churches, billboards) are written in Spanish, some bilingual with English, but others in Spanish only. In and around New Britain, Connecticut, it is not uncommon to see signs in Spanish and Polish as well as English.
In 2016, Port Angeles, Washington, installed bilingual signs in English and the indigenous Klallam languages to preserve and revitalize the area's Klallam culture.
New York City's Chinatown has English–Chinese signs. Seattle's Chinatown/Japantown has English–Chinese and English–Japanese signs.
Asia
In the People's Republic of China, bilingual signs are mandated by the government in autonomous regions where a minority language shares official status with Chinese. In Xinjiang, signs are in Uyghur and Chinese; in Tibet, signs are in Tibetan and Chinese; and in Inner Mongolia, signs are in Mongolian (written in the classical alphabet) and Chinese. In Guangxi, the majority of signs are in Chinese, even though the Zhuang language is official in the region. Smaller autonomous areas also have similar policies. Signs in the Yanbian Korean Autonomous Prefecture, which borders North Korea, are in Korean and Chinese. Many areas of Qinghai province mandate bilingual signs in Tibetan and Chinese. In Beijing and Shanghai, due to international exposure of the 2008 Summer Olympics and Expo 2010, almost all city traffic signs are now bilingual with Chinese and English (during the Olympics, signs on Olympic venues were also in French). English use in signs is growing in other major cities as well.
In Hong Kong and Macau, government signs are normally bilingual with Traditional Chinese and English or Portuguese, respectively. This is because, in addition to Chinese, English and Portuguese are official languages of Hong Kong and Macau, respectively. Trilingual road signs in English, Portuguese and traditional Chinese are seen in some newly developed areas of Macau.
In Israel, road signs are often trilingual, in Hebrew, Arabic and English.
In India, road signs are often multilingual, in Hindi, English and other regional languages. In addition, signs in Hindustani often feature synchronic digraphia, with an Urdu literary standard written in Arabic script and a High Hindi standard written in Devanagari.
In Sri Lanka, official road signs are in Sinhala, Tamil and English.
In Turkey bilingual (Turkish and Kurdish) village signs are used in Eastern Anatolia Region. Airports and touristic areas include an English name after the Turkish name.
In the Gulf states such as Saudi Arabia, road signs are often bilingual, in English and Arabic. Other signs (e.g. building signs) may also be displayed in English and Arabic.
Gallery
See also
Bilingualism in Canada
Gaelic road signs in Scotland
Linguistic landscape
List of multilingual countries and regions
Road signs in the Republic of Ireland
Rules of the road
Bibliography
Francescato, G. Le aree bilingui e le regioni di confine. Angeli
Baldacci, O. Geografia e toponomastica. S.G.I.
Baines, Phil. Dixon, Catherin. Signs. UK: Laurence King Co., 2004 (trad.ital. Segnali: grafica urbana e territoriale. Modena: Logos, 2004)
Boudreau, A. Dubois, L. Bulot, T. Ledegen, G. Signalétiques et signalisations linguistiques et langagières des espaces de ville (configurations et enjeux sociolinguistiques). Revue de l'Université de Moncton Vol. 36 n.1. Moncton (Nouveau-Brunswick, Canada): Université de Moncton, 2005.
Bhatia, Tej K. Ritchie, William C. Handbook of Bilingualism. Oxford: Blackwell Publishing, 2006.
Shohamy, E. & Gorter, D. (Eds.), Linguistic Landscape: Expanding the Scenery. London: Routledge, 2009.
Shohamy, E., Ben-Rafael, E., & Barni, M. (Eds.) Linguistic Landscape and the City. Bristol: Multilingual Matters, 2010.
References
Traffic signs
Transport safety
Concepts in language policy
Bilingualism | Bilingual sign | [
"Physics"
] | 2,481 | [
"Physical systems",
"Transport",
"Transport safety"
] |
11,035,490 | https://en.wikipedia.org/wiki/Nemosis | Nemosis is a process of cell activation and death in human fibroblasts.
The process was initially described as programmed necrosis; the name nemosis derives from the goddess Nemesis in Greek mythology. The name was adopted for this form of fibroblast activation because it is initiated by direct cell–cell interactions, as opposed to a preference for extracellular matrix (ECM) contacts. Contacts between normal diploid fibroblasts induce cell activation leading to programmed cell death (PCD). This type of PCD has features of necrosis rather than apoptosis.
Nemosis of fibroblasts, or of mesenchymal cells in general, generates large amounts of mediators of inflammation, such as prostaglandins, as well as growth factors such as hepatocyte growth factor. It is therefore thought to contribute to processes such as acute and chronic inflammation and cancer. Factors secreted by nemotic fibroblasts also break down the ECM; such factors include several matrix metalloproteinases and plasminogen activators.
References
Cellular processes | Nemosis | [
"Biology"
] | 224 | [
"Cellular processes"
] |
11,035,882 | https://en.wikipedia.org/wiki/Potassium%20ferrioxalate | Potassium ferrioxalate, also called potassium trisoxalatoferrate or potassium tris(oxalato)ferrate(III) is a chemical compound with the formula . It often occurs as the trihydrate . Both are crystalline compounds, lime green in colour.
The compound is a salt consisting of ferrioxalate anions, , and potassium cations . The anion is a transition metal oxalate complex consisting of an iron atom in the +3 oxidation state and three bidentate oxalate ligands. Potassium is a counterion, balancing the −3 charge of the complex. In solution, the salt dissociates to give the ferrioxalate anion, , which appears fluorescent green in color. The salt is available in anhydrous form as well as a trihydrate.
The ferrioxalate anion is quite stable in the dark, but it is decomposed by light and high-energy electromagnetic radiation.
Preparation
The complex can be synthesized by the reaction between iron(III) sulfate, barium oxalate and potassium oxalate: Fe2(SO4)3 + 3 BaC2O4 + 3 K2C2O4 → 2 K3[Fe(C2O4)3] + 3 BaSO4.
Iron(III) sulfate, barium oxalate and potassium oxalate are combined in water and digested for several hours on a steam bath. Oxalate ions from the barium oxalate then replace the sulfate ions in solution, removing them as insoluble barium sulfate, which can be filtered off; the pure material can then be crystallized.
Structure
The structures of the trihydrate and of the anhydrous salt have been extensively studied; the observed geometry indicates that the Fe(III) is high spin, as the low-spin complex would display Jahn–Teller distortions. The ammonium and mixed sodium–potassium salts are isomorphous, as are related complexes.
The ferrioxalate complex displays helical chirality as it can form two non-superimposable geometries. In accordance with the IUPAC convention, the isomer with the left-handed screw axis is assigned the Greek symbol Λ (lambda). Its mirror image with the right-handed screw axis is given the Greek symbol Δ (delta).
Reactions
Photoreduction
The ferrioxalate anion is sensitive to light and to high-energy electromagnetic radiation, including X-rays and gamma rays. Absorption of a photon causes the decomposition of one oxalate ion to carbon dioxide and the reduction of the iron(III) atom to iron(II). This photosensitivity is used for chemical actinometry (the measurement of luminous flux) and for the preparation of blueprints. The light-catalyzed redox reaction once formed the basis of some photographic processes; however, due to their insensitivity and the ready availability of digital photography, these processes are now obsolete.
Thermal decomposition
The trihydrate loses the three water molecules at 113 °C.
At 296 °C, the anhydrous salt decomposes into the iron(II) complex potassium ferrooxalate, potassium oxalate, and carbon dioxide: 2 K3[Fe(C2O4)3] → 2 K2[Fe(C2O4)2] + K2C2O4 + 2 CO2.
Uses
Photometry and actinometry
The discovery of the efficient photolysis of the ferrioxalate anion was a landmark for chemical photochemistry and actinometry. The potassium salt was found to be over 1000 times more sensitive than uranyl oxalate, the compound previously used for these purposes.
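The actinometric calculation itself is simple bookkeeping, as the sketch below shows with invented numbers. The quantum yield is wavelength-dependent; the value of about 1.2 used here is an approximate figure commonly quoted near 365 nm and should be treated as an assumption, not a constant of the method.

```python
# Ferrioxalate actinometry, schematically: convert the measured amount of
# Fe(II) produced into the number of photons absorbed during the exposure.

AVOGADRO = 6.022e23

moles_fe2 = 2.5e-6       # mol of Fe(II) produced (hypothetical measurement)
quantum_yield = 1.2      # Fe(II) ions per absorbed photon (assumed, ~365 nm)
exposure_time = 300.0    # seconds of irradiation (hypothetical)

photons_absorbed = moles_fe2 * AVOGADRO / quantum_yield
photon_flux = photons_absorbed / exposure_time      # photons per second

print(f"photons absorbed: {photons_absorbed:.2e}")
print(f"mean photon flux: {photon_flux:.2e} photons/s")
```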
Chemistry education
The synthesis and thermal decomposition of potassium ferrioxalate is a popular exercise for high school, college or undergraduate university students, since it involves the chemistry of transition metal complexes, visually observable photochemistry, and thermogravimetry.
Blueprints
Before the ready availability of wide ink-jet and laser printers, large-size engineering drawings were commonly reproduced by the cyanotype method.
That was a simple contact-based photographic process that produced a "negative" white-on-blue copy of the original drawing—a blueprint. The process is based on the photolysis of an iron(III) complex which gets converted into an insoluble iron(II) version in areas of the paper that were exposed to light.
The complex used in cyanotype is mainly ammonium ferric citrate, but potassium ferrioxalate is also used.
See also
A number of other iron oxalates are known:
Iron(II) oxalate
Iron(III) oxalate
Sodium ferrioxalate
See transition metal oxalate complex.
References
Iron complexes
Iron(III) compounds
Potassium compounds
Oxalato complexes
Ferrates | Potassium ferrioxalate | [
"Chemistry"
] | 947 | [
"Ferrates",
"Salts"
] |
11,036,055 | https://en.wikipedia.org/wiki/Grid-tied%20electrical%20system | A grid-tied electrical system, also called tied to grid or grid tie system, is a semi-autonomous electrical generation or grid energy storage system which links to the mains to feed excess capacity back to the local mains electrical grid. When insufficient electricity is available, electricity drawn from the mains grid can make up the shortfall. Conversely when excess electricity is available, it is sent to the main grid. When the Utility or network operator restricts the amount of energy that goes into the grid, it is possible to prevent any input into the grid by installing Export Limiting devices.
When batteries are used for storage, the system is called battery-to-grid (B2G), which includes vehicle-to-grid (V2G).
How it works
Direct current (DC) electricity from sources such as hydro, wind or solar is passed to a grid-tied inverter. The inverter monitors the frequency of the alternating-current mains supply and generates electricity that is phase-matched to the mains. When the grid fails to accept power during a blackout, most inverters can continue to provide courtesy power.
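The core of this scheme is estimating the grid's frequency and phase and then synthesizing output that tracks it. The sketch below is a highly simplified software illustration of that idea, using a synthetic 50 Hz waveform and a least-squares phase estimate; real inverters use dedicated phase-locked-loop circuitry and control loops, which are not shown here.

```python
import numpy as np

# Estimate the phase of a sampled mains waveform and build a reference
# signal phase-matched to it.  All values are illustrative.

fs = 10_000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)                 # 100 ms of samples
grid = 325 * np.sin(2 * np.pi * 50 * t + 0.3) # synthetic 50 Hz mains voltage

# Least-squares fit of sin/cos components at the nominal frequency.
f_nom = 50.0
A = np.column_stack([np.sin(2 * np.pi * f_nom * t),
                     np.cos(2 * np.pi * f_nom * t)])
coef, *_ = np.linalg.lstsq(A, grid, rcond=None)
phase = np.arctan2(coef[1], coef[0])          # estimated phase offset, rad

# The inverter's output reference, synchronized with the measured mains.
reference = np.sin(2 * np.pi * f_nom * t + phase)
print(f"estimated phase offset: {phase:.3f} rad")   # ~0.300
```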
Battery-to-grid
A key concept of this system is the possibility of creating an electrical micro-system that is not dependent on the grid-tie to provide a high level quality of service. If the mains supply of the region is unreliable, the local generation system can be used to power important equipment.
Battery-to-grid can also spare the use of fossil fuel power plants to supply energy during peak loads on the public electric grid. Regions that charge based on time of use metering may benefit by using stored battery power during prime time.
Environmentally friendly
Local generation can be from an environmentally friendly source such as pico hydro, solar panels or a wind turbine. Individuals can choose to install their own system if an environmentally friendly mains provider is not available in their location.
Small scale start
A micro generation facility can be started with a very small system such as a home wind power generation, photovoltaic (solar cells) generation, or micro combined heat and power (Micro-CHP) system.
Sell to and buy from mains
Excess electricity can be sold to mains.
Electrical shortfall can be bought from mains.
List of countries or regions that legally allow grid-tied electrical systems
Armenia
Australia
Bangladesh
Bosnia and Herzegovina
Brazil
Canada
Chile
Dominican Republic
El Salvador
European Union
Guatemala
India
Iran
Israel
Japan
Jordan
Mexico
New Zealand
Pakistan
Panama
Philippines (via Meralco)
Russia (from Dec 2019)
Singapore
South Africa (Only by arrangement with municipality)
Sri Lanka
United States of America
Venezuela (no legal restrictions)
See also
Cost of electricity by source
Distributed network
Electric power transmission
Electranet
Photovoltaic system
Grid tie inverter
Inverter
Deep cycle battery
Power outage
V2G
Grid-connected photovoltaic power system
Off the grid - direct current buildings
References
External links
Grid Tied Solar explained
Distributed generation
Battery (electricity)
Electric power
Low-carbon economy | Grid-tied electrical system | [
"Physics",
"Engineering"
] | 608 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
11,036,896 | https://en.wikipedia.org/wiki/Faculty%20of%20Power%20and%20Aeronautical%20Engineering%20of%20Warsaw%20University%20of%20Technology | Faculty of Power and Aeronautical Engineering (pl.: Wydział Mechaniczny Energetyki i Lotnictwa, MEL, MEiL) is located on the Central Campus of the Warsaw University of Technology. The Faculty consists of three organisation units: The Institute of Heat Engineering, the Institute of Aeronautics and Applied Mechanics and the Dean's Office.
History
The Faculty of Power and Aeronautical Engineering was created in 1960 by merging the Faculty of Aviation and the Faculty of Mechanics and Construction. Władysław Fiszdon became the first dean of the newly created faculty. In 1970 the central government decided to dismantle the Polish aerospace industry; as a result, aviation courses were forbidden and the Faculty changed its name to Wydział Mechaniczny Energetyki Cieplnej (Faculty of Mechanics and Power). In May 1970, the decision was cancelled and the Faculty reverted to the old name.
Previous Deans:
Janusz Frączek 2016–present
Jerzy Banaszek 2008-2016
Krzysztof Kędzior 2002-2008
Tadeusz Rychter 1996-2002
Andrzej Styczek 1990-1996
Piotr Wolański 1987-1990
Jacek Stupnicki 1984-1987
Jerzy Maryniak 1978-1984
Wiesław Łucjanek 1975-1978
Marek Dietrich 1973-1975
Roman Gutowski 1969-1973
Piotr Orłowski 1967-1969
Jan Oderfeld 1965-1967
Zbigniew Brzoska 1963-1965
Władysław Fiszdon 1960-1963
Courses
Aerospace Engineering
Automatics and Robotics
Mechanical Engineering
Power Engineering
Nuclear engineering
Authorities
Dean: Janusz Frączek, Ph.D., D.Sc.
Vice-Dean for General Affairs: Artur Rusowicz, Ph.D., D.Sc.
Vice-Dean for Teaching Affairs: Maciej Jaworski, Ph.D., D.Sc.
Vice-Dean for Student Affairs: Marta Poćwierz, Ph.D., D.Sc.
External links
Faculty of Power and Aeronautical Engineering
Collegiate Club SAE
Students' Division of Vehicles' Aerodynamics
Aeronautical engineering schools
Warsaw University of Technology | Faculty of Power and Aeronautical Engineering of Warsaw University of Technology | [
"Engineering"
] | 456 | [
"Aeronautical engineering schools",
"Engineering universities and colleges",
"Aeronautics organizations"
] |
11,037,584 | https://en.wikipedia.org/wiki/CoRoT-1b | CoRoT-1b (previously named CoRoT-Exo-1b) is a transiting extrasolar planet approximately 2,630 light-years away in the constellation of Monoceros. The planet was discovered orbiting the yellow dwarf star CoRoT-1 in May 2007. The planet was the first discovery by the French-led CoRoT Mission.
Discovery
CoRoT-1b was identified as the best planetary candidate from the CoRoT spacecraft's initial run from February 6 to April 2, 2007. Follow-up photometry with the Wise Observatory's 1.0 m telescope and at the Canada–France–Hawaii Telescope eliminated many of the possible false positives for the transit signal. Nine radial velocity measurements of CoRoT-1 were made at Haute-Provence Observatory in March–April and October 2007 with the SOPHIE échelle spectrograph. The radial velocity data matched the CoRoT light curve and supported the planetary nature of CoRoT-1b, eliminating other possibilities such as background stars, grazing eclipsing binaries, or a triple system.
The discovery was publicly announced on May 3, 2007 and submitted for publication on January 4, 2008.
Characteristics
The planet is a large hot Jupiter, about 1.49 times the radius of Jupiter and approximately 1.03 times as massive, based on ground-based observations of the star. Its large size is due to its low density combined with intense heating from its parent star, which causes the outer layers of its atmosphere to bloat.
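As a rough illustration of that low density (a back-of-the-envelope estimate from the mass and radius ratios quoted above, not a figure reported in the discovery paper, and assuming Jupiter's mean density of about 1.33 g/cm³), the implied mean density is
\rho_{p} \approx \rho_{J}\,\frac{M_p/M_J}{(R_p/R_J)^{3}} \approx 1.33\ \mathrm{g\,cm^{-3}} \times \frac{1.03}{1.49^{3}} \approx 0.4\ \mathrm{g\,cm^{-3}},
roughly a third of Jupiter's mean density, consistent with the description of a bloated, low-density hot Jupiter.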
Observation of phases
In May 2009 CoRoT-1b became the first extrasolar planet for which optical (as opposed to infrared) observations of phases were reported. These observations suggest that there is no significant heat transfer between the (tidally locked) night and day sides of the planet.
See also
51 Pegasi b
CoRoT-2b
TrES-1
References
External links
Hot Jupiters
Monoceros
Transiting exoplanets
Giant planets
Exoplanets discovered in 2007
1b | CoRoT-1b | [
"Astronomy"
] | 391 | [
"Monoceros",
"Constellations"
] |
11,038,274 | https://en.wikipedia.org/wiki/Embase | Embase (often styled EMBASE, for Excerpta Medica dataBASE) is a biomedical and pharmacological bibliographic database of published literature designed to support information managers and pharmacovigilance in complying with the regulatory requirements of a licensed drug. Embase, produced by Elsevier, contains over 32 million records from over 8,500 currently published journals from 1947 to the present. Through its international coverage, daily updates, and drug indexing with EMTREE, Embase enables tracking and retrieval of drug information in the published literature. Each record is fully indexed; Articles in Press are available for some records, and In Process records are available for all records ahead of full indexing. Embase's international coverage spans biomedical journals from 95 countries, and the database is available through a number of database vendors.
History
Embase originated in 1946 as the Excerpta Medica (EM) Abstract Journals, created by a group of Dutch physicians who promoted the flow of medical knowledge and reports after World War II. EM comprised 13 journal sections which categorized the medical school curriculum by anatomy, pathology, physiology, internal medicine, and other basic clinical specialties. The database operated in this form until 1972, when it merged with Elsevier.
In 1972, EM joined with Elsevier and later, in 1975, formed EMBASE (Excerpta Medica database), which provided electronic access to the abstract journals. Following feedback from the EMBASE user community, EMBASE Classic was created as a separate database to supplement EMBASE as a backfile of medical journals from 1947 to 1973, providing valuable documentation of drugs, adverse effects, endogenous compounds, and related topics from that period.
In 2010, Excerpta Medica, excluding EMBASE, was sold by Elsevier to the Omnicom Group.
Current status
In addition to its more than 32 million records, Embase's database grows each year by over 900,000 records. This wide expanse of information is used in both professional and educational environments for retrieving published biomedical or drug-related information. Embase also allows further customization for a personal experience, such as RSS feeds and email alerts. With new drug- and disease-related information constantly released, Embase is updated daily to provide a comprehensive and reliable source of information.
See also
List of academic databases and search engines
Index Medicus
MEDLINE
Cochrane Library
References
Further reading
External links
Embase — description at Elsevier
Homepage
Bibliographic databases and indexes
Elsevier
Databases in Europe
Medical databases
Medical literature
Pharmacology literature | Embase | [
"Chemistry"
] | 518 | [
"Pharmacology",
"Pharmacology literature"
] |
11,038,318 | https://en.wikipedia.org/wiki/Methamphetamine | Methamphetamine is a potent central nervous system (CNS) stimulant that is mainly used as a recreational or performance-enhancing drug and less commonly as a second-line treatment for attention deficit hyperactivity disorder (ADHD). It has also been researched as a potential treatment for traumatic brain injury. Methamphetamine was discovered in 1893 and exists as two enantiomers: levo-methamphetamine and dextro-methamphetamine. Methamphetamine properly refers to a specific chemical substance, the racemic free base, which is an equal mixture of levomethamphetamine and dextromethamphetamine in their pure amine forms, but the hydrochloride salt, commonly called crystal meth, is widely used. Methamphetamine is rarely prescribed because of concerns involving its potential for recreational use as an aphrodisiac and euphoriant, among other concerns, as well as the availability of safer substitute drugs with comparable treatment efficacy such as Adderall and Vyvanse. While pharmaceutical formulations of methamphetamine in the United States are labeled as methamphetamine hydrochloride, they contain dextromethamphetamine as the active ingredient. Dextromethamphetamine is a stronger CNS stimulant than levomethamphetamine.
Both racemic methamphetamine and dextromethamphetamine are illicitly trafficked and sold owing to their potential for recreational use. The highest prevalence of illegal methamphetamine use occurs in parts of Asia and Oceania, and in the United States, where racemic methamphetamine and dextromethamphetamine are classified as Schedule II controlled substances. Levomethamphetamine is available as an over-the-counter (OTC) drug for use as an inhaled nasal decongestant in the United States. Internationally, the production, distribution, sale, and possession of methamphetamine is restricted or banned in many countries, owing to its placement in schedule II of the United Nations Convention on Psychotropic Substances treaty. While dextromethamphetamine is a more potent drug, racemic methamphetamine is illicitly produced more often, owing to the relative ease of synthesis and regulatory limits of chemical precursor availability.
In low to moderate doses, methamphetamine can elevate mood, increase alertness, concentration and energy in fatigued individuals, reduce appetite, and promote weight loss. At very high doses, it can induce psychosis, breakdown of skeletal muscle, seizures, and bleeding in the brain. Chronic high-dose use can precipitate unpredictable and rapid mood swings, stimulant psychosis (e.g., paranoia, hallucinations, delirium, and delusions), and violent behavior. Recreationally, methamphetamine's ability to increase energy has been reported to lift mood and increase sexual desire to such an extent that users are able to engage in sexual activity continuously for several days while binging the drug. Methamphetamine is known to possess a high addiction liability (i.e., a high likelihood that long-term or high dose use will lead to compulsive drug use) and high dependence liability (i.e., a high likelihood that withdrawal symptoms will occur when methamphetamine use ceases). Discontinuing methamphetamine after heavy use may lead to a post-acute-withdrawal syndrome, which can persist for months beyond the typical withdrawal period. At high doses, methamphetamine is neurotoxic to human midbrain dopaminergic neurons and, to a lesser extent, serotonergic neurons. Methamphetamine neurotoxicity causes adverse changes in brain structure and function, such as reductions in grey matter volume in several brain regions, as well as adverse changes in markers of metabolic integrity.
Methamphetamine belongs to the substituted phenethylamine and substituted amphetamine chemical classes. It is related to the other dimethylphenethylamines as a positional isomer of these compounds, which share the common chemical formula C10H15N.
Uses
Medical
In the United States, methamphetamine hydrochloride, sold under the brand name Desoxyn, is FDA-approved for the treatment of attention deficit hyperactivity disorder (ADHD); however, the FDA notes that the limited therapeutic usefulness of methamphetamine should be weighed against the risks associated with its use. To avoid toxicity and the risk of side effects, FDA guidelines recommend an initial methamphetamine dose of 5–10 mg/day for ADHD in adults and children over six years of age, which may be increased in weekly increments of 5 mg, up to 25 mg/day, until an optimum clinical response is found; the usual effective dose is around 20–25 mg/day. Methamphetamine is sometimes prescribed off-label for obesity, narcolepsy, and idiopathic hypersomnia. In the United States, methamphetamine's levorotatory form is available in some over-the-counter (OTC) nasal decongestant products.
Although the pharmaceutical name "methamphetamine hydrochloride" may suggest a racemic mixture, Desoxyn contains enantiopure dextromethamphetamine, which is a more potent stimulant than both levomethamphetamine and racemic methamphetamine. This naming convention deviates from the standard practice observed with other stimulants, such as Adderall and dextroamphetamine, where the dextrorotatory enantiomer is explicitly identified as an active ingredient in both generic and brand-name pharmaceuticals.
As methamphetamine is associated with a high potential for misuse, the drug is regulated under the Controlled Substances Act and is listed under Schedule II in the United States. Methamphetamine hydrochloride dispensed in the United States is required to include a boxed warning regarding its potential for recreational misuse and addiction liability.
Desoxyn and Desoxyn Gradumet are both pharmaceutical forms of the drug. The latter is no longer produced; it was an extended-release form of the drug that flattened and extended its effect curve.
Recreational
Methamphetamine is often used recreationally for its effects as a potent euphoriant and stimulant as well as aphrodisiac qualities.
According to a National Geographic TV documentary on methamphetamine, an entire subculture known as party and play is based around sexual activity and methamphetamine use. Participants in this subculture, which consists almost entirely of homosexual male methamphetamine users, will typically meet up through internet dating sites and have sex. Because of its strong stimulant and aphrodisiac effects and inhibitory effect on ejaculation, with repeated use, these sexual encounters will sometimes occur continuously for several days on end. The crash following the use of methamphetamine in this manner is very often severe, with marked hypersomnia (excessive daytime sleepiness). The party and play subculture is prevalent in major US cities such as San Francisco and New York City.
Contraindications
Methamphetamine is contraindicated in individuals with a history of substance use disorder, heart disease, or severe agitation or anxiety, or in individuals currently experiencing arteriosclerosis, glaucoma, hyperthyroidism, or severe hypertension. The FDA states that individuals who have experienced hypersensitivity reactions to other stimulants in the past or are currently taking monoamine oxidase inhibitors should not take methamphetamine. The FDA also advises individuals with bipolar disorder, depression, elevated blood pressure, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome to monitor their symptoms while taking methamphetamine. Owing to the potential for stunted growth, the FDA advises monitoring the height and weight of growing children and adolescents during treatment.
Adverse effects
Physical
Cardiovascular
Methamphetamine is a sympathomimetic drug that causes vasoconstriction and tachycardia. Methamphetamine also promotes abnormal extra heartbeats and irregular heart rhythms, some of which may be life-threatening.
Other physical effects
The effects can also include loss of appetite, hyperactivity, dilated pupils, flushed skin, excessive sweating, increased movement, dry mouth and teeth grinding (potentially leading to a condition informally known as meth mouth), headache, rapid breathing, high body temperature, diarrhea, constipation, blurred vision, dizziness, twitching, numbness, tremors, dry skin, acne, and a pale appearance. Long-term meth users may have sores on their skin; these may be caused by scratching due to itchiness or the belief that insects are crawling under their skin, and the damage is compounded by poor diet and hygiene. Numerous deaths related to methamphetamine overdoses have been reported. Additionally, "[p]ostmortem examinations of human tissues have linked use of the drug to diseases associated with aging, such as coronary atherosclerosis and pulmonary fibrosis", which may be caused "by a considerable rise in the formation of ceramides, pro-inflammatory molecules that can foster cell aging and death."
Dental and oral health ("meth mouth")
Methamphetamine users, particularly heavy users, may lose their teeth abnormally quickly, regardless of the route of administration, from a condition informally known as meth mouth. The condition is generally most severe in users who inject the drug, rather than swallow, smoke, or inhale it. According to the American Dental Association, meth mouth "is probably caused by a combination of drug-induced psychological and physiological changes resulting in xerostomia (dry mouth), extended periods of poor oral hygiene, frequent consumption of high-calorie, carbonated beverages and bruxism (teeth grinding and clenching)". As dry mouth is also a common side effect of other stimulants, which are not known to contribute to severe tooth decay, many researchers suggest that methamphetamine-associated tooth decay is more due to users' other choices. They suggest the side effect has been exaggerated and stylized to create a stereotype of current users as a deterrent to new ones.
Sexually transmitted infection
Methamphetamine use was found to be related to higher frequencies of unprotected sexual intercourse with both HIV-positive and serostatus-unknown casual partners, an association more pronounced in HIV-positive participants. These findings suggest that methamphetamine use and engagement in unprotected anal intercourse are co-occurring risk behaviors, behaviors that potentially heighten the risk of HIV transmission among gay and bisexual men. Methamphetamine use allows users of both sexes to engage in prolonged sexual activity, which may cause genital sores and abrasions as well as priapism in men. Methamphetamine may also cause sores and abrasions in the mouth via bruxism, increasing the risk of sexually transmitted infection.
Besides the sexual transmission of HIV, it may also be transmitted between users who share a common needle. The level of needle sharing among methamphetamine users is similar to that among other drug injection users.
Psychological
The psychological effects of methamphetamine can include euphoria, dysphoria, changes in libido, alertness, apprehension and concentration, decreased sense of fatigue, insomnia or wakefulness, self-confidence, sociability, irritability, restlessness, grandiosity and repetitive and obsessive behaviors. Peculiar to methamphetamine and related stimulants is "punding", persistent non-goal-directed repetitive activity. Methamphetamine use also has a high association with anxiety, depression, amphetamine psychosis, suicide, and violent behaviors.
Neurotoxicity
Methamphetamine is directly neurotoxic to dopaminergic neurons in both lab animals and humans. Excitotoxicity, oxidative stress, metabolic compromise, UPS dysfunction, protein nitration, endoplasmic reticulum stress, p53 expression, and other processes contribute to this neurotoxicity. In line with its dopaminergic neurotoxicity, methamphetamine use is associated with a higher risk of Parkinson's disease. In addition to its dopaminergic neurotoxicity, a review of evidence in humans indicated that high-dose methamphetamine use can also be neurotoxic to serotonergic neurons. It has been demonstrated that a high core temperature is correlated with an increase in the neurotoxic effects of methamphetamine. Withdrawal of methamphetamine in dependent persons may lead to post-acute withdrawal, which persists for months beyond the typical withdrawal period.
Magnetic resonance imaging studies on human methamphetamine users have also found evidence of neurodegeneration, or adverse neuroplastic changes in brain structure and function. In particular, methamphetamine appears to cause hyperintensity and hypertrophy of white matter, marked shrinkage of hippocampi, and reduced gray matter in the cingulate cortex, limbic cortex, and paralimbic cortex in recreational methamphetamine users. Moreover, evidence suggests that adverse changes in the level of biomarkers of metabolic integrity and synthesis occur in recreational users, such as a reduction in N-acetylaspartate and creatine levels and elevated levels of choline and myoinositol.
Methamphetamine has been shown to activate TAAR1 in human astrocytes and generate cAMP as a result. Activation of astrocyte-localized TAAR1 appears to function as a mechanism by which methamphetamine attenuates membrane-bound EAAT2 (SLC1A2) levels and function in these cells.
Methamphetamine binds to and activates both sigma receptor subtypes, σ1 and σ2, with micromolar affinity. Sigma receptor activation may promote methamphetamine-induced neurotoxicity by facilitating hyperthermia, increasing dopamine synthesis and release, influencing microglial activation, and modulating apoptotic signaling cascades and the formation of reactive oxygen species.
Addiction
Current models of addiction from chronic drug use involve alterations in gene expression in certain parts of the brain, particularly the nucleus accumbens. The most important transcription factors that produce these alterations are ΔFosB, cAMP response element binding protein (CREB), and nuclear factor kappa B (NFκB). ΔFosB plays a crucial role in the development of drug addictions, since its overexpression in D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for most of the behavioral and neural adaptations that arise from addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others.
ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both directly oppose the induction of ΔFosB in the nucleus accumbens (i.e., they oppose increases in its expression). Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug use (i.e., the alterations mediated by ΔFosB). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sex addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sex addictions (i.e., drug-induced compulsive sexual behaviors) are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs, such as amphetamine or methamphetamine.
Epigenetic factors
Methamphetamine addiction is persistent for many individuals, with 61% of individuals treated for addiction relapsing within one year. About half of those with methamphetamine addiction continue with use over a ten-year period, while the other half reduce use starting at about one to four years after initial use.
The frequent persistence of addiction suggests that long-lasting changes in gene expression may occur in particular regions of the brain, and may contribute importantly to the addiction phenotype. In 2014, a crucial role was found for epigenetic mechanisms in driving lasting changes in gene expression in the brain.
A review in 2015 summarized a number of studies involving chronic methamphetamine use in rodents. Epigenetic alterations were observed in the brain reward pathways, including areas like ventral tegmental area, nucleus accumbens, and dorsal striatum, the hippocampus, and the prefrontal cortex. Chronic methamphetamine use caused gene-specific histone acetylations, deacetylations and methylations. Gene-specific DNA methylations in particular regions of the brain were also observed. The various epigenetic alterations caused downregulations or upregulations of specific genes important in addiction. For instance, chronic methamphetamine use caused methylation of the lysine in position 4 of histone 3 located at the promoters of the c-fos and the C-C chemokine receptor 2 (ccr2) genes, activating those genes in the nucleus accumbens (NAc). c-fos is well known to be important in addiction. The ccr2 gene is also important in addiction, since mutational inactivation of this gene impairs addiction.
In methamphetamine addicted rats, epigenetic regulation through reduced acetylation of histones, in brain striatal neurons, caused reduced transcription of glutamate receptors. Glutamate receptors play an important role in regulating the reinforcing effects of addictive drugs.
Administration of methamphetamine to rodents causes DNA damage in their brain, particularly in the nucleus accumbens region. During repair of such DNA damages, persistent chromatin alterations may occur such as in the methylation of DNA or the acetylation or methylation of histones at the sites of repair. These alterations can be epigenetic scars in the chromatin that contribute to the persistent epigenetic changes found in methamphetamine addiction.
Treatment and management
A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these.
There is currently no effective pharmacotherapy for methamphetamine addiction. A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in randomized controlled trials (RCTs) for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil.
Dependence and withdrawal
Tolerance is expected to develop with regular methamphetamine use and, when used recreationally, this tolerance develops rapidly. In dependent users, withdrawal symptoms are positively correlated with the level of drug tolerance. Depression from methamphetamine withdrawal lasts longer and is more severe than that of cocaine withdrawal.
According to the current Cochrane review on drug dependence and withdrawal in recreational users of methamphetamine, "when chronic heavy users abruptly discontinue [methamphetamine] use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose". Withdrawal symptoms in chronic, high-dose users are frequent, occurring in up to 87.6% of cases, and persist for three to four weeks with a marked "crash" phase occurring during the first week. Methamphetamine withdrawal symptoms can include anxiety, drug craving, dysphoric mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and vivid or lucid dreams.
Methamphetamine that is present in a mother's bloodstream can pass through the placenta to a fetus and be secreted into breast milk. Infants born to methamphetamine-abusing mothers may experience a neonatal withdrawal syndrome, with symptoms involving abnormal sleep patterns, poor feeding, tremors, and hypertonia. This withdrawal syndrome is relatively mild and only requires medical intervention in approximately 4% of cases.
Neonatal
Unlike other drugs, babies with prenatal exposure to methamphetamine do not show immediate signs of withdrawal. Instead, cognitive and behavioral problems start emerging when the children reach school age.
A prospective cohort study of 330 children showed that at the age of 3, children with methamphetamine exposure showed increased emotional reactivity, as well as more signs of anxiety and depression; and at the age of 5, children showed higher rates of externalizing disorders and attention deficit hyperactivity disorder (ADHD).
Overdose
Methamphetamine overdose is a broad term. It frequently refers to an exaggeration of the drug's usual effects, with features such as irritability, agitation, hallucinations, and paranoia. The cardiovascular effects are typically not noticed in young healthy people; hypertension and tachycardia are not apparent unless measured. A moderate overdose of methamphetamine may induce symptoms such as: abnormal heart rhythm, confusion, difficult and/or painful urination, high or low blood pressure, high body temperature, over-active and/or over-responsive reflexes, muscle aches, severe agitation, rapid breathing, tremor, urinary hesitancy, and an inability to pass urine. An extremely large overdose may produce symptoms such as adrenergic storm, methamphetamine psychosis, substantially reduced or no urine output, cardiogenic shock, bleeding in the brain, circulatory collapse, hyperpyrexia (i.e., dangerously high body temperature), pulmonary hypertension, kidney failure, rapid muscle breakdown, serotonin syndrome, and a form of stereotypy ("tweaking"). A methamphetamine overdose will likely also result in mild brain damage owing to dopaminergic and serotonergic neurotoxicity. Death from methamphetamine poisoning is typically preceded by convulsions and coma.
Psychosis
Use of methamphetamine can result in a stimulant psychosis which may present with a variety of symptoms (e.g., paranoia, hallucinations, delirium, and delusions). A Cochrane Collaboration review on treatment for amphetamine, dextroamphetamine, and methamphetamine use-induced psychosis states that about 5–15% of users fail to recover completely. The same review asserts that, based upon at least one trial, antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Amphetamine psychosis may also develop occasionally as a treatment-emergent side effect.
Death from overdose
The CDC reported that the number of deaths in the United States involving psychostimulants with abuse potential was 23,837 in 2020 and 32,537 in 2021. This category code (ICD–10 of T43.6) includes primarily methamphetamine but also other stimulants such as amphetamine and methylphenidate. The mechanism of death in these cases is not reported in these statistics and is difficult to know. Unlike fentanyl, which causes respiratory depression, methamphetamine is not a respiratory depressant. Some deaths result from intracranial hemorrhage, and some are cardiovascular in nature, including flash pulmonary edema and ventricular fibrillation.
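For context, the year-over-year change implied by the two figures above (a simple derived ratio, not a number published by the CDC) is
\frac{32{,}537 - 23{,}837}{23{,}837} \approx 0.36,
that is, roughly a 36% increase in such deaths from 2020 to 2021.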
Emergency treatment
Acute methamphetamine intoxication is largely managed by treating the symptoms and treatments may initially include administration of activated charcoal and sedation. There is not enough evidence on hemodialysis or peritoneal dialysis in cases of methamphetamine intoxication to determine their usefulness. Forced acid diuresis (e.g., with vitamin C) will increase methamphetamine excretion but is not recommended as it may increase the risk of aggravating acidosis, or cause seizures or rhabdomyolysis. Hypertension presents a risk for intracranial hemorrhage (i.e., bleeding in the brain) and, if severe, is typically treated with intravenous phentolamine or nitroprusside. Blood pressure often drops gradually following sufficient sedation with a benzodiazepine and providing a calming environment.
Antipsychotics such as haloperidol are useful in treating agitation and psychosis from methamphetamine overdose. Beta blockers with lipophilic properties and CNS penetration such as metoprolol and labetalol may be useful for treating CNS and cardiovascular toxicity. The mixed alpha- and beta-blocker labetalol is especially useful for treatment of concomitant tachycardia and hypertension induced by methamphetamine. The phenomenon of "unopposed alpha stimulation" has not been reported with the use of beta-blockers for treatment of methamphetamine toxicity.
Interactions
Methamphetamine is metabolized by the liver enzyme CYP2D6, so CYP2D6 inhibitors will prolong the elimination half-life of methamphetamine. Methamphetamine also interacts with monoamine oxidase inhibitors (MAOIs), since both MAOIs and methamphetamine increase plasma catecholamines; therefore, concurrent use of both is dangerous. Methamphetamine may decrease the effects of sedatives and depressants and increase the effects of antidepressants and other stimulants as well. Methamphetamine may counteract the effects of antihypertensives and antipsychotics owing to its effects on the cardiovascular system and cognition, respectively. The pH of gastrointestinal content and urine affects the absorption and excretion of methamphetamine. Specifically, acidic substances will reduce the absorption of methamphetamine and increase urinary excretion, while alkaline substances do the opposite. Owing to the effect pH has on absorption, proton pump inhibitors, which reduce gastric acid, are known to interact with methamphetamine. Norepinephrine reuptake inhibitors (NRIs) like atomoxetine prevent norepinephrine release induced by amphetamines and have been found to reduce the stimulant, euphoriant, and sympathomimetic effects of dextroamphetamine in humans. Similarly, norepinephrine–dopamine reuptake inhibitors (NDRIs) like methylphenidate and bupropion prevent norepinephrine and dopamine release induced by amphetamines, and bupropion has been found to reduce the subjective and sympathomimetic effects of methamphetamine in humans.
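The pH dependence described above follows from the fact that methamphetamine is a weak base. As a sketch (assuming a conjugate-acid pKa of roughly 10, a commonly cited value that is not stated in this article), the Henderson–Hasselbalch relation gives the ratio of ionized to un-ionized drug as
\frac{[\mathrm{BH^{+}}]}{[\mathrm{B}]} = 10^{\,pK_a - pH}.
In acidic urine (pH around 5) essentially all of the drug is in the charged, ionized form, which is poorly reabsorbed across the renal tubule and is therefore excreted; in more alkaline urine a larger un-ionized fraction remains available for reabsorption, prolonging the drug's presence in the body.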
Pharmacology
Pharmacodynamics
Methamphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a G protein-coupled receptor (GPCR) that regulates brain catecholamine systems. Activation of TAAR1 increases cyclic adenosine monophosphate (cAMP) production and either completely inhibits or reverses the transport direction of the dopamine transporter (DAT), norepinephrine transporter (NET), and serotonin transporter (SERT). When methamphetamine binds to TAAR1, it triggers transporter phosphorylation via protein kinase A (PKA) and protein kinase C (PKC) signaling, ultimately resulting in the internalization or reverse function of monoamine transporters. Methamphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through a Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent signaling pathway, in turn producing dopamine efflux. TAAR1 has been shown to reduce the firing rate of neurons through direct activation of G protein-coupled inwardly-rectifying potassium channels. TAAR1 activation by methamphetamine in astrocytes appears to negatively modulate the membrane expression and function of EAAT2, a type of glutamate transporter.
In addition to its effect on the plasma membrane monoamine transporters, methamphetamine inhibits synaptic vesicle function by inhibiting VMAT2, which prevents monoamine uptake into the vesicles and promotes their release. This results in the outflow of monoamines from synaptic vesicles into the cytosol (intracellular fluid) of the presynaptic neuron, and their subsequent release into the synaptic cleft by the phosphorylated transporters. Other transporters that methamphetamine is known to inhibit are SLC22A3 and SLC22A5. SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter.
Methamphetamine is also an agonist of the alpha-2 adrenergic receptors and sigma receptors with a greater affinity for σ1 than σ2, and inhibits monoamine oxidase A (MAO-A) and monoamine oxidase B (MAO-B). Sigma receptor activation by methamphetamine may facilitate its central nervous system stimulant effects and promote neurotoxicity within the brain. Dextromethamphetamine is a stronger psychostimulant, but levomethamphetamine has stronger peripheral effects, a longer half-life, and longer perceived effects among heavy substance users. At high doses, both enantiomers of methamphetamine can induce similar stereotypy and methamphetamine psychosis, but levomethamphetamine has shorter psychodynamic effects.
Pharmacokinetics
The bioavailability of methamphetamine is 67% orally, 79% intranasally, 67 to 90% via inhalation (smoking), and 100% intravenously. Following oral administration, methamphetamine is well-absorbed into the bloodstream, with peak plasma methamphetamine concentrations achieved in approximately 3.13–6.3 hours post ingestion. Methamphetamine is also well absorbed following inhalation and following intranasal administration. Because of the high lipophilicity of methamphetamine due to its methyl group, it can readily move through the blood–brain barrier faster than other stimulants, where it is more resistant to degradation by monoamine oxidase. The amphetamine metabolite peaks at 10–24 hours. Methamphetamine is excreted by the kidneys, with the rate of excretion into the urine heavily influenced by urinary pH. When taken orally, 30–54% of the dose is excreted in urine as methamphetamine and 10–23% as amphetamine. Following IV doses, about 45% is excreted as methamphetamine and 7% as amphetamine. The elimination half-life of methamphetamine varies with a range of 5–30 hours, but it is on average 9 to 12 hours in most studies. The elimination half-life of methamphetamine does not vary by route of administration, but is subject to substantial interindividual variability.
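As an illustrative calculation (a simple first-order elimination model using a half-life of about 10 hours, the midpoint of the average range quoted above, rather than data from any specific study), the fraction of the peak plasma concentration remaining after time t is
C(t) = C_0 \, 2^{-t/t_{1/2}}, \qquad t_{1/2} \approx 10\ \mathrm{h} \;\Rightarrow\; C(24\ \mathrm{h}) \approx 0.19\, C_0,
so under these assumptions roughly a fifth of the peak concentration would remain 24 hours after a single dose.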
CYP2D6, dopamine β-hydroxylase, flavin-containing monooxygenase 3, butyrate-CoA ligase, and glycine N-acyltransferase are the enzymes known to metabolize methamphetamine or its metabolites in humans. The primary metabolites are amphetamine and 4-hydroxymethamphetamine; other minor metabolites include benzoic acid, hippuric acid, norephedrine, phenylacetone, and several hydroxylated metabolites of amphetamine. Among these metabolites, the active sympathomimetics include amphetamine, 4-hydroxymethamphetamine, and norephedrine. Methamphetamine is a CYP2D6 inhibitor.
The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination.
Detection in biological fluids
Methamphetamine and amphetamine are often measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Chiral techniques may be employed to help distinguish the source of the drug to determine whether it was obtained illicitly or legally via prescription or prodrug. Chiral separation is needed to assess the possible contribution of levomethamphetamine, which is an active ingredient in some OTC nasal decongestants, toward a positive test result. Dietary zinc supplements can mask the presence of methamphetamine and other drugs in urine.
Chemistry
Methamphetamine is a chiral compound with two enantiomers, dextromethamphetamine and levomethamphetamine. At room temperature, the free base of methamphetamine is a clear and colorless liquid with an odor characteristic of geranium leaves. It is soluble in diethyl ether and ethanol as well as miscible with chloroform.
In contrast, the methamphetamine hydrochloride salt is odorless with a bitter taste; at room temperature, it occurs as white crystals or a white crystalline powder. The hydrochloride salt is also freely soluble in ethanol and water. The crystal structure of either enantiomer is monoclinic with the P21 space group, with lattice parameters a = 7.10 Å, b = 7.29 Å, c = 10.81 Å, and β = 97.29°.
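From the lattice parameters above, the monoclinic unit-cell volume follows directly (a quick consistency check derived from the quoted values, not a volume reported in the underlying crystallographic study):
V = a\,b\,c\,\sin\beta = 7.10 \times 7.29 \times 10.81 \times \sin(97.29^{\circ})\ \mathrm{\AA^{3}} \approx 555\ \mathrm{\AA^{3}}.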
Degradation
A 2011 study into the destruction of methamphetamine using bleach showed that effectiveness is correlated with exposure time and concentration. A year-long study (also from 2011) showed that methamphetamine in soils is a persistent pollutant. In a 2013 study of bioreactors in wastewater, methamphetamine was found to be largely degraded within 30 days under exposure to light.
Synthesis
Racemic methamphetamine may be prepared starting from phenylacetone by either the Leuckart or reductive amination methods. In the Leuckart reaction, one equivalent of phenylacetone is reacted with two equivalents of N-methylformamide to produce the formyl amide of methamphetamine, plus carbon dioxide and methylamine as side products. In this reaction, an iminium cation is formed as an intermediate, which is reduced by the second equivalent of N-methylformamide. The intermediate formyl amide is then hydrolyzed under acidic aqueous conditions to yield methamphetamine as the final product. Alternatively, phenylacetone can be reacted with methylamine under reducing conditions to yield methamphetamine.
History, society, and culture
Amphetamine, discovered before methamphetamine, was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it phenylisopropylamine. Shortly after, methamphetamine was synthesized from ephedrine in 1893 by Japanese chemist Nagai Nagayoshi. Three decades later, in 1919, methamphetamine hydrochloride was synthesized by pharmacologist Akira Ogata via reduction of ephedrine using red phosphorus and iodine.
From 1938, methamphetamine was marketed on a large scale in Germany as a nonprescription drug under the brand name Pervitin, produced by the Berlin-based Temmler pharmaceutical company. It was used by all branches of the combined armed forces of the Third Reich, for its stimulant effects and to induce extended wakefulness. Pervitin became colloquially known among the German troops as "Stuka-Tablets" (Stuka-Tabletten) and "Hermann-Göring-Pills" (Hermann-Göring-Pillen), as a snide allusion to Göring's widely known addiction to drugs. However, the side effects, particularly the withdrawal symptoms, were so serious that the army sharply cut back its usage in 1940. By 1941, usage was restricted to a doctor's prescription, and the military tightly controlled its distribution. Soldiers would only receive a couple of tablets at a time, and were discouraged from using them in combat. Historian Łukasz Kamieński says,
"Some soldiers turned violent, committing war crimes against civilians; others attacked their own officers." At the end of the war, methamphetamine was used as part of a new drug: D-IX.
Obetrol, patented by Obetrol Pharmaceuticals in the 1950s and indicated for treatment of obesity, was one of the first brands of pharmaceutical methamphetamine products. Because of the psychological and stimulant effects of methamphetamine, Obetrol became a popular diet pill in America in the 1950s and 1960s. Eventually, as the addictive properties of the drug became known, governments began to strictly regulate the production and distribution of methamphetamine. For example, during the early 1970s in the United States, methamphetamine became a Schedule II controlled substance under the Controlled Substances Act. Methamphetamine is sold under the trade name Desoxyn, originally trademarked by the Danish pharmaceutical company Lundbeck; as of January 2013, the Desoxyn trademark had been sold to the Italian pharmaceutical company Recordati.
Trafficking
The Golden Triangle (Southeast Asia), specifically Shan State, Myanmar, is the world's leading producer of methamphetamine as production has shifted to Yaba and crystalline methamphetamine, including for export to the United States and across East and Southeast Asia and the Pacific.
Concerning the accelerating synthetic drug production in the region, the Cantonese Chinese syndicate Sam Gor, also known as The Company, is understood to be the main international crime syndicate responsible for this shift. It is made up of members of five different triads. Sam Gor is primarily involved in drug trafficking, earning at least $8 billion per year. Sam Gor is alleged to control 40% of the Asia-Pacific methamphetamine market, while also trafficking heroin and ketamine. The organization is active in a variety of countries, including Myanmar, Thailand, New Zealand, Australia, Japan, China, and Taiwan. Sam Gor previously produced meth in Southern China and is now believed to manufacture mainly in the Golden Triangle, specifically Shan State, Myanmar, and to be responsible for much of the massive surge of crystal meth around 2019. The group is understood to be headed by Tse Chi Lop, a gangster born in Guangzhou, China, who also holds a Canadian passport.
Liu Zhaohua was another individual involved in the production and trafficking of methamphetamine until his arrest in 2005. It was estimated over 18 tonnes of methamphetamine were produced under his watch.
Legal status
The production, distribution, sale, and possession of methamphetamine is restricted or illegal in many jurisdictions. In some jurisdictions, it is legally available as a prescription medication. Methamphetamine has been placed in schedule II of the United Nations Convention on Psychotropic Substances treaty, indicating that it has limited medical use.
Research
Animal models have shown that low-dose methamphetamine improves cognitive and behavioural functioning following TBI (traumatic brain injury). This is in contrast to high, repeated doses which cause neurotoxicity. These models demonstrate that low-dose methamphetamine increases neurogenesis and reduces apoptosis in the dentate gyrus of the hippocampus following TBI. It has also been found that TBI patients testing positive for methamphetamine at the time of emergency department admission have lower rates of mortality.
It has been suggested, based on animal research, that calcitriol, the active metabolite of vitamin D, can provide significant protection against the DA- and 5-HT-depleting effects of neurotoxic doses of methamphetamine. Protection against methamphetamine-induced neurotoxicity has also been observed following administration of ascorbic acid (vitamin C), cobalamin (vitamin B12), and vitamin E.
See also
18-MC
Breaking Bad, a TV drama series centered on illicit methamphetamine synthesis
Drug checking
Faces of Meth, a drug prevention project
Harm reduction
Methamphetamine and Native Americans
Methamphetamine in Australia
Methamphetamine in Bangladesh
Methamphetamine in the Philippines
Methamphetamine in the United States
Montana Meth Project, a Montana-based organization aiming to reduce meth use among teenagers
Recreational drug use
Rolling meth lab, a transportable laboratory that is used to illegally produce methamphetamine
Ya ba, Southeast Asian tablets containing a mixture of methamphetamine and caffeine
Footnotes
Reference notes
References
Further reading
External links
Methamphetamine Poison Information Monograph
Drug Trafficking: Aryan Brotherhood Methamphetamine Operation Dismantled, FBI
1893 introductions
Anorectics
Anti-obesity drugs
Aphrodisiacs
Attention deficit hyperactivity disorder management
Carbonic anhydrase activators
Cardiac stimulants
Euphoriants
Excitatory amino acid reuptake inhibitors
Human drug metabolites
Japanese inventions
Monoaminergic activity enhancers
Norepinephrine-dopamine releasing agents
Phenethylamines
Sigma agonists
Stimulants
Substituted amphetamines
Sympathomimetics
TAAR1 agonists
VMAT inhibitors | Methamphetamine | [
"Chemistry"
] | 9,230 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
11,038,447 | https://en.wikipedia.org/wiki/Neural%20tissue%20engineering | Neural tissue engineering is a specific sub-field of tissue engineering. Neural tissue engineering is primarily a search for strategies to eliminate inflammation and fibrosis upon implantation of foreign substances. Often foreign substances in the form of grafts and scaffolds are implanted to promote nerve regeneration and to repair nerves of both the central nervous system (CNS) and peripheral nervous system (PNS) due to injury.
Introduction
There are two parts of the nervous system: the central nervous system (CNS) and the peripheral nervous system (PNS). General body functions are supervised by the CNS, which includes the brain and spinal cord. The PNS carries motor signals from the CNS to control body activities and relays sensory data back to the CNS. The PNS is made up of nerve fibers arranged into nerves. It includes the autonomic nervous system (ANS), whose sympathetic and parasympathetic branches preserve homeostasis and regulate involuntary physiological functions.
The "fight-or-flight" reaction is triggered by the sympathetic nervous system (SNS), which is derived from the thoracic and upper lumbar spinal cord. It readies the body for quick reactions under pressure. The parasympathetic nervous system (PSNS), on the other hand, is derived from the brainstem and sacral spinal cord and facilitates normal physiological processes by encouraging rest and energy conservation. One of the main nerves in the PSNS, the vagus nerve, originates in the brainstem and travels throughout the body, affecting different organs. It has sensory and motor fibers. Sensory messages tell the brain what the body is doing, allowing it to maintain homeostasis and control activities. Additionally, the vagus nerve influences emotions and memory through connections to several brain regions.
Neuroimmune interactions
The immune system's role is to identify and protect the body against external chemicals and infections. It is separated into innate and adaptive immunity and consists of immune organs, cells, and active ingredients. Remarkably, under certain circumstances, a variety of non-immune cells can display immunological properties. The immune system and the nervous system, which control body processes, are interdependent. By controlling humoral chemicals on a systemic level, the CNS affects the immune system. Sleep and other psychosocial variables can affect immunological responses. Obesity and sleep deprivation, for example, can impair immunity, and long-term stress can erode immunological responses, making people more vulnerable to infections like COVID-19.
In diseases like asthma that are made worse by psychological stress or depression, neuroimmune interactions are clearly seen. The immune response can impact brain activity, and neuroendocrine hormones control the release of cytokines. Fever symptoms like drowsiness and decreased appetite are caused by proinflammatory mediators. Immune system organs receive autonomic innervation from the peripheral nervous system (PNS), which facilitates specialized communication between the two systems. Comprehensive information on bidirectional crosstalk pathways is frequently lacking, despite existing evidence of functional links between the nervous and immune systems.
Lymph nodes are essential components of the immune system because they serve both as collecting places for various immune cells and as filters for dangerous chemicals. Their well-structured composition promotes efficient immune responses, protecting the body against external chemicals, infections, and malignancies. Regional innervation of lymph nodes involves complex participation from the sympathetic and parasympathetic branches of the autonomic nervous system (ANS). Furthermore, there is afferent innervation, which is in charge of immune responses in particular areas. Through the use of neuropeptides, nociceptors (specialized nerve endings that sense pain) modulate the immune system. Distinct nerve fibers inside lymph nodes are identified by several markers, such as TH, anti-β2-AR, ChAT, and VAChT. Studies have shown that nerve fibers originate from the hilum, travel along blood vessels, cross medullary areas, and form subscapular plexuses. Some limitations remain, however: the sparse identification of neurons and nerve fibers, the lack of a thorough examination of fine nerve fibers, incomplete knowledge of innervation in particular regions, and inadequate documentation in certain studies of close interactions between immune and non-immune cells and nerve fibers.
Possible therapeutic approaches based on neuroimmune interplay
Novel approaches focusing on neuroimmune interactions may alter the course of disease or reduce symptoms. Targeting neuroimmune pathways is a holistic approach that seeks to affect both immune responses and brain functioning. The term "acupuncture" refers to the ancient Chinese medical technique of gently stimulating nociceptors and receptors with tiny needles inserted into certain body sites in order to treat various ailments, including pain and inflammation.
The FDA-approved therapy for depression and epilepsy, vagus nerve stimulation (VNS), may also be beneficial for non-neurological conditions such as rheumatoid arthritis and inflammatory bowel disease. Chemical therapies, such as peripheral nervous system (PNS) modulation, are being investigated for the treatment of infectious and inflammatory disorders, such as rheumatoid arthritis and issues associated with diabetes. Targeting tumor innervation is being explored as a potential new treatment approach. Intratumoral innervation, which involves nerves inside or around tumors, influences the biology of cancer. Peripheral neuropathy is one of the PNS-associated disorders that can be treated with immunotherapy manipulation. According to many experimental researchers, extensive clinical studies are necessary to confirm the safety, effectiveness, and regulatory approval of these experimental techniques before they can become established therapies.
Tissue engineering
The need for neural tissue engineering arises from the difficulty of nerve cells and neural tissues regenerating on their own after neural damage has occurred. The PNS has some, but limited, regeneration of neural cells. Adult stem cell neurogenesis in the CNS has been found to occur in the hippocampus, the subventricular zone (SVZ), and the spinal cord. CNS injuries can be caused by stroke, neurodegenerative disorders, trauma, or encephalopathy. A few methods currently being investigated to treat CNS injuries are: implanting stem cells directly into the injury site, delivering morphogens to the injury site, or growing neural tissue in vitro with neural stem or progenitor cells in a 3D scaffold. Proposed use of electrospun polymeric fibrous scaffolds for neural repair substrates dates back to at least 1986, in an NIH SBIR application from Simon. For the PNS, a severed nerve can be reconnected and reinnervated using grafts or by guiding the existing nerve through a channel.
Recent research into creating miniature cortices, known as corticopoiesis, and brain models, known as cerebral organoids, provides techniques that could further the field of neural tissue regeneration. The native cortical progenitors in corticopoiesis are neural tissues that could be effectively embedded into the brain. Cerebral organoids are 3D human pluripotent stem cells developed into sections of the brain cortex, showing that there is potential to isolate and develop certain neural tissues using neural progenitors.
Another situation that calls for the implantation of foreign material is the use of recording electrodes. Chronic electrode implants are a tool used in research applications to record signals from regions of the cerebral cortex. Research into the stimulation of PNS neurons in patients with paralysis and prosthetics could further the knowledge of reinnervation of neural tissue in both the PNS and the CNS. This research is capable of making one difficult aspect of neural tissue engineering, functional innervation of neural tissue, more manageable.
CNS
Causes of CNS injury
There are four main causes of CNS injury: stroke, traumatic brain injury (TBI), brain tumors, or developmental complications. Strokes are classified as either hemorrhagic (when a vessel is damaged to the point of bleeding into the brain) or ischemic (when a clot blocks blood flow through a vessel in the brain). When a hemorrhage occurs, blood seeps into the surrounding tissue, resulting in tissue death, while ischemic strokes result in a lack of blood flow to certain tissues. Traumatic brain injury is caused by external forces impacting the cranium or the spinal cord. Problems with CNS development result in abnormal tissue growth during development, thus decreasing the function of the CNS.
CNS treatments and research
Implantation of stem cells to the injury site
One method to treat CNS injury involves culturing stem cells in vitro and implanting the non-directed stem cells into the brain injury site. Implanting stem cells directly into the injury site prevents glial scar formation and promotes neurogenesis originating from the patient, but also runs the risk of tumor development, inflammation, and migration of the stem cells out of the injury location. Tumorigenesis can occur due to the uncontrolled nature of stem cell differentiation; inflammation can occur due to rejection of the implanted cells by the host cells; and the highly migratory nature of stem cells can result in the cells moving away from the injury site, so that they fail to have the desired effect there. Other concerns of neural tissue engineering include establishing safe sources of stem cells and getting reproducible results from treatment to treatment.
Alternatively, these stem cells can act as carriers for other therapies, though the positive effects of using stem cells as a delivery mechanism have not been confirmed. Direct stem cell delivery has an increased beneficial effect if the cells are directed to become neuronal cells in vitro. This way, the risks associated with undirected stem cells are decreased; additionally, injuries that do not have a specific boundary could be treated efficiently.
Delivery of molecules to the injury site
Molecules that promote the regeneration of neural tissue, including pharmaceutical drugs, growth factors known as morphogens, and miRNA, can also be directly introduced to the injury site of the damaged CNS tissue. Neurogenesis has been observed in animals treated with psychotropic drugs that inhibit serotonin reuptake. When stem cells are differentiating, the cells secrete morphogens such as growth factors to promote healthy development. These morphogens help maintain homeostasis and neural signaling pathways, and they can be delivered to the injury site to promote the growth of the injured tissues. Currently, morphogen delivery has minimal benefits because of the interactions the morphogens have with the injured tissue. Morphogens that are not innate to the body have a limited effect on the injured tissue due to their physical size and their limited mobility within CNS tissue. To be an effective treatment, the morphogens must be present at the injury site at a specific and constant concentration. miRNA has also been shown to affect neurogenesis by directing the differentiation of undifferentiated neural cells.
Implantation of neural tissue developed in vitro
A third method for treating CNS injuries is to artificially create tissue outside of the body to implant into the injury site. This method could treat injuries that consist of large cavities, where larger amounts of neural tissue need to be replaced and regenerated. Neural tissue is grown in vitro with neural stem or progenitor cells in a 3D scaffold, forming embryoid bodies (EBs). These EBs consist of a sphere of stem cells, where the inner cells are undifferentiated neural cells and the surrounding cells are increasingly more differentiated. 3D scaffolds are used to transplant tissue to the injury site and to make the appropriate interface between the artificial tissue and the brain tissue. The scaffolds must be biocompatible and biodegradable, fit the injury site, match existing tissue in elasticity and stiffness, and support the growing cells and tissues. The combination of directed stem cells and scaffolds that support the neural cells and tissues increases the survival of the stem cells at the injury site, increasing the efficacy of the treatment.
There are six different types of scaffolds being researched for use in this method of treating neural tissue injury:
Liquid hydrogels are cross-linked hydrophilic polymer chains; the neural stem cells are either grown on the surface of the gel or integrated into the gel during cross-linking of the polymer chains. The major drawback of liquid hydrogels is that they offer limited protection to the transplanted cells.
Supportive scaffolds are made from solid bead-shaped or microporous structures, and can act as carriers for the transplanted cells or for the growth factors that the stem cells secrete while they differentiate. The cells adhere to the surface of the matrix in 2D layers. Supportive scaffolds are easily transplanted into the brain injury site because of their size, and they provide a matrix that promotes cell adhesion and aggregation, thus supporting healthy cell culture.
Aligning scaffolds can be silk-based, polysaccharide-based, or based on other materials such as a collagen-rich hydrogel. These scaffolds are now enhanced with micro-patterns on the surface to promote neuronal outgrowth. They are primarily used for regeneration that needs to occur in a specific orientation, such as in spinal cord injuries.
Integrative scaffolds are mainly used to protect the transplanted cells from the mechanical forces they are exposed to during implantation into the injury site. These scaffolds also decrease the likelihood that inflammatory cells at the injury site will migrate into the scaffold alongside the stem cells. Blood vessels have been observed to grow through the scaffold, indicating that the scaffold and cells become integrated into the host tissue.
A combination of engineered scaffolds presents an option for a 3D scaffold that can have both the necessary patterns for cell adhesion and the flexibility to adapt to the ever-changing environment at the injury site. Decellularized ECM scaffolds are another option because they more closely mimic the native tissue, but these scaffolds can currently only be harvested from amputations and cadavers.
These 3D scaffolds can be fabricated using particulate leaching, gas foaming, fiber bonding, solvent casting, or electrospinning techniques; each technique creates a scaffold with properties different from those produced by the others.
The success of incorporating 3D scaffolds into the CNS has been shown to depend on the stage at which the cells have differentiated. Later-stage cells allow a more efficient implantation, while earlier-stage cells need to be exposed to factors that coerce them to differentiate and thus respond appropriately to the signals they will receive at the CNS injury site. Brain-derived neurotrophic factor is a potential co-factor for promoting the functional activation of ES cell-derived neurons at CNS injury sites.
PNS
Causes of PNS injury
Trauma to the PNS can cause damage as severe as a severance of the nerve, splitting it into a proximal and a distal section. The distal section degenerates over time due to inactivity, while the proximal end swells. Because the distal end does not degenerate right away and the swelling of the proximal end does not render it nonfunctional, methods to reestablish the connection between the two ends of the nerve are being investigated.
PNS treatments and research
Surgical reconnection
One method to treat PNS injury is surgical reconnection of the severed nerve by taking the two ends and suturing them together. When suturing the nerves together, the fascicles of the nerve are each reconnected, bridging the nerve back together. Though this method works for severances that leave a small gap between the proximal and distal nerve ends, it does not work over larger gaps because of the tension that must be placed on the nerve endings. This tension results in nerve degeneration, so the nerve cannot regenerate and form a functional neural connection.
Tissue grafts
Tissue grafts utilize nerves or other materials to bridge the two ends of the severed nerve. There are three categories of tissue grafts: autologous tissue grafts, nonautologous tissue grafts, and acellular grafts.
Autologous tissue grafts transplant nerves from a different part of the body of the patient to fill the gap between either end of the injured nerve. These nerves are typically cutaneous nerves, but other nerves have been researched as well with encouraging results. These autologous nerve grafts are the current gold standard for PNS nerve grafting because of the highly biocompatible nature of the autologous nerve graft, but there are issues concerning harvesting the nerve from the patients themselves and being able to store a large amount of autologous grafts for future use.
Nonautologous and acellular grafts (including ECM-based materials) are tissues that do not come from the patient, but instead are harvested from cadavers (known as allogeneic tissue) or animals (known as xenogeneic tissue). While these tissues have an advantage over autologous tissue grafts in that tissue does not need to be taken from the patient, difficulties arise from the potential for disease transmission and immunogenic problems. Methods of eliminating the immunogenic cells, leaving behind only the ECM components of the tissue, are currently being investigated to increase the efficacy of nonautologous tissue grafts.
Guidance
Guidance methods of PNS regeneration use nerve guide channels to help axons regrow along the correct path, and may direct growth factors secreted by both ends of the nerve to promote growth and reconnection. Guidance methods reduce scarring of the nerves, increasing the functionality of the nerves to transmit action potentials after reconnection. Two types of materials are used in guidance methods of PNS regeneration: natural-based materials and synthetic materials.
Natural-based materials are modified scaffolds stemming from ECM components and glycosaminoglycans. Laminin, collagen, and fibronectin, which are all ECM components, guide axonal development and promote neural stimulation and activity. Other molecules that have the potential to promote nerve repair are: hyaluronic acid, fibrinogen, fibrin gels, self-assembling peptide scaffolds, alginate, agarose, and chitosan.
Synthetic materials provide another option for tissue regeneration, as the graft's chemical and physical properties can be controlled. Since the properties of a material can be specified for the situation in which it is used, synthetic materials are an attractive option for PNS regeneration. Their use comes with certain requirements: the graft material must be easy to form into the necessary dimensions, biodegradable, sterilizable, tear-resistant, and easy to handle surgically, with a low risk of infection and a low inflammatory response. The material must also maintain the channel during nerve regeneration. Currently, the most commonly researched materials are polyesters, but biodegradable polyurethane, other polymers, and biodegradable glass are also being investigated. Other possibilities are conducting polymers and polymers biologically modified to promote axon growth and maintain the axon channel.
Neuroimmune enhancement through EVs
Extracellular vesicles (EVs) are bilayer-bound lipid particles that participate in intercellular communication by releasing a variety of substances, including nucleic acids, lipids, and proteins. Exosomes, microvesicles, and apoptotic bodies are the three primary forms; each has unique properties. EVs have the potential to be used as therapeutic delivery vehicles and diagnostic biomarkers, and play roles in immunological responses, cancer, tissue regeneration, and neurological diseases. Damaged neurons generate neuron-derived exosomes (NDEs), which can influence target cells by transferring a variety of cargos, including the Zika virus. NDEs are also linked to neurodegenerative illnesses. Immune cell exosomes (IEEs), produced by immune cells such as dendritic cells, macrophages, B cells, and T cells, influence immune responses and interact with other cells, and therefore have potential uses in immunotherapy and vaccine development. EVs have been shown to promote neuroimmune crosstalk, allowing for both local and distant tissue and cell communication.
Difficulty of research
Because so many factors contribute to the success or failure of neural tissue engineering, many difficulties arise in using it to treat CNS and PNS injuries. First, the therapy needs to be delivered to the site of the injury, which means the injury site must be accessed by surgery or drug delivery. Both of these methods have inherent risks and difficulties of their own, compounding the problems associated with the treatments. A second concern is keeping the therapy at the site of the injury. Stem cells have a tendency to migrate out of the injury site to other sections of the brain, making the therapy less effective than it would be if the cells remained at the injury site. Additionally, the delivery of stem cells and other morphogens to the site of injury can cause more harm than good if they induce tumorigenesis, inflammation, or other unforeseen effects. Finally, findings in the laboratory may not translate into practical clinical treatments: treatments that are successful in a lab, or even in an animal model of the injury, may not be effective in a human patient.
Related research
Modeling brain tissue development in vitro
Two models for brain tissue development are cerebral organoids and corticopoiesis. These models provide an in vitro model for normal brain development, but they can be manipulated to represent neural defects. Researchers can therefore use these models to study the mechanisms behind both healthy and malfunctioning development. These tissues can be made with either mouse embryonic stem cells (ESCs) or human ESCs. Mouse ESCs are cultured with an inhibitor of the protein Sonic Hedgehog to promote the development of dorsal forebrain and to study cortical fate; this method has been shown to produce axonal layers that mimic a broad range of cortical layers. Human ESC-derived tissues use pluripotent stem cells to form tissues on a scaffold, forming human EBs. These human ESC-derived tissues are formed by culturing human pluripotent EBs in a spinning bioreactor.
Targeted reinnervation
Targeted reinnervation is a method to reinnervate the neural connections in the CNS and PNS, specifically in paralyzed patients and amputees using prosthetic limbs. Currently, devices are being investigated that take in and record the electrical signals that are propagated through neurons in response to a person's intent to move. This research could shed light on how to reinnervate the neural connections between severed PNS nerves and the connections between the transplanted 3D scaffolds into the CNS.
References
Biological engineering
Neurology
Nervous system
Articles containing video clips | Neural tissue engineering | [
"Engineering",
"Biology"
] | 4,866 | [
"Organ systems",
"Biological engineering",
"Nervous system"
] |
11,038,534 | https://en.wikipedia.org/wiki/End-of-life%20care | End-of-life care (EOLC) is health care provided in the time leading up to a person's death. End-of-life care can be provided in the hours, days, or months before a person dies and encompasses care and support for a person's mental and emotional needs, physical comfort, spiritual needs, and practical tasks.
EoLC is most commonly provided at home, in the hospital, or in a long-term care facility with care being provided by family members, nurses, social workers, physicians, and other support staff. Facilities may also have palliative or hospice care teams that will provide end-of-life care services. Decisions about end-of-life care are often informed by medical, financial and ethical considerations.
In most developed countries, medical spending on people in the last twelve months of life makes up roughly 10% of total aggregate medical spending, while those in the last three years of life can cost up to 25%.
Medical
Advanced care planning
Advances in medicine in the last few decades have provided an increasing number of options to extend a person's life and highlighted the importance of ensuring that an individual's preferences and values for end-of-life care are honored. Advanced care planning is the process by which a person of any age is able to provide their preferences and ensure that their future medical treatment aligns with their personal values and life goals.
It is typically a continual process, with ongoing discussions about a patient's current prognosis and conditions as well as conversations about medical dilemmas and options. A person will typically have these conversations with their doctor and ultimately record their preferences in an advance healthcare directive. An advance healthcare directive is a legal document that either documents a person's decisions about desired treatment or indicates whom a person has entrusted to make their care decisions for them. The two main types of advanced directives are a living will and durable power of attorney for healthcare. A living will includes a person's decisions regarding their future care, most of which address resuscitation and life support but which may also cover a patient's preferences regarding hospitalization, pain control, and specific treatments that they may undergo in the future. A living will typically takes effect when a patient is terminally ill with low chances of recovery. A durable power of attorney for healthcare allows a person to appoint another individual to make healthcare decisions for them under a specified set of circumstances. Combined directives, such as the "Five Wishes", that include components of both the living will and durable power of attorney for healthcare, are increasingly being used.
Advanced care planning often includes preferences for CPR initiation, nutrition (tube feeding), as well as decisions about the use of machines to keep a person breathing, or support their heart or kidneys. Many studies have reported benefits to patients who complete advanced care planning, specifically noting the improved patient and surrogate satisfaction with communication and decreased clinician distress. However, there is a notable lack of empirical data about what outcome improvements patients experience, as there are considerable discrepancies in what constitutes advanced care planning and heterogeneity in the outcomes measured. Advanced care planning remains an underutilized tool for patients. Researchers have published data to support the use of new relationship-based and supported decision making models that can increase the use and maximize the benefit of advanced care planning.
End-of-life care conversations
End-of-life care conversations are part of the treatment planning process for terminally ill patients requiring palliative care; they involve a discussion of a patient's prognosis, specification of goals of care, and individualized treatment planning. A 2022 Cochrane review set out to assess the effectiveness of interpersonal communication interventions during end-of-life care. Research suggests that many patients prioritize proper symptom management, avoidance of suffering, and care that aligns with ethical and cultural standards. Specific conversations can include discussions about cardiopulmonary resuscitation (ideally occurring before the active dying phase, so as not to force the conversation during a medical crisis or emergency), place of death, organ donation, and cultural/religious traditions. As there are many factors involved in the end-of-life care decision-making process, the attitudes and perspectives of patients and families may vary. For example, family members may differ over whether life extension or life quality is the main goal of treatment. As it can be challenging for families in the grieving process to make timely decisions that respect the patient's wishes and values, having an established advanced care directive in place can prevent over-treatment, under-treatment, or further complications in treatment management.
Patients and families may also struggle to grasp the inevitability of death, and the differing risks and effects of medical and non-medical interventions available for end-of-life care. People might avoid discussing their end-of-life care, and often the timing and quality of these discussions can be poor. For example, conversations regarding end-of-life care between COPD patients and clinicians occur at a low frequency and often only when a person with COPD has advanced-stage disease. End-of-life care conversations and advanced care directives can help ensure patients receive the care they desire, preventing interventions that are not in accordance with their wishes as well as confusion and strain for family members.
In the case of critically ill babies, parents are able to participate more in decision making if they are presented with options to be discussed rather than recommendations by the doctor. Utilizing this style of communication also leads to less conflict with doctors and might help the parents cope better with the eventual outcomes.
Signs of dying
The National Cancer Institute in the United States advises that the presence of some of the following signs may indicate that death is approaching:
Drowsiness, increased sleep, and/or unresponsiveness (caused by changes in the patient's metabolism).
Confusion about time, place, and/or identity of loved ones; restlessness; visions of people and places that are not present; pulling at bed linen or clothing (caused in part by changes in the patient's metabolism).
Decreased socialization and withdrawal (caused by decreased oxygen to the brain, decreased blood flow, and mental preparation for dying).
Changes in breathing (indicating neurologic compromise and impending death) and accumulation of upper airway secretions (resulting in crackling and gurgling breath sounds).
Decreased need for food and fluids, and loss of appetite (caused by the body's need to conserve energy and its decreasing ability to use food and fluids properly).
Decreased oral intake and impaired swallowing (caused by general physical weakness and metabolic disturbances, including but not limited to hypercalcemia)
Loss of bladder or bowel control (caused by the relaxing of muscles in the pelvic area).
Darkened urine or decreased amount of urine (caused by slowing of kidney function and/or decreased fluid intake).
Skin becoming cool to the touch, particularly the hands and feet; skin may become bluish in color, especially on the underside of the body (caused by decreased circulation to the extremities).
Rattling or gurgling sounds while breathing, which may be loud (death rattle); breathing that is irregular and shallow; decreased number of breaths per minute; breathing that alternates between rapid and slow (caused by congestion from decreased fluid consumption, a buildup of waste products in the body, and/or a decrease in circulation to the organs).
Turning of the head toward a light source (caused by decreasing vision).
Increased difficulty controlling pain (caused by progression of the disease).
Involuntary movements (called myoclonus)
Increased heart rate
Hypertension followed by hypotension
Loss of reflexes in the legs and arms
Symptoms management
The following are some of the most common potential problems that can arise in the last days and hours of a patient's life:
Pain
Typically controlled with opioids, like morphine, fentanyl, hydromorphone or, in the United Kingdom, diamorphine. High doses of opioids can cause respiratory depression, and this risk increases with concomitant use of alcohol and other sedatives. Careful use of opioids is important to improve the patient's quality of life while avoiding overdoses.
Agitation
Delirium, terminal anguish, restlessness (e.g. thrashing, plucking, or twitching). Typically controlled using clonazepam or midazolam; antipsychotics such as haloperidol or levomepromazine may also be used instead of, or concomitantly with, benzodiazepines. Symptoms may also sometimes be alleviated by rehydration, which may reduce the effects of some toxic drug metabolites.
Respiratory tract secretions
Saliva and other fluids can accumulate in the oropharynx and upper airways when patients become too weak to clear their throats, leading to a characteristic gurgling or rattle-like sound ("death rattle"). While apparently not painful for the patient, the association of this symptom with impending death can create fear and uncertainty for those at the bedside. The secretions may be controlled using drugs such as hyoscine butylbromide, glycopyrronium, or atropine. Rattle may not be controllable if caused by deeper fluid accumulation in the bronchi or the lungs, such as occurs with pneumonia or some tumours.
Nausea and vomiting
Typically controlled using haloperidol, metoclopramide, ondansetron, cyclizine, or other anti-emetics (sometimes levomepromazine is used as a second-line agent to alleviate both agitation and nausea and vomiting).
Dyspnea (breathlessness)
Typically controlled with opioids, like morphine, fentanyl or, in the United Kingdom, diamorphine
Constipation
Low food intake and opioid use can lead to constipation which can then result in agitation, pain, and delirium. Laxatives and stool softeners are used to prevent constipation. In patients with constipation, the dose of laxatives will be increased to relieve symptoms. Methylnaltrexone is approved to treat constipation due to opioid use.
Other symptoms that may occur, and may be mitigated to some extent, include cough, fatigue, fever, and in some cases bleeding.
Medication administration
A subcutaneous injection is one preferred route of delivery when it has become difficult for patients to swallow or to take pills orally; if repeated medication is needed, a syringe driver (or infusion pump in the US) is often used to deliver a steady low dose of medication. In some settings, such as the home or hospice, sublingual routes of administration may be used for most prescriptions and medications.
Another means of medication delivery, available for use when the oral route is compromised, is a specialized catheter designed to provide comfortable and discreet administration of ongoing medications via the rectal route. The catheter was developed to make rectal access more practical and provide a way to deliver and retain liquid formulations in the distal rectum so that health practitioners can leverage the established benefits of rectal administration. Its small flexible silicone shaft allows the device to be placed safely and remain comfortably in the rectum for repeated administration of medications or liquids. The catheter has a small lumen, allowing for small flush volumes to get medication to the rectum. Small volumes of medications (under 15mL) improve comfort by not stimulating the defecation response of the rectum and can increase the overall absorption of a given dose by decreasing pooling of medication and migration of medication into more proximal areas of the rectum where absorption can be less effective.
Integrated pathways
Integrated care pathways are an organizational tool used by healthcare professionals to clearly define the roles of each team-member and coordinate how and when care will be provided. These pathways are utilized to ensure best practices are being utilized for end-of-life care, such as evidence-based and accepted health care protocols, and to list the required features of care for a specific diagnosis or clinical problem. Many institutions have a predetermined pathway for end of life care, and clinicians should be aware of and make use of these plans when possible.
In the United Kingdom, end-of-life care pathways are based on the Liverpool Care Pathway. Originally developed to provide evidence based care to dying cancer patients, this pathway has been adapted and used for a variety of chronic conditions at clinics in the UK and internationally. Despite its increasing popularity, the 2016 Cochrane Review, which only analyzed one trial, showed limited evidence in the form of high-quality randomized clinical trials to measure the effectiveness of end-of-life care pathways on clinical outcomes, physical outcomes, and emotional/psychological outcomes.
The BEACON Project group developed an integrated care pathway entitled the Comfort Care Order Set, which delineates care for the last days of life in either a hospice or acute care inpatient setting. This order set was implemented and evaluated in a multisite system throughout six United States Veterans Affairs Medical Centers, and the study found increased orders for opioid medication post-pathway implementation, as well as more orders for antipsychotic medications, more patients undergoing palliative care consultations, more advance directives, and increased sublingual drug administration. The intervention did not, however, decrease the proportion of deaths that occurred in an ICU setting or the utilization of restraints around death.
Home-based end-of-life care
While not possible for every person needing care, surveys of the general public suggest most people would prefer to die at home. In the period from 2003 to 2017, the number of deaths at home in the United States increased from 23.8% to 30.7%, while the number of deaths in the hospital decreased from 39.7% to 29.8%. Home-based end-of-life care may be delivered in a number of ways, including by an extension of a primary care practice, by a palliative care practice, and by home care agencies such as Hospice. High-certainty evidence indicates that implementation of home-based end-of-life care programs increases the number of adults who will die at home and slightly improves their satisfaction at a one-month follow-up. There is low-certainty evidence that there may be very little or no difference in satisfaction of the person needing care for longer term (6 months). The number of people who are admitted to hospital during an end-of-life care program is not known. In addition, the impact of home-based end-of-life care on caregivers, healthcare staff, and health service costs is not clear, however, there is weak evidence to suggest that this intervention may reduce health care costs by a small amount.
Disparities in end-of-life care
Not all groups in society have good access to end-of-life care. A systematic review conducted in 2021 investigated the end of life care experiences of people with severe mental illness, including those with schizophrenia, bipolar disorder, and major depressive disorder. The research found that individuals with a severe mental illness were unlikely to receive the most appropriate end of life care. The review recommended that there needs to be close partnerships and communication between mental health and end of life care systems, and these teams need to find ways to support people to die where they choose. More training, support and supervision needs to be available for professionals working in end of life care; this could also decrease prejudice and stigma against individuals with severe mental illness at the end of life, notably in those who are homeless. In addition, studies have shown that minority patients face several additional barriers to receiving quality end-of-life care. Minority patients are prevented from accessing care at an equitable rate for a variety of reasons including: individual discrimination from caregivers, cultural insensitivity, racial economic disparities, as well as medical mistrust.
Non-medical
Family and friends
Family members are often uncertain as to what they should be doing when a person is dying. Many gentle, familiar daily tasks, such as combing hair, putting lotion on delicate skin, and holding hands, are comforting and provide a meaningful method of communicating love to a dying person.
Family members may be suffering emotionally due to the impending death. Their own fear of death may affect their behavior. They may feel guilty about past events in their relationship with the dying person or feel that they have been neglectful. These common emotions can result in tension, fights between family members over decisions, worsened care, and sometimes, in what medical professionals call the "Daughter from California syndrome", a long-absent family member arriving while a patient is dying and demanding inappropriately aggressive care.
Family members may also be coping with unrelated problems, such as physical or mental illness, emotional and relationship issues, or legal difficulties. These problems can limit their ability to be involved, civil, helpful, or present.
Spirituality and religion
Spirituality is thought to be of increased importance to an individual's wellbeing during a terminal illness or toward the end-of-life. Pastoral/spiritual care has a particular significance in end of life care, and is considered an essential part of palliative care by the WHO. In palliative care, responsibility for spiritual care is shared by the whole team, with leadership given by specialist practitioners such as pastoral care workers. The palliative care approach to spiritual care may, however, be transferred to other contexts and to individual practice.
Spiritual, cultural, and religious beliefs may influence or guide patient preferences regarding end-of-life care. Healthcare providers caring for patients at the end of life can engage family members and encourage conversations about spiritual practices to better address the different needs of diverse patient populations. Studies have shown that people who identify as religious also report higher levels of well-being. Religion has also been shown to be inversely correlated with depression and suicide. While religion provides some benefits to patients, there is some evidence of increased anxiety and other negative outcomes in some studies. While spirituality has been associated with less aggressive end-of-life care, religion has been associated with an increased desire for aggressive care in some patients. Despite these varied outcomes, spiritual and religious care remains an important aspect of care for patients. Studies have shown that barriers to providing adequate spiritual and religious care include a lack of cultural understanding, limited time, and a lack of formal training or experience.
Many hospitals, nursing homes, and hospice centers have chaplains who provide spiritual support and grief counseling to patients and families of all religious and cultural backgrounds.
Ageism
The World Health Organization defines ageism as "the stereotypes (how we think), prejudice (how we feel) and discrimination (how we act) towards others or ourselves based on age." A systematic review in 2017 showed that negative attitudes amongst nurses towards older individuals were related to the characteristics of the older adults and their demands. This review also highlighted how nurses who had difficulty giving care to their older patients perceived them as "weak, disabled, inflexible, and lacking cognitive or mental ability". Another systematic review, considering structural and individual-level effects of ageism, found that ageism led to significantly worse health outcomes in 95.5% of the studies and in 74.0% of the 1,159 ageism–health associations examined. Studies have also shown that one's own perception of aging and internalized ageism negatively affect health. The same review included self-perception of aging in its analysis and concluded that 93.4% of the 142 associations involving self-perceptions of aging showed significant links between ageism and worse health.
Attitudes of healthcare professionals
End-of-life care is an interdisciplinary endeavor involving physicians, nurses, physical therapists, occupational therapists, pharmacists and social workers. Depending on the facility and level of care needed, the composition of the interprofessional team can vary. Health professional attitudes about end-of-life care depend in part on the provider's role in the care team.
Physicians generally have favorable attitudes towards Advance Directives, which are a key facet of end-of-life care. Medical doctors who have more experience and training in end-of-life care are more likely to cite comfort in having end-of-life-care discussions with patients. Those physicians who have more exposure to end-of-life care also have a higher likelihood of involving nurses in their decision-making process.
A systematic review assessing end-of-life conversations between heart failure patients and healthcare professionals evaluated physician attitudes and preferences towards end-of-life care conversations. The study found that physicians found difficulty initiating end-of-life conversations with their heart failure patients, due to physician apprehension over inducing anxiety in patients, the uncertainty in a patient's prognosis, and physicians awaiting patient cues to initiate end-of-life care conversations.
Although physicians make official decisions about end-of-life care, nurses spend more time with patients and often know more about patient desires and concerns. In a Dutch national survey study of attitudes of nursing staff about involvement in medical end-of-life decisions, 64% of respondents thought patients preferred talking with nurses rather than physicians, and 75% desired to be involved in end-of-life decision making.
By country
Canada
In 2012, Statistics Canada's General Social Survey on Caregiving and care receiving found that 13% of Canadians (3.7 million) aged 15 and older reported that at some point in their lives they had provided end-of-life or palliative care to a family member or friend. For those in their 50s and 60s, the percentage was higher, with about 20% reporting having provided palliative care to a family member or friend. Women were also more likely to have provided palliative care over their lifetimes, with 16% of women reporting having done so, compared with 10% of men. These caregivers helped terminally ill family members or friends with personal or medical care, food preparation, managing finances or providing transportation to and from medical appointments.
United Kingdom
End of life care has been identified by the UK Department of Health as an area where quality of care has previously been "very variable," and which has not had a high profile in the NHS and social care. To address this, a national end of life care programme was established in 2004 to identify and propagate best practice, and a national strategy document published in 2008. The Scottish Government has also published a national strategy.
In 2006 just over half a million people died in England, about 99% of them adults over the age of 18, and almost two-thirds adults over the age of 75. About three-quarters of deaths could be considered "predictable" and followed a period of chronic illness – for example heart disease, cancer, stroke, or dementia. In all, 58% of deaths occurred in an NHS hospital, 18% at home, 17% in residential care homes (most commonly people over the age of 85), and about 4% in hospices. However, a majority of people would prefer to die at home or in a hospice, and according to one survey less than 5% would rather die in hospital. A key aim of the strategy therefore is to reduce the needs for dying patients to have to go to hospital and/or to have to stay there; and to improve provision for support and palliative care in the community to make this possible. One study estimated that 40% of the patients who had died in hospital had not had medical needs that required them to be there.
In 2015 and 2010, the UK ranked highest globally in a study of end-of-life care. The 2015 study said "Its ranking is due to comprehensive national policies, the extensive integration of palliative care into the National Health Service, a strong hospice movement, and deep community engagement on the issue." The studies were carried out by the Economist Intelligence Unit and commissioned by the Lien Foundation, a Singaporean philanthropic organisation.
The 2015 National Institute for Health and Care Excellence guidelines introduced religion and spirituality among the factors which physicians shall take into account for assessing palliative care needs. In 2016, the UK Minister of Health signed a document which declared people "should have access to personalised care which focuses on the preferences, beliefs and spiritual needs of the individual." As of 2017, more than 47% of the 500,000 deaths in the UK occurred in hospitals.
In 2021 the National Palliative and End of Life Care Partnership published their six ambitions for 2021–26. These include fair access to end of life care for everyone regardless of who they are, where they live or their circumstances, and the need to maximise comfort and wellbeing. Informed and timely conversations are also highlighted.
Research funded by the UK's National Institute for Health and Care Research (NIHR) has addressed these areas of need. Examples highlight inequalities faced by several groups and offers recommendations. These include the need for close partnership between services caring for people with severe mental illness, improved understanding of barriers faced by Gypsy, Traveller and Roma communities, the provision of flexible palliative care services for children from ethnic minorities or deprived areas.
Other research suggests that giving nurses and pharmacists easier access to electronic patient records about prescribing could help people manage their symptoms at home. A named professional to support and guide patients and carers through the healthcare system could also improve the experience of care at home at the end of life. A synthesised review looking at palliative care in the UK created a resource showing which services were available and grouped them according to their intended purpose and benefit to the patient. They also stated that currently in the UK palliative services are only available to patients with a timeline to death, usually 12 months or less. They found these timelines to often be inaccurate and created barriers to patients accessing appropriate services. They call for a more holistic approach to end of life care which is not restricted by arbitrary timelines.
United States
As of 2019, physician-assisted dying is legal in eight states (California, Colorado, Hawaii, Maine, New Jersey, Oregon, Vermont, Washington) and Washington D.C.
Spending on those in the last twelve months accounts for 8.5% of total aggregate medical spending in the United States.
When considering only those aged 65 and older, estimates show that about 27% of Medicare's annual $327 billion budget ($88 billion) in 2006 went to care for patients in their final year of life. For the over-65s, between 1992 and 1996, spending on those in their last year of life represented 22% of all medical spending, 18% of all non-Medicare spending, and 25% of all Medicaid spending for the poor. These percentages appear to be falling over time: in 2008, 16.8% of all medical spending on the over-65s went to those in their last year of life.
Predicting death is difficult, which has affected estimates of spending in the last year of life; when controlling for spending on patients who were predicted as likely to die, Medicare spending was estimated at 5% of the total.
Belgium
Belgium's first palliative home care team was established in 1987, and the first palliative care unit and hospital care support teams were established in 1991. A strong legal and structural framework for palliative care was established in the 1990s, which divided the country into areas in which palliative care networks were responsible for coordinating palliative services. Home care was provided by palliative support teams, and each hospital and care home was recognized to have a palliative support team. In 1999, Belgium ranked second (after the United Kingdom) in the number of palliative care beds per capita. In 2001, there was an active palliative care support team in 72% of hospitals and a specialized nurse or active support team in 50% of nursing homes. Government resources for palliative care doubled in 2000, and in 2007 Belgium was ranked third out of 52 countries worldwide in terms of resources for palliative care. According to the Lien Foundation report, Belgium ranks fifth (out of 40 countries worldwide) in overall quality of death.
See also
Advance health care directive
Death midwife
Liverpool Care Pathway
My body, my choice
Children's palliative care
Physician assisted suicide
Right to die
Robert Martensen
References
Further reading
External links
Bioethics
Medical aspects of death
Caregiving | End-of-life care | [
"Technology"
] | 5,900 | [
"Bioethics",
"Ethics of science and technology"
] |
11,039,407 | https://en.wikipedia.org/wiki/NGC%201907 | NGC 1907 is an open star cluster around 4,500 light years from Earth. It contains around 30 stars and is over 500 million years old. With a magnitude of 8.2 it is visible in the constellation Auriga.
Sources
Kopernik.org
Glyphweb.com
External links
Auriga
Open clusters
1907 | NGC 1907 | [
"Astronomy"
] | 69 | [
"Auriga",
"Constellations"
] |
11,039,449 | https://en.wikipedia.org/wiki/Galactose-1-phosphate%20uridylyltransferase%20deficiency | Galactose-1-phosphate uridylyltransferase deficiency (classic galactosemia) is the most common type of galactosemia, an inborn error of galactose metabolism, caused by a deficiency of the enzyme galactose-1-phosphate uridylyltransferase. It is an autosomal recessive metabolic disorder that can cause liver disease and death if untreated. Treatment of galactosemia is most successful if initiated early and includes dietary restriction of lactose intake. Because early intervention is key, galactosemia is included in newborn screening programs in many areas. On initial screening, which often involves measuring the concentration of galactose in blood, classic galactosemia may be indistinguishable from other inborn errors of galactose metabolism, including galactokinase deficiency and galactose epimerase deficiency. Further analysis of metabolites and enzyme activities are needed to identify the specific metabolic error.
Symptoms and signs
In undiagnosed and untreated children, the accumulation of precursor metabolites due to the deficient activity of galactose 1-phosphate uridylyltransferase (GALT) can lead to feeding problems, failure to thrive, liver damage, bleeding, and infections. The first presenting symptom in an infant is often prolonged jaundice. Without intervention in the form of galactose restriction, infants can develop hyperammonemia and sepsis, possibly leading to shock. The accumulation of galactitol and subsequent osmotic swelling can lead to cataracts which are similar to those seen in galactokinase deficiency. Long-term consequences of continued galactose intake can include developmental delay, developmental verbal dyspraxia, and motor abnormalities. Galactosemic females frequently suffer from ovarian failure, regardless of treatment in the form of galactose restriction.
Cause
Lactose is a disaccharide consisting of glucose and galactose. After the ingestion of lactose, most commonly from breast milk for an infant or from cow's milk and other animal milks, the enzyme lactase hydrolyzes the sugar into its monosaccharide constituents, glucose and galactose. In the first step of galactose metabolism, galactose is converted to galactose-1-phosphate (Gal-1-P) by the enzyme galactokinase. Gal-1-P is converted to uridine diphosphate galactose (UDP-galactose) by the enzyme galactose-1-phosphate uridylyltransferase, with UDP-glucose acting as the UDP donor. UDP-galactose can then be converted to lactose by the enzyme lactose synthase, or to UDP-glucose by UDP-galactose epimerase (GALE).
In classic galactosemia, galactose-1-phosphate uridylyltransferase activity is reduced or absent, leading to an accumulation of the precursors galactose, galactitol, and Gal-1-P. The pattern of elevated precursors can be used to differentiate GALT deficiency from galactokinase deficiency, as Gal-1-P is typically not elevated in galactokinase deficiency.
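The differential logic just described (Gal-1-P elevated in GALT deficiency but not in galactokinase deficiency) can be made concrete with a short sketch. The Python snippet below is purely illustrative, not a biochemical simulation or a clinical tool: it encodes the pathway steps named above as a simple list and reports which upstream metabolites would be expected to accumulate when a given enzyme is deficient (galactitol, formed from excess galactose by a side reaction, is omitted for simplicity).

```python
# Minimal sketch of the Leloir pathway described above; it only captures the
# order of conversions, not kinetics or concentrations.
LELOIR_PATHWAY = [
    # (substrate, enzyme, product)
    ("galactose", "galactokinase (GALK)", "galactose-1-phosphate"),
    ("galactose-1-phosphate", "galactose-1-phosphate uridylyltransferase (GALT)", "UDP-galactose"),
    ("UDP-galactose", "UDP-galactose 4'-epimerase (GALE)", "UDP-glucose"),
]

def accumulated_precursors(deficient_enzyme: str) -> list[str]:
    """Return the metabolites expected to accumulate upstream of a blocked step."""
    accumulated = []
    for substrate, enzyme, _product in LELOIR_PATHWAY:
        accumulated.append(substrate)
        if deficient_enzyme in enzyme:
            # Everything up to and including the blocked enzyme's substrate piles up.
            return accumulated
    return []  # enzyme not part of this simplified pathway

# Classic galactosemia (GALT deficiency): both galactose and Gal-1-P accumulate.
print(accumulated_precursors("GALT"))   # ['galactose', 'galactose-1-phosphate']
# Galactokinase deficiency: galactose accumulates but Gal-1-P does not,
# which is the distinction used above to tell the two disorders apart.
print(accumulated_precursors("GALK"))   # ['galactose']
```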
Genetics
All forms of galactosemia are inherited in an autosomal recessive manner, meaning individuals affected with classic galactosemia must have inherited a mutated copy of the GALT gene from both parents. Each child from two carrier parents would have a 25% chance of being affected, a 50% chance of being a carrier, and a 25% chance of inheriting normal versions of the gene from each parent.
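The 25%/50%/25% split quoted above follows from enumerating the four equally likely allele combinations produced by two carrier parents (a Punnett square). The short sketch below is a hypothetical illustration of that counting argument; the genotype labels "G" (functional GALT allele) and "g" (mutated allele) are chosen here only for clarity and are not standard nomenclature.

```python
from itertools import product
from collections import Counter

def offspring_genotype_fractions(parent1: str, parent2: str) -> dict[str, float]:
    """Enumerate equally likely allele combinations from two parents.

    Each parent is written as a two-letter genotype, e.g. 'Gg' for a carrier.
    """
    combos = [''.join(sorted(pair)) for pair in product(parent1, parent2)]
    counts = Counter(combos)
    total = len(combos)
    return {genotype: count / total for genotype, count in counts.items()}

# Two carrier (Gg) parents:
print(offspring_genotype_fractions("Gg", "Gg"))
# {'GG': 0.25, 'Gg': 0.5, 'gg': 0.25}
# i.e. 25% unaffected non-carriers, 50% carriers, 25% affected with classic galactosemia.
```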
There are several variants in the GALT gene, which have different levels of residual enzyme activity. A patient homozygous for one of the severe mutations in the GALT gene (commonly referred to as G/G) will typically have less than 5% of the enzyme activity expected in an unaffected patient. Duarte galactosemia is caused by mutations that produce an unstable form of the GALT enzyme, with reduced promoter expression. Patients who are homozygous for Duarte mutations (D/D) will have reduced levels of enzyme activity compared to normal controls, but can often maintain a normal diet. Compound heterozygotes (D/G) will often be detected by newborn screening and treatment is based on the extent of residual enzyme activity.
Diagnosis
In most regions, galactosemia is diagnosed as a result of newborn screening, most commonly by determining the concentration of galactose in a dried blood spot. Some regions will perform a second-tier test of GALT enzyme activity on samples with elevated galactose, while others perform both GALT and galactose measurements. While awaiting confirmatory testing for classic galactosemia, the infant is typically fed a soy-based formula, as human and cow milk contains galactose as a component of lactose. Confirmatory testing would include measurement of enzyme activity in red blood cells, determination of Gal-1-P levels in the blood, and mutation testing. The differential diagnosis for elevated galactose concentrations in blood on a newborn screening result can include other disorders of galactose metabolism, including galactokinase deficiency and galactose epimerase deficiency. Enzyme assays are commonly done using fluorometric detection or older radioactively labeled substrates.
Treatment
There is no cure for GALT deficiency; in the most severely affected patients, treatment involves a lifelong galactose-free diet. Early identification and implementation of a modified diet greatly improve the outcome for patients. The extent of residual GALT enzyme activity determines the degree of dietary restriction. Patients with higher levels of residual enzyme activity can typically tolerate higher levels of galactose in their diets. As patients get older, dietary restriction is often relaxed. With the increased identification of patients and their improving outcomes, the management of patients with galactosemia in adulthood is still being understood.
After diagnosis, patients are often supplemented with calcium and vitamin D3. Long-term manifestations of the disease, including ovarian failure in females, ataxia, and growth delays, are not fully understood. Routine monitoring of patients with GALT deficiency includes determining metabolite levels (galactose-1-phosphate in red blood cells and galactitol in urine) to measure the effectiveness of and adherence to dietary therapy, ophthalmologic examination for the detection of cataracts, and assessment of speech, with the possibility of speech therapy if developmental verbal dyspraxia is evident.
Animal models
Gal-1-P is assumed to be a toxic agent, since inhibition of galactokinase prevents toxicity in models of the disease, although this is controversial for Drosophila models. Phosphate depletion as a consequence of Gal-1-P accumulation has also been proposed as a mechanism of toxicity in yeast models.
References
External links
Inborn errors of carbohydrate metabolism
Autosomal recessive disorders | Galactose-1-phosphate uridylyltransferase deficiency | [
"Chemistry"
] | 1,459 | [
"Inborn errors of carbohydrate metabolism",
"Carbohydrate metabolism"
] |
11,039,790 | https://en.wikipedia.org/wiki/Animal | Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia (). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from to . They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology.
The animal kingdom is divided into five infrakingdoms/superphyla, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the infrakingdom Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large superphyla: the protostomes, which includes organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha have an uncertain position within Bilateria.
Animals first appear in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Earlier evidence of animals is still controversial; the sponge-like organism Otavia has been dated back to the Tonian period at the start of the Neoproterozoic, but its identity as an animal is heavily contested. Nearly all modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa.
Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets and as working animals for transportation, and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sports, trophies or profits. Non-human animals are also an important cultural element of human evolution, having appeared in cave arts and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports.
Etymology
The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis, 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετά (metá) 'after' (in biology, the prefix meta- stands for 'later') and ζῷα (zôia) 'animals', plural of ζῷον (zôion) 'animal'.
Characteristics
Animals have several characteristics that set them apart from other living things. Animals are eukaryotic and multicellular. Unlike plants and algae, which produce their own nutrients, animals are heterotrophic, feeding on organic material and digesting it internally. With very few exceptions, animals respire aerobically. All animals are motile (able to spontaneously move their bodies) during at least part of their life cycle, but some animals, such as sponges, corals, mussels, and barnacles, later become sessile. The blastula is a stage in embryonic development that is unique to animals, allowing cells to be differentiated into specialised tissues and organs.
Structure
All animals are composed of cells, surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised, making the formation of complex structures possible. This may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Animal cells uniquely possess the cell junctions called tight junctions, gap junctions, and desmosomes.
With few exceptions—in particular, the sponges and placozoans—animal bodies are differentiated into tissues. These include muscles, which enable locomotion, and nerve tissues, which transmit signals and coordinate the body. Typically, there is also an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians).
Reproduction and development
Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs.
Repeated instances of mating with a close relative during sexual reproduction generally leads to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding.
Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids.
Ecology
Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, who often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles which mainly eat sponges.
Most animals rely on biomass and bioenergy produced by plants and phytoplanktons (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria.
Animals evolved in the sea. Lineages of arthropods colonised land around the same time as land plants, probably between 510 and 471 million years ago during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat tolerant; very few of them can survive at constantly high temperatures or in the most extreme cold deserts of continental Antarctica.
Diversity
Size
The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus which may have reached 39 metres. Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown.
Numbers and habitats of major phyla
The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011.
Evolutionary origin
Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges based on molecular clock estimates for the origin of 24-ipc production in both groups. Analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record.
The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments.
Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do.
Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges.
Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms. However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures.
Phylogeny
External phylogeny
Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa.
Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing the external phylogeny shown in the cladogram. Uncertainty of relationships is indicated with dashed lines. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla.
Internal phylogeny
The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of hox genes in Placozoa shows that they were once more complex.
The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented cladograms for both hypotheses and favoured the sponge-sister view; their ctenophore-sister tree simply interchanges the places of ctenophores and sponges.
Conversely, a 2023 study by Darrin Schultz and colleagues used ancient gene linkages to support a ctenophore-sister phylogeny.
Non-bilaterians
Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food.
The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm.
The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research.
Bilateria
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, which have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria is shown below.
Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant lineages have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures.
Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians.
Protostomes and deuterostomes
Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage.
Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm.
The main deuterostome phyla are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals.
The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification
In the classical era, Aristotle divided animals, based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about.
In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes ('a chaotic mess') and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians.
In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860.
In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia.
In human culture
Practical uses
The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined.
Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world. Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture.
Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were first developed in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin.
People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts.
A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most commonly kept pets are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own.
A wide variety of terrestrial and aquatic animals are hunted for sport.
Symbolic uses
The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul.
Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros, and George Stubbs's horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies.
Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
See also
Animal coloration
Ethology
Lists of organisms by population
World Animal Day, observed on 4 October
Notes
References
External links
Tree of Life Project.
Animal Diversity Web – University of Michigan's database of animals
Wildscreen Arkive – multimedia database of endangered/protected species
Animals
Animals
Cryogenian first appearances
Taxa named by Carl Linnaeus
Biology terminology | Animal | ["Biology"] | 5,975 | ["Eukaryotes", "Animals", "nan"] |
11,040,097 | https://en.wikipedia.org/wiki/Calo%20tester | The Calo tester, also known as a ball crater or coating thickness tester, is a simple and inexpensive piece of equipment used to measure the thickness of coatings. Coatings with thicknesses typically between 0.1 to 50 micrometres, such as Physical Vapor Deposition (PVD) coatings or Chemical Vapor Deposition (CVD) coatings, are used in many industries to improve the surface properties of tools and components.
The Calo tester is also used to measure the amount of coating wear after a wear test carried out using a Pin-on-Disc Tester.
The Calo tester consists of a holder for the surface to be tested and a steel sphere of known diameter that is rotated against the surface by a rotating shaft connected to a motor whilst diamond paste is applied to the contact area. The sphere is rotated for a short period of time (less than 20 seconds for a 0.1 to 5 micrometre thickness) but due to the abrasive nature of the diamond paste this is sufficient time to wear a crater through thin coatings.
Calculating coating thickness using the Calo tester
An optical microscope is used to take two measurements across the crater after the Calo test, and the coating thickness is calculated using a simple geometrical equation (valid when the crater is much smaller than the sphere),
t = x·y / d
Where
t = coating thickness,
d = diameter of the sphere,
x = difference between the radius of the crater and the radius of the part of the crater at the bottom of the coating, and
x + y = diameter of the crater.
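For illustration, the short calculation below is a sketch, not part of the original article: the ball diameter and the two crater measurements are made-up example values. It applies the t = x·y/d approximation alongside the exact spherical-cap geometry for comparison.

```python
import math

def calo_thickness_um(crater_diameter_mm, inner_diameter_mm, ball_diameter_mm):
    """Estimate coating thickness (in micrometres) from Calo test measurements.

    crater_diameter_mm: outer diameter of the wear crater (x + y in the text)
    inner_diameter_mm:  diameter of the inner circle where the crater reaches
                        the substrate (the bottom of the coating)
    ball_diameter_mm:   diameter of the grinding sphere (d)
    """
    a = crater_diameter_mm / 2.0         # outer crater radius
    b = inner_diameter_mm / 2.0          # inner crater radius
    x = a - b                            # width of the ring worn through the coating
    y = a + b                            # so that x + y equals the crater diameter
    t_approx = x * y / ball_diameter_mm  # t = x*y/d, shallow-crater approximation
    r = ball_diameter_mm / 2.0
    t_exact = math.sqrt(r**2 - b**2) - math.sqrt(r**2 - a**2)  # exact geometry
    return t_approx * 1000.0, t_exact * 1000.0                 # mm -> micrometres

# Example: a 30 mm ball, crater outer diameter 0.80 mm, inner diameter 0.60 mm
# gives a thickness of roughly 2.3 micrometres by both formulas.
print(calo_thickness_um(0.80, 0.60, 30.0))
```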
References
External links
www.pvd-coatings.co.uk on Coating Thickness Tester
Determination of layer thickness with a spherical cap grinder (Calotest)
Industrial processes
Measuring instruments
Physical vapor deposition | Calo tester | ["Technology", "Engineering"] | 343 | ["Measuring instruments"] |
11,040,701 | https://en.wikipedia.org/wiki/HAT-P-2b | HAT-P-2b is an extrasolar planet detected by the HATNet Project in May 2007. It orbits a class F star HAT-P-2, (bigger and hotter than the Sun), located about 420 light-years away in the constellation Hercules.
The planet is officially named Magor. The name was selected in the NameExoWorlds campaign by Hungary, during the 100th anniversary of the International Astronomical Union. Magor was a legendary ancestor of the Magyar people and the Hungarian nation, and brother of Hunor (name of the star HAT-P-2).
Physical properties
The planet's mass has been estimated to be 8.7 times that of Jupiter, while its diameter is 1.157 times Jupiter's. Its small size, despite the bloating of the planet's atmosphere, is caused by the planet's strong gravity. The planetary atmosphere indeed has the smallest scale height, equal to 26 km, among exoplanets with measurable atmospheres as of 2021.
This indicates its mean density is twice that of Earth and its surface gravity is approximately 24 times that of Earth, almost equal to that of the Sun.
In addition to heat from its primary star, tidal heating is thought to have played a significant role in this planet's evolution.
Orbit
The planetary orbital period is 5 days 15 hours, and its inclination is such that it crosses directly in front of the star as viewed from Earth. The orbit is very eccentric, ranging from about 4.90 million to 15.36 million kilometres from the star.
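As a rough consistency check (a sketch, not from the article), the quoted periapsis and apoapsis distances imply the orbital eccentricity and semi-major axis, and Kepler's third law with an assumed stellar mass of about 1.36 solar masses (typical for a mid-F dwarf, and not stated in the article) gives a period close to the quoted 5 days 15 hours.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

r_peri = 4.90e9      # periapsis distance, m (4.90 million km)
r_apo = 15.36e9      # apoapsis distance, m (15.36 million km)

a = (r_peri + r_apo) / 2.0               # semi-major axis
e = (r_apo - r_peri) / (r_apo + r_peri)  # eccentricity

m_star = 1.36 * M_SUN                    # assumed host-star mass (not in article)
period_days = 2.0 * math.pi * math.sqrt(a**3 / (G * m_star)) / 86400.0

print(f"a = {a / AU:.4f} AU, e = {e:.3f}, P = {period_days:.2f} d")
# roughly a = 0.068 AU, e = 0.52, P = 5.5 d
```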
As of August 2008, the most recent measurement of HAT-P-2b's Rossiter–McLaughlin effect, and hence of its spin-orbit angle, was that of Winn in 2007, who obtained +1 ± 13 degrees; Loeillet disputed this result in 2008. A 2012 study determined that the planetary orbit is probably aligned with the equatorial plane of the star, with a misalignment of about 9°.
Other planets in the system
It has been suggested that there is a second outer planet perturbing HAT-P-2b. In 2023, the presence of a second planet, HAT-P-2c, was confirmed.
References
External links
The Extrasolar Planets Encyclopaedia
The Israeli Team From Tel Aviv University
Hercules (constellation)
Hot Jupiters
Transiting exoplanets
Giant planets
Exoplanets discovered in 2007
Exoplanets with proper names
Exoplanets discovered by HATNet | HAT-P-2b | ["Astronomy"] | 506 | ["Hercules (constellation)", "Constellations"] |
11,040,776 | https://en.wikipedia.org/wiki/Sirius%20visualization%20software | Sirius is a molecular modelling and analysis system developed at San Diego Supercomputer Center. Sirius is designed to support advanced user requirements that go beyond simple display of small molecules and proteins. Sirius supports high quality interactive 3D graphics, structure building, displaying protein or DNA primary sequences, access to remote data sources, and visualizing molecular dynamics trajectories. It can be used for scientific visualization and analysis, and chemistry and biology instruction.
This software is no longer supported as of 2011.
Key features
Sirius supports a variety of applications with a set of features, including:
Building and editing chemical structures using a library of fragments
Protein structure and sequence alignment
Command line interpreter and scripting support fully compatible with extant RasMol scripts
Full support for molecular dynamics trajectory visualizing
BLAST search directly in Protein Data Bank and Uniprot databases
Ability to move parts of the loaded data while freezing the rest
Interactive calculation of hydrogen bonding, steric clashes, Ramachandran plots
Support for all major structure and sequence formats
Bundled POV-Ray for creating photorealistic images
Integrated selection and coloring across individual visualizing components
Sirius is based on molecular graphics code and data structures developed as a part of the Molecular Biology Toolkit.
RasMol-compatible scripting
Sirius features a command line interpreter that can be used to quickly manipulate structure appearance and orientation. The set of commands has been patterned after RasMol, so it's fully compatible with extant scripts. Added commands introduced in Sirius provide support for manipulating multiple structures loaded at the same time, and enable more flexible selection.
Extant RasMol scripts can be imported and run within Sirius to produce high quality representations of encoded molecular scenes. Since RasMol uses a coordinate system that differs from that of Sirius, internal conversion is performed when RasMol scripts are imported, so that any orientation changes are shown correctly. Any manually entered commands, however, are executed according to the Sirius coordinate system.
Sirius supports several predefined atom-residue sets and color schemes, allows editing of scripts using the Command Panel interface, and logical operators and parentheses can be used to create complex selection commands.
Visualizing molecular dynamics trajectories
Sirius contains a full-featured molecular dynamics visualizing component. It can read output files from AMBER and CHARMM simulations, including compressed and AMBER out files. RMSD changes along the trajectory can be calculated using user-defined atom subsets and displayed in an interactively updated graph. In order to reduce memory requirements, large multifile simulations may be loaded in a buffered mode. If a simulation involves changes in protein fold, Sirius can be set to track and recompute displayed secondary structure features in real time, which provides a convenient way to observe transformations of the structure. The full trajectory or selected frames can be exported as QuickTime video or a set of POV-Ray scene snapshots that can later be converted to a high quality movie.
Access and download
Sirius is distributed freely from the project website to individuals affiliated with academic and non-profit organizations. Native desktop application installers are available for Windows, Linux, and macOS.
See also
Comparison of software for molecular mechanics modeling
List of molecular graphics systems
Molecule editor
Molecular modelling
Molecular graphics
Molecular dynamics
References
External links
Internet Archive of Official Website
Molecular Biology Toolkit
San Diego Supercomputer Center
University of California San Diego
Molecular modelling software | Sirius visualization software | ["Chemistry"] | 670 | ["Molecular modelling", "Molecular modelling software", "Computational chemistry software"] |
11,040,991 | https://en.wikipedia.org/wiki/HATNet%20Project | The Hungarian Automated Telescope Network (HATNet) project is a network of six small fully automated "HAT" telescopes. The scientific goal of the project is to detect and characterize extrasolar planets using the transit method. This network is used also to find and follow bright variable stars. The network is maintained by the Center for Astrophysics Harvard & Smithsonian.
The HAT acronym stands for Hungarian-made Automated Telescope, because it was developed by a small group of Hungarians who met through the Hungarian Astronomical Association. The project started in 1999 and has been fully operational since May 2001.
Equipment
The prototype instrument, HAT-1 was built from a 180 mm focal length and 65 mm aperture Nikon telephoto lens and a Kodak KAF-0401E chip of 512 × 768, 9 μm pixels. The test period was from 2000 to 2001 at the Konkoly Observatory in Budapest.
HAT-1 was transported from Budapest to the Steward Observatory, Kitt Peak, Arizona, USA, in January 2001. The transportation caused serious damage to the equipment.
Later-built telescopes use Canon 11 cm diameter f/1.8L lenses giving a wide field of 8°×8°. Each is a fully automated instrument with a 2K × 2K charge-coupled device (CCD) sensor. One HAT instrument operates at the Wise Observatory.
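For illustration, the sketch below, which is not from the article, estimates the pixel scale and field of view implied by the stated optics; the pixel size of the later 2K × 2K sensors is not given, so it is treated as an unknown in the final comment.

```python
def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
    """Angular size of one pixel, in arcseconds, for a given focal length."""
    return 206265.0 * (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3)

def field_of_view_deg(n_pixels, pixel_size_um, focal_length_mm):
    """Field of view, in degrees, along an axis with n_pixels pixels."""
    return n_pixels * pixel_scale_arcsec(pixel_size_um, focal_length_mm) / 3600.0

# HAT-1 prototype: 180 mm focal length, 9 um pixels, 512 x 768 chip
print(field_of_view_deg(512, 9, 180), field_of_view_deg(768, 9, 180))  # ~1.5 x 2.2 deg

# Later HATNet units: an 11 cm f/1.8 lens implies roughly a 200 mm focal length,
# so an 8 deg field over 2048 pixels corresponds to about 14 arcsec per pixel,
# i.e. pixels of roughly 13-14 um (the actual pixel size is not stated above).
```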
HAT is controlled by a single Linux PC without human supervision. Data are stored in a MySQL database.
HAT-South
From 2009, three other locations joined the HATNet with telescopes of completely new design. The telescopes are deployed to Australia, Namibia and Chile. Each system has eight (2×4) joint-mounted, quasi-parallel Takahashi Epsilon (180 mm diameter, f/2.8) astrographs with Apogee 4k×4k CCDs with overlapping fields of view. The processing computers are Xenomai-based industrial PCs with 10 TB of storage.
Participants in the project
HAT-1 was developed during the undergraduate (and also the first year graduate) studies of Gáspár Bakos (Eötvös Loránd University, now at Princeton University) and at Konkoly Observatory (Budapest), under the supervision of Dr. Géza Kovács. In the development József Lázár, István Papp and Pál Sári also played an important role.
More than 100 people have contributed altogether to the seventy planet discovery papers published or submitted by the project as of Feb 2020. Gáspár Bakos, István Papp, József Lázár, Pál Sári, have contributed to all of the planet discoveries by HAT. Other participants who have contributed to at least 10 discovery papers include: Joel Hartman (62 papers, Princeton), Robert Noyes (55, CfA), David Latham (44, CfA), Zoltán Csubry (43, Princeton), Kaloyan Penev (43, UT Dallas), Géza Kovács (42, Konkoly Observatory), Guillermo Torres (40, CfA), Geoffrey Marcy (38, UC Berkeley), Gilbert Esquerdo (37, CfA), Waqas Bhatti (34, Princeton), Miguel de Val-Borro (34, Goddard Space Flight Center), Lars Buchhave (33, Niels Bohr Institute), Daniel Bayliss (32, University of Warwick), Dimitar Sasselov (32, CfA), Bence Béky (31, CfA), Andrew Howard (31, Caltech), Debra Fischer (30, Yale University), George Zhou (30, CfA), Néstor Espinoza (29, STSCI), Andrés Jordán (29, Adolfo Ibáñez University), Robert Stefanik (29, CfA), Rafael Brahm (28, Pontifical Catholic University of Chile), Thomas Henning (28, MPIA), Luigi Mancini (28, University of Rome Tor Vergata), Markus Rabus (28, Las Cumbres Observatory), Vincent Suc (28, Pontifical Catholic University of Chile), John Johnson (27, CfA), R. Paul Butler (20, Carnegie Institution for Science), Simona Ciceri (19, MPIA), Brian Schmidt (19, ANU), Joao Bento (17, ANU), Thiam-Guan Tan (17, Perth Exoplanet Survey Telescope), Mark Everett (16, NOAO), Sam Quinn (16, CfA), Avi Shporer (16, MIT), Allyson Bieryla (14, CfA), Bun'ei Sato (14, Tokyo Institute of Technology), B.J. Fulton (12, Caltech), Howard Isaacson (12, UC Berkeley), András Pál (12, CfA), Brigitta Sipőcz (12, University of Hertfordshire), Támás Szkelenár (12), Chris Tinney (12, University of New South Wales), Duncan Wright (11, Australian Astronomical Observatory), Jeffrey Crane (10, Carnegie Institution for Science), Emilio Falco (10, CfA), Paula Sarkis (10, MPIA), and Stephen Shectman (10, Carnegie Institution for Science).
Planets discovered
One-hundred-thirty-four extrasolar planets have been discovered so far by the HAT surveys, including a handful of planets that were independently discovered by other groups as well (particularly the WASP survey). Sixty-three of these were found by the northern HATNet project, and seventy-one by the southern HATSouth project. All have been discovered using the transit method. In addition, a few additional planetary companions to the transiting planets were discovered through radial velocity follow-up observations, including HAT-P-13c, which was the first outer planetary or brown-dwarf companion confirmed with a well-characterised orbit for a system with a transiting planet
North
South
See also
List of extrasolar planets
A subset of HATNet light curves are available at the NASA Exoplanet Archive.
Other extrasolar planet search projects
Trans-Atlantic Exoplanet Survey or TrES
SuperWASP or WASP
XO Telescope or XO
Kilodegree Extremely Little Telescope or KELT
Next-Generation Transit Survey or NGTS
Extrasolar planet searching spacecraft
COROT is a CNES/ESA spacecraft launched in December 2006
The Kepler Mission is a NASA spacecraft launched in March 2009
The Transiting Exoplanet Survey Satellite (TESS) is a NASA spacecraft launched in March 2018
References
External links
The HAT Exoplanet Surveys
The HATNet Exoplanet Survey
The HATSouth Exoplanet Survey
Hungarian Astronomical Association
Wise observatory Hungarian-made Automated Telescope
The Extrasolar Planets Encyclopaedia
Telescopes
Astrometry
Exoplanet search projects by small telescope | HATNet Project | ["Astronomy"] | 1,412 | ["Astrometry", "Telescopes", "Astronomical sub-disciplines", "Astronomical instruments"] |
11,041,514 | https://en.wikipedia.org/wiki/Mitochondrial%20shuttle | The mitochondrial shuttles are biochemical transport systems used to transport reducing agents across the inner mitochondrial membrane. NADH as well as NAD+ cannot cross the membrane, but it can reduce another molecule like FAD and [QH2] that can cross the membrane, so that its electrons can reach the electron transport chain.
The two main systems in humans are the glycerol phosphate shuttle and the malate-aspartate shuttle. The malate/a-ketoglutarate antiporter functions move electrons while the aspartate/glutamate antiporter moves amino groups. This allows the mitochondria to receive the substrates that it needs for its functionality in an efficient manner.
Shuttles
In humans, the glycerol phosphate shuttle is primarily found in brown adipose tissue, as the conversion is less efficient, thus generating heat, which is one of the main purposes of brown fat. It is primarily found in babies, though it is present in small amounts in adults around the kidneys and on the back of the neck. The malate-aspartate shuttle is found in much of the rest of the body.
The shuttles contains a system of mechanisms used to transport metabolites that lack a protein transporter in the membrane, such as oxaloacetate.
Malate shuttle
The malate shuttle allows the mitochondria to move electrons from NADH without the consumption of metabolites and it uses two antiporters to transport metabolites and keep balance within the mitochondrial matrix and cytoplasm.
On the cytoplasmic side a transaminase enzyme is used to remove an amino group from aspartate which is converted into oxaloacetate, then malate dehydrogenase enzyme uses an NADH cofactor to reduce oxaloacetate to malate which can be transported across the membrane because of the presence of a transporter.
Once the malate is inside the matrix, it is converted back to oxaloacetate, which is converted to aspartate and can be transported back outside the mitochondrion to allow the cycle to continue. The movement of malate across the membrane transports electrons and is known as the outer ring. The inner ring's primary function is not to move electrons but to regenerate the metabolites.
Glycerol phosphate shuttle
The transamination of oxaloacetate to aspartate is achieved through the use of glutamate. Glutamate is transported with aspartate via antiporter, thus as one aspartate leaves the cell, a glutamate enters. Glutamate in the matrix is converted into an a-ketoglutarate which is transported in an antiporter with malate. In the cytoplasmic side a-ketoglutarate is converted back into glutamate when aspartate is converted back to oxaloacetate.
Use against cancer
Most cancer cells alter their metabolic activity to increase glucose metabolism in order to proliferate rapidly. Genes whose mutation increases a cell's metabolic activity and turns a normal cell into a tumor cell are called oncogenes. Cancer cells are unlike many other cells in that they have very few vulnerabilities, but in experiments, inhibiting the transamination step of the malate shuttle slowed their proliferation because glucose metabolism was slowed.
See also
Mitochondrial carrier
Notes and references
Cellular respiration | Mitochondrial shuttle | ["Chemistry", "Biology"] | 702 | ["Biochemistry", "Cellular respiration", "Metabolism"] |
11,041,607 | https://en.wikipedia.org/wiki/S/2007%20S%202 | S/2007 S 2 is a natural satellite of Saturn. Its discovery was announced by Scott S. Sheppard, David C. Jewitt, Jan Kleyna, and Brian G. Marsden on May 1, 2007, from observations taken between January 18 and April 19, 2007. S/2007 S 2 is about 5 kilometres in diameter, and orbits Saturn at an average distance of 16,054,500 kilometres in 759.2 days, at an inclination of 176.65° to the ecliptic, in a retrograde direction and with an eccentricity of 0.237. According to Denk et al. (2018), it is presumably at high risk of colliding with Phoebe in the future.
The moon was considered lost after 2007, as it had not been seen since its discovery. Its recovery was announced in October 2019.
References
External links
Institute for Astronomy Saturn Satellite Data
MPEC 2007-J09: S/2007 S 2, S/2007 S 3 May 1, 2007 (discovery and ephemeris)
Norse group
Moons of Saturn
Irregular satellites
Discoveries by Scott S. Sheppard
Astronomical objects discovered in 2007
Moons with a retrograde orbit
Recovered astronomical objects | S/2007 S 2 | ["Astronomy"] | 243 | ["Recovered astronomical objects", "Astronomical objects"] |
11,041,618 | https://en.wikipedia.org/wiki/Digg | Digg (stylized in lowercase as digg) is an American news aggregator with a curated front page, aiming to select articles specifically for the Internet audience such as science, trending political issues, and viral Internet issues. It was launched in its current form on July 31, 2012, with support for sharing content to other social platforms such as Twitter and Facebook.
Digg was formerly a popular social news website, allowing people to vote user-generated and web content up or down, called digging and burying, respectively. In 2012, Quantcast estimated Digg's monthly U.S. unique visits at 3.8 million. Digg's popularity prompted the creation of similar sites such as Reddit.
In July 2008, the former company took part in advanced acquisition talks with Google for a reported $200 million price tag, but the deal ultimately fell through. After a controversial 2010 redesign and the departure of co-founders Jay Adelson and Kevin Rose, in July 2012 Digg was sold in three parts: the Digg brand, website, and technology were sold to Betaworks for an estimated $500,000; 15 staff were transferred to The Washington Post Company's "SocialCode" for a reported $12 million; and a suite of patents was sold to LinkedIn for about $4 million.
In April 2018, Digg was purchased by BuySellAds, an advertising company, for an undisclosed amount.
It is rumoured that Kevin Rose has purchased Digg back and is relaunching it in 2025. Digg's Twitter account has teased the date March 8, 2025, as "3825" appears in multiple images uploaded in December.
History
Digg started as an experiment in November 2004 by collaborators Kevin Rose, Owen Byrne, Ron Gorodetzky, and Jay Adelson. The original design by Dan Ries was free of advertisements. To monetize, the company originally used Google AdSense but switched to MSN adCenter in 2007.
The site's main function was to let users discover, share and recommend web content. Members of the community could submit a webpage for general consideration. Other members could vote that page up ("digg") or down ("bury"). Although voting took place on digg.com, many websites added "digg" buttons to their pages, allowing users to vote as they browsed the web. The end product was a series of wide-ranging, constantly updated lists of popular and trending content from around the Internet, aggregated by a social network.
Additions and improvements were made throughout the website's first years. Digg v2 was released in July 2005, with a new interface by web design company silverorange. New features included a friends list, and the ability to "digg" a story without being redirected to a success page. One year later, as part of Digg v3, the website added specific categories for technology, science, world and business, videos, entertainment, and gaming, as well as a "view all" section that merged all categories. Further interface adjustments were made in August 2007.
By 2008, Digg's homepage was attracting over 236 million visitors annually, according to a Compete.com survey. Digg had grown large enough that it was thought to affect the traffic of submitted web pages. Some pages experienced a sudden increase in traffic shortly after being submitted; some Digg users refer to this as the "Digg effect".
Redesign
CEO Jay Adelson said in 2010 that the site would go through some major changes. In the interview with Wired magazine, Adelson said that "Every single THING has changed" and that "the entire website has been rewritten." The company changed from MySQL to Cassandra, a distributed database system; in a blog post, VP Engineering John Quinn said that the move was "bold". Adelson summed up the new Digg by saying, "We've got a new backend, a new infrastructure layer, a new services layer, new machines—everything."
Adelson stepped down as CEO on April 5, 2010, to explore entrepreneurial opportunities, months before the launch date of Digg v4. He had been the company's CEO since its inception. Kevin Rose, another original founder, stepped in temporarily as CEO and Chairman.
Digg's v4 release on August 25, 2010, was marred by site-wide bugs and glitches. Digg users reacted with hostile verbal opposition. Beyond the release, Digg faced problems due to so-called "power users" who would manipulate the article recommendation features to only support one another's postings, flooding the site with articles only from these users and making it impossible to have genuine content from non-power users appear on the front page. Frustrations with the system led to dwindling web traffic, exacerbated by heavy competition from Facebook, whose like buttons started to appear on websites next to Digg's. High staff turnover included the departure of head of business development Matt Van Horn, shortly after v4's release.
On September 1, 2010, Matt Williams took over as CEO, ending Rose's troubled tenure as interim chief executive.
In 2013, Andrew McLaughlin took over as CEO after Digg was sold to BetaWorks and re-launched.
In 2015, Gary Liu took over as Digg CEO.
In 2016, Joshua Auerbach took over as interim CEO.
In September 2016, Digg announced that it would begin a data partnership with Gannett. The "seven figure" investment would give Gannett access to real-time trend analysis of Digg's 7.5 million pieces of content.
In 2017, Michael O'Connor took over as CEO, and continues as CEO today.
Sale and relaunch
In July 2012, Digg was sold in three parts:
the Digg brand, website, and technology were sold to Betaworks for $500,000;
15 staff were transferred to The Washington Post's Code3 project for $12 million;
the patent portfolio was sold to LinkedIn for approximately $4 million.
There were reports that Digg had been trying to sell itself to a larger company since 2006. The most notable attempt took place in July 2008, when Google entered talks to buy Digg for around $200 million. Google walked away from negotiations during the deal's due diligence phase, informing Digg on July 25 that it was no longer interested in the purchase. Digg subsequently accepted further venture capital funding, receiving $28.7 million in September 2008 from investors such as Highland Capital Partners to move headquarters and add staff. Several months later, CEO Jay Adelson said Digg was no longer for sale.
On July 20, 2012, new owners Betaworks announced via Twitter that they were rebuilding Digg from scratch, "turning [Digg] back into a start-up". Betaworks gave the project a six-week deadline. Surveys of existing users, collected through the website ReThinkDigg.com, were used to inform the development of a new user interface and user experience.
The "rethought" Digg reset its version number and launched as Digg v1 a day prior to the Betaworks project deadline, on July 31, 2012. It featured an editorially driven front page, more images, and top, popular and upcoming stories. Users could access a new scoring system. There was increased support for sharing content to other social platforms such as Twitter and Facebook. Digg's front page content was selected by editors, instead of users on other communities like Reddit.
Until its sale to BuySellAds.com in 2018, its offices were located at 50 Eldridge Street in New York City's Chinatown.
Features
Digg Reader
In response to the announced shutdown of Google Reader, Digg announced on March 14, 2013 that it was working on its own RSS reader. Digg Reader launched on June 28, 2013 as a web and iOS application. An Android app was released on August 29, 2013. Digg announced that it would shut down Digg Reader on March 26, 2018.
Issues relating to former Digg website
Organized promotion and censorship by users
It was possible for users to have disproportionate influence on Digg, either by themselves or in teams. These users were sometimes motivated to promote or bury pages for political or financial reasons.
Serious attempts by users to game the site began in 2006. A top user was banned after agreeing to promote a story for cash to an undercover Digg sting operation. Another group of users openly formed a 'Bury Brigade' to remove "spam" articles about US politician Ron Paul; critics accused the group of attempting to stifle any mention of Ron Paul on Digg.
Digg hired computer scientist Anton Kast to develop a diversity algorithm that would prevent special interest groups from dominating Digg. During a town hall meeting, Digg executives responded to criticism by removing some features that gave superusers extra weight, but declined to make "buries" transparent.
However, later that year Google increased its page rank for Digg. Shortly afterwards, many 'pay for Diggs' startups were created to profit from the opportunity. According to TechCrunch, one top user charged $700 per story, with a $500 bonus if the story reached the front page.
Digg Patriots was a conservative Yahoo! Groups mailing list, with an associated page on coRank, accused of coordinated, politically motivated behavior on Digg. Progressive blogger Ole Ole Olson wrote in August 2010 that Digg Patriots undertook a year-long effort of organized burying of seemingly liberal articles from Digg's Upcoming module. He also accused leading members of vexatiously reporting liberal users for banning (and those who seemed liberal), and creating "sleeper" accounts in the event of administrators banning their accounts. These and other actions would violate Digg's terms of usage. Olson's post was immediately followed by the disbanding and closure of the DiggPatriots list, and an investigation into the matter by Digg.
AACS encryption key controversy
On May 1, 2007, an article appeared on Digg's homepage that contained the encryption key for the AACS digital rights management protection of HD DVD and Blu-ray Disc. Then Digg, "acting on the advice of its lawyers", removed posting submissions about the secret number from its database and banned several users for submitting it. The removals were seen by many Digg users as a capitulation to corporate interests and an assault on free speech. A statement by Jay Adelson attributed the article's take-down to an attempt to comply with cease and desist letters from the Advanced Access Content System consortium and cited Digg's Terms of Use as justification for taking down the article. Although some users defended Digg's actions, as a whole the community staged a widespread revolt with numerous articles and comments made using the encryption key. The scope of the user response was so great that one of the Digg users referred to it as a "digital Boston Tea Party". The response was also directly responsible for Digg reversing the policy and stating: "But now, after seeing hundreds of stories and reading thousands of comments, you've made it clear. You'd rather see Digg go down fighting than bow down to a bigger company. We hear you, and effective immediately we won't delete stories or comments containing the code and will deal with whatever the consequences might be."
Digg v4
Digg's version 4 release was initially unstable. The site was unreachable or unstable for weeks after its launch on August 25, 2010. Many users, upon finally reaching the site, complained about the new design and the removal of many features (such as bury, favorites, friends submissions, upcoming pages, subcategories, videos and history search). Kevin Rose replied to complaints on his blog, promising to fix the algorithm and restore some features.
Alexis Ohanian, founder of rival site Reddit, responded to the redesign with an open letter to Rose.
Disgruntled users declared a "quit Digg day" on August 30, 2010, and used Digg's own auto-submit feature to fill the front page with content from Reddit. Reddit also temporarily added the Digg shovel to their logo to welcome fleeing Digg users.
Digg's traffic dropped significantly after the launch of version 4, and publishers reported a drop in direct referrals from stories on Digg's front page. New CEO Matt Williams attempted to address some of the users' concerns in a blog post on October 12, 2010, promising to reinstate many of the features that had been removed.
Timeline
See also
Delicious
diggnation
Facebook
Fark
Menéame
Mixx
Propeller.com
Reddit
Slashdot
Social bookmarking
StumbleUpon
Virato Social News
Web 2.0
Wykop.pl
References
External links
News aggregators
American news websites
Social bookmarking websites
Internet properties established in 2004
Social information processing
2004 establishments in California
Companies based in New York City
Censored works
Social software
Shorty Award winners | Digg | ["Technology"] | 2,696 | ["Mobile content", "Social software"] |
11,041,672 | https://en.wikipedia.org/wiki/S/2007%20S%203 | S/2007 S 3 is a natural satellite of Saturn. Its discovery was announced by Scott S. Sheppard, David C. Jewitt, Jan Kleyna, and Brian G. Marsden on 1 May 2007 from observations taken between 18 January and 19 April 2007.
S/2007 S 3 is about 5 kilometres in diameter, and orbits Saturn at an average distance of 19,429,000 kilometres in about 1,011 days, at an inclination of 176.6° to the ecliptic, in a retrograde direction and with an eccentricity of 0.143.
This moon was considered lost until its recovery was announced on 12 October 2022.
References
Institute for Astronomy Saturn Satellite Data
MPEC 2007-J09: S/2007 S 2, S/2007 S 3, 1 May 2007 (discovery and ephemeris)
Norse group
Moons of Saturn
Irregular satellites
Discoveries by Scott S. Sheppard
Astronomical objects discovered in 2007
Moons with a retrograde orbit
Recovered astronomical objects | S/2007 S 3 | ["Astronomy"] | 200 | ["Recovered astronomical objects", "Astronomical objects"] |
11,042,043 | https://en.wikipedia.org/wiki/Plakohypaphorine | Plakohypaphorines are halogenated indolic non-proteinogenic amino acids named for their similarity to hypaphorine (N,N,N-trimethyltryptophan betaine). First reported in the Caribbean sponge Plakortis simplex in 2003, plakohypaphorines A-C were the first iodine-containing indole alkaloids to be discovered in nature. Plakohypaphorines D-F, also found in P. simplex, were reported in 2004 by a group including the researchers who discovered the original plakohypaphorines.
References
Taglialatela-Scafati Orazio et al., 2003. Plakohypaphorines A-C, Iodine-Containing Alkaloids from the Caribbean Sponge Plakortis simplex. European Journal of Organic Chemistry. 2003(2), pp. 284–287.
Borrelli, Francesca, et al., 2004. Iodinated Indole Alkaloids From Plakortis simplex, New Plakohypaphorines and an Evaluation of Their Antihistamine Activity. European Journal of Organic Chemistry. 2004(15), pp. 3227–3232.
Alpha-Amino acids
Non-proteinogenic amino acids
Halogen-containing alkaloids
Organoiodides
Tryptamines
Zwitterions
Tryptamine alkaloids | Plakohypaphorine | ["Physics", "Chemistry"] | 298 | ["Tryptamine alkaloids", "Matter", "Halogen-containing alkaloids", "Alkaloids by chemical classification", "Zwitterions", "Ions"] |
11,042,288 | https://en.wikipedia.org/wiki/Neolentinus%20lepideus | Neolentinus lepideus is a basidiomycete mushroom of the genus Neolentinus, until recently also widely known as Lentinus lepideus. Common names for it include scaly sawgill, scaly lentinus and train wrecker.
Description
Neolentinus lepideus fruit bodies are tough, fleshy, agarics of variable size. The cap is at first convex and flattens with maturity while the margin remains enrolled. The cap may grow up to about , while the stem grows to in height. The white, cream to pale-brown cap cuticle is distinctively covered with concentrically arranged dark scales which become denser towards the depressed cap centre.
The gills are white and their attachment to the stem is adnate to subdecurrent or decurrent. The gills and stipe can become dark reddish with age. The white stem is covered in dark scales in the region below the white ring. The odor is somewhat like anise, and the taste is indiscernible. The flesh is tough, increasingly so with maturity.
The spore print is white and the spores are cylindrical in shape. The spore dimensions are 8–12.5 by 3.5–5 μm.
Similar species
Neolentinus ponderosus is similar but has no partial veil, and thus no ring. Pleurotus levis grows on hardwoods, with a more fuzzy cap lacking scales.
Habitat and distribution
The fruiting bodies of Neolentinus lepideus are found singly or in tufts emerging from dead and decaying coniferous wood, favouring pines (Pinus) including old stumps, logs, and timber. It may also be found in gardens, on man-made wooden structures such as old railroad ties, and in such unusual places as coal mines. Less frequently, it is also found on non-coniferous hardwood. The fungus's fruiting season is spring to autumn and it is common in Europe and North America. In the latter, it appears from May to November (slightly shorter in the west). There have also been multiple reports of its occurrence in the Western Cape, South Africa.
Ecology
Neolentinus lepideus has a saprotrophic mode of nutrition and is an important woodland decomposer and a cause of wet rot in building materials. The fungus has shown tolerance of wood treated with creosote and other preservatives, and has been used in experiments to evaluate the efficacy of treatment methods.
Edibility
Some authors consider N. lepideus edible, although it requires cooking to soften. While there have been no recorded poisonings, the fungus may come in contact with hazardous chemicals because its fruiting bodies tend to grow on human-made wooden structures, such as wooden railroad ties smeared with creosote.
References
External links
Rogers Mushrooms species description
Good quality images
Wood-decay fungi
Gloeophyllales
Fungi of Europe
Fungi of North America
Fungi described in 1985
Fungus species | Neolentinus lepideus | ["Biology"] | 607 | ["Fungi", "Fungus species"] |
11,042,546 | https://en.wikipedia.org/wiki/Oxygen%20enhancement%20ratio | The oxygen enhancement ratio (OER) or oxygen enhancement effect in radiobiology refers to the enhancement of therapeutic or detrimental effect of ionizing radiation due to the presence of oxygen. This so-called oxygen effect is most notable when cells are exposed to an ionizing radiation dose.
The OER is traditionally defined as the ratio of radiation doses during lack of oxygen compared to no lack of oxygen for the same biological effect. This may give varying numerical values depending on the chosen biological effect. Additionally, OER may be presented in terms of hyperoxic environments and/or with altered oxygen baseline, complicating the significance of this value.
The maximum OER depends mainly on the ionizing density or LET of the radiation. Radiation with higher LET and higher relative biological effectiveness (RBE) have a lower OER in mammalian cell tissues. The value of the maximum OER varies from about 1–4. The maximum OER ranges from about 2–4 for low-LET radiations such as X-rays, beta particles and gamma rays, whereas the OER is unity for high-LET radiations such as low energy alpha particles.
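As a minimal numerical illustration of this definition (a sketch, not from the article), the example below uses a simple single-hit survival model S = exp(−αD) with assumed radiosensitivities for oxygenated and hypoxic cells; the OER is the ratio of the doses that produce the same surviving fraction.

```python
import math

def dose_for_survival(target_survival, alpha_per_gy):
    """Dose (Gy) giving the target surviving fraction in a single-hit model
    S = exp(-alpha * D)."""
    return -math.log(target_survival) / alpha_per_gy

# Assumed, purely illustrative radiosensitivities for a low-LET beam:
alpha_oxic = 0.35       # Gy^-1, well-oxygenated cells
alpha_hypoxic = 0.125   # Gy^-1, hypoxic cells

d_oxic = dose_for_survival(0.01, alpha_oxic)
d_hypoxic = dose_for_survival(0.01, alpha_hypoxic)
oer = d_hypoxic / d_oxic   # dose without oxygen / dose with oxygen, same effect

print(f"{d_oxic:.1f} Gy vs {d_hypoxic:.1f} Gy -> OER = {oer:.2f}")
# ~13.2 Gy vs ~36.8 Gy -> OER = 2.80; in this simple model the ratio is
# independent of the chosen survival level.
```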
Uses in medicine
The effect is used in medical physics to increase the effect of radiation therapy in oncology treatments. Additional oxygen abundance creates additional free radicals and increases the damage to the target tissue.
In solid tumors the inner parts become less oxygenated than normal tissue and up to three times higher dose is needed to achieve the same tumor control probability as in tissue with normal oxygenation.
Explanation of the Oxygen Effect
The best known explanation of the oxygen effect is the oxygen fixation hypothesis, which postulates that oxygen fixes radical-induced DNA damage so that it becomes permanent. Recently, it has been posited that the oxygen effect involves radiation exposures of cells causing their mitochondria to produce greater amounts of reactive oxygen species.
See also
Radiation therapy
Radiobiology
Health physics
Hypoxia
Oxygen effect
References
Eric J. Hall and Amato J. Giaccia: Radiobiology for the radiologist, Lippincott Williams & Wilkins, 6th Ed., 2006
Radiation therapy
Nuclear medicine
Radiobiology | Oxygen enhancement ratio | [
"Chemistry",
"Biology"
] | 429 | [
"Radiobiology",
"Radioactivity"
] |
11,042,759 | https://en.wikipedia.org/wiki/Addition-subtraction%20chain | An addition-subtraction chain, a generalization of addition chains to include subtraction, is a sequence a0, a1, a2, a3, ... that satisfies a0 = 1, and ak = ai ± aj for some 0 ≤ i, j < k, for each k > 0.
An addition-subtraction chain for n, of length L, is an addition-subtraction chain such that aL = n. That is, one can thereby compute n by L additions and/or subtractions. (Note that n need not be positive. In this case, one may also include a−1 = 0 in the sequence, so that n = −1 can be obtained by a chain of length 1.)
By definition, every addition chain is also an addition-subtraction chain, but not vice versa. Therefore, the length of the shortest addition-subtraction chain for n is bounded above by the length of the shortest addition chain for n. In general, however, the determination of a minimal addition-subtraction chain (like the problem of determining a minimum addition chain) is a difficult problem for which no efficient algorithms are currently known. The related problem of finding an optimal addition sequence is NP-complete (Downey et al., 1981), but it is not known for certain whether finding optimal addition or addition-subtraction chains is NP-hard.
For example, one addition-subtraction chain is: a0 = 1, a1 = 2, a2 = 4, a3 = 3. This is not a minimal addition-subtraction chain for n = 3, however, because we could instead have chosen a2 = a1 + a0 = 3. The smallest n for which an addition-subtraction chain is shorter than the minimal addition chain is n = 31, which can be computed in only 6 additions (rather than 7 for the minimal addition chain): a0 = 1, a1 = 2, a2 = 4, a3 = 8, a4 = 16, a5 = 32, a6 = 32 − 1 = 31.
Like an addition chain, an addition-subtraction chain can be used for addition-chain exponentiation: given an addition-subtraction chain of length L for n, the power x^n can be computed by multiplying or dividing by x a total of L times, where the subtractions correspond to divisions. This is potentially efficient in problems where division is an inexpensive operation, most notably for exponentiation on elliptic curves where division corresponds to a mere sign change (as proposed by Morain and Olivos, 1990).
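A minimal sketch in Python (not part of the original article; the helper names is_valid_chain and power_from_chain are invented for illustration) that checks the length-6 chain for n = 31 and uses it for exponentiation, with subtractions realised as divisions:

```python
from fractions import Fraction

def is_valid_chain(chain):
    """Check that a0 = 1 and each later element is a sum or difference of two earlier elements."""
    if chain[0] != 1:
        return False
    for k in range(1, len(chain)):
        earlier = chain[:k]
        if not any(a + b == chain[k] or a - b == chain[k]
                   for a in earlier for b in earlier):
            return False
    return True

def power_from_chain(x, chain):
    """Compute x**chain[-1] using one multiplication or division per chain step."""
    powers = {1: x}                                    # powers[e] holds x**e
    for k in range(1, len(chain)):
        earlier = chain[:k]
        for a in earlier:
            for b in earlier:
                if a + b == chain[k]:
                    powers[chain[k]] = powers[a] * powers[b]   # addition -> multiplication
                    break
                if a - b == chain[k]:
                    powers[chain[k]] = powers[a] / powers[b]   # subtraction -> division
                    break
            if chain[k] in powers:
                break
    return powers[chain[-1]]

chain_31 = [1, 2, 4, 8, 16, 32, 31]                    # length 6 (six operations)
assert is_valid_chain(chain_31)
x = Fraction(3)
assert power_from_chain(x, chain_31) == x ** 31
print(len(chain_31) - 1, "operations to reach", chain_31[-1])
```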
Some hardware multipliers multiply by n using an addition chain described by n in binary:
n = 31 = 0 0 0 1 1 1 1 1 (binary).
Other hardware multipliers multiply by n using an addition-subtraction chain described by n in Booth encoding:
n = 31 = 0 0 1 0 0 0 0 −1 (Booth encoding).
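A small illustrative sketch (not a description of any particular hardware multiplier) showing how the Booth recoding above can be derived from the binary representation, using the rule that the signed digit at position i is b_(i−1) − b_i:

```python
def booth_digits(n, width):
    """Radix-2 Booth recoding: digit_i = b_(i-1) - b_i, with b_(-1) = 0.
    Returns the digits most-significant first; each digit is -1, 0 or +1."""
    bits = [(n >> i) & 1 for i in range(width)]    # b_0 ... b_(width-1)
    prev = [0] + bits[:-1]                          # b_(-1) ... b_(width-2)
    digits = [p - b for b, p in zip(bits, prev)]    # one signed digit per bit position
    return digits[::-1]                             # MSB first

print(booth_digits(31, 8))   # [0, 0, 1, 0, 0, 0, 0, -1], matching the encoding above

# Check that the recoded digits still represent n = 31.
assert sum(d << i for i, d in enumerate(reversed(booth_digits(31, 8)))) == 31
```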
References
Addition chains | Addition-subtraction chain | [
"Mathematics"
] | 533 | [
"Sequences and series",
"Addition chains",
"Mathematical structures"
] |
11,042,777 | https://en.wikipedia.org/wiki/IBM%20airgap | Airgap is a technique invented by IBM for fabricating small pockets of vacuum in between copper interconnects. The technique belongs to a general class of similar techniques that replaces solid low-κ dielectrics with air-filled or vacuum pockets.
Description
By insulating copper interconnects (wires) on an integrated circuit (IC) with vacuum holes, capacitance can be minimized, enabling ICs to work faster or draw less power. A vacuum is believed to be the ultimate insulator for wiring capacitance, which occurs when two adjacent wires on an IC draw electrical energy from one another, generating undesirable heat and slowing the speed at which data can move through an IC. IBM estimates that this technology alone can lead to 35% higher speeds in current flow or 15% lower power consumption.
Fabrication techniques
The technique fabricates air gaps on a large scale by exploiting the self-assembly properties of certain polymers. These polymers can be easily integrated into the process modules (a collection of related steps that fabricate a structure on an integrated circuit) used in conventional CMOS fabrication, avoiding the costs of heavily modifying the process technology (the collection of process modules that produces an integrated circuit).
The technique deposits a polymer material over the entire wafer, and removes it at a later stage. When the polymer is removed, it creates trillions of evenly spaced vacuum pockets that are 20 nanometers in diameter. IBM has demonstrated this technique in the laboratory, and has deployed it in its East Fishkill, New York fabrication plant, where prototype POWER6 processors using this technology have been fabricated. The technique was scheduled to be featured in production-ready process technology in 2009, as part of IBM's 45 nm node, after which it would also be available to IBM's clients.
History
Airgap was developed in a collaborative effort between IBM's Almaden Research Center and T.J. Watson Research Center, and the University of Albany, New York.
References
Snowflakes promise faster chips, BBC
IBM Brings Nature to Computer Chip Manufacturing, IBM
IBM's catches air, touts Top Ten list, Ars Technica
IBM computer hardware
Semiconductor device fabrication | IBM airgap | [
"Materials_science"
] | 444 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
11,042,797 | https://en.wikipedia.org/wiki/Dynamic%20simulation | Dynamic simulation (or dynamic system simulation) is the use of a computer program to model the time-varying behavior of a dynamical system. The systems are typically described by ordinary differential equations or partial differential equations. A simulation run solves the state-equation system to find the behavior of the state variables over a specified period of time. The equation is solved through numerical integration methods to produce the transient behavior of the state variables. Simulation of dynamic systems predicts the values of model-system state variables, as they are determined by the past state values. This relationship is found by creating a model of the system.
Overview
Simulation models are commonly obtained from discrete-time approximations of continuous-time mathematical models.
As mathematical models incorporate real-world constraints, like gear backlash and rebound from a hard stop, equations become nonlinear. This requires numerical methods to solve the equations.
A numerical simulation is done by stepping through a time interval and calculating the integral of the derivatives through numerical integration.
Some methods use a fixed step through the interval, and others use an adaptive step that can shrink or grow automatically to maintain an acceptable error tolerance. Some methods can use different time steps in different parts of the simulation model.
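A minimal fixed-step sketch, assuming an arbitrary mass-spring-damper model and classical fourth-order Runge-Kutta integration (all parameter values are examples only):

```python
import numpy as np

def msd_derivative(t, state, m=1.0, c=0.4, k=2.0, force=lambda t: 0.0):
    """State derivative of a mass-spring-damper: x' = v, v' = (F - c*v - k*x)/m."""
    x, v = state
    return np.array([v, (force(t) - c * v - k * x) / m])

def rk4_step(f, t, state, dt):
    """One classical fourth-order Runge-Kutta step of size dt."""
    k1 = f(t, state)
    k2 = f(t + dt / 2, state + dt / 2 * k1)
    k3 = f(t + dt / 2, state + dt / 2 * k2)
    k4 = f(t + dt, state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Fixed-step simulation over 10 s starting from an initial displacement of 1 m.
t, dt, state = 0.0, 0.01, np.array([1.0, 0.0])
history = [(t, state.copy())]
while t < 10.0:
    state = rk4_step(msd_derivative, t, state, dt)
    t += dt
    history.append((t, state.copy()))
print("final displacement:", history[-1][1][0])
```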
There are two types of system models to be simulated: difference-equation models, and differential-equation models. Classical physics is usually based on differential equation models. This is why most old simulation programs are simply differential equation solvers and delegate solving difference-equations to “procedural program segments.” Some dynamic systems are modeled with differential equations that can only be presented in an implicit form. These differential-algebraic-equation systems require special mathematical methods for simulation.
Some complex systems’ behavior can be quite sensitive to initial conditions, which could lead to large errors from the correct values. To avoid these possible errors, a rigorous approach can be applied, where an algorithm is found which can compute the value up to any desired precision. For example, the constant e is a computable number because there is an algorithm that is able to produce the constant up to any given precision.
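As a small illustration of such an algorithm (the implementation below is one possible choice, not a canonical method), e can be computed to any requested number of decimal digits by summing the series 1/k! and stopping once the truncation error is provably below the target tolerance:

```python
from decimal import Decimal, getcontext

def e_to_precision(digits):
    """Return e accurate to `digits` significant decimal digits by summing 1/k!.
    The tail after the term 1/k! is bounded by 2/(k+1)!, so we stop once 2/k! <= tolerance."""
    getcontext().prec = digits + 10          # guard digits for intermediate rounding
    tol = Decimal(10) ** (-digits)
    total, term, k = Decimal(1), Decimal(1), 0
    while term * 2 > tol:                    # 2/k! bounds the remaining error
        k += 1
        term /= k                            # term is now 1/k!
        total += term
    getcontext().prec = digits               # round the result to the requested precision
    return +total                            # unary plus applies the context rounding

print(e_to_precision(30))   # 2.71828182845904523536028747135
```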
Applications
The first applications of computer simulations for dynamic systems were in the aerospace industry. Commercial uses of dynamic simulation are many and include nuclear power, steam turbines, 6 degrees of freedom vehicle modeling, electric motors, econometric models, biological systems, robot arms, mass-spring-damper systems, hydraulic systems, and drug dose migration through the human body, to name a few. These models can often be run in real time to give a virtual response close to the actual system. This is useful in process control and mechatronic systems for tuning the automatic control systems before they are connected to the real system, or for human training before they control the real system.
Simulation is also used in computer games and animation and can be accelerated by using a physics engine, the technology used in many powerful computer graphics software programs, like 3ds Max, Maya, Lightwave, and many others to simulate physical characteristics. In computer animation, things like hair, cloth, liquid, fire, and particles can be easily modeled, while the human animator animates simpler objects. Computer-based dynamic animation was first used at a very simple level in the 1989 Pixar short film Knick Knack to move the fake snow in the snowglobe and pebbles in a fish tank.
Example of dynamic simulation
This animation was made with system dynamics software and a 3D modeler. The calculated values are associated with parameters of the rod and crank. In this example the crank is the driver; we vary its speed of rotation, its radius, and the length of the rod, and the piston follows.
See also
Comparison of system dynamics software — includes packages not listed below
Simulink — A MATLAB-based graphical programming environment for modeling, simulating and analyzing dynamical systems
MSC Adams — A multibody dynamics simulation software
SimulationX — Software for simulating multi-domain dynamic systems
AMESim — Software for simulating multi-domain dynamic systems
AGX Multiphysics — A physics engine for simulating multi-domain dynamic systems
Dymola — Software for simulating multi-domain dynamic systems using the Modelica language
EcosimPro — A simulation tool for modeling continuous-discrete systems
Hopsan — Software for simulating multi-domain dynamic systems
MapleSim — Software for simulating multi-domain dynamic systems
Modelica — A non-proprietary, object-oriented, equation-based language for dynamic simulation
Physics engine
VisSim — A visual language for nonlinear dynamic simulation
PottersWheel — A Matlab toolbox to calibrate parameters of dynamic systems
Simcad Pro — A dynamic and interactive discrete event simulation software
Notes
References
External links
Textbook and lectures on dynamic simulation
Dynamic System Simulation
Computer physics engines
Control theory
Electromechanical engineering
Embedded systems
Gears
Ordinary differential equations
Partial differential equations
Systems engineering
Industrial automation
Simulation software | Dynamic simulation | [
"Mathematics",
"Technology",
"Engineering"
] | 975 | [
"Systems engineering",
"Computer engineering",
"Applied mathematics",
"Control theory",
"Embedded systems",
"Computer systems",
"Automation",
"Industrial engineering",
"Computer science",
"Mechanical engineering by discipline",
"Electromechanical engineering",
"Electrical engineering",
"Indu... |
11,042,923 | https://en.wikipedia.org/wiki/Monolithic%20column | A monolithic column or single-piece column is a large column of which the shaft is made from a single piece of stone instead of in vertical sections. Smaller columns are very often made from single pieces of stone, but are less often described as monolithic, as the term is normally reserved for less common, larger columns made in this way. Choosing to use monolithic columns produces considerable extra difficulties in quarrying and transport, and may be seen as a statement of grandeur and importance in a building.
As an example of this level of choice, Shoghi Effendi cabled the Bahá'ís of the world about the Shrine of the Báb on December 10, 1948:
Convey to believers the joyful news of the safe delivery on Mt. Carmel of a consignment of thirty-two granite monolith columns, part of the initial shipment of material ordered for construction of the arcade of the Báb's Sepulcher, designed to envelop and preserve the sacred previous structure reared by 'Abdu'l-Bahá. Building operations are soon starting notwithstanding the difficulties of the present situation. I am supplicating the Almighty's guidance and sustaining grace for successive stages of an enterprise envisaged sixty years ago by Bahá'u'lláh, initiated by the Center of His Covenant, designed to culminate as contemplated by Him in erection of a superstructure to be crowned by a golden dome marking the consummation at the heart of the Mountain of God of the momentous undertaking born through the generating influence of the Will of the Founder of our beloved Faith, so dear to the heart of His blessed Son, and dedicated to the memory of the Martyr-Prophet, the immortal Herald of the Bahá'í Dispensation.
The Shrine of the Báb's monolithic columns are made of rose granite from Baveno.
Monolithic columns are characteristic of Ancient Egyptian temples, and the examples in the portico of the Pantheon in Rome were also transported from Egypt. Byzantine churches in the Theodosian dynasty (379-457 AD) also show use of monolithic columns. Examples of single-piece columns have also been found in architecture from the Yucatán Peninsula.
In modern architecture, where concrete is used, the situation is different, and the term is less likely to be used in this context.
References
Columns and entablature
Monoliths | Monolithic column | [
"Technology"
] | 482 | [
"Structural system",
"Columns and entablature"
] |
11,043,433 | https://en.wikipedia.org/wiki/Compute%20Against%20Cancer | Compute Against Cancer is an initiative of Parabon Computation, Inc. powered by the Global Grid Exchange. The program provides cancer researchers access to supercomputing capabilities through Parabon’s Frontier Grid Platform. The Compute Against Cancer initiative provides a means for donors to make their spare processing capabilities available to researchers. Donors running Parabon’s Frontier Compute Engine contribute to a large parallel processing network which cancer researchers employ to solve computationally intensive problems.
Research Projects
Since its inception, Compute Against Cancer has partnered with several organizations to carry out grid-powered research initiatives.
National Cancer Institute
The National Cancer Institute’s Genomic and Bioinformatics Group is currently researching microarray gene-expression patterns. Findings in this area will further the ability to analyze cancer-related data.
West Virginia University
Under the supervision of Lead Researcher Dr. Michael Andria, West Virginia University’s Blood and Marrow Transplantation Program and School of Pharmacy are analyzing how various combinations of medications, routes and methods of drug administration affect a chemotherapy patient’s quality of life.
University of Maryland
A team of researchers from the University of Maryland is working with Parabon Computation, Inc. to simulate protein folding. Protein molecules, some of the most basic components of organisms, fold into complex three-dimensional shapes that determine their chemical properties in cells. By replicating this process, the university team hopes to gain a better understanding of protein composition and behavior.
References
Compute Against Cancer
Distributed computing projects
Internet properties established in 2000 | Compute Against Cancer | [
"Engineering"
] | 296 | [
"Distributed computing projects",
"Information technology projects"
] |
11,044,314 | https://en.wikipedia.org/wiki/XOT | XOT (X.25 Over TCP) is a protocol developed by Cisco Systems that enables X.25 packets to be encapsulated and routed through TCP/IP connections instead of LAPB links. In 2012, it was noted that X.25 tunnelled over TCP/IP using XOT was by then likely more common in actual use than physical X.25 over LAPB.
References
Network protocols
X.25 | XOT | [
"Technology"
] | 88 | [
"Computing stubs",
"Computer network stubs"
] |
11,044,340 | https://en.wikipedia.org/wiki/Prague%20Quadrennial | Held in Prague once every four years since 1967, the Prague Quadrennial of Performance Design and Space or Prague Quadrennial is the world's largest event in the field of scenography, consisting of a competitive presentation of contemporary work in a variety of performance design disciplines and genres including costume, stage, lighting, sound design, and theatre architecture for dance, opera, drama, site-specific, multi-media performances, and performance art.
History
During the São Paulo Art Biennial in 1959, a special exhibit, designed by František Tröster, illustrated the development of Czech and Slovak stage design and theatre architecture during the period from 1914-1959. The result of the exhibition was a gold medal for Czechoslovakia. Continued Czech success during the next three Biennales led to an offer for Prague to host an international exhibition of stage design in Europe. Since its premiere in 1967, the international exhibition has been held regularly every four years, and has come to be known as the Prague Quadrennial.
Important artists who marked the history of the theater and of scenography participated and exhibited at the Prague Quadrennial, such as Salvador Dalí, Josef Svoboda, Oscar Niemeyer, Tadeusz Kantor, Guy-Claude François and Ralph Koltai, as well as figures of the contemporary theater, such as Robert Wilson, Heiner Goebbels and Renzo Piano.
Awards
The exhibitions are judged by an International Jury, which presents the following awards:
Golden Triga for the Best Exposition
Gold Medal for the Best Stage Design
Gold Medal for the Best Theatre Costume
Gold Medal for the Best Realization of a Production
Gold Medal for the Best Work in Theatre Architecture and Performance Space
Gold Medal for the Best Use of Theatre Technology
Gold Medal for the Best Exposition in the Student Section
Gold Medal for the Most Promising Talent in the Student Section
Gold Medal for the Best Curatorial Concept of an Exposition
The Golden Triga was awarded in 1967 to France, in 1971 to the GDR, in 1975 to the USSR, in 1979 to Great Britain, in 1983 to the GDR, in 1987 to the USA, in 1991 to Great Britain, in 1995 to Brazil, in 1999 to the Czech Republic, in 2003 to Great Britain, in 2007 to Russia and in 2011 to Turkey.
The 12th edition of the Prague Quadrennial in 2011
International Competitive exhibition:
Section of Countries and Region
Architecture Section – Now/Next Performance Space at the Crossroads
Student Section
Extreme Costume exhibition
Projects:
Intersection, Intimacy and Spectacle, an undisciplined project
Scenofest, an educational project of OISTAT and the Prague Quadrennial.
Light and Sound project
PQ+, accompanying program
Participating countries in the International Competitive Exhibition include Argentina, Armenia, Australia, Austria, Belarus, Belgium, Brazil, Bulgaria, Canada, Chile, China, Colombia, Croatia, Cuba, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Greece, Hong Kong, Hungary, Iceland, India, Israel, Italy, Japan, Kazakhstan, Republic of Korea, Latvia, Lebanon, Lithuania, Macedonia, Mongolia, Mexico, Netherlands, New Zealand, Nigeria, Norway, Peru, Philippines, Poland, Portugal, Republic of South Africa, Romania, Russia, Serbia, Singapore, Slovakia, Slovenia, Spain, Sweden, Switzerland, Taiwan, Turkey, United Kingdom, Uruguay, USA, and Venezuela.
In 2011 the Hungarian section of PQ will be represented by Bodza W Mihaly's oeuvre. The section is presented by György Árvai, Anikó B. Nagy, Judit Csanádi, Péter Horgas, Gábor Medvigy.
Work is judged in a variety of categories, including "Architecture," "Costumes," and a "Student" section. But the center is the "Countries and Regions" category, where visitors can immerse themselves in theatrical installations from a record-breaking 62 countries, designed by organizations and individual artists including Ruhr Triennale, SITI Theater Company, Joao Brites, and Yukio Horio.
The upcoming 13th edition of the Prague Quadrennial in 2015
The next edition of the Prague Quadrennial of Performance Design and Space will be held in June 2015.
The main thematic axis of the event will be centered on a research and artistic project SharedSpace: Music, Weather, Politics which started in May 2013 and runs through 2016.
Further information about Prague Quadrennial 2015 will be released in September 2013.
The Prague Quadrennial online: e-scenography
E-scenography is an online informational community discussing issues connected to scenography thanks to a newsletter, an art school database and an online library.
References
External links
Prague Quadrennial
e-scenography
Prague Quadrennial page at Scenography - The Theatre Design Website
Images from the 2007 Prague Quadrennial
Scenic design
Exhibitions | Prague Quadrennial | [
"Engineering"
] | 1,005 | [
"Scenic design",
"Design"
] |
11,044,599 | https://en.wikipedia.org/wiki/Searching%20the%20conformational%20space%20for%20docking | In molecular modelling, docking is a method which predicts the preferred orientation of one molecule to another when bound together in a stable complex. In the case of protein docking, the search space consists of all possible orientations of the protein with respect to the ligand. Flexible docking in addition considers all possible conformations of the protein paired with all possible conformations of the ligand.
With present computing resources, it is impossible to exhaustively explore these search spaces; instead, there are many strategies which attempt to sample the search space with optimal efficiency. Most docking programs in use account for a flexible ligand, and several attempt to model a flexible protein receptor. Each "snapshot" of the pair is referred to as a pose.
Molecular dynamics (MD) simulations
In this approach, proteins are typically held rigid, and the ligand is allowed to freely explore its conformational space. The generated conformations are then docked successively into the protein, and an MD simulation consisting of a simulated annealing protocol is performed. This is usually supplemented with short MD energy minimization steps, and the energies determined from the MD runs are used for ranking the overall scoring. Although this is a computationally expensive method (involving potentially hundreds of MD runs), it has some advantages: for example, no specialized energy/scoring functions are required. MD force-fields can typically be used to find poses that are reasonable and can be compared with experimental structures.
The Distance Constrained Essential Dynamics method (DCED) has been used to generate multiple structures for docking, called eigenstructures. This approach, although avoiding most of the costly MD calculations, can capture the essential motions involved in a flexible receptor, representing a form of coarse-grained dynamics.
Shape-complementarity methods
The most common technique used in many docking programs, shape-complementarity methods focus on the match between the receptor and the ligand in order to find an optimal pose. Programs include DOCK, FRED, GLIDE, SURFLEX, eHiTS and many more. Most methods describe the molecules in terms of a finite number of descriptors that include structural complementarity and binding complementarity. Structural complementarity is mostly a geometric description of the molecules, including solvent-accessible surface area, overall shape and geometric constraints between atoms in the protein and ligand. Binding complementarity takes into account features like hydrogen bonding interactions, hydrophobic contacts and van der Waals interactions to describe how well a particular ligand will bind to the protein. Both kinds of descriptors are conveniently represented in the form of structural templates which are then used to quickly match potential compounds (either from a database or from the user-given inputs) that will bind well at the active site of the protein. Compared to the all-atom molecular dynamics approaches, these methods are very efficient in finding optimal binding poses for the protein and ligand.
Genetic algorithms
Two of the most used docking programs belong to this class: GOLD and AutoDock. Genetic algorithms allow the exploration of a large conformational space – which is basically spanned by the protein and ligand jointly in this case – by representing each spatial arrangement of the pair as a “gene” with a particular energy. The entire genome thus represents the complete energy landscape which is to be explored. The simulation of the evolution of the genome is carried out by cross-over techniques similar to biological evolution, where random pairs of individuals (conformations) are “mated” with the possibility for a random mutation in the offspring. These methods have proven very useful in sampling the vast state-space while maintaining closeness to the actual process involved.
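A deliberately toy sketch of the idea (the pose representation, the pseudo-energy, and all parameters below are invented for illustration and are far simpler than what GOLD or AutoDock actually do): each individual is a rigid-body pose, and new poses are produced by crossover and random mutation.

```python
import random, math

def pseudo_energy(pose):
    """Toy 'binding energy' with a minimum near x=1, y=-2, theta=0.5 (lower is better)."""
    x, y, theta = pose
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + (theta - 0.5) ** 2

def crossover(p1, p2):
    """Mix two parent poses gene by gene."""
    return tuple(a if random.random() < 0.5 else b for a, b in zip(p1, p2))

def mutate(pose, rate=0.2, scale=0.3):
    """Randomly perturb each pose parameter with probability `rate`."""
    return tuple(g + random.gauss(0, scale) if random.random() < rate else g for g in pose)

def genetic_docking(pop_size=60, generations=200):
    population = [(random.uniform(-5, 5), random.uniform(-5, 5),
                   random.uniform(-math.pi, math.pi)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=pseudo_energy)             # rank poses by score
        parents = population[: pop_size // 2]           # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=pseudo_energy)

best = genetic_docking()
print("best pose:", best, "score:", pseudo_energy(best))
```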
Although genetic algorithms are quite successful in sampling the large conformational space, many docking programs require the protein to remain fixed, while allowing only the ligand to flex and adjust to the active site of the protein. Genetic algorithms also require multiple runs to obtain reliable answers regarding ligands that may bind to the protein. The time typically needed to run a genetic algorithm to find a proper pose may be longer; hence these methods may not be as efficient as shape-complementarity-based approaches in screening large databases of compounds. Recent improvements in using grid-based evaluation of energies, limiting the exploration of conformational changes to only local areas (active sites) of interest, and improved tabling methods have significantly enhanced the performance of genetic algorithms and made them suitable for virtual screening applications.
References
Docking
Protein structure
Biochemistry methods
Bioinformatics | Searching the conformational space for docking | [
"Chemistry",
"Engineering",
"Biology"
] | 892 | [
"Biochemistry methods",
"Biological engineering",
"Molecular physics",
"Bioinformatics",
"Theoretical chemistry",
"Molecular modelling",
"Structural biology",
"Biochemistry",
"Protein structure"
] |
11,044,649 | https://en.wikipedia.org/wiki/Scoring%20functions%20for%20docking | In the fields of computational chemistry and molecular modelling, scoring functions are mathematical functions used to approximately predict the binding affinity between two molecules after they have been docked. Most commonly one of the molecules is a small organic compound such as a drug and the second is the drug's biological target such as a protein receptor. Scoring functions have also been developed to predict the strength of intermolecular interactions between two proteins or between protein and DNA.
Utility
Scoring functions are widely used in drug discovery and other molecular modelling applications. These include:
Virtual screening of small molecule databases of candidate ligands to identify novel small molecules that bind to a protein target of interest and therefore are useful starting points for drug discovery
De novo design (design "from scratch") of novel small molecules that bind to a protein target
Lead optimization of screening hits to optimize their affinity and selectivity
A potentially more reliable but much more computationally demanding alternative to scoring functions are free energy perturbation calculations.
Prerequisites
Scoring functions are normally parameterized (or trained) against a data set consisting of experimentally determined binding affinities between molecular species similar to the species that one wishes to predict.
For currently used methods aiming to predict affinities of ligands for proteins the following must first be known or predicted:
Protein tertiary structure – arrangement of the protein atoms in three-dimensional space. Protein structures may be determined by experimental techniques such as X-ray crystallography or solution phase NMR methods or predicted by homology modelling.
Ligand active conformation – three-dimensional shape of the ligand when bound to the protein
Binding-mode – orientation of the two binding partners relative to each other in the complex
The above information yields the three-dimensional structure of the complex. Based on this structure, the scoring function can then estimate the strength of the association between the two molecules in the complex using one of the methods outlined below. Finally the scoring function itself may be used to help predict both the binding mode and the active conformation of the small molecule in the complex, or alternatively a simpler and computationally faster function may be utilized within the docking run.
Classes
There are four general classes of scoring functions:
Force field – affinities are estimated by summing the strength of intermolecular van der Waals and electrostatic interactions between all atoms of the two molecules in the complex using a force field. The intramolecular energies (also referred to as strain energy) of the two binding partners are also frequently included. Finally since the binding normally takes place in the presence of water, the desolvation energies of the ligand and of the protein are sometimes taken into account using implicit solvation methods such as GBSA or PBSA.
Empirical – based on counting the number of various types of interactions between the two binding partners. Counting may be based on the number of ligand and receptor atoms in contact with each other or by calculating the change in solvent accessible surface area (ΔSASA) in the complex compared to the uncomplexed ligand and protein. The coefficients of the scoring function are usually fit using multiple linear regression methods (a minimal fitting sketch follows this list). These interaction terms of the function may include, for example:
hydrophobic — hydrophobic contacts (favorable),
hydrophobic — hydrophilic contacts (unfavorable) (Accounts for unmet hydrogen bonds, which are an important enthalpic contribution to binding. One lost hydrogen bond can account for 1–2 orders of magnitude in binding affinity.),
number of hydrogen bonds (favorable contribution to affinity, especially if shielded from solvent, if solvent exposed no contribution),
number of rotatable bonds immobilized in complex formation (unfavorable conformational entropy contribution).
Knowledge-based – based on statistical observations of intermolecular close contacts in large 3D databases (such as the Cambridge Structural Database or Protein Data Bank) which are used to derive statistical "potentials of mean force". This method is founded on the assumption that close intermolecular interactions between certain types of atoms or functional groups that occur more frequently than one would expect by a random distribution are likely to be energetically favorable and therefore contribute favorably to binding affinity.
Machine-learning – Unlike these classical scoring functions, machine-learning scoring functions are characterized by not assuming a predetermined functional form for the relationship between binding affinity and the structural features describing the protein-ligand complex. In this way, the functional form is inferred directly from the data. Machine-learning scoring functions have consistently been found to outperform classical scoring functions at binding affinity prediction of diverse protein-ligand complexes. This has also been the case for target-specific complexes, although the advantage is target-dependent and mainly depends on the volume of relevant data available. When appropriate care is taken, machine-learning scoring functions tend to strongly outperform classical scoring functions at the related problem of structure-based virtual screening. Furthermore, if data specific for the target is available, this performance gap widens. These reviews provide a broader overview on machine-learning scoring functions for structure-based drug design. The choice of decoys for a given target is one of the most important factors for training and testing any scoring function.
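As a sketch of the empirical class described above (all descriptor values and affinities below are fabricated solely to demonstrate the fitting step, and ordinary least squares stands in for the multiple linear regression):

```python
import numpy as np

# Hypothetical descriptors for five protein-ligand complexes:
# [hydrogen bonds, hydrophobic contacts, rotatable bonds frozen, delta-SASA (A^2)]
descriptors = np.array([
    [3, 12, 2, 150.0],
    [1,  8, 4,  90.0],
    [5, 20, 1, 240.0],
    [2, 10, 3, 110.0],
    [4, 15, 2, 180.0],
])
measured_affinity = np.array([-7.8, -5.1, -9.6, -6.0, -8.3])   # invented affinity-like values

# Fit weights w and intercept b so that score = descriptors @ w + b approximates the affinities.
design = np.hstack([descriptors, np.ones((len(descriptors), 1))])
coeffs, *_ = np.linalg.lstsq(design, measured_affinity, rcond=None)
w, b = coeffs[:-1], coeffs[-1]

def empirical_score(d):
    """Predicted binding affinity as a weighted sum of interaction terms."""
    return float(np.dot(d, w) + b)

print("fitted weights:", w, "intercept:", b)
print("prediction for a new complex:", empirical_score([2, 14, 3, 130.0]))
```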
The first three types, force-field, empirical and knowledge-based, are commonly referred to as classical scoring functions and are characterized by assuming their contributions to binding are linearly combined. Due to this constraint, classical scoring functions are unable to take advantage of large amounts of training data.
Refinement
Since different scoring functions are relatively co-linear, consensus scoring functions may not improve accuracy significantly. This claim went somewhat against the prevailing view in the field, since previous studies had suggested that consensus scoring was beneficial.
A perfect scoring function would be able to predict the binding free energy between the ligand and its target. But in reality both the computational methods and the computational resources put restraints to this goal. So most often methods are selected that minimize the number of false positive and false negative ligands. In cases where an experimental training set of data of binding constants and structures are available a simple method has been developed to refine the scoring function used in molecular docking.
References
Docking
Computational chemistry
Cheminformatics
Protein structure
Bioinformatics | Scoring functions for docking | [
"Chemistry",
"Engineering",
"Biology"
] | 1,249 | [
"Biological engineering",
"Molecular physics",
"Bioinformatics",
"Theoretical chemistry",
"Computational chemistry",
"Molecular modelling",
"Cheminformatics",
"Structural biology",
"nan",
"Protein structure"
] |
11,044,843 | https://en.wikipedia.org/wiki/Hayashi%20limit | The Hayashi limit is a theoretical constraint upon the maximum radius of a star for a given mass. When a star is fully within hydrostatic equilibrium—a condition where the inward force of gravity is matched by the outward pressure of the gas—the star cannot exceed the radius defined by the Hayashi limit. This has important implications for the evolution of a star, both during the formative contraction period and later when the star has consumed most of its hydrogen supply through nuclear fusion.
A Hertzsprung-Russell diagram displays a plot of a star's surface temperature against the luminosity. On this diagram, the Hayashi limit forms a nearly vertical line at about 3,500 K. The outer layers of low temperature stars are always convective, and models of stellar structure for fully convective stars do not provide a solution to the right of this line. Thus in theory, stars are constrained to remain to the left of this limit during all periods when they are in hydrostatic equilibrium, and the region to the right of the line forms a type of "forbidden zone". Note, however, that there are exceptions to the Hayashi limit. These include collapsing protostars, as well as stars with magnetic fields that interfere with the internal transport of energy through convection.
Red giants are stars that have expanded their outer envelope in order to support the nuclear fusion of helium. This moves them up and to the right on the H-R diagram. However, they are constrained by the Hayashi limit not to expand beyond a certain radius. Stars that find themselves across the Hayashi limit have large convection currents in their interior driven by massive temperature gradients. Additionally, such states are unstable, so the stars rapidly adjust, moving in the Hertzsprung–Russell diagram until they reach the Hayashi limit.
When lower-mass main-sequence stars start expanding and becoming red giants, they revisit the Hayashi track. The Hayashi limit constrains the asymptotic-giant-branch evolution of stars, which is important in the late evolution of stars and can be observed, for example, in the ascending branches of the Hertzsprung–Russell diagrams of globular clusters, which have stars of approximately the same age and composition.
The Hayashi limit is named after Chūshirō Hayashi, a Japanese astrophysicist.
Despite its importance to protostars and late stage main sequence stars, the Hayashi limit was only recognized in Hayashi’s paper in 1961. This late recognition may be because the properties of the Hayashi track required numerical calculations that were not fully developed before.
Derivation of the limit
We can derive the relation between the luminosity, temperature and pressure for a simple model of a fully convective star, and from the form of this relation we can infer the Hayashi limit. This is an extremely crude model of what occurs in convective stars, but it has good qualitative agreement with the full model with fewer complications. We follow the derivation in Kippenhahn, Weigert, and Weiss in Stellar Structure and Evolution.
Nearly all of the interior part of convective stars has an adiabatic stratification (corrections to this are small for fully convective regions), such that
$$\nabla = \frac{d\ln T}{d\ln P} = \nabla_{\mathrm{ad}},$$
which holds for an adiabatic expansion of an ideal gas.
We assume that this relation holds from the interior to the surface of the star—the surface is called the photosphere. We assume $\nabla_{\mathrm{ad}}$ to be constant throughout the interior of the star with value 0.4. However, we obtain the correct distinctive behavior.
For the interior we consider a simple polytropic relation between P and T:
$$P = C\,T^{\,1+n},$$
with the index $n = 3/2$ (so that $d\ln T/d\ln P = 1/(1+n) = 0.4$).
We assume the relation above to hold up to the photosphere, where we assume a simple absorption law
$$\kappa = \kappa_0\,P^{a}\,T^{b}.$$
Then, we use the hydrostatic equilibrium equation and integrate it with respect to the radius, which gives the pressure at the photosphere as
$$P_0 \approx \frac{2}{3}\,\frac{GM}{R^{2}\,\kappa(P_0, T_{\mathrm{eff}})}.$$
For the solution in the interior we set $P = P_0$ and $T = T_{\mathrm{eff}}$ in the P–T relation and then eliminate the pressure from this equation.
Luminosity is given by the Stefan–Boltzmann law applied to a perfect black body:
$$L = 4\pi R^{2}\,\sigma T_{\mathrm{eff}}^{4}.$$
Thus, any value of R corresponds to a certain point in the Hertzsprung–Russell diagram.
Finally, after some algebra, this is the equation for the Hayashi limit in the Hertzsprung–Russell diagram:
$$\log T_{\mathrm{eff}} = A\,\log L + B\,\log M + \text{constant},$$
with coefficients $A$ and $B$ that depend on the exponents $a$ and $b$ of the absorption law and on the polytropic index $n$.
Takeaways from plugging in values of $a$ and $b$ appropriate for the opacity model of a cool atmosphere dominated by the hydrogen anion (H⁻):
The Hayashi limit must be far to the right in the Hertzsprung–Russell diagram which means temperatures have to be low.
The Hayashi limit must be very steep. The gradient of Luminosity with respect to temperature has to be large.
The Hayashi limit shifts slightly to the left in the Hertzsprung–Russell diagram for increasing M.
These predictions are supported by numerical simulations of stars.
What happens when stars cross the limit
Until now we have made no claims on the stability of models to the left of, to the right of, or at the Hayashi limit in the Hertzsprung–Russell diagram. To the left of the Hayashi limit, we have $\nabla < \nabla_{\mathrm{ad}}$ in some region and part of the model is radiative. The model is fully convective at the Hayashi limit with $\nabla = \nabla_{\mathrm{ad}}$. Models to the right of the Hayashi limit would have to have $\nabla > \nabla_{\mathrm{ad}}$.
If a star is formed such that some region in its deep interior has $\nabla > \nabla_{\mathrm{ad}}$, large convective fluxes develop there with large velocities. The convective fluxes of energy cool down the interior rapidly until $\nabla \approx \nabla_{\mathrm{ad}}$ and the star has moved to the Hayashi limit. In fact, it can be shown from the mixing-length model that even a small excess $\nabla - \nabla_{\mathrm{ad}}$ can transport energy from the deep interior to the surface by convective fluxes. This will happen within the short timescale for the adjustment of convection, which is still larger than the timescales for non-equilibrium processes in the star such as the hydrodynamic adjustment associated with the thermal time scale. Hence, the limit between an “allowed” stable region (left) and a “forbidden” unstable region (right) for stars of given M and composition that are in hydrostatic equilibrium and have a fully adjusted convection is the Hayashi limit.
See also
Eddington limit
References
Concepts in astrophysics
Stellar evolution | Hayashi limit | [
"Physics"
] | 1,276 | [
"Concepts in astrophysics",
"Astrophysics",
"Stellar evolution"
] |
11,045,901 | https://en.wikipedia.org/wiki/Slow%20science | Slow science is part of the broader slow movement. It is based on the belief that science should be a slow, steady, methodical process, and that scientists should not be expected to provide "quick fixes" to society's problems. Slow science supports curiosity-driven scientific research and opposes performance targets. Slow science is a continually developing school of thought in the scientific community. Followers of slow science practices are generally opposed to the current model of research which is seen as constrained by the need for continued funding. The slow science perspective attributes the overinflation of scientific publishing, and rise in fraudulent publishing with the requirement for researchers and institutions to create a justification for continued funding. The term slow science was first popularised in “Another Science is Possible: A Manifesto for Slow Science” by researcher Isabelle Stengers in 2018. The idea of “publish or perish”, which too links limitations in the quality of research to financial constraints, has been around since the early 20th century. The slow science philosophy has been described as both a way to approach scientific research, and a science led movement which acts as a critique of science's function in neoliberal society.
Slow science has developed its key principles through the contributions of many scholars and organisations. Key principles include calls to shift away from scientific research that places its value in research output, research funding reform, and ridding scientific research of coerced political agendas. Some especially well-known scientists have had the freedom to apply slow science principles. Slow science has gained prevalence especially in western European scientific communities, in progressive research universities. Slow science as a whole has gradually gained support through individual and organisational advocacy. Criticism, due to the movement's relatively small impact, has been limited.
Development
The development of slow science is traced back to “publish or perish”, first noted by Clarence Marsh Case in 1923. Slow science, differently from publish or perish, acts as a critique of the concept. Slow science contributions have narrowed down the publish or perish culture into the scientific field. Slow science attributes many of its key principles to publish or perish, and is considered to be the way in which scientists can support a movement for change. Since the rise in prevalence of the term slow science in the 21st century, the two terms have been used as opposites by academics and journalists alike.
Early contributions to the development of slow science came from research universities across western Europe. Before the development of slow science came the popularisation, in university spheres, of the term fast science. This was most notably expressed by Dr Joël Candau in 2010, in his open letter to the University of Nice, in which he stated “Fast science, like fast food, favours quantity over quality”. In 2010 Ruth Müller coined the term slow science, whilst in her position at the AIIA, helping to develop an international slow science network. Most significant in its early development was the publication of the article “The Slow Science Manifesto” by a group of German scholars known as the Slow Science Academy. This manifesto was the first time slow science had reached a non-academic audience.
Most notable, however, was the popularisation of the term slow science in 2018. The publication “Another Science is Possible: A Manifesto for Slow Science” by Isabelle Stengers is the first book published on the topic of slow science. During this period, there was an increase in media activity surrounding slow science, which helped grow collaboration with other slow movements, such as slow fashion. Isabelle Stengers' contribution was significant as that of a scholar who could attract media attention beyond academic spheres. Notability increased as a result of globally recognised psychologist Uta Frith's journal article on slow science. Since the start of the COVID-19 pandemic, publications on slow science have been limited.
Key principles
The key principles of slow science are based on the premise that the most accurate and thorough scientific research is as a result of a lack of financial constraint, absence of an immediate deadline, and no directive from market based influences. The key principles of slow science are a critique of modern scientific funding, and advocate for public knowledge, and changed practices in scientific research.
Slow scientific practice
At the core of the slow science agenda is the desire to shift from scientific research which places its value in output of research, to research which is for the benefit of the public. Slow science advocates for the adoption of slow scientific practices. An example of this is the support of increased cumulative scientific research, which as a collaboration of thorough scientific research, will be more effective in addressing great global issues. In adopting these values into scientific practice, slow science aims to reduce the stress on research by removing funding-linked performance outcomes. Slow scientific practices involve a methodical approach, published with a rigorous peer review system, that slow science researchers believe will be able to reduce fraudulent results and increase research which leads to innovation.
Slow scientific practice aims for quality of work over quantity of output. Therefore, there is a distinct opposition to the overinflation of academic research. Amongst the slow science community, phrases such as the “least publishable unit” and “publons” have been used to satirise the rise in peer-reviewed publications with minimal depth in their research. A theme within slow science is a positive outlook on the prospects of scientific advancement, coupled with a perceived necessity to advocate for the publishing of high-quality, peer-reviewed study.
Research funding reform
Slow science proposes a change to the current structure of research funding, directed especially at the increase in funding sourced from corporate donorship in academic institutions, as well as the rise in private independent research institutes. The slow science movement has called for all government-funded research to be for the benefit of broader society, and has also called for an overall reform of the funding system. Slow science groups, such as Slow Science Belgium, have called increased government funding a necessity, and for key-performance-indicator-linked targets for results to be removed from grant conditions. Uta Frith, a key contributor to slow science, has expressed the idea that as government funding increases, so will risk-taking in research, leading to the potential for greater scientific innovation.
Apolitical driven research
Apolitical research, from a slow science perspective, opposes research which aims to show alternatives to scientific precedent for political benefits. As well as this, the slow science movement extends this to what is described as an inherently political shift in the privatisation of knowledge, that is, scientific research which has moved from publications to private research and development of products. Slow Science believes in ending prolonged research for the benefit of governments, on issues such as resource depletion, global warming and urban overdevelopment, as to doubt scientific consensus for policy benefits. The slow science movement perceives financial pressures from governments or corporations as forms of suppression of broad research, and believes in an inherent link between funding and scientific agenda. Hence, slow scientific practice is inherently a political struggle, to remove political and ideological constraints in research.
In scientific research
Environmental research
Subscribers to slow science principles consider environmental research to be a field of research which has become continually scrutinised by policy makers who search for cost effective and convenient solutions to climate change. Slow science aims to protect climate research through what is considered the separation of scientific judgement from the judgement of social and political issues. Slow scientists advocate for the removal of climatology from the influence of the speculative economy, and the undermining of research by policy makers who control scientific funding. The slow science approach to environmental research is one which is inherently a critique of capitalism, and one which considers current constraints in funding leading to scientists promoting “best possible” outcomes in reporting of climate.
Clinical research
The application of slow science principles is especially dedicated to changes in clinical research as a whole. German psychologist Uta Frith describes her breakthrough 1980s research into the link between dyslexia and phonological processing as work which had the freedom to make errors and run numerous trials. Uta Frith, in her criticism of “fast science”, identifies that the possible future of clinical research does not allow for numerous and large trials to identify potential errors. For slow science, the rise in “convenience sampling” by academic institutions which provide researchers with pools of students is a cost-cutting method which skews results by not producing a diverse sample. Clinical research, from a slow science perspective, should be freed from significant financial constraints which potentially lead to unreliable or fraudulent data.
Interdisciplinary Study
In collaborative study, slow scientists aim for longer periods spent in interdisciplinary studies. Key to this are learning cycles, which aim to provide an understanding of how other fields create rationale for results gained. As well as this, some slow scientists have attributed collaborative success to ethical models, such as ethics of care. These can include general guides to considerate collaboration, or direct models such as Tronto's ethical framework. Slow scientists believe financial constraints rush scientific research, and this creates an overspecialization of researchers. By taking on long-term collaborative projects, slow scientists believe greater global issues can be tackled.
Reception
Slow science has been met with some support from the scientific community, through both individuals and community groups. Two of the most notable slow science activists are researcher Isabelle Stengers, and world recognised developmental psychologist Uta Frith. Both have contributed through publications, with the key work of the Slow Science movement; with publications “Another Science is Possible: A Manifesto for Slow Science” and “Fast Lane to Slow Science”, being introductory perspectives into the slow science movement.
In 2010, anthropologist Dr. Joël Candau wrote an appeal to the University of Nice against fast science principles, and received 4,000 signatures from scientists. The unexpected support from this open letter indicated a rise in the popularity of the term “fast science”, and the beginning of an evident support base.
Whilst individual publications have been credited towards providing research methods used through a slow science lens, Slow Scientific organisations have developed to act as a union of scientists who advocate for funding reform in scientific research. The German “Slow Science Academy”, and “Slow Science Belgium”, are organisations of scientists which advocate and practise slow science principles in their research.
The slow science movement, due to its only recent notability, has received limited direct criticism from university administrators. However, arguments developed especially by university administrations have promoted financial and time constraints as a way to increase positive pressure on academic research. Research institutions also expect researchers to accept certain terms of constraint so as to promote professionalism and drive focused innovation. The most prevalent disagreement between slow science advocates and university administrations is on the basis of tenure. University administrations have opposed advocates of tenureship, preferring a broader pool of casual academics who can produce a higher number of “deliverables” for the same amount of funding.
See also
"Publish or perish"
References
Further reading
Science in society
Slow movement
Philosophy of technology | Slow science | [
"Technology"
] | 2,204 | [
"Philosophy of technology",
"Science and technology studies"
] |
11,046,096 | https://en.wikipedia.org/wiki/Shashlik%20%28physics%29 | In high energy physics detectors, shashlik is a layout for a sampling calorimeter. It refers to a stack of alternating slices of absorber (e.g. lead, brass) and scintillator materials (crystal or plastic), which is penetrated by a wavelength shifting fiber running perpendicular to the absorber and scintillator tiles.
The absorber has a small interaction length, so that a particle radiates energy in a short track. The scintillator material produces visible light when traversed by the particle's radiated energy. In an electromagnetic calorimeter, this radiated energy takes the form of photons and/or electron–positron pairs. The energy of the particle may then be measured by the intensity of scintillation light produced by the various scintillator slices. An example detector that uses a shashlik electromagnetic calorimeter is the LHCb detector.
This type of calorimeter was likely named after the shashlik, a popular form of shish kebab sold by street vendors in the former Soviet Union, by the Russian and Ukrainian scientists who first proposed it.
References
Calorimetry
Particle physics | Shashlik (physics) | [
"Physics"
] | 239 | [
"Particle physics stubs",
"Particle physics"
] |
11,047,538 | https://en.wikipedia.org/wiki/Earth%E2%80%93ionosphere%20waveguide | The Earth–ionosphere waveguide is the phenomenon in which certain radio waves can propagate in the space between the ground and the boundary of the ionosphere.
Because the ionosphere contains charged particles, it can behave as a conductor. The earth operates as a ground plane, and the resulting cavity behaves as a large waveguide.
Extremely low frequency (ELF) (< 3 kHz) and very low frequency (VLF) (3–30 kHz) signals can propagate efficiently in this waveguide. For instance, lightning strikes launch a signal called radio atmospherics, which can travel many thousands of kilometers, because they are confined between the Earth and the ionosphere.
The round-the-world nature of the waveguide produces resonances, like a cavity, which are at ~7 Hz.
Introduction
Radio propagation within the ionosphere depends on frequency, angle of incidence, time of day, season, Earth's magnetic field, and solar activity. At vertical incidence, waves with frequencies larger than the electron plasma frequency ($f_e \simeq 9\sqrt{N_e}$ in Hz) of the F-layer maximum ($N_e$ in m⁻³ is the electron density) can propagate through the ionosphere nearly undisturbed. Waves with frequencies smaller than $f_e$ are reflected within the ionospheric D-, E-, and F-layers. $f_e$ is of the order of 8–15 MHz during day time conditions. For oblique incidence, the critical frequency becomes larger.
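A quick numerical check of the quoted daytime range (the electron density used below is an assumed, representative F-layer peak value, not a figure from the text):

```python
import math

def plasma_frequency_hz(electron_density_m3):
    """Electron plasma frequency f_e = sqrt(N_e e^2 / (eps0 m_e)) / (2 pi),
    approximately 9*sqrt(N_e) Hz for N_e in m^-3."""
    e = 1.602176634e-19        # elementary charge, C
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    m_e = 9.1093837015e-31     # electron mass, kg
    return math.sqrt(electron_density_m3 * e**2 / (eps0 * m_e)) / (2 * math.pi)

# An assumed daytime F-layer peak density of ~1e12 m^-3 gives a critical frequency
# near 9 MHz, consistent with the 8-15 MHz range quoted above.
print(plasma_frequency_hz(1e12) / 1e6, "MHz")
```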
Very low frequencies (VLF: 3–30 kHz) and extremely low frequencies (ELF: <3 kHz) are reflected at the ionospheric D- and lower E-layer. An exception is whistler propagation of lightning signals along the geomagnetic field lines.
The wavelengths of VLF waves (10–100 km) are already comparable with the height of the ionospheric D-layer (about 70 km during the day, and 90 km during the night). Therefore, ray theory is only applicable for propagation over short distances, while mode theory must be used for larger distances. The region between Earth's surface and the ionospheric D-layer behaves thus like a waveguide for VLF- and ELF-waves.
In the presence of the ionospheric plasma and the geomagnetic field, electromagnetic waves exist for frequencies which are larger than the gyrofrequency of the ions (about 1 Hz). Waves with frequencies smaller than the gyrofrequency are called hydromagnetic waves. The geomagnetic pulsations with periods of seconds to minutes as well as Alfvén waves belong to that type of waves.
Transfer function
The prototype of a short vertical rod antenna is a vertical electric Hertz dipole in which electric alternating currents of frequency f flow. Its radiation of electromagnetic waves within the Earth-ionospheric waveguide can be described by a transfer function T(ρ,ω):
$$T(\rho,\omega) = \frac{E_z}{E_o},$$
where $E_z$ is the vertical component of the electric field at the receiver in a distance ρ from the transmitter, $E_o$ is the electric field of a Hertzian dipole in free space, and $\omega = 2\pi f$ the angular frequency. In free space, it is $T = 1$. Evidently, the Earth–ionosphere waveguide is dispersive because the transfer function depends on frequency. This means that phase- and group velocity of the waves are frequency dependent.
Ray theory
In the VLF range, the transfer function is the sum of a ground wave which arrives directly at the receiver and multihop sky waves reflected at the ionospheric D-layer (Figure 1).
For the real Earth's surface, the ground wave becomes dissipated and depends on the orography along the ray path. For VLF waves at shorter distances, this effect is, however, of minor importance, and the reflection factor of the Earth can be taken equal to one in a first approximation.
At shorter distances, only the first hop sky wave is of importance. The D-layer can be simulated by a magnetic wall (reflection factor Ri = −1) with a fixed boundary at a virtual height h, which means a phase jump of 180° at the reflection point. In reality, the electron density of the D-layer increases with altitude, and the wave is bounded as shown in Figure 2.
The sum of ground wave and first hop wave displays an interference pattern with interference minima if the difference between the ray paths of ground and first sky wave is half a wavelength (or a phase difference of 180°). The last interference minimum on the ground (z = 0) between the ground wave and the first sky wave is at a horizontal distance of
with c the velocity of light. In the example of Figure 3, this is at about 500 km distance.
Wave mode theory
The theory of ray propagation of VLF waves breaks down at larger distances because successive multihop sky waves are involved in the sum of these waves, and the sum diverges. In addition, it becomes necessary to take into account the spherical Earth. Mode theory, in which the field is represented as a sum of eigenmodes of the Earth–ionosphere waveguide, is valid in this range of distances. The wave modes have fixed vertical structures of their vertical electric field components, with maximum amplitudes at the bottom and zero amplitudes at the top of the waveguide.
In the case of the fundamental first mode, this vertical structure is a quarter wavelength. With decreasing frequency, the eigenvalue becomes imaginary at the cutoff frequency, where the mode changes to an evanescent wave. For the first mode, with its quarter-wavelength vertical structure, this happens at a cutoff frequency of roughly f1 = c/(4h), with h the waveguide height, below which that mode will not propagate (Figure 4).
The attenuation of the modes increases with wavenumber n. Therefore, essentially only the first two modes are involved in the wave propagation. The first interference minimum between these two modes is at the same distance as the last interference minimum of ray theory, indicating the equivalence of both theories.
As seen in Figure 3, the spacing between the mode interference minima is constant and about 1000 km in this example. The first mode becomes dominant at distances greater than about 1500 km, because the second mode is more strongly attenuated than the first mode.
In the range of ELF waves, only mode theory is appropriate. The fundamental mode is the zeroth mode (Figure 4). The D-layer becomes here an electric wall (Ri = 1). Its vertical structure is simply a vertical electric field constant with altitude.
In particular, resonances of the zeroth mode exist for waves whose wavelength is an integral fraction of the Earth's circumference; they have the frequencies
fn ≈ n c/(2πa)
with a the Earth's radius. The first resonance peaks are at 7.5, 15, and 22.5 Hz. These are the Schumann resonances. The spectral signals from lightning are amplified at those frequencies.
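A minimal Python sketch (not part of the original article) evaluating this approximate relation with c = 3×10^8 m/s and a ≈ 6,370 km reproduces those peak frequencies:

```python
# Illustrative only: first few resonance frequencies from f_n ≈ n*c/(2*pi*a).
import math

c = 3.0e8       # speed of light, m/s
a = 6.37e6      # Earth's radius, m
for n in range(1, 4):
    print(f"f_{n} ≈ {n * c / (2 * math.pi * a):.1f} Hz")
# prints approximately 7.5, 15.0 and 22.5 Hz
```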
Waveguide characteristics
The above discussion merely illustrates a simple picture of mode and ray theory. More detailed treatments require a large computer program. In particular, it is difficult to
solve the problem of the horizontal and vertical inhomogeneities of the waveguide. The effect of the Earth's curvature is that the field strength slightly increases near the antipode. Due to the influence of the Earth's magnetic field, the medium becomes anisotropic, so that the ionospheric reflection factor is in reality a matrix. This means that a vertically polarized incident wave, after reflection at the ionospheric D-layer, is converted into a vertically and a horizontally polarized wave. Moreover, the geomagnetic field gives rise to a nonreciprocity of VLF waves: waves propagating from east to west are more strongly attenuated than vice versa. A phase slipping appears near the distance of the deep interference minimum. During the times of sunrise and/or sunset, there is sometimes a phase gain or loss of 360° because of the irreversible behavior of the first sky wave.
The dispersion characteristics of the Earth–ionosphere waveguide can be used for locating thunderstorm activity by measuring the difference in group time delay of lightning signals (sferics) at adjacent frequencies, up to distances of 10,000 km. The Schumann resonances make it possible to determine the global lightning activity.
See also
Alfvén resonator
Atmospheric duct
Shortwave radio
Skywave
Tropospheric ducting
References and notes
Notes
Citations
Electromagnetism
Ionosphere
Radio frequency propagation | Earth–ionosphere waveguide | [
"Physics"
] | 1,687 | [
"Physical phenomena",
"Electromagnetism",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves",
"Fundamental interactions"
] |
3,100,179 | https://en.wikipedia.org/wiki/Locally%20nilpotent | In the mathematical field of commutative algebra, an ideal I in a commutative ring A is locally nilpotent at a prime ideal p if Ip, the localization of I at p, is a nilpotent ideal in Ap.
In non-commutative algebra and group theory, an algebra or group is locally nilpotent if and only if every finitely generated subalgebra or subgroup is nilpotent. The subgroup generated by the normal locally nilpotent subgroups is called the Hirsch–Plotkin radical and is the generalization of the Fitting subgroup to groups without the ascending chain condition on normal subgroups.
A locally nilpotent ring is one in which every finitely generated subring is nilpotent: locally nilpotent rings form a radical class, giving rise to the Levitzki radical.
References
Commutative algebra | Locally nilpotent | [
"Mathematics"
] | 185 | [
"Fields of abstract algebra",
"Commutative algebra"
] |
3,100,319 | https://en.wikipedia.org/wiki/Aerodynamic%20force | In fluid mechanics, an aerodynamic force is a force exerted on a body by the air (or other gas) in which the body is immersed, and is due to the relative motion between the body and the gas.
Force
There are two causes of aerodynamic force:
the normal force due to the pressure on the surface of the body
the shear force due to the viscosity of the gas, also known as skin friction.
Pressure acts normal to the surface, and shear force acts parallel to the surface. Both forces act locally. The net aerodynamic force on the body is equal to the pressure and shear forces integrated over the body's total exposed area.
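As an informal illustration (not from the source), the surface integral can be approximated numerically by summing contributions over discrete panels; the panel data structure below is hypothetical:

```python
# Illustrative 2-D sketch: net aerodynamic force as the sum of pressure and shear
# contributions over surface panels. Each panel carries its pressure p, wall shear
# stress tau, outward unit normal n, tangential unit vector t, and area.
def net_aerodynamic_force(panels):
    fx = fy = 0.0
    for p, tau, (nx, ny), (tx, ty), area in panels:
        fx += (-p * nx + tau * tx) * area   # pressure pushes along -n, shear acts along t
        fy += (-p * ny + tau * ty) * area
    return fx, fy

# Example: two panels with made-up values (Pa, Pa, unit vectors, m^2).
panels = [(101000.0, 2.0, (0.0, 1.0), (1.0, 0.0), 0.5),
          (101500.0, 2.0, (0.0, -1.0), (1.0, 0.0), 0.5)]
print(net_aerodynamic_force(panels))   # small friction drag plus a net upward force from the pressure difference
```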
When an airfoil moves relative to the air, it generates an aerodynamic force determined by the velocity of relative motion, and the angle of attack. This aerodynamic force is commonly resolved into two components, both acting through the center of pressure:
drag is the force component parallel to the direction of relative motion,
lift is the force component perpendicular to the direction of relative motion.
In addition to these two forces, the body may experience an aerodynamic moment.
The force created by propellers and jet engines is called thrust, and is also an aerodynamic force (since it acts on the surrounding air). The aerodynamic force on a powered airplane is commonly represented by three vectors: thrust, lift and drag.
The other force acting on an aircraft during flight is its weight, which is a body force and not an aerodynamic force.
See also
Fluid dynamics
References
Force | Aerodynamic force | [
"Chemistry",
"Engineering"
] | 299 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
3,100,378 | https://en.wikipedia.org/wiki/Ian%20G.%20Macdonald | Ian Grant Macdonald (11 October 1928 – 8 August 2023) was a British mathematician known for his contributions to symmetric functions, special functions, Lie algebra theory and other aspects of algebra, algebraic combinatorics, and combinatorics.
Early life and education
Born in London, he was educated at Winchester College and Trinity College, Cambridge, graduating in 1952.
Career
He then spent five years as a civil servant. He was offered a position at Manchester University in 1957 by Max Newman, on the basis of work he had done while outside academia. In 1960 he moved to the University of Exeter, and in 1963 became a Fellow of Magdalen College, Oxford. Macdonald became Fielden Professor at Manchester in 1972, and professor at Queen Mary College, University of London, in 1976.
He worked on symmetric products of algebraic curves, Jordan algebras and the representation theory of groups over local fields. In 1972 he proved the Macdonald identities, after a pattern known to Freeman Dyson. His 1979 book Symmetric Functions and Hall Polynomials has become a classic. Symmetric functions are an old theory, part of the theory of equations, to which both K-theory and representation theory lead. His was the first text to integrate much classical theory, such as Hall polynomials, Schur functions, the Littlewood–Richardson rule, with the abstract algebra approach. It was both an expository work and, in part, a research monograph, and had a major impact in the field. The Macdonald polynomials are now named after him. The Macdonald conjectures from 1982 also proved most influential.
Macdonald was elected a Fellow of the Royal Society in 1979. He was an invited speaker in 1970 at the International Congress of Mathematicians (ICM) in Nice and a plenary speaker in 1998 at the ICM in Berlin. In 1991 he received the Pólya Prize of the London Mathematical Society. In 2002 he received an honorary doctorate from the University of Amsterdam. He was awarded the 2009 Steele Prize for Mathematical Exposition. In 2012 he became a fellow of the American Mathematical Society.
Personal life and death
Ian G. Macdonald died on 8 August 2023, at the age of 94.
Selected publications
Macdonald, I. G. Affine Hecke Algebras and Orthogonal Polynomials. Cambridge Tracts in Mathematics, 157. Cambridge University Press, Cambridge, 2003. x+175 pp.
Macdonald, I. G. Symmetric Functions and Hall Polynomials. Second edition. Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1995. x+475 pp.
Macdonald, I. G. Symmetric Functions and Orthogonal Polynomials. Dean Jacqueline B. Lewis Memorial Lectures presented at Rutgers University, New Brunswick, New Jersey. University Lecture Series, 12. American Mathematical Society, Providence, Rhode Island, 1998. xvi+53 pp.
Atiyah, M. F.; Macdonald, I. G. Introduction to Commutative Algebra. Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1969. ix+128 pp. ; 1994 pbk edition
References
External links
Biographical notice
1928 births
2023 deaths
People educated at Winchester College
Alumni of Trinity College, Cambridge
Mathematicians from London
Symmetric functions
Academics of Queen Mary University of London
Academics of the University of Manchester
Academics of the University of Exeter
Fellows of Magdalen College, Oxford
Fellows of the American Mathematical Society
Fellows of the Royal Society
20th-century British mathematicians
21st-century British mathematicians
Grand Officers of the Order of Liberty | Ian G. Macdonald | [
"Physics",
"Mathematics"
] | 698 | [
"Algebra",
"Symmetric functions",
"Symmetry"
] |
3,100,521 | https://en.wikipedia.org/wiki/Radiation%20chemistry | Radiation chemistry is a subdivision of nuclear chemistry which studies the chemical effects of ionizing radiation on matter. This is quite different from radiochemistry, as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide.
Radiation interactions with matter
As ionizing radiation moves through matter its energy is deposited through interactions with the electrons of the absorber. The result of an interaction between the radiation and the absorbing species is removal of an electron from an atom or molecular bond to form radicals and excited species. The radical species then proceed to react with each other or with other molecules in their vicinity. It is the reactions of the radical species that are responsible for the changes observed following irradiation of a chemical system.
Charged radiation species (α and β particles) interact through Coulombic forces between the charges of the electrons in the absorbing medium and the charged radiation particle. These interactions occur continuously along the path of the incident particle until the kinetic energy of the particle is sufficiently depleted. Uncharged species (γ photons, x-rays) undergo a single event per photon, totally consuming the energy of the photon and leading to the ejection of an electron from a single atom. Electrons with sufficient energy proceed to interact with the absorbing medium identically to β radiation.
An important factor that distinguishes different radiation types from one another is the linear energy transfer (LET), which is the rate at which the radiation loses energy with distance traveled through the absorber. Low LET species are usually low mass, either photons or electron mass species (β particles, positrons) and interact sparsely along their path through the absorber, leading to isolated regions of reactive radical species. High LET species are usually greater in mass than one electron, for example α particles, and lose energy rapidly resulting in a cluster of ionization events in close proximity to one another. Consequently, the heavy particle travels a relatively short distance from its origin.
Areas containing a high concentration of reactive species following absorption of energy from radiation are referred to as spurs. In a medium irradiated with low LET radiation, the spurs are sparsely distributed across the track and are unable to interact. For high LET radiation, the spurs can overlap, allowing for inter-spur reactions, leading to different yields of products when compared to the same medium irradiated with the same energy of low LET radiation.
Reduction of organics by solvated electrons
A recent area of work has been the destruction of toxic organic compounds by irradiation; after irradiation, "dioxins" (polychlorodibenzo-p-dioxins) are dechlorinated in the same way as PCBs can be converted to biphenyl and inorganic chloride. This is because the solvated electrons react with the organic compound to form a radical anion, which decomposes by the loss of a chloride anion. If a deoxygenated mixture of PCBs in isopropanol or mineral oil is irradiated with gamma rays, then the PCBs will be dechlorinated to form inorganic chloride and biphenyl. The reaction works best in isopropanol if potassium hydroxide (caustic potash) is added. The base deprotonates the hydroxydimethylmethyl radical to be converted into acetone and a solvated electron, as the result the G value (yield for a given energy due to radiation deposited in the system) of chloride can be increased because the radiation now starts a chain reaction, each solvated electron formed by the action of the gamma rays can now convert more than one PCB molecule. If oxygen, acetone, nitrous oxide, sulfur hexafluoride or nitrobenzene is present in the mixture, then the reaction rate is reduced. This work has been done recently in the US, often with used nuclear fuel as the radiation source.
In addition to the work on the destruction of aryl chlorides, it has been shown that aliphatic chlorine and bromine compounds such as perchloroethylene, Freon (1,1,2-trichloro-1,2,2-trifluoroethane) and halon-2402 (1,2-dibromo-1,1,2,2-tetrafluoroethane) can be dehalogenated by the action of radiation on alkaline isopropanol solutions. Again a chain reaction has been reported.
In addition to the work on the reduction of organic compounds by irradiation, some work on the radiation induced oxidation of organic compounds has been reported. For instance, the use of radiogenic hydrogen peroxide (formed by irradiation) to remove sulfur from coal has been reported. In this study it was found that the addition of manganese dioxide to the coal increased the rate of sulfur removal. The degradation of nitrobenzene under both reducing and oxidizing conditions in water has been reported.
Reduction of metal compounds
In addition to the reduction of organic compounds by solvated electrons, it has been reported that upon irradiation a pertechnetate solution at pH 4.1 is converted to a colloid of technetium dioxide. Irradiation of a solution at pH 1.8 forms soluble Tc(IV) complexes. Irradiation of a solution at pH 2.7 forms a mixture of the colloid and the soluble Tc(IV) compounds. Gamma irradiation has been used in the synthesis of nanoparticles of gold on iron oxide (Fe2O3).
It has been shown that the irradiation of aqueous solutions of lead compounds leads to the formation of elemental lead. When an inorganic solid such as bentonite and sodium formate are present then the lead is removed from the aqueous solution.
Polymer modification
Another key area uses radiation chemistry to modify polymers. Using radiation, it is possible to convert monomers to polymers, to crosslink polymers, and to break polymer chains. Both man-made and natural polymers (such as carbohydrates) can be processed in this way.
Water chemistry
Both the harmful effects of radiation upon biological systems (induction of cancer and acute radiation injuries) and the useful effects of radiotherapy involve the radiation chemistry of water. The vast majority of biological molecules are present in an aqueous medium; when water is exposed to radiation, the water absorbs energy, and as a result forms chemically reactive species that can interact with dissolved substances (solutes). Water is ionized to form a solvated electron and H2O+, the H2O+ cation can react with water to form a hydrated proton (H3O+) and a hydroxyl radical (HO.). Furthermore, the solvated electron can recombine with the H2O+ cation to form an excited state of the water. This excited state then decomposes to species such as hydroxyl radicals (HO.), hydrogen atoms (H.) and oxygen atoms (O.). Finally, the solvated electron can react with solutes such as solvated protons or oxygen molecules to form hydrogen atoms and dioxygen radical anions, respectively. The fact that oxygen changes the radiation chemistry might be one reason why oxygenated tissues are more sensitive to irradiation than the deoxygenated tissue at the center of a tumor. The free radicals, such as the hydroxyl radical, chemically modify biomolecules such as DNA, leading to damage such as breaks in the DNA strands. Some substances can protect against radiation-induced damage by reacting with the reactive species generated by the irradiation of the water.
It is important to note that the reactive species generated by the radiation can take part in following reactions; this is similar to the idea of the non-electrochemical reactions which follow the electrochemical event which is observed in cyclic voltammetry when a non-reversible event occurs. For example, the SF5 radical formed by the reaction of solvated electrons and SF6 undergo further reactions which lead to the formation of hydrogen fluoride and sulfuric acid.
In water, the dimerization reaction of hydroxyl radicals can form hydrogen peroxide, while in saline systems the reaction of the hydroxyl radicals with chloride anions forms hypochlorite anions.
The action of radiation upon underground water is responsible for the formation of hydrogen which is converted by bacteria into methane.
Equipment
Radiation chemistry applied in industrial processing equipment
To process materials, either a gamma source or an electron beam can be used. The international type IV (wet storage) irradiator is a common design, of which the JS6300 and JS6500 gamma sterilizers (made by 'Nordion International', which used to trade as 'Atomic Energy of Canada Ltd') are typical examples. In these irradiation plants, the source is stored in a deep well filled with water when not in use. When the source is required, it is moved by a steel wire to the irradiation room where the products which are to be treated are present; these objects are placed inside boxes which are moved through the room by an automatic mechanism. By moving the boxes from one point to another, the contents are given a uniform dose. After treatment, the product is moved by the automatic mechanism out of the room. The irradiation room has very thick concrete walls (about 3 m thick) to prevent gamma rays from escaping. The source consists of 60Co rods sealed within two layers of stainless steel. The rods are combined with inert dummy rods to form a rack with a total activity of about 12.6PBq (340kCi).
Research equipment
While it is possible to do some types of research using an irradiator much like that used for gamma sterilization, it is common in some areas of science to use a time resolved experiment where a material is subjected to a pulse of radiation (normally electrons from a LINAC). After the pulse of radiation, the concentration of different substances within the material are measured by emission spectroscopy or Absorption spectroscopy, hence the rates of reactions can be determined. This allows the relative abilities of substances to react with the reactive species generated by the action of radiation on the solvent (commonly water) to be measured. This experiment is known as pulse radiolysis which is closely related to flash photolysis.
In the latter experiment the sample is excited by a pulse of light to examine the decay of the excited states by spectroscopy; sometimes the formation of new compounds can be investigated. Flash photolysis experiments have led to a better understanding of the effects of halogen-containing compounds upon the ozone layer.
Chemosensor
The SAW chemosensor is nonionic and nonspecific. It directly measures the total mass of each chemical compound as it exits the gas chromatography column and condenses on the crystal surface, thus causing a change in the fundamental acoustic frequency of the crystal. Odor concentration is directly measured with this integrating type of detector. Column flux is obtained from a microprocessor that continuously calculates the derivative of the SAW frequency.
See also
Radiolysis
Milton Burton
References
Nuclear chemistry | Radiation chemistry | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,324 | [
"Physical phenomena",
"Nuclear chemistry",
"Materials science",
"Radiation",
"Condensed matter physics",
"nan",
"Nuclear physics",
"Radiation effects"
] |
3,100,545 | https://en.wikipedia.org/wiki/Activity-based%20management | Activity-based management (ABM) is a method of identifying and evaluating activities that a business performs, using activity-based costing to carry out a value chain analysis or a re-engineering initiative to improve strategic and operational decisions in an organization.
Activity-based costing
Activity-based costing establishes relationships between overhead costs and activities so that costs can be more precisely allocated to products, services, or customer segments.
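For illustration (hypothetical figures, not from the source), overhead can be allocated by computing a cost rate per unit of each activity driver and charging products for the driver units they consume:

```python
# Illustrative activity-based costing allocation with made-up numbers.
activity_costs = {"machine setups": 20_000.0, "inspections": 10_000.0}
driver_usage = {                      # driver units consumed by each product
    "product A": {"machine setups": 30, "inspections": 10},
    "product B": {"machine setups": 10, "inspections": 40},
}

driver_totals = {a: sum(u[a] for u in driver_usage.values()) for a in activity_costs}
rates = {a: activity_costs[a] / driver_totals[a] for a in activity_costs}   # cost per driver unit

for product, usage in driver_usage.items():
    allocated = sum(rates[a] * units for a, units in usage.items())
    print(product, allocated)
# product A 17000.0
# product B 13000.0
```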
Activity-based management focuses on managing activities to reduce costs and improve customer value.
Kaplan and Cooper divide ABM into operational and strategic:
Operational ABM is about doing things right, using ABC information to improve efficiency. Those activities which add value to the product can be identified and improved. Activities that don't add value need to be reduced to cut costs without reducing product value.
Strategic ABM is about doing the right things, using ABC information to decide which products to develop and which activities to use. This can also be used for customer profitability analysis, identifying which customers are the most profitable and focusing on them more.
One of the key benefits of ABM is that it enables managers to understand product and customer profitability, the cost of business processes, and how to improve them (Alireza 2017).
Risks
A risk with ABM is that some activities have an implicit value, not necessarily reflected in a financial value added to any product. For instance, a particularly pleasant workplace can help attract and retain the best staff, but may not be identified as adding value in operational ABM. A customer who represents a loss based on committed activities, but who opens up leads in a new market, may be identified as a low value customer by a strategic ABM process.
Managers should interpret these values and use ABM as a "common, yet neutral, ground … this provides the basis for negotiation". ABM can give middle managers an understanding of costs to other teams to help them make decisions that benefit the whole organization, not just their activities' bottom line.
External links
McGuire, B. L. et al. (1998). "Implementing activity-based management in the banking industry", Journal of Bank Cost & Management Accounting.
Activity Based Management Advanced Implementation Group
References
Management accounting
Engineering management | Activity-based management | [
"Engineering"
] | 446 | [
"Engineering economics",
"Engineering management"
] |
3,100,560 | https://en.wikipedia.org/wiki/Ore%20condition | In mathematics, especially in the area of algebra known as ring theory, the Ore condition is a condition introduced by Øystein Ore, in connection with the question of extending beyond commutative rings the construction of a field of fractions, or more generally localization of a ring. The right Ore condition for a multiplicative subset S of a ring R is that for a in R and s in S, the intersection aS ∩ sR is not empty. A (non-commutative) domain for which the set of non-zero elements satisfies the right Ore condition is called a right Ore domain. The left case is defined similarly.
General idea
The goal is to construct the right ring of fractions R[S−1] with respect to a multiplicative subset S. In other words, we want to work with elements of the form as−1 and have a ring structure on the set R[S−1]. The problem is that there is no obvious interpretation of the product (as−1)(bt−1); indeed, we need a method to "move" s−1 past b. This means that we need to be able to rewrite s−1b as a product b1s1−1. Suppose s−1b = b1s1−1; then multiplying on the left by s and on the right by s1, we get bs1 = sb1. Hence we see the necessity, for a given a and s, of the existence of a1 and s1, with s1 in S, such that as1 = sa1.
Application
Since it is well known that each integral domain is a subring of a field of fractions (via an embedding) in such a way that every element is of the form rs−1 with s nonzero, it is natural to ask if the same construction can take a noncommutative domain and associate a division ring (a noncommutative field) with the same property. It turns out that the answer is sometimes "no", that is, there are domains which do not have an analogous "right division ring of fractions".
For every right Ore domain R, there is a unique (up to natural R-isomorphism) division ring D containing R as a subring such that every element of D is of the form rs−1 for r in R and s nonzero in R. Such a division ring D is called a ring of right fractions of R, and R is called a right order in D. The notion of a ring of left fractions and left order are defined analogously, with elements of D being of the form s−1r.
It is important to remember that the definition of R being a right order in D includes the condition that D must consist entirely of elements of the form rs−1. Any domain satisfying one of the Ore conditions can be considered a subring of a division ring, however this does not automatically mean R is a left order in D, since it is possible D has an element which is not of the form s−1r. Thus it is possible for R to be a right-not-left Ore domain. Intuitively, the condition that all elements of D be of the form rs−1 says that R is a "big" R-submodule of D. In fact the condition ensures RR is an essential submodule of DR. Lastly, there is even an example of a domain in a division ring which satisfies neither Ore condition (see examples below).
Another natural question is: "When is a subring of a division ring right Ore?" One characterization is that a subring R of a division ring D is a right Ore domain if and only if D is a flat left R-module .
A different, stronger version of the Ore conditions is usually given for the case where R is not a domain, namely that there should be a common multiple
c = au = bv
with u, v not zero divisors. In this case, Ore's theorem guarantees the existence of an over-ring called the (right or left) classical ring of quotients.
Examples
Commutative domains are automatically Ore domains, since for nonzero a and b, ab is a nonzero element of aR ∩ bR. Right Noetherian domains, such as right principal ideal domains, are also known to be right Ore domains. Even more generally, Alfred Goldie proved that a domain R is right Ore if and only if RR has finite uniform dimension. It is also true that right Bézout domains are right Ore.
A subdomain of a division ring which is not right or left Ore: If F is any field, and M is the free monoid on two symbols x and y, then the monoid ring F[M] does not satisfy any Ore condition, but it is a free ideal ring and thus indeed a subring of a division ring, by .
Multiplicative sets
The Ore condition can be generalized to other multiplicative subsets, and is presented in textbook form in and . A subset S of a ring R is called a right denominator set if it satisfies the following three conditions for every a, b in R, and s, t in S:
st in S; (The set S is multiplicatively closed.)
aS ∩ sR is not empty; (The set S is right permutable.)
If sa = sb, then there is some u in S with au = bu; (The set S is right reversible.)
If S is a right denominator set, then one can construct the ring of right fractions RS−1 similarly to the commutative case. If S is taken to be the set of regular elements (those elements a in R such that if b in R is nonzero, then ab and ba are nonzero), then the right Ore condition is simply the requirement that S be a right denominator set.
Many properties of commutative localization hold in this more general setting. If S is a right denominator set for a ring R, then the left R-module RS−1 is flat. Furthermore, if M is a right R-module, then the S-torsion, is an R-submodule isomorphic to , and the module is naturally isomorphic to a module MS−1 consisting of "fractions" as in the commutative case.
Notes
References
External links
PlanetMath page on Ore condition
PlanetMath page on Ore's theorem
PlanetMath page on classical ring of quotients
Ring theory | Ore condition | [
"Mathematics"
] | 1,299 | [
"Fields of abstract algebra",
"Ring theory"
] |
3,100,586 | https://en.wikipedia.org/wiki/Ore%27s%20theorem | Ore's theorem is a result in graph theory proved in 1960 by Norwegian mathematician Øystein Ore. It gives a sufficient condition for a graph to be Hamiltonian, essentially stating that a graph with sufficiently many edges must contain a Hamilton cycle. Specifically, the theorem considers the sum of the degrees of pairs of non-adjacent vertices: if every such pair has a sum that at least equals the total number of vertices in the graph, then the graph is Hamiltonian.
Formal statement
Let G be a (finite and simple) graph with n ≥ 3 vertices. We denote by deg v the degree of a vertex v in G, i.e. the number of edges in G incident to v. Then, Ore's theorem states that if
deg v + deg w ≥ n for every pair of distinct non-adjacent vertices v and w of G, (∗)
then G is Hamiltonian.
Proof
It is equivalent to show that every non-Hamiltonian graph does not obey condition (∗). Accordingly, let G be a graph on n ≥ 3 vertices that is not Hamiltonian, and let H be formed from G by adding edges one at a time that do not create a Hamiltonian cycle, until no more edges can be added. Let v and w be any two non-adjacent vertices in H. Then adding edge vw to H would create at least one new Hamiltonian cycle, and the edges other than vw in such a cycle must form a Hamiltonian path v1v2⋯vn in H with v1 = v and vn = w. For each index i in the range 2 ≤ i ≤ n, consider the two possible edges in H from v to vi and from vi−1 to w. At most one of these two edges can be present in H, for otherwise the cycle v1v2⋯vi−1vnvn−1⋯viv1 would be a Hamiltonian cycle. Thus, the total number of edges incident to either v or w is at most equal to the number of choices of the index i, which is n − 1. Therefore, H does not obey property (∗), which requires that this total number of edges (the degree sum of v and w) be greater than or equal to n. Since the vertex degrees in G are at most equal to the degrees in H, it follows that G also does not obey property (∗).
Algorithm
describes the following simple algorithm for constructing a Hamiltonian cycle in a graph meeting Ore's condition.
Arrange the vertices arbitrarily into a cycle, ignoring adjacencies in the graph.
While the cycle contains two consecutive vertices vi and vi + 1 that are not adjacent in the graph, perform the following two steps:
Search for an index j such that the four vertices vi, vi + 1, vj, and vj + 1 are all distinct and such that the graph contains edges from vi to vj and from vj + 1 to vi + 1
Reverse the part of the cycle between vi + 1 and vj (inclusive).
Each step increases the number of consecutive pairs in the cycle that are adjacent in the graph, by one or two pairs (depending on whether vj and vj + 1 are already adjacent), so the outer loop can only happen at most n times before the algorithm terminates, where n is the number of vertices in the given graph. By an argument similar to the one in the proof of the theorem, the desired index j must exist, or else the nonadjacent vertices vi and vi + 1 would have too small a total degree. Finding i and j, and reversing part of the cycle, can all be accomplished in time O(n). Therefore, the total time for the algorithm is O(n2), matching the number of edges in the input graph.
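A minimal Python sketch of this procedure (not from the cited source; it favors clarity over the O(n2) bookkeeping described above, and assumes the input graph, given as an adjacency-set dictionary, satisfies Ore's condition):

```python
def hamiltonian_cycle_ore(adj):
    """Return a vertex ordering that forms a Hamiltonian cycle, assuming Ore's condition."""
    cycle = list(adj)                      # step 1: arbitrary cyclic ordering of the vertices
    n = len(cycle)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if cycle[(i + 1) % n] in adj[cycle[i]]:
                continue                   # consecutive pair already adjacent in the graph
            cycle = cycle[i:] + cycle[:i]  # rotate so the non-adjacent pair sits at positions 0 and 1
            u, v = cycle[0], cycle[1]
            for j in range(2, n - 1):      # find j with edges u-cycle[j] and cycle[j+1]-v
                if cycle[j] in adj[u] and cycle[j + 1] in adj[v]:
                    cycle[1:j + 1] = reversed(cycle[1:j + 1])   # reverse the segment between them
                    changed = True
                    break
            break                          # rescan the modified cycle from the start
    return cycle

# Example on the complete graph K4, which trivially satisfies Ore's condition.
adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
print(hamiltonian_cycle_ore(adj))          # e.g. [1, 2, 3, 4]
```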
Related results
Ore's theorem is a generalization of Dirac's theorem that, when each vertex has degree at least n/2, the graph is Hamiltonian. For, if a graph meets Dirac's condition, then clearly each pair of vertices has degrees adding to at least n.
In turn Ore's theorem is generalized by the Bondy–Chvátal theorem. One may define a closure operation on a graph in which, whenever two nonadjacent vertices have degrees adding to at least n, one adds an edge connecting them; if a graph meets the conditions of Ore's theorem, its closure is a complete graph. The Bondy–Chvátal theorem states that a graph is Hamiltonian if and only if its closure is Hamiltonian; since the complete graph is Hamiltonian, Ore's theorem is an immediate consequence.
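A small Python sketch of this closure operation (illustrative helper, not from the source):

```python
def bondy_chvatal_closure(adj):
    """Repeatedly join non-adjacent vertex pairs whose degree sum is at least n."""
    n = len(adj)
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        verts = list(adj)
        for i, u in enumerate(verts):
            for v in verts[i + 1:]:
                if v not in adj[u] and len(adj[u]) + len(adj[v]) >= n:
                    adj[u].add(v)
                    adj[v].add(u)
                    changed = True
    return adj
```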
found a version of Ore's theorem that applies to directed graphs. Suppose a digraph G has the property that, for every two vertices u and v, either there is an edge from u to v or the outdegree of u plus the indegree of v equals or exceeds the number of vertices in G. Then, according to Woodall's theorem, G contains a directed Hamiltonian cycle. Ore's theorem may be obtained from Woodall by replacing every edge in a given undirected graph by a pair of directed edges. A closely related theorem by states that an n-vertex strongly connected digraph with the property that, for every two nonadjacent vertices u and v, the total number of edges incident to u or v is at least 2n − 1 must be Hamiltonian.
Ore's theorem may also be strengthened to give a stronger conclusion than Hamiltonicity as a consequence of the degree condition in the theorem. Specifically, every graph satisfying the conditions of Ore's theorem is either a regular complete bipartite graph or is pancyclic .
References
Extremal graph theory
Theorems in graph theory
Articles containing proofs
Hamiltonian paths and cycles | Ore's theorem | [
"Mathematics"
] | 1,081 | [
"Graph theory",
"Theorems in discrete mathematics",
"Mathematical relations",
"Extremal graph theory",
"Articles containing proofs",
"Theorems in graph theory"
] |
3,100,687 | https://en.wikipedia.org/wiki/Quasiregular%20representation | This article addresses the notion of quasiregularity in the context of representation theory and topological algebra. For other notions of quasiregularity in mathematics, see the disambiguation page quasiregular.
In mathematics, quasiregular representation is a concept of representation theory, for a locally compact group G and a homogeneous space G/H where H is a closed subgroup.
In line with the concepts of regular representation and induced representation, G acts on functions on G/H. If however Haar measures give rise only to a quasi-invariant measure on G/H, certain 'correction factors' have to be made to the action on functions, for
L2(G/H)
to afford a unitary representation of G on square-integrable functions. With appropriate scaling factors, therefore, introduced into the action of G, this is the quasiregular representation or modified induced representation.
Unitary representation theory
Topological groups | Quasiregular representation | [
"Mathematics"
] | 193 | [
"Topological spaces",
"Space (mathematics)",
"Topological groups"
] |
3,100,703 | https://en.wikipedia.org/wiki/Dongtan%2C%20Shanghai | Dongtan was a planned development described as an eco-city on the island of Chongming in Shanghai, China that was never built. Design began in 2005, and by 2010 the development had stalled. Adjacent to booming Shanghai, designers claimed Dongtan would be the world's first truly sustainable new urban development. Dongtan was presented at the United Nations World Urban Forum by China as an example of a purpose-built eco-city.
Reasons for the project's closure include its proposed location in a high-value wetlands area, tensions between its development partners (Arup, a British engineering company, and Shanghai Industrial Investment, a state-owned developer), and loss of political support (due to the jailing of Dongtan's top political backer, former Shanghai Communist Party chief Chen Liangyu, on corruption charges in 2008).
The project has been described as a failure because it was not built. However, as an example of design it has inspired and informed other cities worldwide. Ideas from Dongtan were incorporated into the renovation of the Chongming District as a net zero island. Dongtan became a model for a subsequently planned eco-city outside Tianjin.
Proposed design
Dongtan was to be located at the east end of Chongming Island, adjacent to the sensitive wetlands of the Chongming Dongtan National Nature Reserve, near the mouth of the Yangtze River and just north of Shanghai. Dongtan's first phase, a marina village of 20,000 inhabitants, was supposed to be unveiled at the 2010 World Expo in Shanghai. Some questioned the proposed city's potential effects on the surrounding wetlands. The director of the project, Peter Head, insisted it would not affect the wetlands. "First of all, water usually discharged into the river will be collected, treated, and recycled within the city boundaries," he said. "There will be a 2-mile buffer zone of eco-farm between city development and the wetlands." While farming is water intensive, relatively small amounts of water reach the plants themselves. Head said Dongtan "will capture and recycle water in the city and use recycled water to grow green vegetables hydroponically. This makes the whole water cycle much more efficient".
The developers planned to create a fully built city, with 80,000 residents by 2020.
London-based Arup and the Shanghai Industrial Investment Corporation (SIIC), the city's investment branch, originally partnered to create a master plan for Dongtan, an area three quarters the size of Manhattan. Their brief called for integrated sustainable urban planning and design to create a city as close to carbon-neutral as possible within economic constraints. Project planners estimated a population of 10,000 by 2010 and 500,000 by 2050.
Energy-efficient construction, waste-to-energy systems, and wind power were all part of the original plan.
As a strategic partner, Arup was to be responsible for a range of services, including urban design, sustainable energy management, waste management, renewable energy process implementation, architecture, infrastructure, and even the planning of communities and social structures. Peter Head, director of Arup's sustainable urban design, led the project for the firm from its London's office (during design, Arup claims to have offset the emissions of its team's travel to and from the site in cooperation with emissions brokerage firm CO2e). "Renewable energy will be used to reduce particulate emissions. Transport vehicles will run on batteries or hydrogen-fuel cells and not use any diesel or petrol, creating a relatively quiet city," according to Head's original plan. Other priorities included recycling organic waste to reduce landfills and generate clean energy. Planners in Dongtan planned to put meters in each house to display energy use.
History
McKinsey & Company was involved in developing the initial vision for the project. The British engineering consultancy firm Arup was contracted in 2005 by the developer, the Shanghai Industrial Investment Company (SIIC), to design and masterplan Dongtan as the first of a planned series of eco-cities.
The 2008 conviction of prominent supporter Chen Liangyu contributed to the project's failure.
Reaction
The reaction to Dongtan has been mixed. Former Mayor of London Ken Livingstone praised Dongtan as pioneering work leading to a more sustainable future. His sentiments were echoed by other prominent British politicians, including Gordon Brown and Tony Blair.
Critics have argued that Dongtan will not have a big impact on existing Chinese cities, which will still house the majority of the population.
The main designer, Thomas V. Harwood III, is also taking part in many environmentally less friendly projects in China, including airports and office blocks. In 2008, Arup received the "Greenwasher of the Year Award" from Ethical Corporation magazine.
Several sources described the project as a Potemkin village.
Transport
Dongtan station (Shanghai Metro) (东滩站), future metro station on Chongming line of Shanghai Metro, opening in 2026
See also
Julie Sze, Fantasy Islands: Chinese Dreams and Ecological Fears in an Age of Climate Crisis, 2015, Univ of California Press,
Herbert Girardet and Zhao Yan, Shanghai Dongtan: An Eco-City, SIIC, 2006,
Huangbaiyu
Masdar City
Eco-Cities in China
References
External links
IEEE Spectrum article 2007-07
Biz China Update - Chinese Cities Add "Eco-Franchise" to Urban Planning Wish List
China Economic Review - Dongtan: Eco-Potemkin
Dongtan – The line changes on the greenwash eco city in China
Shanghaiist - Whatever happened to Dongtan?
Building - Corruption scandal delays Dongtan by two years
Whatever happened to the Dongtan eco-city?
China's pioneering eco-city of Dongtan stalls Daily Telegraph
- In China, overambition reins in eco-city plans - Christian Science Monitor
Dongtan, China's Flagship Ecocity Project, R.I.P. - Treehugger.com
- Environment 360 - China's Grand Plans for Eco-Cities Now Lie Abandoned
- Fail: Behind China's Pop-up City Flop
- Plans Shrivel for Chinese Eco-City
"Pop-Up Cities: China Builds a Bright Green Metropolis". IFCE, 24 March 2007. (4,500 words)
Proposed buildings and structures in Shanghai
Neighbourhoods of Shanghai
Energy in China
Sustainable transport
Proposed populated places
Environmental issues in China
Chongming District | Dongtan, Shanghai | [
"Physics"
] | 1,289 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
3,100,839 | https://en.wikipedia.org/wiki/Anaplasia | Anaplasia () is a condition of cells with poor cellular differentiation, losing the morphological characteristics of mature cells and their orientation with respect to each other and to endothelial cells. The term also refers to a group of morphological changes in a cell (nuclear pleomorphism, altered nuclear-cytoplasmic ratio, presence of nucleoli, high proliferation index) that point to a possible malignant transformation.
Such loss of structural differentiation is especially seen in most, but not all, malignant neoplasms. Sometimes, the term also includes an increased capacity for multiplication. Lack of differentiation is considered a hallmark of aggressive malignancies (for example, it differentiates leiomyosarcomas from leiomyomas). The term anaplasia literally means "to form backward". It implies dedifferentiation, or loss of structural and functional differentiation of normal cells. It is now known, however, that at least some cancers arise from stem cells in tissues; in these tumors failure of differentiation, rather than dedifferentiation of specialized cells, account for undifferentiated tumors.
Anaplastic cells display marked pleomorphism (variability). The nuclei are characteristically extremely hyperchromatic (darkly stained) and large. The nuclear-cytoplasmic ratio may approach 1:1 instead of the normal 1:4 or 1:6. Giant cells that are considerably larger than their neighbors may be formed and possess either one enormous nucleus or several nuclei (syncytia). Anaplastic nuclei are variable and bizarre in size and shape. The chromatin is coarse and clumped, and nucleoli may be of astounding size. More important, mitoses are often numerous and distinctly atypical; anarchic multiple spindles may be seen and sometimes appear as tripolar or quadripolar forms. Also, anaplastic cells usually fail to develop recognizable patterns of orientation to one another (i.e., they lose normal polarity). They may grow in sheets, with total loss of communal structures, such as gland formation or stratified squamous architecture. Anaplasia is the most extreme disturbance in cell growth encountered in the spectrum of cellular proliferations.
See also
Pleomorphism
List of biological development disorders
References
Oncology
Induced stem cells | Anaplasia | [
"Biology"
] | 473 | [
"Induced stem cells",
"Stem cell research"
] |
3,100,908 | https://en.wikipedia.org/wiki/Vector%20fields%20on%20spheres | In mathematics, the discussion of vector fields on spheres was a classical problem of differential topology, beginning with the hairy ball theorem, and early work on the classification of division algebras.
Specifically, the question is how many linearly independent smooth nowhere-zero vector fields can be constructed on a sphere in n-dimensional Euclidean space. A definitive answer was provided in 1962 by Frank Adams. It was already known, by direct construction using Clifford algebras, that there were at least ρ(n) − 1 such fields (see definition below). Adams applied homotopy theory and topological K-theory to prove that no more independent vector fields could be found. Hence ρ(n) − 1 is the exact number of pointwise linearly independent vector fields that exist on an (n − 1)-dimensional sphere.
Technical details
In detail, the question applies to the 'round spheres' and to their tangent bundles: in fact since all exotic spheres have isomorphic tangent bundles, the Radon–Hurwitz numbers ρ(n) determine the maximum number of linearly independent sections of the tangent bundle of any homotopy sphere. The case of n odd is taken care of by the Poincaré–Hopf index theorem (see hairy ball theorem), so the case of n even is an extension of that. Adams showed that the maximum number of continuous (smooth would be no different here) pointwise linearly-independent vector fields on the (n − 1)-sphere is exactly ρ(n) − 1.
The construction of the fields is related to the real Clifford algebras, which is a theory with a periodicity modulo 8 that also shows up here. By the Gram–Schmidt process, it is the same to ask for (pointwise) linear independence or fields that give an orthonormal basis at each point.
Radon–Hurwitz numbers
The Radon–Hurwitz numbers ρ(n) occur in earlier work of Johann Radon (1922) and Adolf Hurwitz (1923) on the Hurwitz problem on quadratic forms. For n written as the product of an odd number and a power of two 2^b, write
b = c + 4d, where 0 ≤ c ≤ 3.
Then
ρ(n) = 2^c + 8d.
The first few values of ρ(2n), starting with n = 1, are:
2, 4, 2, 8, 2, 4, 2, 9, 2, 4, 2, 8, 2, 4, 2, 10, ...
For odd n, the value of the function ρ(n) is one.
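A minimal Python sketch (illustrative, not from the source) of this recipe reproduces the list above:

```python
def radon_hurwitz(n: int) -> int:
    """rho(n), computed from n = (odd) * 2**b with b = c + 4*d and 0 <= c <= 3."""
    b = 0
    while n % 2 == 0:
        n //= 2
        b += 1
    c, d = b % 4, b // 4
    return 2 ** c + 8 * d

print([radon_hurwitz(n) for n in range(2, 33, 2)])
# [2, 4, 2, 8, 2, 4, 2, 9, 2, 4, 2, 8, 2, 4, 2, 10]
```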
These numbers occur also in other, related areas. In matrix theory, the Radon–Hurwitz number counts the maximum size of a linear subspace of the real n × n matrices for which each non-zero matrix is a similarity transformation, i.e. a product of an orthogonal matrix and a scalar matrix. In quadratic forms, the Hurwitz problem asks for multiplicative identities between quadratic forms. The classical results were revisited in 1952 by Beno Eckmann. They are now applied in areas including coding theory and theoretical physics.
References
Differential topology
Theorems in topology | Vector fields on spheres | [
"Mathematics"
] | 582 | [
"Mathematical theorems",
"Theorems in topology",
"Topology",
"Differential topology",
"Mathematical problems"
] |
3,101,343 | https://en.wikipedia.org/wiki/Pick-up%20line | A pick-up line or chat-up line is a conversation opener with the intent of engaging a person for romance or dating. As overt and sometimes humorous displays of romantic interest, pick-up lines advertise the wit of their speakers to their target listeners.
Pick-up lines range from straightforward conversation openers such as introducing oneself, providing information about oneself, or asking someone about their likes and common interests, to more elaborate attempts including flattery or humour.
Novices are advised to avoid standardised and hackneyed lines (particularly those resembling country songs) and to put their opening in an interrogative form, if possible.
See also
Flirting
Limerence
Seduction community
Romance (love)
Wit
References
Further reading
Ovid, The Art of Love (2 ad)
Interpersonal relationships
Seduction community | Pick-up line | [
"Biology"
] | 171 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
3,101,703 | https://en.wikipedia.org/wiki/Diethyl%20sulfoxide | Diethyl sulfoxide, C4H10OS, is a sulfur-containing organic compound.
References
Sulfoxides | Diethyl sulfoxide | [
"Chemistry"
] | 27 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
3,102,474 | https://en.wikipedia.org/wiki/Stationary%20ergodic%20process | In probability theory, a stationary ergodic process is a stochastic process which exhibits both stationarity and ergodicity. In essence this implies that the random process will not change its statistical properties with time and that its statistical properties (such as the theoretical mean and variance of the process) can be deduced from a single, sufficiently long sample (realization) of the process.
Stationarity is the property of a random process which guarantees that its statistical properties, such as the mean value, its moments and variance, will not change over time. A stationary process is one whose probability distribution is the same at all times. For more information see stationary process.
An ergodic process is one which conforms to the ergodic theorem. The theorem allows the time average of a conforming process to equal the ensemble average. In practice this means that statistical sampling can be performed at one instant across a group of identical processes or sampled over time on a single process with no change in the measured result.
A simple example of a violation of ergodicity is a measured process which is the superposition of two underlying processes,
each with its own statistical properties. Although the measured process may be stationary in the long term, it is not appropriate to consider the sampled distribution to be the reflection of a single (ergodic) process: The ensemble average is meaningless. Also see ergodic theory and ergodic process.
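A small numerical illustration (not from the source): each realization below is drawn from one of two underlying Gaussian processes with different means, so the superposition is stationary but the time average of a single realization does not converge to the average over the whole ensemble.

```python
import random

def realization(length):
    mean = random.choice([0.0, 10.0])              # one underlying process per realization
    return [mean + random.gauss(0.0, 1.0) for _ in range(length)]

x = realization(100_000)
time_average = sum(x) / len(x)                     # close to 0 or close to 10
ensemble_average = 0.5 * 0.0 + 0.5 * 10.0          # averaging over both sub-processes
print(time_average, ensemble_average)
```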
See also
Measure-preserving dynamical system
References
Peebles, P. Z., 2001, Probability, Random Variables and Random Signal Principles, McGraw-Hill Inc, Boston,
Ergodic theory
"Mathematics"
] | 339 | [
"Ergodic theory",
"Dynamical systems"
] |
3,102,518 | https://en.wikipedia.org/wiki/Welder%20certification | Welder certification, (also known as welder qualification) is a process which examines and documents a welder's capability to create welds of acceptable quality following a well defined welding procedure.
Method
Welder certification is based on specially designed tests to determine a welder's skill and ability to deposit sound weld metal. The main part of the welder's test consists of welding one or more test coupons which are then examined using non-destructive and destructive methods. The extent of certification is described by a number of variables, which include the specific welding process, type of deposited metal, thickness, joint design, position, backing, and others. Most often, the test is conducted in accordance with a particular code. Depending on product requirements the test can be administered under the auspices of a national or international organization, such as the American Welding Society (AWS), or American Society of Mechanical Engineers (ASME), but manufacturers may specify their own standards and requirements as well. Most certifications expire after a certain time limit, and have different requirements for renewal or extension of the certification.
In the USA, welder qualification is performed according to AWS D1.1, ASME Section IX and API 1104 standards, which are also used in some other countries.
Some States have their own Welder Qualifications that supersede AWS Qualifications, but most defer to AWS, ASME or API.
In Canada, welder qualification is carried out according to CSA Standards and ASME. The ASME code is typically used for pressure vessel and pressure piping applications, and CSA Standards are used for structural, general manufacturing and non-pressure applications. There are 3 major CSA Standards to which welders may be qualified: CSA W47.1 for steels (including stainless steels), CSA W47.2 for aluminum, and CSA W186 for reinforcing bars. Under these CSA standards, welder qualification testing is carried out every 2 years by the Canadian Welding Bureau to ensure ongoing competence.
In Europe, the European Committee for Standardization (CEN) has adopted the ISO standards on welder qualification (ISO 9606), to replace the old European EN 287 series. Operators of automated welding systems are certified according to EN 1418. In Europe welders are often certified by third party Personnel Certification Bodies, like The Welding Institute (TWI/CSWIP).
Welders involved in the manufacture of equipment that falls within the scope of the Pressure Equipment Directive must be approved by a competent third party which may be either a notified body or a third-party organization recognized by a Member State.
Once a welder passes a test (or a series of tests), their employer or third party involved will certify their ability to pass the test, and the limitations or extent they are qualified to weld, as a written document (welder qualification test record, or WQTR). Normally this document is valid for a limited period (usually for two years), after which the welder must be retested. However some Qualifications are only valid for a single project, while others are unlimited as long as welders do not go beyond a specified length of time without performing that specific type of welding (this period is typically 6 months). Welders must maintain a log to demonstrate they have maintained their Qualifications.
Welding inspector certification
In addition to welders and welding machine operators, there are also schemes to independently certify welding inspectors and related specialities. The duties of the welding inspector are described in ISO 14731; however the requirement for inspector certification are not standardized, so there are differences in requirement between the various schemes. Some notable schemes established by personnel certification bodies are those of the American Welding Society, of the British Institute of Non-Destructive Testing (PCN), of The Welding Institute (CSWIP) and of the Canadian Welding Bureau (CSA W178.2).
The American Welding Society offers the following programs:
Certified Associate Welding Inspector
Certified Welding Inspector
Senior Certified Welding Inspector
Certified Radiographic Interpreter
The British Institute of Non-Destructive Testing offers three levels of certification:
PCN Level 1
PCN Level 2 Weld Inspection
PCN Level 3 Weld Inspection with radiographic interpretation
The Welding Institute (TWI) in the United Kingdom offers the following certification scheme:
CSWIP 3.0 (Level 1): Visual Welding Inspector
CSWIP 3.1 (Level 2): Welding Inspector
CSWIP 3.2 (Level 3): Senior Welding Inspector; with or without radiographic interpretation (3.2.1 or 3.2.2 respectively)
The Canadian Welding Bureau offers the following programs:
Level 1 Certified Welding Inspector
Level 2 Certified Welding Inspector
Level 3 Certified Welding Inspector
The BINDT/PCN and TWI/CSWIP schemes are accredited by UKAS under ISO/IEC 17024. There are many other general schemes, as well as sector specific schemes.
In 2008, the American Petroleum Institute introduced the API 577 Advanced Welding Inspection and Metallurgy programme of certification. Certification is issued following the successful completion of a multiple choice exam which is based on the recommended practice document API 577. Certification identifies the candidate as a 'Welding Inspection and Metallurgy Professional', as opposed to a certified welding inspector under other programmes.
See also
Welding
Welding Procedure Specification
List of welding codes
References
External links and further reading
Canadian Welding Bureau (CWB) home page
American Welding Society (AWS) home page
Canadian Welding Association (CWA) home page
TWI Certification Ltd (UK) CSWIP scheme
Certification | Welder certification | [
"Engineering"
] | 1,137 | [
"Welding",
"Mechanical engineering"
] |
3,102,609 | https://en.wikipedia.org/wiki/Mean%20effective%20pressure | The mean effective pressure (MEP) is a quantity relating to the operation of a reciprocating engine and is a measure of an engine's capacity to do work that is independent of engine displacement. Despite having the dimension of pressure, MEP cannot be measured. When quoted as an indicated mean effective pressure (IMEP), it may be thought of as the average pressure acting on a piston during the different portions of its cycle. When friction losses are subtracted from the IMEP, the result is the brake mean effective pressure (BMEP).
Derivation
Let:
P = power output in watt;
p = mean effective pressure in megapascal;
V_d = displacement volume in cubic centimetre;
n_c = number of cycles per revolution (for a 4-stroke engine, n_c = 1/2, for a 2-stroke engine, n_c = 1);
N = number of revolutions per second;
ω = angular velocity in radian per second, i.e. ω = 2πN;
T = torque in newton-metre.
(With these units, 1 MPa·cm³ = 1 J, so the mixed units combine directly.)
Then, BMEP may be used to determine an engine's power output as follows:
P = p · V_d · n_c · N
Since we know that power is:
P = 2πNT = ωT
We now see that BMEP is a measure expressing torque per displacement:
p · V_d · n_c · N = 2πNT
And thus, the equation for BMEP in terms of torque is:
p = 2πT / (n_c · V_d)
Speed has dropped out of the equation, and the only variables are the torque and displacement volume. Since the range of maximum brake mean effective pressures for good engine designs is well established, we now have a displacement-independent measure of the torque-producing capacity of an engine design (a specific torque of sorts). This is useful for comparing engines of different displacements. Mean effective pressure is also useful for initial design calculations; that is, given a torque, standard MEP values can be used to estimate the required engine displacement. However, mean effective pressure does not reflect the actual pressures inside an individual combustion chamber (although the two are certainly related) and serves only as a convenient measure of performance.
Brake mean effective pressure (BMEP) is calculated from measured dynamometer torque. Net indicated mean effective pressure (IMEP) is calculated using the indicated power; i.e., the pressure volume integral in the work per cycle equation. Sometimes the term FMEP (friction mean effective pressure) is used as an indicator of the mean effective pressure lost to friction (or friction torque) and is just the difference between IMEP and BMEP.
Examples
MEP from torque and displacement
A four-stroke engine produces 159 N·m of torque, and displaces 2000 cm³. Its BMEP is therefore
p = 2π · 159 N·m / (1/2 · 2000 cm³) ≈ 1.0 MPa
Power from MEP and crankshaft speed
If we know the crankshaft speed, we can also determine the engine's power output from the MEP figure:
P = p · V_d · n_c · N
In our example, the engine puts out 159 N·m of torque at 3600 min⁻¹ (= 60 s⁻¹):
P = 1.0 MPa · 2000 cm³ · 1/2 · 60 s⁻¹
Thus:
P ≈ 60 kW
As piston engines usually have their maximum torque at a lower rotating speed than the maximum power output, the BMEP is lower at full power (at higher rotating speed). If the same engine is rated 72 kW at 5400 min⁻¹ = 90 s⁻¹, and its BMEP is 0.80 MPa, we get the following equation:
72,000 W = 0.80 MPa · 2000 cm³ · 1/2 · 90 s⁻¹
Then:
T = p · V_d · n_c / (2π) = 0.80 MPa · 2000 cm³ · 1/2 / (2π) ≈ 127 N·m
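For readers who want to check the arithmetic, here is a minimal Python sketch of the relations above; the helper names and default arguments are illustrative choices of ours, not part of any standard library.

```python
import math

def bmep(torque_nm, displacement_m3, cycles_per_rev=0.5):
    """Brake mean effective pressure in pascal.

    cycles_per_rev is 1/2 for a four-stroke engine (one power cycle
    every two revolutions) and 1 for a two-stroke engine.
    """
    return 2 * math.pi * torque_nm / (cycles_per_rev * displacement_m3)

def power_from_mep(mep_pa, displacement_m3, rev_per_s, cycles_per_rev=0.5):
    """Power output in watt from MEP, displacement and crankshaft speed."""
    return mep_pa * displacement_m3 * cycles_per_rev * rev_per_s

# Worked example from the text: 159 N·m, 2000 cm³ (= 0.002 m³), four-stroke.
p = bmep(159, 0.002)
print(p / 1e6)                                   # ≈ 1.0 MPa
print(power_from_mep(p, 0.002, 60) / 1e3)        # ≈ 60 kW at 3600 min⁻¹
print(power_from_mep(0.80e6, 0.002, 90) / 1e3)   # ≈ 72 kW at 5400 min⁻¹, BMEP 0.80 MPa
```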
Types of mean effective pressures
Mean effective pressure (MEP) is defined by the measurement location and the method of calculation; some commonly used MEPs are given here:
Brake mean effective pressure (BMEP) - Mean effective pressure calculated from measured brake torque.
Indicated mean effective pressure (IMEP) - Mean effective pressure calculated from in-cylinder pressure over the complete engine cycle (720° in a four-stroke, 360° in a two-stroke). IMEP may be determined by planimetering the area in an engine's pV-diagram. Since naturally aspirated four-stroke engines must perform pumping work to suck the charge into the cylinder, and to remove the exhaust from the cylinder, IMEP may be split into the high-pressure, gross mean effective pressure (GMEP) and the pumping mean effective pressure (PMEP). In naturally aspirated engines, PMEP is negative, and in super- or turbocharged engines, it is usually positive. IMEP may be derived from PMEP and GMEP: IMEP = GMEP + PMEP.
Friction mean effective pressure (FMEP) - Theoretical mean effective pressure required to overcome engine friction, can be thought of as mean effective pressure lost due to friction: FMEP = IMEP − BMEP. FMEP rises with an increase in engine speed.
BMEP typical values
See also
Compression ratio
Notes and references
Notes
References
External links
Brake Mean Effective Pressure (bmep), Power and Torque, Factory Pipe
All About Mean Effective Pressure, Harleyc.com
Tiddler steam engine
Piston engines
Engine technology | Mean effective pressure | [
"Technology"
] | 944 | [
"Engine technology",
"Piston engines",
"Engines"
] |
3,102,750 | https://en.wikipedia.org/wiki/Alpha%20Sagittae | Alpha Sagittae, formally named Sham , is a single star in the northern constellation of Sagitta. Alpha Sagittae is the Bayer designation, which is latinized from α Sagittae and abbreviated Alpha Sge or α Sge. It is visible to the naked eye as a yellow-hued star with an apparent visual magnitude of +4.38. Despite the name, this is not the brightest star in the constellation – that distinction belongs to Gamma Sagittae. Based upon parallax measurements, Alpha Sagittae is approximately 382 light-years from the Sun. It is moving further away from the Earth with a heliocentric radial velocity of 1.7 km/s.
This is an evolved bright giant with a stellar classification of G1 II. It is 151 million years old with 4 times the mass of the Sun and has expanded to around 21 times the Sun's radius. It is radiating 340 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 5,333 K. There is an X-ray source located close to these coordinates.
The evolutionary state of Alpha Sagittae is unclear. Its temperature and luminosity place it within the Hertzsprung gap, a region of the H-R diagram where stars more massive than the sun are evolving rapidly away from the main sequence towards becoming red giants. However, the chemical composition of its surface indicates that it has already experienced the first dredge-up of fusion products that occurs soon after a star reaches the red giant branch. It also lies within the Cepheid instability strip, but is not a Cepheid variable. It belongs to a small group of known stars that have been called carbon-deficient red giants and may have experienced binary mass exchanges.
Nomenclature
This star bore the traditional name Sham (or Alsahm), which derives from the Arabic word سهم sahm, meaning "arrow", the name formerly having been applied to the whole constellation. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Sham for this star on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Left Flag, refers to an asterism consisting of Alpha Sagittae, Beta Sagittae, Delta Sagittae, Zeta Sagittae, Gamma Sagittae, 13 Sagittae, 11 Sagittae, 14 Sagittae and Rho Aquilae. Consequently, the Chinese name for Alpha Sagittae itself is (, ).
References
Sagittae, Alpha
Sagitta
G-type bright giants
Sham
Sagittae, 05
096757
7479
185758
BD+17 4042 | Alpha Sagittae | [
"Astronomy"
] | 602 | [
"Sagitta",
"Constellations"
] |
3,103,326 | https://en.wikipedia.org/wiki/Vector%20area | In 3-dimensional geometry and vector calculus, an area vector is a vector combining an area quantity with a direction, thus representing an oriented area in three dimensions.
Every bounded surface in three dimensions can be associated with a unique area vector called its vector area. It is equal to the surface integral of the surface normal, and distinct from the usual (scalar) surface area.
Vector area can be seen as the three dimensional generalization of signed area in two dimensions.
Definition
For a finite planar surface of scalar area A and unit normal n̂, the vector area S is defined as the unit normal scaled by the area:
S = A n̂
For an orientable surface composed of a set of flat facets with areas A_i, the vector area of the surface is given by
S = Σ_i A_i n̂_i
where n̂_i is the unit normal vector to the area A_i.
For bounded, oriented curved surfaces that are sufficiently well-behaved, we can still define vector area. First, we split the surface into infinitesimal elements, each of which is effectively flat. For each infinitesimal element of area dA, we have an area vector, also infinitesimal,
dS = n̂ dA
where n̂ is the local unit vector perpendicular to dA. Integrating gives the vector area for the surface:
S = ∫ dS = ∫ n̂ dA
Properties
The vector area of a surface can be interpreted as the (signed) projected area or "shadow" of the surface in the plane in which it is greatest; its direction is given by that plane's normal.
For a curved or faceted (i.e. non-planar) surface, the vector area is smaller in magnitude than the actual surface area. As an extreme example, a closed surface can possess arbitrarily large area, but its vector area is necessarily zero. Surfaces that share a boundary may have very different areas, but they must have the same vector area—the vector area is entirely determined by the boundary. These are consequences of Stokes' theorem.
The vector area of a parallelogram is given by the cross product of the two vectors that span it; it is twice the (vector) area of the triangle formed by the same vectors. In general, the vector area of any surface whose boundary consists of a sequence of straight line segments (analogous to a polygon in two dimensions) can be calculated using a series of cross products corresponding to a triangularization of the surface. This is the generalization of the Shoelace formula to three dimensions.
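The following sketch (Python with NumPy; the function names are illustrative, not from any established library) applies this cross-product construction to a triangulated surface and to a closed polygonal boundary, and also shows that a closed surface has zero vector area:

```python
import numpy as np

def vector_area_triangles(vertices, triangles):
    """Sum of 0.5 * (b - a) x (c - a) over all oriented triangles (a, b, c)."""
    v = np.asarray(vertices, dtype=float)
    s = np.zeros(3)
    for a, b, c in triangles:
        s += 0.5 * np.cross(v[b] - v[a], v[c] - v[a])
    return s

def vector_area_polygon(loop):
    """Vector area determined by a closed boundary loop: 0.5 * sum of r_i x r_{i+1}
    (the three-dimensional shoelace formula)."""
    r = np.asarray(loop, dtype=float)
    return 0.5 * sum(np.cross(r[i], r[(i + 1) % len(r)]) for i in range(len(r)))

# Unit square in the xy-plane: vector area is (0, 0, 1).
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(vector_area_polygon(square))

# A closed surface (a tetrahedron with outward-oriented faces) has zero vector area.
tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(vector_area_triangles(tet, faces))  # ≈ (0, 0, 0)
```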
Using Stokes' theorem applied to an appropriately chosen vector field, a boundary integral for the vector area can be derived:
S = (1/2) ∮_C r × dr
where C is the boundary of the surface, i.e. one or more oriented closed space curves, and r is the position vector of a point on that boundary. This is analogous to the two dimensional area calculation using Green's theorem.
Applications
Area vectors are used when calculating surface integrals, such as when determining the flux of a vector field through a surface. The flux is given by the integral of the dot product of the field and the (infinitesimal) area vector. When the field is constant over the surface the integral simplifies to the dot product of the field and the vector area of the surface.
Projection of area onto planes
The projected area onto a plane is given by the dot product of the vector area S and the target plane unit normal m̂:
A_projected = S · m̂
For example, the projected area onto the xy-plane is equivalent to the z-component of the vector area, and is also equal to
S_z = |S| cos θ
where θ is the angle between the plane normal and the z-axis.
See also
Bivector, representing an oriented area in any number of dimensions
De Gua's theorem, on the decomposition of vector area into orthogonal components
Cross product
Surface normal
Surface integral
Notes
Area
Vectors (mathematics and physics)
Analytic geometry | Vector area | [
"Physics",
"Mathematics"
] | 721 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Size",
"Wikipedia categories named after physical quantities",
"Area"
] |
3,103,463 | https://en.wikipedia.org/wiki/Storage%20service%20provider | A Storage service provider (SSP) is any company that provides computer storage space and related management services. SSPs may also offer periodic backup and archiving.
Advantages of managed storage are that more space can be ordered as required and that, depending upon the SSP, backups may also be managed.
Faster data access can also be ordered as required, and maintenance costs may be reduced, particularly for larger organizations that store large or increasing volumes of data. Another advantage is that best practices are likely to be followed. Disadvantages are that the cost may be prohibitive for small organizations or individuals who deal with smaller or static volumes of data, and that there is less control over data systems.
Types of managed storage
Data owners normally access managed storage via a network (LAN), or through a series of networks (the Internet). However, managed storage may also be directly attached to a workstation or server that is not managed by the SSP.
Managed Storage generally falls into one of the following categories:
locally managed storage
remotely managed storage
Locally managed storage
Advantages of this type of storage include high-speed access to data and greater control over data availability. Disadvantages are that additional space is required at the local site to store the data, and that capacity is limited by the on-site area.
Remotely managed storage
Advantages of this type of storage are that it may be used as an off-site backup, it offers global access (depending upon configuration), and adding storage does not require additional space at the local site. However, if the network providing connectivity to the remote data is interrupted, there will be data availability issues unless distributed file systems are in use.
In cloud computing, Storage as a Service (SaaS) involves the provision of off-site storage for data and information. This approach may offer greater reliability, but at a higher cost.
See also
Application service provider
Internet service provider
Image hosting service
Comparison of file hosting services
File hosting service
References
Web hosting | Storage service provider | [
"Technology"
] | 391 | [
"Computing stubs",
"Computer company stubs"
] |
3,103,465 | https://en.wikipedia.org/wiki/Shore%20facility | A shore facility is one of the facilities located on shore used for receiving ships and transferring cargo and people to them. Ports and marinas constitute a collection of shore facilities. Shore facilities are designed for the efficient intermodal transportation of goods and people inland by trains, surface vehicles, and/or pipelines.
A shore facility may include magazine buildings or warehouses for storage of goods, fuel storage tanks or refrigerated storage.
It may include loading cranes, equipment laydown areas, dry docks and custom houses. It may have quays, wharfs, jetties, or slipways with cranes or ramps.
It may also have breakwaters, piers, or mooring dolphins.
References
Coastal construction | Shore facility | [
"Engineering"
] | 143 | [
"Construction",
"Coastal construction"
] |
3,103,801 | https://en.wikipedia.org/wiki/Nuclear%20fuel%20bank | A nuclear fuel bank is reserve of low enriched uranium (LEU) for countries that need a backup source of LEU to fuel their nuclear reactors. Countries that do have enrichment technology would donate enriched fuel to a "bank", from which countries not possessing enrichment technology would obtain fuel for their power reactors.
LEU banks are meant to be nuclear fuel providers "in the event of unforeseen, non-commercial disruption" to the supplies, and are regarded an important international effort to "prevent nuclear proliferation and dissuading countries from building uranium enrichment facilities by guaranteeing access to LEU for fuel use should other sources fail."
The concept of providing an assured supply of nuclear fuel, and thus avoiding the need for countries to build indigenous nuclear fuel production capabilities, has long been proposed as a way to curb the proliferation of nuclear weapons and, eventually, eliminate them altogether. Austria, Russia, the European Union, the United States, and others have supported various concepts of an international fuel bank. Many non-nuclear-weapon states have been reluctant to embrace any of these proposals for varying reasons.
Overview
Enrichment technology is primarily used to create enriched nuclear fuel, but it can also be used to create weapons-grade nuclear material. The main goal of a fuel bank is therefore to minimize the risk of further nuclear weapons proliferation by removing the need for countries to possess enrichment technology.
The proposed fuel bank would assure a back-up supply for power reactors throughout the world on a non-discriminatory, non-political basis, reducing the need for countries to develop their own uranium enrichment technologies which could also be used for nuclear weapons development. The IAEA's former Director General Dr. ElBaradei confirmed this, saying that the importance of nuclear fuel banks is "by providing reliable access to fuel at competitive market prices, we remove the need for countries to develop indigenous fuel cycle capabilities. In so doing, we could go a long way towards addressing current concerns about the dissemination of sensitive fuel cycle technologies."
Advocates
Dr. ElBaradei, who was director of the International Atomic Energy Agency from 1997 to 2009, called for the establishment of a nuclear fuel bank to provide peaceful access to nuclear energy without raising questions about dual-use technology. "Every country that would like to get the fuel, that would like to get the technology, the reactor, will get that, but not necessarily developing their own enrichment facility. And assurance of supply mechanism should be reliable, should be apolitical, should be based solely on non-proliferation criteria," he said. In his acceptance of the 2005 Nobel Peace Prize, Dr. ElBaradei also said the establishment of an international fuel bank would remove the incentive for each country to develop its own fuel cycle. "We should then be able to agree on a moratorium on new national facilities, and to begin work on multinational arrangements for enrichment, fuel production, waste disposal and reprocessing," he said.
Joseph Cirincione, the director of the nonproliferation program at the Carnegie Endowment for International Peace, said in 2006 that an international fuel bank could start reforms at the international level. "If we handle it properly, Iran might be the trigger for resolving this problem that troubles all nations relying on nuclear power. Iran, ironically, could be the catalyst for creating a fundamentally new system of how we produce and sell nuclear fuel," he said. The Iran nuclear deal framework of 2015 did not include the idea of a fuel bank.
U.S. Senators Richard Lugar and Evan Bayh, both of Indiana, have also advocated for a nuclear fuel bank. In an op-ed published in the Chicago Tribune, they wrote that what is needed is "a new international non-proliferation standard that prevents countries from using the guise of nuclear energy to develop nuclear weapons". Lugar and Bayh argued this was imperative because "the coming surge in demand for nuclear power will lead more and more nations to seek their own enrichment facilities", and jointly called for the establishment of an International Nuclear Fuel Bank, controlled by the International Atomic Energy Agency.
In 2009, the formation of a nuclear fuel bank was endorsed by U.S. President Barack Obama in a speech in Prague: "And we should build a new framework for civil nuclear cooperation, including an International Fuel Bank, so that countries can access peaceful power without increasing the risk of proliferation. That must be the right of every national that renounces nuclear weapons, especially for developing countries embarking on peaceful programs."
On 29 August 2017, an International Atomic Energy Agency (IAEA) project aimed at providing confidence to countries about the availability of nuclear power fuel reached a key milestone towards its establishment, with the inauguration in Kazakhstan of a facility where low-enriched uranium (LEU) will be stored.
Controversy
Developing nations, including a number in the Non-Aligned Movement, have expressed reservations about mechanisms for assurance of supply and have been critical of additional criteria for accessing the fuel banks.
Some reasons that non-nuclear weapon states have been reluctant to embrace these proposals include:
a perception that the commercial or strategic interests of nuclear weapon states motivated the proposals
a perception that the proposals produce a dependency on a limited number of nuclear fuel suppliers
a perception that the proposal restricts their inalienable right to nuclear energy for peaceful purposes.
One example of such a feared political-cutoff came after the 1979 Iranian Revolution. Germany halted construction of the Iranian Bushehr reactor, the United States cut off the supply of highly enriched fuel for the Tehran Research Reactor, and Iran never received uranium from France which it asserted it was entitled to. Russia also agreed not to provide an enrichment plant and terminated cooperation on several other nuclear-related technologies, including laser isotope separation. China terminated several nuclear projects in return in part for entry into force of a U.S.-China civil nuclear cooperation agreement. Ukraine agreed not to provide the turbine for the Bushehr reactor. These combined experiences contributed to an Iranian belief that foreign nuclear supplies are potentially subject to being interrupted. An international nuclear fuel bank would have to overcome this perception.
History
1940–1969
The Report on the International Control of Atomic Energy, commonly known as the Acheson–Lilienthal Report, was written by a United States committee in 1946 and discussed possible methods for the international control of nuclear weapons and the avoidance of future nuclear warfare. The report was produced by the Committee on Atomic Energy, headed by Dean Acheson and David Lilienthal, and was mostly written by scientist Robert Oppenheimer. The report recommended that an international body, such as the United Nations, have control over both atomic materials and the means of producing nuclear energy.
Bruno Pellaud, the IAEA's former deputy director-general for safeguards, says the fuel bank idea was developed as far back as 1957.
The resulting Baruch Plan was a 1946 proposal by the United States government, written largely by political consultant Bernard Baruch. It was presented to the United Nations Atomic Energy Commission (UNAEC) at its first meeting in June 1946. The plan proposed to:
extend between all nations the exchange of basic scientific information for peaceful ends;
implement control of atomic energy to the extent necessary to ensure its use only for peaceful purposes;
eliminate from national armaments both atomic weapons and all other major weapons adaptable to mass destruction; and
establish effective safeguards, by way of inspection and other means, to protect complying States against the hazards of violations and evasions
The plan clearly announced that the United States would maintain its nuclear weapons monopoly until every aspect of the proposal was in effect. The Soviets subsequently rejected the Baruch Plan, and the United States then rejected a Soviet counter-proposal for a ban on all nuclear weapons.
In 1953, the U.S. proposed its Atoms for Peace plan. In a speech to the UN General Assembly in New York City on December 8, 1953, U.S. President Dwight D. Eisenhower called on the United States with the Soviet Union "to make joint contributions from their stockpiles of normal uranium and fissionable materials to an international Atomic Energy Agency" that would then "devise methods whereby this fissionable material would be allocated to serve the peaceful pursuits of mankind." The plan also proposed a new International Atomic Energy Agency and “uranium bank” as simple steps to establish international trust and start a cooperative arms control dialogue.
On July 29, 1957, the International Atomic Energy Agency (IAEA) was established. However, the concept that the IAEA would serve as a bank of nuclear materials, drawing down US and Soviet stocks below the level where either could launch a knock-out blow against the other, languished. During this period the U.S. Congress preferred to supply nuclear material directly to foreign partners in bilateral agreements, thus bypassing the IAEA and applying U.S. safeguards to the transaction instead. It became clear that having the IAEA serve as a nuclear material bank had not succeeded in the Cold War.
1970–1989
Through the 1970s and 1980s, further options were examined for developing a "proliferation-resistant" fuel cycle. The managerial aspects of the nuclear fuel cycle were also explored during this time and a number of unsuccessful proposals were advanced. Some of the proposals included initiatives on:
Technical or physical modification of the fuel cycle to restrict access to sensitive nuclear materials
Multilateral fuel cycle centers for a small number of States to pool their resources in to a single centre
Multinational spent fuel centers as a way to handle separated plutonium;
An international nuclear fuel authority to guarantee nuclear fuel to Non-Nuclear Weapon States which renounced national reprocessing or enrichment plants
International plutonium storage to help implement Article XII. A.5 of the IAEA Statute
In the late 1970s, the International Nuclear Fuel Cycle Evaluation conducted a study into the management of spent nuclear fuel. The IAEA's "Working Group 6" issued a report on spent fuel management which identified interim storage of spent fuel as an important step in the nuclear fuel cycle. An earlier IAEA study on Regional Fuel Cycle Centre from 1977 has also pointed out the importance of spent fuel. The fuel cycle evaluation was formally launched in October 1977 by the Carter Administration, with more than 500 experts from 46 nations participating.
The IAEA Board of Governors subsequently established the Committee on Assurances of Supply (CAS) in 1980 to address similar concerns. The Committee examined the issue of multinationalization of the fuel cycle, but was unable to reach a consensus and went into formal abeyance in 1987.
IAEA Expert Group
Former IAEA Director General Dr. ElBaradei gave an international expert group the task of coming up with possible multilateral approaches to better control the sensitive parts of the nuclear fuel cycle. Dr. ElBaradei said that in recent years the nuclear non-proliferation regime has been under tremendous stress and noted the key is fairness and recognizing the interests of all parties. He also said that a lack of progress in confronting the growing risk of nuclear proliferation "could lead to self-destruction".
The group made the following recommendations to strengthen controls over fuel enrichment, reprocessing of fuel, spent fuel repositories and spent fuel storage. They were:
Reinforcing existing commercial market mechanisms on a case-by-case basis
Developing and implementing international supply guarantees with IAEA participation
Promoting voluntary conversion of existing facilities to multilateral nuclear approaches
Creating, through voluntary agreements and contracts, multinational, and in particular regional, MNAs for new facilities
The scenario of a further expansion of nuclear energy around the world might call for the development of a nuclear fuel cycle with stronger multilateral arrangements
The Expert Group included representatives from 26 countries. Bruno Pellaud, the Group's Chairman and former Head of IAEA Safeguards, said “a joint nuclear facility with multinational staff puts all participants under a greater scrutiny from peers and partners, a fact that strengthens non-proliferation and security…Moreover, they have the potential to facilitate the continued use of nuclear energy for peaceful purposes."
Fuel banks and International Enrichment Center
As of March 2011 the IAEA Board of Governors has approved the creation of two separate fuel banks. The first, formally established by the IAEA and the Russian government in March 2010, is owned, operated, and paid for by the Russian Federation and located near the Siberian city of Angarsk. The reserve has been fully stocked and became operational on 1 December 2010. The Board of Governors approved a second fuel reserve in December 2010, which will be owned and operated by the IAEA itself, but this fuel bank is not yet operational. Kazakhstan has offered to host a nuclear fuel bank under IAEA auspices as a way to curb nuclear proliferation. In May 2015, the IAEA approved a Host-State Agreement with Kazakhstan, paving the way for Kazakhstan to host the LEU Bank. The plans call for having Russia enrich the uranium before it is stored.
Kazatomprom proposed at the 2013 IAEA General Conference that Kazakhstan host the LEU bank at its Ulba Metallurgical Plant. Kazakhstan's Foreign Minister Erlan Idrissov said in an interview that the bank posed no environment threats and highlights Kazakhstan's contribution into the global efforts of nonproliferation and peaceful use of nuclear power. The LEU bank is under construction in Ust-Kamenogorsk, Kazakhstan under the aegis of the IAEA.
In addition to the two fuel banks, the Russian Federation has also established an International Uranium Enrichment Center (IUEC) located at Angarsk. The IUEC is set up as a joint stock company between Russia's Rosatom Corporation holding 80% of the shares, Kazakhstani and Ukrainian corporations, while the government of Armenia slated to join the company in the future. The IUEC differs from the two fuel banks in several important ways. First, it is a for-profit entity owned by state-backed companies. As a result, unlike the fuel banks, it is preferential and exclusive in its provision of enrichment services. The IUEC gives preferential treatment to its shareholders when selling enrichment services, and is only available to states that do not have domestic enrichment capabilities and perform their obligations under the Nuclear Non-Proliferation Treaty. Like the fuel banks, it provides an alternative to the expensive start-up costs of an indigenous enrichment capacity, but unlike the fuel banks it provides a further financial incentive to its shareholders in the form of dividends.
Proposals
A fuel bank would act as a back-up supply for nuclear power reactors throughout the world on a non-discriminatory and apolitical basis, reducing the need for countries to develop their own uranium enrichment technologies at a time when concerns about nuclear proliferation are growing. Government and industry experts agree the fuel market functions well in meeting current demand, and a fuel bank would be designed inherently in a way not to disrupt the existing commercial market in nuclear fuels. Mohamed ElBaradei, director general of the IAEA from 1997 to 2009, has also suggested that no State should be required to give up its rights under the Non-Proliferation Treaty regarding any parts of the nuclear fuel cycle.
In March 2008, an IAEA magazine outlined 12 proposals for a multilateral approach that had been put forward. The proposals ranged from providing backup assurances of supply to establishing an IAEA-controlled low-enriched uranium reserve or setting up international enrichment centers.
In May 2009, three proposals were recommended as the form an international fuel bank should take. Two proposals were for Russian and IAEA fuel banks to provide supply of last resort and were initially proposed by Dr. ElBaradei in 2003. Germany put forward a complementary proposal which advocated the creation of an internationally governed nuclear fuel production plant where production would be done in an extraterritorial site inside an unspecified country.
Over 60 states, many of them developing nations, have informed the International Atomic Energy Agency that they have an interest in launching nuclear energy programs. However, some of these states have expressed concerns that they would lose all access to enrichment and reprocessing technology if they were to accept fuel from a fuel bank.
Kazakhstani Proposal
Kazakhstan offered the Ulba Metallurgical Plant as a site for an IAEA administered fuel bank. The IAEA Board of Governors approved the plan to set up the fuel bank in December 2010, but made no choice of the site at that time. The topic of a nuclear fuel bank was briefly mentioned by some members attending the Nuclear Security Summit (2010), a summit being held in Washington, D.C. in April 2010 focusing on how to better safeguard weapons-grade plutonium and uranium to prevent nuclear terrorism. Robert J. Einhorn, Special US Advisor on Non-Proliferation and Arms Control, said the Obama Administration has supported international fuel banks but that "this issue will come up at the May NPT Review Conference, but this is not the focus of" the Nuclear Security Summit. Despite the focus on nuclear terrorism, Kazakhstan's president Nursultan Nazarbayev sought the U.S.'s backing to house a nuclear fuel bank while he was in Washington for the event and Prime Minister of Pakistan Yousaf Raza Gillani issued a statement saying Pakistan would like to act as a provider and "participate in any non-discriminatory nuclear fuel cycle assurance mechanism". The UAE also reconfirmed its $10 million pledge to the IAEA Nuclear Fuel Bank and its policy of foregoing domestic enrichment and spent fuel reprocessing.
In May 2015, the IAEA approved a Host-State Agreement, paving the way for Kazakhstan to host the LEU Bank.
German proposal
Germany has proposed the creation of a multilateral uranium enrichment center with extraterritorial status, which would operate on a commercial basis as a new supplier in the market. The center would operate under Agency control and providing enrichment services to its customer. Customers could then obtain nuclear fuel for civilian use under strict supervision. Germany has also proposed a “Multilateral Enrichment Sanctuary Project” for an international enrichment center by a group of interested States, on an extraterritorial basis in a host State.
NTI funding proposal
The Washington-based Nuclear Threat Initiative's proposal for an international fuel bank announced an initial $50 million grant to the IAEA contingent upon an additional $100 million from other sources. The United States has offered $50 million, the United Arab Emirates has pledged $10 million, Norway has promised $5 million and the European Union agreed to provide up to 25 million euros (about $32 million in May 2009). The fuel bank therefore reached its initial funding target in March 2009.
Former Senator Sam Nunn, a co-chairman of the Nuclear Threat Initiative, said in a speech announcing the NTI pledge that "we envision that this stockpile will be available as a last-resort fuel reserve for nations that have made the sovereign choice to develop their nuclear energy based on foreign sources of fuel supply services-and therefore have no indigenous enrichment facilities." Warren Buffett, a key NTI advisor, is financially backing and enabling the commitment. "This pledge is an investment in a safer world," Buffett said.
Countries that already enrich uranium favor the fuel bank because it keeps competitors from entering the market. Some emerging-market countries on the International Atomic Energy Agency Board of Governors have resisted the proposal. “It simply seems a real battlefield,” Iran's IAEA ambassador, Aliasghar Soltanieh, said. “It’s an issue of trust, and there’s been a lot of fraying of relationships between IAEA states,” said Curtis, Deputy Secretary of Energy under President Bill Clinton. "We think that (Kazakh President) Nursultan Nazarbayev's idea to host a nuclear fuel bank is a very good proposal," Iranian President Mahmoud Ahmadinejad has said. Iran has said it may stop sensitive uranium enrichment if guaranteed a supply of nuclear fuel from abroad, but has also insisted on its right to master the complete nuclear fuel cycle for what it says are peaceful purposes. Mahmoud Ahmadinejad has said Tehran places importance on international nuclear cooperation, including "Iran's presence in the global fuel bank." Iran has resisted sending its low-enriched uranium abroad and has proposed the IAEA supervise uranium enrichment inside the country.
Russian proposal
Sergey Kiriyenko, head of Russian nuclear corporation Rosatom, has told the IAEA General Conference that Russia planned to put under IAEA control a reserve of $300 million worth of low enriched uranium. The fuel would be stored at a multinational uranium-enrichment facility in the Siberian city of Angarsk and would be sufficient for two reactor-loads of low-enriched uranium. “We should carry out the preparatory work required for the IAEA Director-General to propose to the IAEA Board of Governors that they consider Russia’s plans for establishing guaranteed nuclear fuel reserves in the first half of 2008,” Kiriyenko said. He further said that Russia is ready to process 4,000 tons of Australian uranium a year.
Russia established its International Uranium Enrichment Center (IUEC) at the Angarsk Electrolysis Chemical Combine “to provide guaranteed access to uranium enrichment capabilities to the Centre’s participating organizations”. On 10 May 2007 the first agreement in the framework of the IUEC was signed by the Russian Federation and the Republic of Kazakhstan. In November 2009, the IAEA Board of Governors approved the Russian proposal to set up a low enriched uranium reserve available to Member States upon request from the Agency. Russia and the IAEA agreed on March 30, 2010, to set up the world's first nuclear fuel bank, at Angarsk in Siberia.
Six-Country Concept
The "Six-Country Concept" was proposed by France, Germany, the Netherlands, Russia, the UK, and the U.S. to provide reliable access to nuclear fuel. In 2008, all six of these nations had enrichment facilities. The proposal would require customer states to forego sensitive indigenous nuclear facilities, and if a supply disruption were to occur the recipient country would be able to approach the IAEA to facilitate new arrangements with other suppliers as long as nonproliferation conditions had been met – these conditions would include implementation of the Additional Protocol and safety and protection standards being satisfied.
The six nations proposed two levels of enrichment assurance beyond the normal market. At the “basic assurances” level, suppliers would agree to substitute for each other to cover certain supply interruptions to customers. At the “reserves” assurance level, participating governments could provide physical or virtual reserves of low-enriched uranium that would be made available if the “basic assurances” were not met.
Other proposals
Other proposals for a nuclear fuel bank have included:
U.S. – "Reserve of nuclear fuel", September 2005
Russia – "Statement on the Peaceful Use of Nuclear Energy", February 2006
U.S. – "Global Nuclear Energy Partnership", February 2006
World Nuclear Association, "Ensuring Security of Supply in the International Fuel Cycle", 2006
Japan – "IAEA Standby Arrangements System for the Assurance of Nuclear Fuel Supply", September 2006
UK – "Enrichment Bonds", June 2007
European Union – "Nuclear Fuel Cycle", June 2007
Low Enriched Uranium Bank in Kazakhstan
On August 27, 2015, an agreement to locate the IAEA LEU Bank in Kazakhstan was signed in the country's capital Astana. The bank was inaugurated on August 29, 2017, and is located at Ulba Metallurgical Plant. It is a physical reserve of up to 90 metric tons of LEU, sufficient to run a 1,000 MWe light-water reactor. Such a reactor can power a large city for three years. The facility was wholly funded by IAEA member states and other contributions for a total of $150 million. This is expected to cover costs for 20 years.
In October 2019, the Ulba bank received its first shipment of 32 canisters of LEU fuel.
See also
Global Nuclear Energy Partnership
Nuclear proliferation
Nuclear reactor technology
Nuclear renaissance
References
External links
IAEA
International Atomic Energy Agency INFCIRC/640: "Multilateral Approaches to the Nuclear Fuel Cycle: Expert Group Report submitted to the Director General of the IAEA". 22 February 2005.
International Atomic Energy Agency INFCIRC/704: "German proposal on the Multilateralization of the Nuclear Fuel Cycle". 4 May 2007.
International Atomic Energy Agency INFCIRC/708: "On the Establishment, Structure and Operation of the International Uranium Enrichment Centre". 8 June 2007.
Others
"Internationalization of the Nuclear Fuel Cycle: Goals, Strategies, and Challenges" (2009). Nuclear and Radiation Studies Board (NRSB). The National Academies Press.
Bulletin of the Atomic Scientists: "The realities of nuclear fuel supply guarantees." Podvig, Pavel. 18 April 2008.
World Nuclear Association: "Ensuring Security of Supply in the International Nuclear Fuel Cycle." 12 May 2006.
Proceedings of MIT’s Workshop on Internationalizing Uranium Enrichment Facilities (October 2008)
Anti–nuclear weapons movement
Arms control
Energy policy
International Atomic Energy Agency
Nuclear fuels
Nuclear proliferation | Nuclear fuel bank | [
"Environmental_science"
] | 5,065 | [
"Environmental social science",
"Energy policy"
] |
3,103,836 | https://en.wikipedia.org/wiki/Hypholoma%20capnoides | Hypholoma capnoides is a mushroom in the family Strophariaceae. Found in both the Old and New World, it grows on decaying wood and is edible, though may resemble some poisonous species.
Description
The cap is up to in diameter with yellow-to-orange-brownish or matt yellow colour, sometimes viscid. It is convex then flattens in age. The stipe is yellowish, somewhat rust-brown below. The mushroom grows to tall. The flesh is yellow. The taste is mild, compared to most Hypholomas which are bitter.
The gills are initially pale orangish-yellow, pale grey when mature, later darker purple/brown. The spore print is dark burgundy to brown.
Similar species
The poisonous sulphur tuft is more common in many areas. H. capnoides has greyish gills due to the dark color of its spores, whereas sulphur tuft has greenish gills. It could also perhaps be confused with the deadly Galerina marginata or the good edible Kuehneromyces mutabilis.
Distribution and habitat
Like its poisonous relative H. fasciculare ('sulphur tuft'), H. capnoides grows in clusters on decaying wood, for example in tufts on old tree stumps, in North America, Europe, and Asia.
Edibility
Though edible when cooked, it could be confused with some poisonous species.
References
Edible fungi
capnoides
Fungi described in 1818
Fungi of Europe
Taxa named by Elias Magnus Fries
Fungus species | Hypholoma capnoides | [
"Biology"
] | 314 | [
"Fungi",
"Fungus species"
] |
3,103,995 | https://en.wikipedia.org/wiki/IBM%20WebFountain | WebFountain is an Internet analytical engine implemented by IBM for the study of unstructured data on the World Wide Web. IBM describes WebFountain as:
. . . a set of research technologies that collect, store and analyze massive amounts of unstructured and semi-structured text. It is built on an open, extensible platform that enables the discovery of trends, patterns and relationships from data.
The project represents one of the first comprehensive attempts to catalog and interpret the unstructured data of the Web in a continuous fashion. To this end its supporting researchers at IBM have investigated new systems for the precise retrieval of subsets of the information on the Web, real-time trend analysis, and meta-level analysis of the available information of the Web.
Factiva, an information retrieval company owned by Dow Jones and Reuters, licensed WebFountain in September 2003, and has been building software which utilizes the WebFountain engine to gauge corporate reputation. Factiva reportedly offers yearly subscriptions to the service for $200,000. Factiva has since decided to explore other technologies, and has severed its relationship with WebFountain.
WebFountain is developed at IBM's Almaden research campus in the Bay Area of California.
IBM has developed software, called UIMA for Unstructured Information Management Architecture, that can be used for analysis of unstructured information. It can perhaps help perform trend analysis across documents, determine the theme and gist of documents, allow fuzzy searches on unstructured documents.
References
External links
IBM Almaden Research Center WebFountain overview
WebFountain on John Battelle's Searchblog
Zdnet article "Drinking from the Fire Hydrant"
IBM sets out to make sense of the Web, February 5, 2004
IBM Joins Corporate Monitoring Space with Release of Public Image Monitoring Solution, Search Engine Watch, November 9, 2005
WebFountain | IBM WebFountain | [
"Technology"
] | 390 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
3,104,018 | https://en.wikipedia.org/wiki/Thomson%20problem | The objective of the Thomson problem is to determine the minimum electrostatic potential energy configuration of electrons constrained to the surface of a unit sphere that repel each other with a force given by Coulomb's law. The physicist J. J. Thomson posed the problem in 1904 after proposing an atomic model, later called the plum pudding model, based on his knowledge of the existence of negatively charged electrons within neutrally-charged atoms.
Related problems include the study of the geometry of the minimum energy configuration and the study of the large-N behavior of the minimum energy.
Mathematical statement
The electrostatic interaction energy occurring between each pair of electrons of equal charges (q_i = q_j = e, with e the elementary charge of an electron) is given by Coulomb's law,
U_ij = e² / (4π ε₀ r_ij)
where ε₀ is the electric constant and r_ij = |r_i − r_j| is the distance between each pair of electrons located at points on the sphere defined by vectors r_i and r_j, respectively.
Simplified units of e = 1 and 1/(4π ε₀) = 1 (the Coulomb constant) are used without loss of generality. Then,
U_ij = 1 / r_ij
The total electrostatic potential energy of each N-electron configuration may then be expressed as the sum of all pair-wise interaction energies
U(N) = Σ_{i<j} 1 / r_ij
The global minimization of U(N) over all possible configurations of N distinct points is typically found by numerical minimization algorithms.
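As an illustration of such a numerical search, the sketch below (Python with NumPy and SciPy; the parametrization and function names are our own, and it only finds a local minimum from a random starting configuration) evaluates U(N) and minimizes it:

```python
import numpy as np
from scipy.optimize import minimize

def energy(x):
    """Total pairwise Coulomb energy of N points packed into x (shape 3N),
    projected onto the unit sphere before the energy is evaluated."""
    p = x.reshape(-1, 3)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    iu = np.triu_indices(len(p), k=1)
    return np.sum(1.0 / d[iu])

def thomson(n, seed=0):
    """Minimize U(N) from a random start; returns a local-minimum energy."""
    rng = np.random.default_rng(seed)
    x0 = rng.normal(size=3 * n)
    res = minimize(energy, x0, method="L-BFGS-B")
    return res.fun

print(thomson(4))   # ≈ 3.674 for the regular tetrahedron
print(thomson(12))  # ≈ 49.165 if the icosahedral minimum is found
```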
Thomson's problem is related to the 7th of the eighteen unsolved mathematics problems proposed by the mathematician Steve Smale — "Distribution of points on the 2-sphere".
The main difference is that in Smale's problem the function to minimise is not the electrostatic potential but a logarithmic potential given by
V(N) = Σ_{i<j} log(1 / r_ij) = −Σ_{i<j} log r_ij
A second difference is that Smale's question is about the asymptotic behaviour of the total potential when the number N of points goes to infinity, not for concrete values of N.
Example
The solution of the Thomson problem for two electrons is obtained when both electrons are as far apart as possible on opposite sides of the origin, r₁ = −r₂, or r₁₂ = 2, so that
U(2) = 1/2
Known exact solutions
Mathematically exact minimum energy configurations have been rigorously identified in only a handful of cases.
For N = 1, the solution is trivial. The single electron may reside at any point on the surface of the unit sphere. The total energy of the configuration is defined as zero because the charge of the electron is subject to no electric field due to other sources of charge.
For N = 2, the optimal configuration consists of electrons at antipodal points. This represents the first one-dimensional solution.
For N = 3, electrons reside at the vertices of an equilateral triangle about any great circle. The great circle is often considered to define an equator about the sphere and the two points perpendicular to the plane are often considered poles to aid in discussions about the electrostatic configurations of many-N electron solutions. Also, this represents the first two-dimensional solution.
For N = 4, electrons reside at the vertices of a regular tetrahedron. Of interest, this represents the first three-dimensional solution.
For N = 5, a mathematically rigorous computer-aided solution was reported in 2018 with electrons residing at vertices of a triangular dipyramid. Of interest, it is impossible for any N solution with five or more electrons to exhibit global equidistance among all pairs of electrons.
For N = 6, electrons reside at vertices of a regular octahedron. The configuration may be imagined as four electrons residing at the corners of a square about the equator and the remaining two residing at the poles.
For N = 12, electrons reside at the vertices of a regular icosahedron.
Geometric solutions of the Thomson problem for N = 4, 6, and 12 electrons are Platonic solids whose faces are all congruent equilateral triangles. Numerical solutions for N = 8 and 20 are not the regular convex polyhedral configurations of the remaining two Platonic solids, the cube and dodecahedron respectively.
Generalizations
One can also ask for ground states of particles interacting with arbitrary potentials.
To be mathematically precise, let f be a decreasing real-valued function, and define the energy functional
E_f(x₁, …, x_N) = Σ_{i<j} f(|x_i − x_j|)
Traditionally, one considers f(r) = r^(−α), also known as Riesz α-kernels. For integrable Riesz kernels see the 1972 work of Landkof. For non-integrable Riesz kernels, the Poppy-seed bagel theorem holds, see the 2004 work of Hardin and Saff. Notable cases include:
α = ∞, the Tammes problem (packing);
α = 1, the Thomson problem;
α = 0, to maximize the product of distances, latterly known as Whyte's problem;
α = −1 : maximum average distance problem.
One may also consider configurations of N points on a sphere of higher dimension. See spherical design.
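A small sketch of the generalized Riesz energy (Python with NumPy; the naming is ours), evaluated here for the regular tetrahedron at several exponents α, with α = 1 reproducing the Thomson energy of about 3.674:

```python
import numpy as np

def riesz_energy(points, alpha):
    """Sum of |x_i - x_j|^(-alpha) over all pairs; alpha = 1 is the Thomson case."""
    p = np.asarray(points, dtype=float)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    iu = np.triu_indices(len(p), k=1)
    return np.sum(d[iu] ** (-alpha))

# Regular tetrahedron inscribed in the unit sphere.
tet = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
for alpha in (0.5, 1.0, 2.0, 4.0):
    print(alpha, riesz_energy(tet, alpha))
```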
Solution algorithms
Several algorithms have been applied to this problem. The focus since the millennium has been on local optimization methods applied to the energy function, although random walks have made their appearance:
constrained global optimization (Altschuler et al. 1994),
steepest descent (Claxton and Benson 1966, Erber and Hockney 1991),
random walk (Weinrach et al. 1990),
genetic algorithm (Morris et al. 1996)
While the objective is to minimize the global electrostatic potential energy of each N-electron case, several algorithmic starting cases are of interest.
Continuous spherical shell charge
The energy of a continuous spherical shell of charge distributed across its surface is given by
U_shell(N) = N² / 2
and is, in general, greater than the energy of every Thomson problem solution. Note: Here N is used as a continuous variable that represents the infinitely divisible charge, Q, distributed across the spherical shell. For example, a spherical shell of N = 1 represents the uniform distribution of a single electron's charge, e, across the entire shell.
Randomly distributed point charges
The expected global energy of a system of electrons distributed in a purely random manner across the surface of the sphere is given by
⟨U_rand(N)⟩ = N(N − 1) / 2
and is, in general, greater than the energy of every Thomson problem solution.
Here, N is a discrete variable that counts the number of electrons in the system.
Charge-centered distribution
For every Nth solution of the Thomson problem there is an (N + 1)th configuration that includes an electron at the origin of the sphere whose energy is simply the addition of N to the energy of the Nth solution, since each of the N surface electrons lies at unit distance from the added central charge. That is,
U₀(N + 1) = U(N) + N
Thus, if U(N) is known exactly, then U₀(N + 1) is known exactly.
In general, U₀(N + 1) is greater than U(N + 1), but it is remarkably closer to each (N + 1)th Thomson solution than U_shell(N + 1) and ⟨U_rand(N + 1)⟩. Therefore, the charge-centered distribution represents a smaller "energy gap" to cross to arrive at a solution of each Thomson problem than algorithms that begin with the other two charge configurations.
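A quick numerical comparison of these three reference energies against a known solution (Python; the N = 11 and N = 12 minima are quoted approximately from published tables and should be treated as assumed values):

```python
def shell_energy(n):
    """Continuous spherical shell of total charge n: n^2 / 2."""
    return n * n / 2

def random_energy(n):
    """Expected energy of n electrons placed uniformly at random: n(n-1)/2."""
    return n * (n - 1) / 2

def centered_energy(u_n, n):
    """Charge-centered configuration built from the n-electron solution: U0(n+1) = U(n) + n."""
    return u_n + n

u11 = 40.596   # approximate published minimum energy for N = 11 (assumed value)
u12 = 49.165   # approximate published minimum energy for N = 12 (assumed value)
print(shell_energy(12))          # 72.0
print(random_energy(12))         # 66.0
print(centered_energy(u11, 11))  # ≈ 51.6, much closer to u12 ≈ 49.165
```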
Relations to other scientific problems
The Thomson problem is a natural consequence of J. J. Thomson's plum pudding model in the absence of its uniform positive background charge.
Though experimental evidence led to the abandonment of Thomson's plum pudding model as a complete atomic model, irregularities observed in numerical energy solutions of the Thomson problem have been found to correspond with electron shell-filling in naturally occurring atoms throughout the periodic table of elements.
The Thomson problem also plays a role in the study of other physical models including multi-electron bubbles and the surface ordering of liquid metal drops confined in Paul traps.
The generalized Thomson problem arises, for example, in determining arrangements of protein subunits that comprise the shells of spherical viruses. The "particles" in this application are clusters of protein subunits arranged on a shell. Other realizations include regular arrangements of colloid particles in colloidosomes, proposed for encapsulation of active ingredients such as drugs, nutrients or living cells, fullerene patterns of carbon atoms, and VSEPR theory. An example with long-range logarithmic interactions is provided by Abrikosov vortices that form at low temperatures in a superconducting metal shell with a large monopole at its center.
Configurations of smallest known energy
In the following table is the number of points (charges) in a configuration, is the energy, the symmetry type is given in Schönflies notation (see Point groups in three dimensions), and are the positions of the charges. Most symmetry types require the vector sum of the positions (and thus the electric dipole moment) to be zero.
It is customary to also consider the polyhedron formed by the convex hull of the points. Thus, is the number of vertices where the given number of edges meet, is the total number of edges, is the number of triangular faces, is the number of quadrilateral faces, and is the smallest angle subtended by vectors associated with the nearest charge pair. Note that the edge lengths are generally not equal. Thus, except in the cases N = 2, 3, 4, 6, 12, and the geodesic polyhedra, the convex hull is only topologically equivalent to the figure listed in the last column.
According to a conjecture, if P is the polyhedron formed by the convex hull of the solution configuration to the Thomson Problem for N electrons and q is the number of quadrilateral faces of P, then P has 3N − 6 − q edges.
References
Notes
Configurations reprinted in
. Configurations reproduced in
This webpage contains many more electron configurations with the lowest known energy: https://www.hars.us.
Electrostatics
Electron
Circle packing
Unsolved problems in mathematics | Thomson problem | [
"Chemistry",
"Mathematics"
] | 1,876 | [
"Electron",
"Geometry problems",
"Unsolved problems in mathematics",
"Molecular physics",
"Packing problems",
"Circle packing",
"Mathematical problems"
] |
3,104,065 | https://en.wikipedia.org/wiki/Ring-closing%20metathesis | Ring-closing metathesis (RCM) is a widely used variation of olefin metathesis in organic chemistry for the synthesis of various unsaturated rings via the intramolecular metathesis of two terminal alkenes, which forms the cycloalkene as the E- or Z- isomers and volatile ethylene.
The most commonly synthesized ring sizes are between 5-7 atoms; however, reported syntheses include 45- up to 90-membered macroheterocycles. These reactions are metal-catalyzed and proceed through a metallacyclobutane intermediate. RCM was first published by Didier Villemin in 1980, describing the synthesis of an Exaltolide precursor, and was later popularized by Robert H. Grubbs and Richard R. Schrock, who shared the Nobel Prize in Chemistry, along with Yves Chauvin, in 2005 for their combined work in olefin metathesis. RCM is a favorite among organic chemists due to its synthetic utility in the formation of rings, which were previously difficult to access efficiently, and broad substrate scope. Since the only major by-product is ethylene, these reactions may also be considered atom economic, an increasingly important concern in the development of green chemistry.
There are several reviews published on ring-closing metathesis.
History
The first example of ring-closing metathesis was reported by Didier Villemin in 1980 when he synthesized an Exaltolide precursor using a WCl6/Me4Sn catalyzed metathesis cyclization in 60-65% yield depending on ring size (A). In the following months, Jiro Tsuji reported a similar metathesis reaction describing the preparation of a macrolide catalyzed by WCl6 and dimethyltitanocene (Cp2TiMe2) in a modest 17.9% yield (B). Tsuji describes the olefin metathesis reaction as “…potentially useful in organic synthesis” and addresses the need for the development of a more versatile catalyst to tolerate various functional groups.
In 1987, Siegfried Warwel and Hans Kaitker published a synthesis of symmetric macrocycles through a cross-metathesis dimerization of starting cycloolefins to afford C14, C18, and C20 dienes in 58-74% yield, as well as C16 in 30% yield, using Re2O7 on Al2O3 and Me4Sn for catalyst activation.
A decade after its initial discovery, Grubbs and Fu published two influential reports in 1992 detailing the synthesis of O- and N-heterocycles via RCM utilizing Schrock’s molybdenum alkylidene catalysts, which had proven more robust and functional-group tolerant than the tungsten chloride catalysts. The synthetic route allowed access to dihydropyrans in high yield (89-93%) from readily available starting materials. In addition, the synthesis of substituted pyrrolines, tetrahydropyridines, and amides was illustrated in modest to high yield (73-89%). The driving force for the cyclization reaction was attributed to entropic favorability, as two molecules are formed per molecule of starting material. The loss of the second molecule, ethylene, a highly volatile gas, drives the reaction in the forward direction according to Le Châtelier's principle.
In 1993, Grubbs and others not only published a report on carbocycle synthesis using a molybdenum catalyst, but also detailed the initial use of a novel ruthenium carbene complex for metathesis reactions, which later became a popular catalyst due to its extraordinary utility. The ruthenium catalysts are not sensitive to air and moisture, unlike the molybdenum catalysts. The ruthenium catalysts, known better as the Grubbs Catalysts, as well as molybdenum catalysts, or Schrock’s Catalysts, are still used today for many metathesis reactions, including RCM. Overall, it was shown that metal-catalyzed RCM reactions were very effective in C-C bond forming reactions, and would prove of great importance in organic synthesis, chemical biology, materials science, and various other fields to access a wide variety of unsaturated and highly functionalized cyclic analogues.
Mechanism
General mechanism
The mechanism for transition metal-catalyzed olefin metathesis has been widely researched over the past forty years. RCM undergoes a similar mechanistic pathway as other olefin metathesis reactions, such as cross metathesis (CM), ring-opening metathesis polymerization (ROMP), and acyclic diene metathesis (ADMET). Since all steps in the catalytic cycle are considered reversible, it is possible for some of these other pathways to intersect with RCM depending on the reaction conditions and substrates. In 1971, Chauvin proposed the formation of a metallacyclobutane intermediate through a [2+2] cycloaddition which then cycloeliminates to either yield the same alkene and catalytic species (a nonproductive pathway), or produce a new catalytic species and an alkylidene (a productive pathway). This mechanism has become widely accepted among chemists and serves as the model for the RCM mechanism.
Initiation occurs through substitution of the catalyst’s alkene ligand with substrate. This process occurs via formation of a new alkylidene through one round of [2+2] cycloaddition and cycloelimination. Association and dissociation of a phosphine ligand also occurs in the case of Grubbs catalysts. In an RCM reaction, the alkylidene undergoes an intramolecular [2+2] cycloaddition with the second reactive terminal alkene on the same molecule, rather than an intermolecular addition of a second molecule of starting material, a common competing side reaction which may lead to polymerization. Cycloelimination of the metallacyclobutane intermediate forms the desired RCM product along with a [M]=CH2, or alkylidene, species which reenters the catalytic cycle. While the loss of volatile ethylene is a driving force for RCM, it is also generated by competing metathesis reactions and therefore cannot be considered the only driving force of the reaction.
Thermodynamics
The reaction can be under kinetic or thermodynamic control depending on the exact reaction conditions, catalyst, and substrate. Common rings, 5- through 7-membered cycloalkenes, have a high tendency for formation and are often under greater thermodynamic control due to the enthalpic favorability of the cyclic products, as shown by Illuminati and Mandolini on the formation of lactone rings. Smaller rings, between 5 and 8 atoms, are more thermodynamically favored over medium to large rings due to lower ring strain. Ring strain arises from abnormal bond angles resulting in a higher heat of combustion relative to the linear counterpart. If the RCM product contains a strained olefin, polymerization becomes preferable through ring-opening metathesis polymerization of the newly formed olefin. Medium rings in particular have greater ring strain, in part due to greater transannular interactions from opposing sides of the ring, but also the inability to orient the molecule in such a way to prevent penalizing gauche interactions. RCM may be considered to have a kinetic bias if the products cannot reenter the catalytic cycle or interconvert through an equilibrium. A kinetic product distribution could lead to mostly RCM products or may lead to oligomers and polymers, which are most often disfavored.
Equilibrium
With the advent of more reactive catalysts, equilibrium RCM is observed quite often which may lead to a greater product distribution. The mechanism can be expanded to include the various competing equilibrium reactions as well as indicate where various side-products are formed along the reaction pathway, such as oligomers.
Although the reaction is still under thermodynamic control, an initial kinetic product, which may be dimerization or oligomerization of the starting material, is formed at the onset of the reaction as a result of higher catalyst reactivity. Increased catalyst activity also allows for the olefin products to reenter the catalytic cycle via non-terminal alkene addition onto the catalyst. Due to additional reactivity in strained olefins, an equilibrium distribution of products is observed; however, this equilibrium can be perturbed through a variety of techniques to overturn the product ratios in favor of the desired RCM product.
Since the probability for reactive groups on the same molecule to encounter each other is inversely proportional to the ring size, the necessary intramolecular cycloaddition becomes increasingly difficult as ring size increases. This relationship means that the RCM of large rings is often performed under high dilution (0.05 - 100 mM) (A) to reduce intermolecular reactions; while the RCM of common rings can be performed at greater concentrations, even neat in rare cases. The equilibrium reaction can be driven to the desired thermodynamic products by increasing temperature (B), to decrease viscosity of the reaction mixture and therefore increase thermal motion, as well as increasing or decreasing reaction time (C).
Catalyst choice (D) has also been shown to be critical in controlling product formation. A few of the catalysts commonly used in ring-closing metathesis are shown below.
Reaction scope
Alkene substrate
Ring-closing Metathesis has shown utility in the synthesis of 5-30 membered rings, polycycles, and heterocycles containing atoms such as N, O, S, P, and even Si. Due to the functional group tolerance of modern RCM reactions, the synthesis of structurally complex compounds containing a range of functional groups such as epoxides, ketones, alcohols, ethers, amines, amides, and many others can be achieved more easily than previous methods. Oxygen and nitrogen heterocycles dominate due to their abundance in natural products and pharmaceuticals. Some examples are shown below (the red alkene indicates C-C bond formed through RCM).
In addition to terminal alkenes, tri- and tetrasubstituted alkenes have been used in RCM reactions to afford substituted cyclic olefin products. Ring-closing metathesis has also been used to cyclize rings containing an alkyne to produce a new terminal alkene, or even undergo a second cyclization to form bicycles. This type of reaction is more formally known as enyne ring-closing metathesis.
E/Z selectivity
In RCM reactions, two possible geometric isomers, either the E- or Z-isomer, may be formed. Stereoselectivity is dependent on the catalyst, ring strain, and starting diene. In smaller rings, Z-isomers predominate as the more stable product, reflecting ring-strain minimization. In macrocycles, the E-isomer is often obtained as a result of the thermodynamic bias in RCM reactions, as E-isomers are more stable than Z-isomers. As a general trend, ruthenium NHC (N-heterocyclic carbene) catalysts favor E-selectivity to form the trans isomer. This is in part due to the steric clash between the substituents, which adopt a trans configuration as the most stable conformation in the metallacyclobutane intermediate, to form the E-isomer. The synthesis of stereopure Z-isomers was previously achieved via ring-closing alkyne metathesis. However, in 2013 Grubbs reported the use of a chelating ruthenium catalyst to afford Z macrocycles in high selectivity. The selectivity is attributed to the increased steric clash between the catalyst ligands and the metallacyclobutane intermediate that is formed. The increased steric interactions in the transition state lead to the Z olefin rather than the E olefin, because the transition state required to form the E-isomer is highly disfavored.
Cocatalyst
Additives are also used to overturn conformational preferences, increase reaction concentration, and chelate highly polar groups, such as esters or amides, which can bind to the catalyst. Titanium isopropoxide (Ti(OiPr)4) is commonly used to chelate polar groups to prevent catalyst poisoning and in the case of an ester, the titanium Lewis acid binds the carbonyl oxygen. Once the oxygen is chelated with the titanium it can no longer bind to the ruthenium metal of the catalyst, which would result in catalyst deactivation. This also allows the reaction to be run at a higher effective concentration without dimerization of starting material.
Another classic example is the use of a bulky Lewis acid to form the E-isomer of an ester over the preferred Z-isomer for cyclolactonization of medium rings. In one study, aluminum tris(2,6-diphenylphenoxide) (ATPH) was added to form a 7-membered lactone. The aluminum binds the carbonyl oxygen, forcing the bulky diphenylphenoxide groups into close proximity to the ester compound. As a result, the ester adopts the E-isomer to minimize penalizing steric interactions. Without the Lewis acid, only the 14-membered dimer ring was observed.
By orienting the molecule in such a way that the two reactive alkenes are in close proximity, the risk of intermolecular cross-metathesis is minimized.
Limitations
Many metathesis reactions with ruthenium catalysts are hampered by unwanted isomerization of the newly formed double bond, and it is believed that ruthenium hydrides that form as a side reaction are responsible. In one study it was found that isomerization is suppressed in the RCM reaction of diallyl ether with specific additives capable of removing these hydrides. Without an additive, the reaction product is 2,3-dihydrofuran (2,3-DHF) and not the expected 2,5-dihydrofuran (2,5-DHF) together with the formation of ethylene gas. Radical scavengers, such as TEMPO or phenol, do not suppress isomerization; however, additives such as 1,4-benzoquinone or acetic acid successfully prevent unwanted isomerization. Both additives are able to oxidize the ruthenium hydrides which may explain their behavior.
Another common problem associated with RCM is the risk of catalyst degradation due to the high dilution required for some cyclizations. High dilution is also a limiting factor in industrial applications due to the large amount of waste generated from large-scale reactions at a low concentration. Efforts have been made to increase reaction concentration without compromising selectivity.
Synthetic applications
Ring-closing metathesis has been used historically in numerous organic syntheses and continues to be used today in the synthesis of a variety of compounds. The following examples are only representative of the broad utility of RCM, as there are numerous possibilities. For additional examples see the many review articles.
Ring-closing metathesis is important in total synthesis. One example is its use in the formation of the 12-membered ring in the synthesis of the naturally occurring cyclophane floresolide. Floresolide B was isolated from an ascidian of the genus Apidium and showed cytotoxicity against KB tumor cells. In 2005, K. C. Nicolaou and others completed a synthesis of both isomers through late-stage ring-closing metathesis using the 2nd Generation Grubbs catalyst to afford a mixture of E- and Z-isomers (1:3 E/Z) in 89% yield. Although one prochiral center is present, the product is racemic. Floresolide is an atropisomer, as the new ring forms (due to steric constraints in the transition state) passing in front of the carbonyl group and not behind it. The carbonyl group then locks the ring permanently in place. The E/Z isomers were separated, and the phenol nitrobenzoate protective group was then removed in the final step by potassium carbonate to yield the final product and the unnatural Z-isomer.
In 1995, Robert Grubbs and others highlighted the stereoselectivity possible with RCM. The group synthesized a diene with an internal hydrogen bond forming a β-turn. The hydrogen bond stabilized the macrocycle precursor placing both dienes in close proximity, primed for metathesis. After subjecting a mixture of diastereomers to the reaction conditions, only one diastereomer of the olefin β-turn was obtained. The experiment was then repeated with (S,S,S) and (R,S,R) peptides. Only the (S,S,S) diastereomer was reactive illustrating the configuration needed for ring-closing to be possible. The olefin product’s absolute configuration mimics that of Balaram’s disulfide peptide.
The ring strain in 8-11 atom rings has proven to be challenging for RCM; however, there are many cases where these cyclic systems have been synthesized. In 1997, Fürstner reported a facile synthesis to access jasmine ketolactone (E/Z) through a final RCM step. At the time, no previous 10-membered ring had been formed through RCM, and previous syntheses were often lengthy, involving a macrolactonization to form the decanolide. By adding the diene and catalyst over a 12-hour period to refluxing toluene, Fürstner was able to avoid oligomerization and obtain both E/Z isomers in 88% yield. CH2Cl2 favored the formation of the Z-isomer in 1:2.5 (E/Z) ratio, whereas, toluene only afforded a 1:1.4 (E/Z) mixture.
In 2000, Alois Fürstner reported an eight-step synthesis to access (−)-balanol using RCM to form a 7-membered heterocycle intermediate. Balanol is a metabolite isolated from Verticillium balanoides and shows inhibitory action towards protein kinase C (PKC). In the ring-closing metathesis step, a ruthenium indenylidene complex was used as the precatalyst to afford the desired 7-membered ring in 87% yield.
In 2002, Stephen F. Martin and others reported the 24-step synthesis of manzamine A with two ring-closing metathesis steps to access the polycyclic alkaloid. The natural product was isolated from marine sponges off the coast of Okinawa. Manzamine is a good target due to its potential as an antitumor compound. The first RCM step was to form the 13-membered D ring as solely the Z-isomer in 67% yield, a unique contrast to the usually favored E-isomer of metathesis. After further transformations, the second RCM was used to form the 8-membered E ring in 26% yield using stoichiometric 1st Generation Grubbs catalyst. The synthesis highlights the functional group tolerance of metathesis reactions as well as their ability to access complex molecules of varying ring sizes.
In 2003, Danishefsky and others reported the total synthesis of (+)-migrastatin, a macrolide isolated from Streptomyces which inhibited tumor cell migration. The macrolide contains a 14-member heterocycle that was formed through RCM. The metathesis reaction yielded the protected migrastatin in 70% yield as only the (E,E,Z) isomer. It is reported that this selectivity arises from the preference for the ruthenium catalyst to add to the less hindered olefin first then cyclize to the most accessible olefin. The final deprotection of the silyl ether yielded (+)-migrastatin.
Overall, ring-closing metathesis is a highly useful reaction to readily obtain cyclic compounds of varying size and chemical makeup; however, it does have some limitations such as high dilution, selectivity, and unwanted isomerization.
See also
Olefin Metathesis
Ring-opening metathesis polymerization
Alkane metathesis
Alkyne metathesis
Enyne metathesis
References
External links
Ring-Closing Metathesis at organic-chemistry.org
Sigma-Aldrich Ring-Closing Metathesis at sigmaaldrich.com
The Olefin Metathesis Reaction Andrew Myers’ Group Notes
Rearrangement reactions
Organometallic chemistry
Carbon-carbon bond forming reactions
Homogeneous catalysis | Ring-closing metathesis | [
"Chemistry"
] | 4,335 | [
"Catalysis",
"Carbon-carbon bond forming reactions",
"Rearrangement reactions",
"Organic reactions",
"Homogeneous catalysis",
"Organometallic chemistry",
"Ring forming reactions"
] |
3,104,166 | https://en.wikipedia.org/wiki/3-Phosphoglyceric%20acid | 3-Phosphoglyceric acid (3PG, 3-PGA, or PGA) is the conjugate acid of 3-phosphoglycerate or glycerate 3-phosphate (GP or G3P). This glycerate is a biochemically significant metabolic intermediate in both glycolysis and the Calvin-Benson cycle. The anion is often termed PGA when referring to the Calvin-Benson cycle. In the Calvin-Benson cycle, 3-phosphoglycerate is typically the product of the spontaneous scission of an unstable 6-carbon intermediate formed upon CO2 fixation. Thus, two equivalents of 3-phosphoglycerate are produced for each molecule of CO2 that is fixed. In glycolysis, 3-phosphoglycerate is an intermediate following the dephosphorylation of 1,3-bisphosphoglycerate.
Glycolysis
In the glycolytic pathway, 1,3-bisphosphoglycerate is dephosphorylated to form 3-phosphoglyceric acid in a coupled reaction that produces ATP via substrate-level phosphorylation (two ATP per molecule of glucose, since two molecules of 1,3-bisphosphoglycerate are formed from each glucose). The single phosphate group left on the 3-PGA molecule then moves from an end carbon to a central carbon, producing 2-phosphoglycerate. This phosphate group relocation is catalyzed by phosphoglycerate mutase, an enzyme that also catalyzes the reverse reaction.
Calvin-Benson cycle
In the light-independent reactions (also known as the Calvin-Benson cycle), two 3-phosphoglycerate molecules are synthesized. RuBP, a 5-carbon sugar, undergoes carbon fixation, catalyzed by the rubisco enzyme, to become an unstable 6-carbon intermediate. This intermediate is then cleaved into two, separate 3-carbon molecules of 3-PGA. One of the resultant 3-PGA molecules continues through the Calvin-Benson cycle to be regenerated into RuBP while the other is reduced to form one molecule of glyceraldehyde 3-phosphate (G3P) in two steps: the phosphorylation of 3-PGA into 1,3-bisphosphoglyceric acid via the enzyme phosphoglycerate kinase (the reverse of the reaction seen in glycolysis) and the subsequent catalysis by glyceraldehyde 3-phosphate dehydrogenase into G3P. G3P eventually reacts to form the sugars such as glucose or fructose or more complex starches.
Amino acid synthesis
Glycerate 3-phosphate (3-phosphoglycerate) is also a precursor for serine, which, in turn, can give rise to cysteine and glycine through the homocysteine cycle.
Measurement
3-phosphoglycerate can be separated and measured using paper chromatography as well as with column chromatography and other chromatographic separation methods. It can be identified using both gas-chromatography and liquid-chromatography mass spectrometry and has been optimized for evaluation using tandem MS techniques.
See also
2-Phosphoglyceric acid
Calvin-Benson cycle
Photosynthesis
Ribulose 1,5-bisphosphate
References
Carboxylate anions
Organophosphates
Photosynthesis
Glycolysis
Metabolic intermediates
Biomolecules | 3-Phosphoglyceric acid | [
"Chemistry",
"Biology"
] | 751 | [
"Carbohydrate metabolism",
"Natural products",
"Biochemistry",
"Glycolysis",
"Photosynthesis",
"Organic compounds",
"Metabolic intermediates",
"Biomolecules",
"Molecular biology",
"Structural biology",
"Metabolism"
] |
3,104,264 | https://en.wikipedia.org/wiki/1%2C3-Bisphosphoglyceric%20acid | 1,3-Bisphosphoglyceric acid (1,3-Bisphosphoglycerate or 1,3BPG) is a 3-carbon organic molecule present in most, if not all, living organisms. It primarily exists as a metabolic intermediate in both glycolysis during respiration and the Calvin cycle during photosynthesis. 1,3BPG is a transitional stage between glycerate 3-phosphate and glyceraldehyde 3-phosphate during the fixation/reduction of CO2. 1,3BPG is also a precursor to 2,3-bisphosphoglycerate which in turn is a reaction intermediate in the glycolytic pathway.
Biological structure and role
1,3-Bisphosphoglycerate is the conjugate base of 1,3-bisphosphoglyceric acid. It is phosphorylated at the number 1 and 3 carbons. The result of this phosphorylation gives 1,3BPG important biological properties such as the ability to phosphorylate ADP to form the energy storage molecule ATP.
In glycolysis
As previously mentioned 1,3BPG is a metabolic intermediate in the glycolytic pathway. It is created by the exergonic oxidation of the aldehyde in G3P. The result of this oxidation is the conversion of the aldehyde group into a carboxylic acid group which drives the formation of an acyl phosphate bond. This is incidentally the only step in the glycolytic pathway in which NAD+ is converted into NADH. The formation reaction of 1,3BPG requires the presence of an enzyme called glyceraldehyde-3-phosphate dehydrogenase.
The high-energy acyl phosphate bond of 1,3BPG is important in respiration as it assists in the formation of ATP. The molecule of ATP created during the following reaction is the first molecule produced during respiration. The reaction occurs as follows;
1,3-bisphosphoglycerate + ADP ⇌ 3-phosphoglycerate + ATP
The transfer of a phosphate group from the acyl phosphate of 1,3BPG to ADP to form ATP is reversible due to a low ΔG, since one high-energy phosphate bond is cleaved whilst another is created. This reaction is not naturally spontaneous and requires the presence of a catalyst. This role is performed by the enzyme phosphoglycerate kinase. During the reaction, phosphoglycerate kinase undergoes a substrate-induced conformational change, similar to another metabolic enzyme called hexokinase.
Because two molecules of glyceraldehyde-3-phosphate are formed during glycolysis from one molecule of glucose, 1,3BPG can be said to be responsible for two of the ten molecules of ATP produced during the entire process. Glycolysis also uses two molecules of ATP in its initial stages as a committed and irreversible step. For this reason, glycolysis is not reversible and has a net production of 2 molecules of ATP and two of NADH. The two molecules of NADH themselves go on to produce approximately 3 molecules of ATP each.
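Using the approximate figure quoted above (about 3 ATP per NADH), the products of this stage therefore correspond to roughly

```latex
2\,\mathrm{ATP} \;+\; 2\,\mathrm{NADH}\times\bigl(\approx 3\,\mathrm{ATP\ per\ NADH}\bigr)
\;\approx\; 8\ \text{ATP equivalents per glucose.}
```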
In the Calvin cycle
1,3-BPG has a very similar role in the Calvin cycle to its role in the glycolytic pathway. For this reason both reactions are said to be analogous. However the reaction pathway is effectively reversed. The only other major difference between the two reactions is that NADPH is used as an electron donor in the calvin cycle whilst NAD+ is used as an electron acceptor in glycolysis. In this reaction cycle 1,3BPG originates from 3-phosphoglycerate and is made into glyceraldehyde 3-phosphate by the action of specific enzymes.
Contrary to the similar reactions of the glycolytic pathway, 1,3BPG in the Calvin cycle does not produce ATP but instead uses it. For this reason it can be considered to be an irreversible and committed step in the cycle. The outcome of this section of the cycle is that an inorganic phosphate is removed from 1,3BPG as a hydrogen ion and two electrons are added to the compound.
In complete reverse of the glycolytic pathway reaction, 1,3BPG is reduced at its acyl phosphate group to form an aldehyde. This reaction releases an inorganic phosphate molecule and is driven by the donation of electrons from the conversion of NADPH to NADP+. This stage of the reaction is catalysed by the enzyme glyceraldehyde-3-phosphate dehydrogenase.
In oxygen transfer
During normal metabolism in humans, approximately 20% of the 1,3BPG produced does not go any further in the glycolytic pathway. It is instead shunted through an alternate pathway in the red blood cells, at the cost of the ATP that this step would otherwise produce. During this alternate pathway it is made into a similar molecule called 2,3-bisphosphoglyceric acid (2,3BPG). 2,3BPG is used as a mechanism to oversee the efficient release of oxygen from hemoglobin. Levels of 1,3BPG will rise in a patient's blood when oxygen levels are low, as this is one of the mechanisms of acclimatization. Low oxygen levels trigger a rise in 1,3BPG levels, which in turn raises the level of 2,3BPG, which alters the efficiency of oxygen dissociation from hemoglobin.
References
External links
1,3BPG in Glycolysis and Fermentation
Medical Dictionary reference for 1,3BPG
1,3BPG enzyme mechanisms
Photosynthesis
Biomolecules
Cellular respiration
Organophosphates
Glycolysis
Metabolic intermediates | 1,3-Bisphosphoglyceric acid | [
"Chemistry",
"Biology"
] | 1,246 | [
"Carbohydrate metabolism",
"Cellular respiration",
"Natural products",
"Glycolysis",
"Photosynthesis",
"Organic compounds",
"Metabolic intermediates",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Metabolism",
"Molecular biology"
] |
3,104,368 | https://en.wikipedia.org/wiki/Glossary%20of%20architecture | This page is a glossary of architecture.
A
B
C
D
E
In historical gardening, an estrade plant was pruned and trained with the main stem bare in sections, to achieve an appearance often likened to a "wedding cake".
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
Z
See also
Outline of architecture
List of classical architecture terms
Classical order
List of architectural vaults
List of structural elements
Glossary of engineering
Notes
References
Page has search box.
Building engineering
Architecture
Architecture
Architectural elements
architecture
Wikipedia glossaries using description lists | Glossary of architecture | [
"Technology",
"Engineering"
] | 114 | [
"Building engineering",
"Architectural elements",
"Civil engineering",
"Architecture lists",
"Components",
"Architecture"
] |
3,104,373 | https://en.wikipedia.org/wiki/Disk%20pack | Disk packs and disk cartridges were early forms of removable media for computer data storage, introduced in the 1960s.
Disk pack
A disk pack is a layered grouping of hard disk platters (circular, rigid discs coated with a magnetic data storage surface). A disk pack is the core component of a hard disk drive. In modern hard disks, the disk pack is permanently sealed inside the drive. In many early hard disks, the disk pack was a removable unit, and would be supplied with a protective canister featuring a lifting handle.
The protective cover consisted of two parts, a plastic shell, with a handle in the center, that enclosed the top and sides of the disks and a separate bottom that completed the sealed package. To remove the disk pack, the drive would be taken off line and allowed to spin down. Its access door could then be opened and an empty shell inserted and twisted to unlock the disk platter from the drive and secure it to the shell. The assembly would then be lifted out and the bottom cover attached. A different disk pack could then be inserted by removing the bottom and placing the disk pack with its shell into the drive. Turning the handle would lock the disk pack in place and free the shell for removal.
The first removable disk pack was invented in 1961 by IBM engineer R. E. Pattison as part of the LCF (Low Cost File) project headed by Jack Harker. The 14-inch (356 mm) diameter disks introduced by IBM became a de facto standard, with many vendors producing disk drives using 14-inch disks in disk packs and cartridges into the 1980s.
Examples of disk drives that employed removable disk packs include the IBM 1311, IBM 2311, and the Digital RP04.
Disk cartridge
An early disk cartridge was a single hard disk platter encased in a protective plastic shell. When the removable cartridge was inserted into the cartridge drive peripheral device, the read/write heads of the drive could access the magnetic data storage surface of the platter through holes in the shell. The disk cartridge was a direct evolution from the disk pack drive, or the early hard drive. As the storage density improved, even a single platter would provide a useful amount of data storage space, with the benefit of being easier to handle than a removable disk pack. An example of a cartridge drive is the IBM 2310, used on the IBM 1130. Disk cartridges were made obsolete by floppy disks.
Alignment
Disk drives with exchangeable disk packs or disk cartridges generally required the data heads to be aligned to allow packs formatted on one drive to be read and written on another compatible drive. Alignment required a special alignment pack, an oscilloscope, an alignment tool that moved the read/write heads, and patience. The pattern generated on the scope looks like a row of alternating C and E characters on their backs. Head alignment needed to be performed after head replacement, and in any case on a periodic basis as part of the routine maintenance required by the drives.
The alignment pack was usually called the "CE pack," because IBM never called their 'service technicians' 'repairmen,' but "Customer Engineers" (CEs). And, since the alignment pack was only to be used by CEs, it was called the "CE Pack." Special CE media was also available for tape drives and diskette drives, known as "the CE tape" and "the CE floppy."
Later drives with exchangeable packs (such as the CDC 8" Lark drives) embedded the servo with the data and didn't require regular head alignment.
See also
History of hard disk drives
Caelus Memories
References
IBM 1311 disk storage drive, IBM Archives
Thomas G. Leary, "Transporting and Protecting Cases for Drum and Disk Records," , 1965; R.E. Pattison, "Portable Memory for Data Processing Machine," , 1965
Rotating disc computer storage media
History of computing hardware
Computer-related introductions in 1961 | Disk pack | [
"Technology"
] | 804 | [
"History of computing hardware",
"History of computing"
] |
3,104,469 | https://en.wikipedia.org/wiki/Cavetto | A cavetto is a concave moulding with a regular curved profile that is part of a circle, widely used in architecture as well as furniture, picture frames, metalwork and other decorative arts. In describing vessels and similar shapes in pottery, metalwork and related fields, "cavetto" may be used of a variety of concave curves running round objects. The word comes from Italian, as a diminutive of cave, from the Latin for "hollow" (it is the same root as cave). A vernacular alternative is "cove", most often used where interior walls curve at the top to make a transition to the roof, or for "upside down" cavettos at the bases of elements.
The cavetto moulding is the opposite of the convex, bulging, ovolo, which is equally common in the tradition of Western classical architecture. Both bring the surface forward, and are often combined with other elements of moulding. Usually they include a curve through about a quarter-circle (90°). A concave moulding of about a full semi-circle is known as a "scotia".
Only a minor element of decoration in classical architecture, the prominent cavetto cornice is a common feature of the ancient architecture of Egypt and the Ancient Near East.
Architecture
Ancient Egyptian architecture made special use of large cavetto mouldings as a cornice, with only a short fillet (plain vertical face) above, and a torus moulding (convex semi-circle) below. This cavetto cornice is sometimes also known as an "Egyptian cornice", "hollow and roll" or "gorge cornice", and has been suggested to be a reminiscence in stone architecture of the primitive use of bound bunches of reeds as supports for buildings, the weight of the roof bending their tops out.
Many types of Egyptian capitals for columns are essentially cavettos running round the shaft, often with added decoration. These include the types known as "bell capitals" or "papyrus capitals". These features are often reproduced in Egyptian Revival architecture, as in the Egyptian Building (1845) in Richmond, Virginia.
The cavetto cornice, often forming less than a quarter-circle, influenced Egypt's neighbours and as well as appearing in early Greek architecture, it is seen in Syria and ancient Iran, for example at the Tachara palace of Darius I at Persepolis, completed in 486 BC. Inspired by this precedent, it was then revived by Ardashir I (r. 224–41 AD), the founder of the Sasanian dynasty.
The cavetto took the place of the Greek cymatium in many Etruscan temples, often painted with vertical "tongue" patterns, and combined with the distinctive "Etruscan round moulding", often painted with scales.
This emphasis on the cavetto was very different from its role in mature Ancient Greek architecture, where cavetto elements were relatively small and subordinated to essentially vertical elements, setting the style for the subsequent Western classical tradition. Often an essentially cavetto section is heavily decorated, in Gothic architecture often smothering the shape beneath.
In general the Greeks made much more use of the cyma moulding, where a cavetto and ovolo were placed one above the other to produce a "S" shape; the cymatium using this was a standard part of the cornice in the classical Ionic order, and often used elsewhere. There are two forms, depending on which curve is uppermost: in the cyma recta the cavetto is on top, in the cyma reversa the ovolo. A cavetto alone was sometimes employed in the place of the cymatium of a cornice, as in the Doric order of the Theatre of Marcellus in Rome, one of the standard models for revived classical architecture from the Renaissance onwards. But small cavetto mouldings were normal at various places, including integrated ones, not distinguished as a distinct zone by lines or borders, at the bottom of the shaft of columns, beginning the transition to the wider base. These are called an apophyge, or "concave sweep".
Claude Perrault, one of the architects of Louis XIV's rebuilding of the Louvre Palace, especially the Louvre Colonnade, explained in his architectural textbook Ordonnance for the five kinds of columns after the method of the Ancients (1683) why he had replaced a cavetto with a cyma in his illustration of the Doric capital: "a cavetto is not as strong and is more readily broken than the other molding".
Vessels
In plates and other flattish shapes, cavetto is used for the curving area linking the base and the rim. This is the case whether the rim is a broad flat surface (typical 20th-century Western plate), or merely the edge of the cavetto (typical modern cereal bowl). Normally the term refers to the top surface of the vessel; if the underside is meant (where the curve is now convex), that may be described as "under the cavetto", "under-cavetto" and so forth.
The cavetto is very often left undecorated, but may have decoration of a different sort from the middle or a flat rim, and the term is typically used when it is necessary to describe this. In complicated pottery shapes, where the normal vocabulary of mouldings is appropriate, cavetto may be used in that sense for any concave curving section.<ref>Hellenistic Pottery and Terracottas], By Homer A. Thompson et al., examples B42 and B43</ref>
In the terminology of archaeology, especially relating to pottery (and generally not used of pottery after antiquity, or outside archaeology), a cavetto zone or cavetto is a "sharp concavity encircling the body of a vessel", and also a "deep but narrow neck", both used in relation to mainly upright vessels for storing or cooking food.
See also
Cymatium
Notes
References
Gibson, Alex M., Prehistoric Pottery for the Archaeologist, 1997, A&C Black, ISBN 9780718519544, google books
Perrault, Claude, Ordonnance for the Five Kinds of Columns after the Method of the Ancients, translated by Indra Kagis McEwen and edited by Alberto Perez-Gomez, 1996, Getty Publications, ISBN 9780892362325, google books
Summerson, John, The Classical Language of Architecture, 1980 edition, Thames and Hudson World of Art series
Winter, Nancy A., "Monumentalization of the Etruscan Round Moulding in Sixth Century BCE Central Italy", in Monumentality in Etruscan and Early Roman Architecture: Ideology and Innovation, edited by Michael Thomas and Gretchen E. Meyers, 2012, University of Texas Press, ISBN 9780292749825, google books: https://books.google.com/books?id=wOezNYCGLc4C&pg=PA61
Architectural terminology
Pottery
Ancient Egyptian architecture | Cavetto | [
"Engineering"
] | 1,455 | [
"Architectural terminology",
"Architecture"
] |
3,104,473 | https://en.wikipedia.org/wiki/Diurnality | Diurnality is a form of plant and animal behavior characterized by activity during daytime, with a period of sleeping or other inactivity at night. The common adjective used for daytime activity is "diurnal". The timing of activity by an animal depends on a variety of environmental factors such as the temperature, the ability to gather food by sight, the risk of predation, and the time of year. Diurnality is a cycle of activity within a 24-hour period; cyclic activities called circadian rhythms are endogenous cycles not dependent on external cues or environmental factors except for a zeitgeber. Animals active during twilight are crepuscular, those active during the night are nocturnal and animals active at sporadic times during both night and day are cathemeral.
Plants that open their flowers during the daytime are described as diurnal, while those that bloom during nighttime are nocturnal. The timing of flower opening is often related to the time at which preferred pollinators are foraging. For example, sunflowers open during the day to attract bees, whereas the night-blooming cereus opens at night to attract large sphinx moths.
Animals
Many types of animals are classified as being diurnal, meaning they are active during the day time and inactive or have periods of rest during the night time. Commonly classified diurnal animals include mammals, birds, and reptiles. Most primates are diurnal, including humans. Scientifically classifying diurnality within animals can be a challenge, beyond simply observing their increased activity levels during daylight.
Evolution of diurnality
Initially, most animals were diurnal, but adaptations have allowed some animals to become nocturnal, contributing to the success of many, especially mammals. This evolutionary movement to nocturnality allowed them to better avoid predators and gain resources with less competition from other animals. This did come with some adaptations that mammals live with today. Vision has been one of the senses most affected by switching back and forth between diurnality and nocturnality, and this can be seen using biological and physiological analysis of rod nuclei from primate eyes. This includes losing two of four cone opsins that assist in colour vision, making many mammals dichromats. When early primates converted back to diurnality, better vision that included trichromatic colour vision became very advantageous, making diurnality and colour vision adaptive traits of simiiformes, which includes humans. Studies using chromatin distribution analysis of rod nuclei from different simian eyes found that transitions between diurnality and nocturnality occurred several times within primate lineages, with the switch to diurnality being the most common transition.
Still today, diurnality seems to be reappearing in many lineages of other animals, including small rodents such as the Nile grass rat and the golden-mantled ground squirrel, as well as reptiles. More specifically, geckos, which were thought to be naturally nocturnal, have shown many transitions to diurnality, with about 430 species of geckos now showing diurnal activity. With so many diurnal species recorded, comparative analyses using newer lineages of gecko species have been done to study the evolution of diurnality. The roughly 20 transitions counted within gecko lineages show the significance of diurnality. Strong environmental influences like climate change, predation risk, and competition for resources are all contributing factors. Using the example of geckos, it is thought that species like Mediodactylus amictopholis that live at higher altitudes have switched to diurnality to help gain more heat through the day, and therefore conserve more energy, especially in colder seasons.
Light
Light is one of the most defining environmental factors that determines an animal's activity pattern. Photoperiod or a light dark cycle is determined by the geographical location, with day time being associated with much ambient light, and night time being associated with little ambient light. Light is one of the strongest influences of the suprachiasmatic nucleus (SCN) which is part of the hypothalamus in the brain that controls the circadian rhythm in most animals. This is what determines whether an animal is diurnal or not. The SCN uses visual information like light to start a cascade of hormones that are released and work on many physiological and behavioural functions.
Light can produce powerful masking effects on an animal's circadian rhythm, meaning that it can "mask" or influence the internal clock, changing the activity patterns of an animal, either temporarily or over the long term if exposed to enough light over a long period of time. Masking can be referred to either as positive masking or negative masking, with it either increasing a diurnal animal's activity or decreasing a nocturnal animal's activity, respectively. This can be depicted when exposing different types of rodents to the same photoperiods. When a diurnal Nile grass rat and nocturnal mouse are exposed to the same photoperiod and light intensity, increased activity occurred within the grass rat (positive masking), and decreased activity within the mouse (negative masking).
Even small amounts of environmental light change have shown to have an effect on the activity of mammals. An observational study done on the activity of nocturnal owl monkeys in the Gran Chaco in South America showed that increased amounts of moonlight at night increased their activity levels through the night which led to a decrease of daytime activity. Meaning that for this species, ambient moonlight is negatively correlated with diurnal activity. This is also connected with the foraging behaviours of the monkeys, as when there were nights of little to no moonlight, it affected the monkey's ability to forage efficiently, so they were forced to be more active in the day to find food.
Other environmental influences
Diurnality has been shown to be an evolutionary trait in many animal species, with diurnality mostly reappearing in many lineages. Other environmental factors like ambient temperature, food availability, and predation risk can all influence whether an animal will evolve to be diurnal or, if their effects are strong enough, can mask its circadian rhythm, changing its activity patterns to become diurnal. All three factors often involve one another, and animals need to be able to find a balance between them if they are to survive and thrive.
Ambient temperature has been shown to affect and even convert nocturnal animals to diurnality as it is a way for them to conserve metabolic energy. Nocturnal animals are often energetically challenged due to being most active in the nighttime when ambient temperatures are lower than through the day, and so they lose a lot of energy in the form of body heat. According to the circadian thermo-energetics (CTE) hypothesis, animals that are expending more energy than they are taking in (through food and sleep) will be more active in the light phase of the cycle, that is, during the day. This has been shown in studies done on small nocturnal mice in a laboratory setting. When they were placed under a combination of enough cold and hunger stress, they converted to diurnality through temporal niche switching, which was expected. Another similar study that involved energetically challenging small mammals showed that diurnality is most beneficial when the animal has a sheltered location to rest in, reducing heat loss. Both studies concluded that nocturnal mammals do change their activity patterns to be more diurnal when energetically stressed (due to heat loss and limited food availability), but only when predation is also limited, meaning the risks of predation are less than the risk of freezing or starving to death.
Plants
Many plants are diurnal or nocturnal in the opening and closing of their flowers. Most angiosperm plants are visited by various insects, so the flower adapts its phenology to the most effective pollinators. For example, the baobab is pollinated by fruit bats and starts blooming in late afternoon; the flowers are dead within twenty-four hours.
In technology operations
Services that alternate between high and low utilization in a daily cycle are described as being diurnal. Many websites have the most users during the day and little utilization at night, or vice versa. Operations planners can use this cycle to plan, for example, maintenance that needs to be done when there are fewer users on the web site.
Notes
See also
Diurnal cycle
Cathemeral
Chronotype
Crepuscular
Crypsis
Nocturnality
References
Ethology
Circadian rhythm
Day | Diurnality | [
"Biology"
] | 1,723 | [
"Behavior",
"Behavioural sciences",
"Circadian rhythm",
"Ethology",
"Sleep"
] |
3,104,720 | https://en.wikipedia.org/wiki/California%20Debris%20Commission | The California Debris Commission was a federal commission created in 1893 by an act of Congress to regulate California streams that had been devastated by the sediment washed into them from gold mining operations upstream in the Sierra Nevada. It was created to mitigate the damage to natural seasonal river flow and navigation, which had been caused by the extensive use of hydraulic mining. The act was codified under Navigation in 33 U.S.C. Chapter 14 (§§ 661-683). Given substantial power by Congress, the commission significantly reduced the stream damage caused.
The United States Army Corps of Engineers (USACE) had conducted similar works for the government since the beginning of the internal improvements program and it was considered the most knowledgeable organization. However, work involving California streams was outside its assigned responsibilities under the Rivers and Harbors Act. A commission was established to support the stream protection work instead of the normal Congressional House Committee on Rivers and Harbors. The commission consisted of three officers of the Corps of Engineers, appointed by the President, by and with the advice and consent of the Senate. As with the other works, they operated under the supervision of the Chief of Engineers and under the direction of the Secretary of War, who was later replaced by the Secretary of the Army.
Hydraulic mining causes large-scale erosion where employed to move unconsolidated sediments for mineral processing; it also causes similar large-scale sedimentation in downstream areas with a decreased stream gradient. The tons of sediment moved in this man-made and natural process resulted in raising the riverbeds along the Yuba, Sacramento, and some other rivers. This in turn increased the threat of floods in areas along the rivers, including such towns as Marysville on the Yuba. Over time this sediment would eventually move downstream to other river confluences and San Francisco Bay, disrupting navigation in those channels.
Congress created the California Debris Commission to address the man-made damage and mitigate its effects. Several methods were used to solve the adverse effects. The commission dredged the sediment from the rivers and deposited it on available land nearby; in some areas they constructed larger basins to contain the debris, along the Yuba River and other rivers, the mountains of sediment were piled along its banks, effectively making levees from the debris to protect against future flooding.
Among the members of the commission were Douglas MacArthur in the early 1900s and Ulysses S. Grant III in the early 1920s.
The Water Resources Development Act of 1986 eliminated the commission, with its work now the responsibility of the Corps' South Pacific Division.
References
See also
Clean Water Act
List of navigation authorities in the United States
Gold mining in California
United States Army Corps of Engineers
Government agencies established in 1893
1986 disestablishments in California
1893 establishments in California
Water pollution in the United States
Water in California
Government agencies disestablished in 1986 | California Debris Commission | [
"Engineering"
] | 574 | [
"Engineering units and formations",
"United States Army Corps of Engineers"
] |
3,104,727 | https://en.wikipedia.org/wiki/Phoning%20home | In computing, phoning home is a term often used to refer to the behavior of security systems that report network location, username, or other such data to another computer.
Phoning home may be useful for the proprietor in tracking a missing or stolen computer. In this way, it is frequently performed by mobile computers at corporations. It typically involves a software agent which is difficult to detect or remove. However, phoning home can also be malicious, as in surreptitious communication between end-user applications or hardware and its manufacturers or developers. The traffic may be encrypted to make it difficult or impractical for the end user to determine what data are being transmitted.
The Stuxnet attack on Iran's nuclear facilities was facilitated by phone-home technology, as reported by The New York Times.
Legally phoning home
Some uses for the practice are legal in some countries. For example, phoning home could be for access restriction, such as transmitting an authorization key. This was done with the Adobe Creative Suite: Each time one of the programs is opened, it phones home with the serial number. If the serial number is already in use, or a fake, then the program will present the user with the option of entering the correct serial number. If the user refuses, the next time the program loads, it will operate in trial mode until a valid serial number has been entered. However, the method can be thwarted by either disabling the internet connection when starting the program or adding a firewall or Hosts file rule to prevent the program from communicating with the verification server.
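A minimal sketch of what such a start-up check can look like on the client side is shown below; the verification URL, request fields, and response format are hypothetical, and real products use their own (usually encrypted) protocols.

```python
# Minimal sketch of a license "phone home" check at application start-up.
# The endpoint URL and the response format are hypothetical.
import json
import urllib.request

VERIFY_URL = "https://activation.example.com/verify"  # hypothetical server

def check_license(serial_number: str, timeout: float = 3.0) -> str:
    """Return 'ok', 'invalid', or 'offline' (the caller falls back to trial mode)."""
    payload = json.dumps({"serial": serial_number}).encode("utf-8")
    request = urllib.request.Request(
        VERIFY_URL, data=payload,
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            result = json.load(response)
    except OSError:
        # No network, a firewall rule, or a hosts-file override blocks the
        # connection: the check cannot complete, so run in trial mode.
        return "offline"
    return "ok" if result.get("valid") else "invalid"

if __name__ == "__main__":
    print(check_license("XXXX-YYYY-ZZZZ"))
```

Blocking the request, for example with a firewall or hosts-file rule as described above, simply leaves such a check in its offline branch.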
Phoning home could also be for marketing purposes, such as the "Sony BMG rootkit", which transmits a hash of the currently playing CD back to Sony, or a digital video recorder (DVR) reporting on viewing habits. High-end computing systems such as mainframes have been able to phone home for many years, to alert the manufacturer of hardware problems with the mainframes or disk storage subsystems (this enables repair or maintenance to be performed quickly and even proactively under the maintenance contract). Similarly, high-volume copy machines have long been equipped with phone-home capabilities, both for billing and for preventative/predictive maintenance purposes.
In research computing, phoning home can track the daily usage of open source academic software. This is used to develop logs for the purposes of justification in grant proposals to support the ongoing funding of such projects.
Aside from malicious activity, phoning home may also be done to track computer assets—especially mobile computers. One of the most well-known software applications that leverage phoning home for tracking is Absolute Software's CompuTrace. This software employs an agent which calls into an Absolute-managed server on regular intervals with information companies or the police can use to locate a missing computer.
More uses
Besides phoning home to the website of an application's author, applications can allow their documents to do the same thing, thus allowing the documents' authors to trigger (essentially anonymous) tracking by setting up a connection that is intended to be logged. Such behavior, for example, caused v7.0.5 of Adobe Reader to add an interactive notification whenever a PDF file tries to phone home to its author.
HTML e-mail messages can easily implement a form of "phoning home". Images and other files required by the e-mail body may generate extra requests to a remote web server before they can be viewed. The IP address of the user's own computer is sent to the webserver (an unavoidable process if a reply is required), and further details embedded in request URLs can further identify the user by e-mail address, marketing campaign, etc. Such extra page resources have been referred to as "web bugs" and they can also be used to track off-line viewing and other uses of ordinary web pages. So as to prevent the activation of these requests, many e-mail clients do not load images or other web resources when HTML e-mails are first viewed, giving users the option to load the images only if the e-mail is from a trusted source.
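A minimal sketch of the mechanism (with a hypothetical tracking domain and parameter names) shows how loading a single invisible image can identify the reader:

```python
# Minimal sketch of how an HTML e-mail "web bug" identifies its reader.
# The tracking domain and query parameters are hypothetical.
from urllib.parse import urlencode

def tracking_pixel(recipient_email: str, campaign_id: str) -> str:
    """Return an <img> tag whose request, when the image loads, tells the
    sender's web server who opened the message and from which campaign."""
    query = urlencode({"addr": recipient_email, "c": campaign_id})
    url = f"https://tracker.example.com/pixel.gif?{query}"
    # A 1x1 image is invisible to the reader but still generates an HTTP
    # request carrying the reader's IP address and the parameters above.
    return f'<img src="{url}" width="1" height="1" alt="">'

body = "<p>Hello!</p>" + tracking_pixel("alice@example.org", "spring-sale")
print(body)
```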
Maliciously phoning home
There are many malware applications that can "phone home" to gather and store information about a person's machine. For example, the Pushdo Trojan shows the new complexity of modern malware applications and the phoning-home capabilities of these systems. Pushdo has 421 executables available to be sent to an infected Windows client.
Foscam surveillance cameras have been reported by security researcher Brian Krebs to secretly phone home to the manufacturer.
See also
Digital rights management (DRM)
Product activation
Spyware
Internet of Things
Telemetry
References
Computer network security
Spyware
Internet privacy | Phoning home | [
"Engineering"
] | 971 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
3,104,807 | https://en.wikipedia.org/wiki/Temporal%20mean | The temporal mean is the arithmetic mean of a series of values over a time period. Assuming equidistant measuring or sampling times, it can be computed as the sum of the values over a period divided by the number of values. A simple moving average can be considered to be a sequence of temporal means over periods of equal duration. (If the time variable is continuous, the average value during the time period is the integral over the period divided by the length of the duration of the period.)
See also
Moving average
References
Means | Temporal mean | [
"Physics",
"Mathematics"
] | 109 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
3,105,253 | https://en.wikipedia.org/wiki/Water%20transportation | Water transportation is the intentional movement of water over large distances. Methods of transportation fall into three categories:
Aqueducts, which include pipelines, canals, tunnels and bridges
Container shipment, which includes transport by tank truck, tank car, and tank ship.
Towing, where a tugboat is used to pull an iceberg or a large water bag along behind it.
Due to its weight, the transportation of water is very energy-intensive. Unless it has the assistance of gravity, a canal or long-distance pipeline will need pumping stations at regular intervals. In this regard, the lower friction levels of the canal make it a more economical solution than the pipeline. Water transportation is also very common in rivers and oceans.
Major water transportation projects
The Grand Canal of China, completed in the 7th century AD and measuring .
The California Aqueduct, near Sacramento, is long.
The Great Manmade River is a vast underground network of pipes in the Sahara desert, transporting water from an immense aquifer to the largest cities in the region.
The Keita Integrated Development Project used specially created plows called the donaldo and Scarabeo to build water catchments. In these catchments, trees were planted which grow on the water flowing through the ditches.
The Kimberley Water Source Project is currently under way in Australia to determine the best method of transporting water from the Fitzroy River to the city of Perth. Options being considered include a 3,700-kilometre canal, a pipeline of at least 1,800 kilometres, tankers of 300,000 to 500,000 tonnes, and water bags each carrying between 0.5 and 1.5 gigalitres.
The Goldfields Pipeline built in Western Australia in 1903 was the longest pipeline of its day, at 597 kilometres. It supplies water from Perth to the gold mining centre of Kalgoorlie.
Manual water transportation
Historically water was transported by hand in dry countries, by traditional waterers such as the Sakkas of Arabia and Bhishti of India. Africa is another area where water is often transported by hand, especially in rural areas.
See also
Pipeline#Water
Water export
Water management
Water supply
References
Water supply | Water transportation | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 434 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
3,105,389 | https://en.wikipedia.org/wiki/Iconoscope | The iconoscope (from the Greek: εἰκών "image" and σκοπεῖν "to look, to see") was the first practical video camera tube to be used in early television cameras. The iconoscope produced a much stronger signal than earlier mechanical designs, and could be used under any well-lit conditions. This was the first fully electronic system to replace earlier cameras, which used special spotlights or spinning disks to capture light from a single very brightly lit spot.
Some of the principles of this apparatus were described when Vladimir Zworykin filed two patents for a television system in 1923 and 1925. A research group at the Westinghouse Electric Company headed by Zworykin presented the iconoscope to the general public in a press conference in June 1933, and two detailed technical papers were published in September and October of the same year. The German company Telefunken bought the rights from RCA and built the superikonoskop camera used for the historical TV transmission at the 1936 Summer Olympics in Berlin.
The iconoscope was replaced in Europe around 1936 by the much more sensitive Super-Emitron and Superikonoskop, while in the United States the iconoscope was the leading camera tube used for broadcasting from 1936 until 1946, when it was replaced by the image orthicon tube.
Discovery of a New Physical Phenomenon
In a Technikatörténeti Szemle article, subsequently reissued on the internet, entitled The Iconoscope: Kálmán Tihanyi and the Development of Modern Television, Tihanyi's daughter Katalin Tihanyi Glass notes that her father found the "storage principle" included a "new physical phenomenon", the photoconductive effect:
Operation
The main image forming element in the iconoscope was a mica plate with a pattern of photosensitive granules deposited on the front using an electrically insulating glue. The granules were typically made of silver grains covered with caesium or caesium oxide. The back of the mica plate, opposite the granules, was covered with a thin film of silver. The separation between the silver on the back of the plate and the silver in the granules caused them to form individual capacitors, able to store electrical charge. These were typically deposited as small spots, creating pixels. The system as a whole was referred to as a "mosaic".
The system is first charged up by scanning the plate with an electron gun similar to one in a conventional television cathode ray display tube. This process deposits charges into the granules, which in a dark room would slowly decay away at a known rate. When exposed to light, the photosensitive coating releases electrons which are supplied by the charge stored in the silver. The emission rate increases in proportion to the intensity of the light. Through this process, the plate forms an electrical analog of the visual image, with the stored charge representing the inverse of the average brightness of the image at that location.
When the electron beam scans the plate again, any residual charge in the granules resists refilling by the beam. The beam energy is set so that any charge resisted by the granules is reflected back into the tube, where it is collected by the collector ring, a ring of metal placed around the screen. The charge collected by the collector ring varies in relation to the charge stored in that location. This signal is then amplified and inverted, and then represents a positive video signal.
The collector ring is also used to collect electrons being released from the granules in the photoemission process. If the gun is scanning a dark area few electrons would be released directly from the scanned granules, but the rest of the mosaic will also be releasing electrons that will be collected during that time. As a result, the black level of the image will float depending on the average brightness of the image, which caused the iconoscope to have a distinctive patchy visual style. This was normally combatted by keeping the image continually and very brightly lit. This also led to clear visual differences between scenes shot indoors and those shot outdoors in good lighting conditions.
As the electron gun and the image itself both have to be focused on the same side of the tube, some attention has to be paid to the mechanical arrangement of the components. Iconoscopes were typically built with the mosaic inside a cylindrical tube with flat ends, with the plate positioned in front of one of the ends. A conventional movie camera lens was placed in front of the other end, focused on the plate. The electron gun was then placed below the lens, tilted so that it was also aimed at the plate, although at an angle. This arrangement has the advantage that both the lens and electron gun lie in front of the imaging plate, which allows the system to be compartmentalized in a box-shaped enclosure with the lens completely within the case.
As the electron gun is tilted relative to the screen, its image of the screen is not a rectangle but a keystone shape. Additionally, the time needed for the electrons to reach the upper portions of the screen was longer than for the lower areas, which were closer to the gun. Electronics in the camera adjusted for this effect by slightly changing the scanning rates.
The accumulation and storage of photoelectric charges during each scanning cycle greatly increased the electrical output of the iconoscope relative to non-storage type image scanning devices. In the 1931 version, the electron beam scanned the granules; while in the 1925 version, the electron beam scanned the back of the image plate.
History
The early electronic camera tubes (like the image dissector) suffered from a fatal flaw: they scanned the subject, and what was seen at each point was only the tiny piece of light viewed at the instant the scanning system passed over it. A practical, functional camera tube needed a different technological approach, which later became known as the charge-storage camera tube. It was based on a new, hitherto unknown physical phenomenon that was discovered and patented by the physicist Kálmán Tihanyi in Hungary in 1926; however, the new phenomenon became widely understood and recognised only from around 1930.
The problem of low sensitivity to light resulting in low electrical output from transmitting or "camera" tubes would be solved with the introduction of charge-storage technology by the Hungarian engineer Kálmán Tihanyi in the beginning of 1925. His solution was a camera tube that accumulated and stored electrical charges ("photoelectrons") within the tube throughout each scanning cycle. The device was first described in a patent application he filed in Hungary in March 1926 for a television system he dubbed "Radioskop". After further refinements included in a 1928 patent application, Tihanyi's patent was declared void in Great Britain in 1930, and so he applied for patents in the United States.
Tihanyi's Radioskop patent was recognized as a Document of Universal Significance by UNESCO, and thus became part of the Memory of the World Programme on September 4, 2001.
Zworykin presented in 1923 his project for a totally electronic television system to the general manager of Westinghouse. In July 1925, Zworykin submitted a patent application for a "Television System" that included a charge storage plate constructed of a thin layer of isolating material (aluminum oxide) sandwiched between a screen (300 mesh) and a colloidal deposit of photoelectric material (potassium hydride) consisting of isolated globules. The following description can be read between lines 1 and 9 on page 2: The photoelectric material, such as potassium hydride, is evaporated on the aluminum oxide, or other insulating medium, and treated so as to form a colloidal deposit of potassium hydride consisting of minute globules. Each globule is very active photoelectrically and constitutes, to all intents and purposes, a minute individual photoelectric cell. Its first image was transmitted in late summer of 1925, and a patent was issued in 1928. However, the quality of the transmitted image failed to impress H. P. Davis, the general manager of Westinghouse, and Zworykin was asked to work on something useful. A patent for a television system was also filed by Zworykin in 1923, but this file is not a reliable bibliographic source because extensive revisions were done before a patent was issued fifteen years later and the file itself was divided into two patents in 1931.
The first practical iconoscope was constructed in 1931 by Sanford Essig, when he accidentally left a silvered mica sheet in the oven too long. Upon examination with a microscope, he noticed that the silver layer had broken up into a myriad of tiny isolated silver globules. He also noticed that: the tiny dimension of the silver droplets would enhance the image resolution of the iconoscope by a quantum leap. As head of television development at Radio Corporation of America (RCA), Zworykin submitted a patent application in November 1931, and it was issued in 1935. Nevertheless, Zworykin's team was not the only engineering group working on devices that used a charge storage plate. In 1932, Tedham and McGee, under the supervision of Isaac Shoenberg, applied for a patent for a new device they dubbed "the emitron"; a 405-line broadcasting service employing the emitron began at studios in Alexandra Palace in 1936, and a patent was issued in the US in 1937. Meanwhile, in 1933, Philo Farnsworth had also applied for a patent for a device that used a charge storage plate and a low-velocity electron scanning beam. A corresponding patent was issued in 1937, but Farnsworth did not know that the low-velocity scanning beam must land perpendicular to the target, and he never actually built such a tube.
The iconoscope was presented to the general public in a press conference in June 1933, and two detailed technical papers were published in September and October of the same year. Unlike the Farnsworth image dissector, the Zworykin iconoscope was much more sensitive, useful with an illumination on the target between 4ft-c (43lx) and 20ft-c (215lx). It was also easier to manufacture and produced a very clear image. The iconoscope was the primary camera tube used in American broadcasting from 1936 until 1946, when it was replaced by the image orthicon tube.
In Britain, a team formed by engineers Lubszynski, Rodda, and McGee developed the super-emitron (also known as the superikonoskop in Germany and the image iconoscope in the Netherlands) in 1934. This new device was between ten and fifteen times more sensitive than the original emitron and iconoscope, and it was used for a public broadcast by the BBC, for the first time, on Armistice Day 1937. The image iconoscope was the representative of the European tradition in electronic tubes competing against the American tradition represented by the image orthicon.
See also
Image dissector
Video camera tube
References
External links
Iconoscope history
Iconoscope pictures
Audiovisual introductions in 1933
Russian inventions
Television technology | Iconoscope | [
"Technology"
] | 2,278 | [
"Information and communications technology",
"Television technology"
] |
3,105,510 | https://en.wikipedia.org/wiki/Piezochromism | Piezochromism, from the Greek piezô "to squeeze, to press" and chromos "color", describes the tendency of certain materials to change color with the application of pressure. This effect is closely related to the electronic band gap change, which can be found in plastics, semiconductors (e.g. hybrid perovskites) and hydrocarbons. One simple molecule displaying this property is 5-methyl-2-[(2-nitrophenyl)amino]-3-thiophenecarbonitrile, also known as ROY owing to its red, orange and yellow crystalline forms. Individual yellow and pale orange versions transform reversibly to red at high pressure.
References
External links
Piezochromism
Chromism | Piezochromism | [
"Physics",
"Chemistry",
"Materials_science",
"Astronomy",
"Engineering"
] | 159 | [
"Spectroscopy stubs",
"Materials science stubs",
"Spectrum (physical sciences)",
"Chromism",
"Astronomy stubs",
"Materials science",
"Smart materials",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
3,105,691 | https://en.wikipedia.org/wiki/Theobromine%20poisoning | Theobromine poisoning, also informally called chocolate poisoning or cocoa poisoning, is an overdosage reaction to the xanthine alkaloid theobromine, found in chocolate, tea, cola beverages, and some other foods.
Sources
Cocoa powder contains about theobromine by weight, so of raw cocoa contains approximately theobromine.
Processed chocolate, in general, has smaller amounts. The amount found in highly refined chocolate candies or sweets (typically ) is much lower than that of dark chocolate or unsweetened baking chocolate ( or ).
In species
Humans
Pharmacology
Theobromine has a half-life of , but over may be unmodified after a single dose of
In general, the amount of theobromine found in chocolate is small enough that chocolate can be safely consumed by humans with a negligible risk of poisoning.
Toxicity
Theobromine doses at per day, such as may be found in of cocoa powder, may be accompanied by sweating, trembling and severe headaches. These are the mild-to-moderate symptoms.
The severe symptoms are cardiac arrhythmias, epileptic seizures, internal bleeding, heart attacks, and eventually death.
Limited mood effects were shown at per day.
In other species
Toxicity
Median lethal (LD50) doses of theobromine have only been published for cats, dogs, rats, and mice; these differ by a factor of 6 across species.
Serious poisoning happens more frequently in domestic animals, which metabolize theobromine much more slowly than humans, and can easily consume enough chocolate to cause poisoning. The most common victims of theobromine poisoning are dogs, for whom it can be fatal. The toxic dose for cats is even lower than for dogs. However, cats are less prone to eating chocolate since they are unable to taste sweetness. Theobromine is less toxic to rats and mice, which all have an LD50 of about .
In dogs, the biological half-life of theobromine is 17.5 hours; in severe cases, clinical symptoms of theobromine poisoning can persist for 72 hours. Medical treatment performed by a veterinarian involves inducing vomiting within two hours of ingestion and administration of benzodiazepines or barbiturates for seizures, antiarrhythmics for heart arrhythmias, and fluid diuresis. Theobromine is also suspected to induce right atrial cardiomyopathy after long term exposure at levels equivalent to approximately of dark chocolate per day. According to the Merck Veterinary Manual, baker's chocolate of approximately of a dog's body weight is sufficient to cause symptoms of toxicity. For example, of baker's chocolate would be enough to produce mild symptoms in a dog, while a 25% cacao chocolate bar (like milk chocolate) would be only 25% as toxic as the same dose of baker's chocolate. One ounce of milk chocolate per pound of body weight () is a potentially lethal dose in dogs.
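A minimal sketch (not from the article, and not veterinary guidance) of the proportional reasoning above: the theobromine dose scales with the cacao fraction of the chocolate, so a 25% cacao bar delivers about 25% of the theobromine of the same mass of baker's chocolate. The concentration and threshold constants below are assumed placeholders, not figures from the Merck Veterinary Manual:

```python
# Assumed placeholder values, for illustration only.
THEOBROMINE_MG_PER_G_BAKERS = 14.0        # assumed content of 100% baker's chocolate
MILD_SYMPTOM_THRESHOLD_MG_PER_KG = 20.0   # assumed dose at which mild signs may appear

def theobromine_dose_mg(chocolate_g, cacao_fraction):
    """Estimate ingested theobromine; 25% cacao chocolate carries roughly 25% of the
    theobromine of the same mass of baker's chocolate, as the article notes."""
    return chocolate_g * cacao_fraction * THEOBROMINE_MG_PER_G_BAKERS

def dose_per_kg(chocolate_g, cacao_fraction, dog_kg):
    return theobromine_dose_mg(chocolate_g, cacao_fraction) / dog_kg

# Assumed scenario: 100 g of 25%-cacao milk chocolate eaten by a 20 kg dog.
d = dose_per_kg(100, 0.25, 20)
print(f"{d:.1f} mg/kg -> mild symptoms possible: {d >= MILD_SYMPTOM_THRESHOLD_MG_PER_KG}")
```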
Wildlife
In 2014, four American black bears were found dead at a bait site in New Hampshire. A necropsy and toxicology report performed at the University of New Hampshire in 2015 confirmed they died of heart failure caused by theobromine after they consumed of chocolate and doughnuts placed at the site as bait. A similar incident killed a black bear cub in Michigan in 2011.
Pest control
In previous research, the USDA investigated the possible use of theobromine as a toxicant to control coyotes preying on livestock.
See also
Xanthine oxidase
Footnotes
References
Merck Veterinary Manual (Toxicology/Food Hazards section), Merck & Co., Inc., Chocolate Poisoning. (June 16, 2005)
External links
Toxicity basic facts
Cat health
Dog health
Poisoning by drugs, medicaments and biological substances
Veterinary toxicology | Theobromine poisoning | [
"Environmental_science"
] | 772 | [
" medicaments and biological substances",
"Veterinary toxicology",
"Toxicology",
"Poisoning by drugs"
] |
3,105,999 | https://en.wikipedia.org/wiki/Confounding | In causal inference, a confounder is a variable that influences both the dependent variable and independent variable, causing a spurious association. Confounding is a causal concept, and as such, cannot be described in terms of correlations or associations. The existence of confounders is an important quantitative explanation why correlation does not imply causation. Some notations are explicitly designed to identify the existence, possible existence, or non-existence of confounders in causal relationships between elements of a system.
Confounders are threats to internal validity.
Example
Let's assume that a trucking company owns a fleet of trucks made by two different manufacturers. Trucks made by one manufacturer are called "A Trucks" and trucks made by the other manufacturer are called "B Trucks." We want to find out whether A Trucks or B Trucks get better fuel economy. We measure fuel and miles driven for a month and calculate the MPG for each truck. We then run the appropriate analysis, which determines that there is a statistically significant trend that A Trucks are more fuel efficient than B Trucks. Upon further reflection, however, we also notice that A Trucks are more likely to be assigned highway routes, and B Trucks are more likely to be assigned city routes. This is a confounding variable. The confounding variable makes the results of the analysis unreliable. It is quite likely that we are just measuring the fact that highway driving results in better fuel economy than city driving.
In statistics terms, the make of the truck is the independent variable, the fuel economy (MPG) is the dependent variable and the amount of city driving is the confounding variable. To fix this study, we have several choices. One is to randomize the truck assignments so that A trucks and B Trucks end up with equal amounts of city and highway driving. That eliminates the confounding variable. Another choice is to quantify the amount of city driving and use that as a second independent variable. A third choice is to segment the study, first comparing MPG during city driving for all trucks, and then run a separate study comparing MPG during highway driving.
Definition
Confounding is defined in terms of the data generating model. Let X be some independent variable, and Y some dependent variable. To estimate the effect of X on Y, the statistician must suppress the effects of extraneous variables that influence both X and Y. We say that X and Y are confounded by some other variable Z whenever Z causally influences both X and Y.
Let P(y | do(x)) be the probability of event Y = y under the hypothetical intervention X = x. X and Y are not confounded if and only if the following holds:
P(y | do(x)) = P(y | x)
for all values X = x and Y = y, where P(y | x) is the conditional probability upon seeing X = x. Intuitively, this equality states that X and Y are not confounded whenever the observationally witnessed association between them is the same as the association that would be measured in a controlled experiment, with x randomized.
In principle, the defining equality P(y | do(x)) = P(y | x) can be verified from the data generating model, assuming we have all the equations and probabilities associated with the model. This is done by simulating an intervention (see Bayesian network) and checking whether the resulting probability of Y equals the conditional probability P(y | x). It turns out, however, that graph structure alone is sufficient for verifying the equality P(y | do(x)) = P(y | x).
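A minimal sketch (not part of the article) of the check described above: simulate an assumed model in which Z causally influences both X and Y, then compare the conditional probability P(y | x) with the interventional probability P(y | do(x)) obtained by forcing X. All numerical parameters are assumptions chosen for illustration:

```python
import random

random.seed(0)

def sample(intervene_x=None):
    """Draw one (z, x, y) sample from an assumed model where Z causes both X and Y.
    If intervene_x is given, X is set by intervention (do(X=x)) instead of by Z."""
    z = 1 if random.random() < 0.5 else 0                        # confounder
    if intervene_x is None:
        x = 1 if random.random() < (0.8 if z else 0.2) else 0    # Z influences X
    else:
        x = intervene_x                                          # do(X=x): cut Z -> X
    y = 1 if random.random() < 0.3 + 0.4 * z + 0.1 * x else 0    # Z and X influence Y
    return z, x, y

n = 200_000
observational = [sample() for _ in range(n)]
interventional = [sample(intervene_x=1) for _ in range(n)]

obs = [y for _, x, y in observational if x == 1]
p_y_given_x = sum(obs) / len(obs)                 # P(Y=1 | X=1), observational
p_y_do_x = sum(y for _, _, y in interventional) / n   # P(Y=1 | do(X=1))

print(f"P(Y=1 | X=1)     ~ {p_y_given_x:.3f}")    # inflated by confounding through Z
print(f"P(Y=1 | do(X=1)) ~ {p_y_do_x:.3f}")       # the causal quantity; here ~0.60
```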
Control
Consider a researcher attempting to assess the effectiveness of drug X, from population data in which drug usage was a patient's choice. The data shows that gender (Z) influences a patient's choice of drug as well as their chances of recovery (Y). In this scenario, gender Z confounds the relation between X and Y, since Z is a cause of both X and Y (X ← Z → Y).
We have that
P(y | do(x)) ≠ P(y | x)
because the observational quantity contains information about the correlation between X and Z, and the interventional quantity does not (since X is not correlated with Z in a randomized experiment). It can be shown that, in cases where only observational data is available, an unbiased estimate of the desired quantity P(y | do(x)) can be obtained by "adjusting" for all confounding factors, namely, conditioning on their various values and averaging the result. In the case of a single confounder Z, this leads to the "adjustment formula":
P(y | do(x)) = Σz P(y | x, z) P(z),
which gives an unbiased estimate for the causal effect of X on Y. The same adjustment formula works when there are multiple confounders, except that, in this case, the choice of a set Z of variables that would guarantee unbiased estimates must be done with caution. The criterion for a proper choice of variables is called the Back-Door criterion and requires that the chosen set Z "blocks" (or intercepts) every path between X and Y that contains an arrow into X. Such sets are called "Back-Door admissible" and may include variables which are not common causes of X and Y, but merely proxies thereof.
Returning to the drug use example, since Z complies with the Back-Door requirement (i.e., it intercepts the one Back-Door path X ← Z → Y), the Back-Door adjustment formula is valid: P(y | do(x)) = Σz P(y | x, z) P(z).
In this way the physician can predict the likely effect of administering the drug from observational studies in which the conditional probabilities appearing on the right-hand side of the equation can be estimated by regression.
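As a minimal illustration (not from the article), the following sketch generates assumed observational data in which Z confounds X and Y, then recovers P(y | do(x)) with the adjustment formula by stratifying on Z and averaging. The generating probabilities are assumptions chosen for the example and mirror the simulation sketch above:

```python
import random

random.seed(1)

# Assumed observational data: Z (confounder) influences both X and Y.
def draw():
    z = 1 if random.random() < 0.5 else 0
    x = 1 if random.random() < (0.8 if z else 0.2) else 0
    y = 1 if random.random() < 0.3 + 0.4 * z + 0.1 * x else 0
    return z, x, y

data = [draw() for _ in range(200_000)]

def adjusted_effect(data, x_val):
    """Back-door adjustment: P(y | do(x)) = sum over z of P(y | x, z) * P(z)."""
    n = len(data)
    total = 0.0
    for z_val in (0, 1):
        stratum = [y for z, x, y in data if z == z_val and x == x_val]
        p_y_given_xz = sum(stratum) / len(stratum)           # P(Y=1 | X=x, Z=z)
        p_z = sum(1 for z, _, _ in data if z == z_val) / n   # P(Z=z)
        total += p_y_given_xz * p_z
    return total

naive = sum(y for z, x, y in data if x == 1) / sum(1 for z, x, y in data if x == 1)
print(f"naive    P(Y=1 | X=1)     ~ {naive:.3f}")                     # biased by Z
print(f"adjusted P(Y=1 | do(X=1)) ~ {adjusted_effect(data, 1):.3f}")  # ~0.60 here
```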
Contrary to common beliefs, adding covariates to the adjustment set Z can introduce bias. A typical counterexample occurs when Z is a common effect of X and Y, a case in which Z is not a confounder (i.e., the null set is Back-door admissible) and adjusting for Z would create bias known as "collider bias" or "Berkson's paradox." Controls that are not good confounders are sometimes called bad controls.
In general, confounding can be controlled by adjustment if and only if there is a set of observed covariates that satisfies the Back-Door condition. Moreover, if Z is such a set, then the adjustment formula above is valid. Pearl's do-calculus provides all possible conditions under which P(y | do(x)) can be estimated, not necessarily by adjustment.
History
According to Morabia (2011), the word confounding derives from the Medieval Latin verb "confundere", which meant "mixing", and was probably chosen to represent the confusion (from Latin: con=with + fusus=mix or fuse together) between the cause one wishes to assess and other causes that may affect the outcome and thus confuse, or stand in the way of the desired assessment. Greenland, Robins and Pearl note an early use of the term "confounding" in causal inference by John Stuart Mill in 1843.
Fisher introduced the word "confounding" in his 1935 book "The Design of Experiments" to refer specifically to a consequence of blocking (i.e., partitioning) the set of treatment combinations in a factorial experiment, whereby certain interactions may be "confounded with blocks". This popularized the notion of confounding in statistics, although Fisher was concerned with the control of heterogeneity in experimental units, not with causal inference.
According to Vandenbroucke (2004) it was Kish who used the word "confounding" in the sense of "incomparability" of two or more groups (e.g., exposed and unexposed) in an observational study. Formal conditions defining what makes certain groups "comparable" and others "incomparable" were later developed in epidemiology by Greenland and Robins (1986) using the counterfactual language of Neyman (1935) and Rubin (1974). These were later supplemented by graphical criteria such as the Back-Door condition (Pearl 1993; Greenland, Robins and Pearl 1999).
Graphical criteria were shown to be formally equivalent to the counterfactual definition but more transparent to researchers relying on process models.
Types
In the case of risk assessments evaluating the magnitude and nature of risk to human health, it is important to control for confounding to isolate the effect of a particular hazard such as a food additive, pesticide, or new drug. For prospective studies, it is difficult to recruit and screen for volunteers with the same background (age, diet, education, geography, etc.), and in historical studies, there can be similar variability. Due to the inability to control for variability of volunteers and human studies, confounding is a particular challenge. For these reasons, experiments offer a way to avoid most forms of confounding.
In some disciplines, confounding is categorized into different types. In epidemiology, one type is "confounding by indication", which relates to confounding from observational studies. Because prognostic factors may influence treatment decisions (and bias estimates of treatment effects), controlling for known prognostic factors may reduce this problem, but it is always possible that a forgotten or unknown factor was not included or that factors interact complexly. Confounding by indication has been described as the most important limitation of observational studies. Randomized trials are not affected by confounding by indication due to random assignment.
Confounding variables may also be categorised according to their source: the choice of measurement instrument (operational confound), situational characteristics (procedural confound), or inter-individual differences (person confound).
An operational confounding can occur in both experimental and non-experimental research designs. This type of confounding occurs when a measure designed to assess a particular construct inadvertently measures something else as well.
A procedural confounding can occur in a laboratory experiment or a quasi-experiment. This type of confound occurs when the researcher mistakenly allows another variable to change along with the manipulated independent variable.
A person confounding occurs when two or more groups of units are analyzed together (e.g., workers from different occupations), despite varying according to one or more other (observed or unobserved) characteristics (e.g., gender).
Examples
Say one is studying the relation between birth order (1st child, 2nd child, etc.) and the presence of Down Syndrome in the child. In this scenario, maternal age would be a confounding variable:
Higher maternal age is directly associated with Down Syndrome in the child
Higher maternal age is directly associated with Down Syndrome, regardless of birth order (a mother having her 1st vs 3rd child at age 50 confers the same risk)
Maternal age is directly associated with birth order (the 2nd child, except in the case of twins, is born when the mother is older than she was for the birth of the 1st child)
Maternal age is not a consequence of birth order (having a 2nd child does not change the mother's age)
In risk assessments, factors such as age, gender, and educational levels often affect health status and so should be controlled. Beyond these factors, researchers may not consider or have access to data on other causal factors. An example is the study of the effects of smoking tobacco on human health. Smoking, drinking alcohol, and diet are lifestyle activities that are related. A risk assessment that looks at the effects of smoking but does not control for alcohol consumption or diet may overestimate the risk of smoking. Smoking and confounding are reviewed in occupational risk assessments such as the safety of coal mining. When there is not a large sample population of non-smokers or non-drinkers in a particular occupation, the risk assessment may be biased towards finding a negative effect on health.
Decreasing the potential for confounding
A reduction in the potential for the occurrence and effect of confounding factors can be obtained by increasing the types and numbers of comparisons performed in an analysis. If measures or manipulations of core constructs are confounded (i.e. operational or procedural confounds exist), subgroup analysis may not reveal problems in the analysis. Additionally, increasing the number of comparisons can create other problems (see multiple comparisons).
Peer review is a process that can assist in reducing instances of confounding, either before study implementation or after analysis has occurred. Peer review relies on collective expertise within a discipline to identify potential weaknesses in study design and analysis, including ways in which results may depend on confounding. Similarly, replication can test for the robustness of findings from one study under alternative study conditions or alternative analyses (e.g., controlling for potential confounds not identified in the initial study).
Confounding effects may be less likely to occur and act similarly at multiple times and locations. In selecting study sites, the environment can be characterized in detail at the study sites to ensure sites are ecologically similar and therefore less likely to have confounding variables. Lastly, the relationship between the environmental variables that possibly confound the analysis and the measured parameters can be studied. The information pertaining to environmental variables can then be used in site-specific models to identify residual variance that may be due to real effects.
Depending on the type of study design in place, there are various ways to modify that design to actively exclude or control confounding variables:
Case-control studies assign confounders to both groups, cases and controls, equally. For example, if somebody wanted to study the cause of myocardial infarct and thinks that the age is a probable confounding variable, each 67-year-old infarct patient will be matched with a healthy 67-year-old "control" person. In case-control studies, matched variables most often are the age and sex. Drawback: Case-control studies are feasible only when it is easy to find controls, i.e. persons whose status vis-à-vis all known potential confounding factors is the same as that of the case's patient: Suppose a case-control study attempts to find the cause of a given disease in a person who is 1) 45 years old, 2) African-American, 3) from Alaska, 4) an avid football player, 5) vegetarian, and 6) working in education. A theoretically perfect control would be a person who, in addition to not having the disease being investigated, matches all these characteristics and has no diseases that the patient does not also have—but finding such a control would be an enormous task.
Cohort studies: A degree of matching is also possible and it is often done by only admitting certain age groups or a certain sex into the study population, creating a cohort of people who share similar characteristics and thus all cohorts are comparable in regard to the possible confounding variable. For example, if age and sex are thought to be confounders, only 40 to 50 years old males would be involved in a cohort study that would assess the myocardial infarct risk in cohorts that either are physically active or inactive. Drawback: In cohort studies, the overexclusion of input data may lead researchers to define too narrowly the set of similarly situated persons for whom they claim the study to be useful, such that other persons to whom the causal relationship does in fact apply may lose the opportunity to benefit from the study's recommendations. Similarly, "over-stratification" of input data within a study may reduce the sample size in a given stratum to the point where generalizations drawn by observing the members of that stratum alone are not statistically significant.
Double blinding: conceals from the trial population and the observers the experiment group membership of the participants. By preventing the participants from knowing if they are receiving treatment or not, the placebo effect should be the same for the control and treatment groups. By preventing the observers from knowing of their membership, there should be no bias from researchers treating the groups differently or from interpreting the outcomes differently.
Randomized controlled trial: A method where the study population is divided randomly in order to mitigate the chances of self-selection by participants or bias by the study designers. Before the experiment begins, the testers will assign the members of the participant pool to their groups (control, intervention, parallel), using a randomization process such as the use of a random number generator. For example, in a study on the effects of exercise, the conclusions would be less valid if participants were given a choice if they wanted to belong to the control group which would not exercise or the intervention group which would be willing to take part in an exercise program. The study would then capture other variables besides exercise, such as pre-experiment health levels and motivation to adopt healthy activities. From the observer's side, the experimenter may choose candidates who are more likely to show the results the study wants to see or may interpret subjective results (more energetic, positive attitude) in a way favorable to their desires.
Stratification: As in the example above, physical activity is thought to be a behaviour that protects from myocardial infarct; and age is assumed to be a possible confounder. The data sampled is then stratified by age group – this means that the association between activity and infarct would be analyzed per each age group. If the different age groups (or age strata) yield much different risk ratios, age must be viewed as a confounding variable. There exist statistical tools, among them Mantel–Haenszel methods, that account for stratification of data sets.
Confounding can also be controlled by measuring the known confounders and including them as covariates in a multivariable analysis such as regression analysis. Multivariate analyses reveal much less information about the strength or polarity of the confounding variable than do stratification methods. For example, if a multivariate analysis controls for antidepressants, and it does not stratify antidepressants into TCAs and SSRIs, then it will ignore that these two classes of antidepressant have opposite effects on myocardial infarction, and one is much stronger than the other. (A small numerical sketch of stratified estimation follows this list.)
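A small numerical sketch (assumed counts, not data from the article) of the stratification approach described above: compute a risk ratio within each age stratum and pool the strata with the Mantel–Haenszel estimator:

```python
# Each stratum: (exposed_cases, exposed_total, unexposed_cases, unexposed_total)
# Hypothetical counts for two age strata, chosen only for illustration.
strata = {
    "40-49": (10, 200, 30, 300),
    "50-59": (40, 300, 45, 200),
}

def risk_ratio(a, n1, c, n0):
    """Risk ratio within a single stratum."""
    return (a / n1) / (c / n0)

num = den = 0.0
for name, (a, n1, c, n0) in strata.items():
    t = n1 + n0
    print(f"{name}: stratum risk ratio = {risk_ratio(a, n1, c, n0):.2f}")
    num += a * n0 / t   # Mantel-Haenszel numerator term
    den += c * n1 / t   # Mantel-Haenszel denominator term

print(f"Mantel-Haenszel pooled risk ratio = {num / den:.2f}")
```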
All these methods have their drawbacks:
The best available defense against the possibility of spurious results due to confounding is often to dispense with efforts at stratification and instead conduct a randomized study of a sufficiently large sample taken as a whole, such that all potential confounding variables (known and unknown) will be distributed by chance across all study groups and hence will be uncorrelated with the binary variable for inclusion/exclusion in any group.
Ethical considerations: In double-blind and randomized controlled trials, participants are not aware that they are recipients of sham treatments and may be denied effective treatments. There is a possibility that patients only agree to invasive surgery (which carries real medical risks) under the understanding that they are receiving treatment. Although this is an ethical concern, it is not a complete account of the situation. For surgeries that are currently being performed regularly, but for which there is no concrete evidence of a genuine effect, there may be ethical issues with continuing such surgeries. In such circumstances, many people are exposed to the real risks of surgery yet these treatments may possibly offer no discernible benefit. Sham-surgery control is a method that may allow medical science to determine whether a surgical procedure is efficacious or not. Given that there are known risks associated with medical operations, it is questionably ethical to allow unverified surgeries to be conducted ad infinitum into the future.
Artifacts
Artifacts are variables that should have been systematically varied, either within or across studies, but that were accidentally held constant. Artifacts are thus threats to external validity. Artifacts are factors that covary with the treatment and the outcome. Campbell and Stanley identify several artifacts. The major threats to internal validity are history, maturation, testing, instrumentation, statistical regression, selection, experimental mortality, and selection-history interactions.
One way to minimize the influence of artifacts is to use a pretest-posttest control group design. Within this design, "groups of people who are initially equivalent (at the pretest phase) are randomly assigned to receive the experimental treatment or a control condition and then assessed again after this differential experience (posttest phase)". Thus, any effects of artifacts are (ideally) equally distributed in participants in both the treatment and control conditions.
See also
observational interpretation fallacy
Omitted-variable bias
Notes
References
Further reading
External links
Tutorial: Confounding and Effect Measure Modification (Boston University School of Public Health)
Linear Regression (Yale University)
Tutorial by University of New England
Analysis of variance
Causal inference
Design of experiments | Confounding | [
"Mathematics"
] | 4,258 | [
"Experimental bias",
"Statistical concepts"
] |
3,106,291 | https://en.wikipedia.org/wiki/IAU%20Circular | The International Astronomical Union Circulars (IAUCs) are notices that give information about astronomical phenomena. IAUCs are issued by the International Astronomical Union's Central Bureau for Astronomical Telegrams (CBAT) at irregular intervals for the discovery and follow-up information regarding such objects as planetary satellites, novae, supernovae, and comets.
History
The first series of IAUCs was published at Uccle during 1920–1922 when the IAU's first CBAT was located there; the first IAUC published in the present series was published in 1922 at Copenhagen Observatory after the transfer of the CBAT from Uccle to Copenhagen.
At the end of 1964, the CBAT moved from Copenhagen to the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, where it remains, on the grounds of the Harvard College Observatory (HCO). HCO had maintained a Central Bureau for the Western hemisphere from 1883 until the end of 1964, when its staff took on the IAU's CBAT; HCO had published its own Announcement Cards that paralleled the IAUCs from 1926 until the end of 1964, but the Announcement Cards ceased publication when the IAUCs began to be issued from the same building.
Accessibility
The IAUCs are delivered via the United States Postal Service, e-mail, and through the Central Bureau for Astronomical Telegrams/Minor Planet Center Computer Service. Most of the announcement circulars published at Cambridge, Copenhagen, and Uccle from 1895 to the present day are available for viewing via the CBAT website.
See also
Minor Planet Circular
Minor Planet Electronic Circular
External links
References
Publications established in 1920
Astronomy data and publications | IAU Circular | [
"Astronomy"
] | 343 | [
"Works about astronomy",
"Astronomy data and publications"
] |
3,106,346 | https://en.wikipedia.org/wiki/Levmetamfetamine | Levmetamfetamine, also known as l-desoxyephedrine or levomethamphetamine, and commonly sold under the brand name Vicks VapoInhaler among others, is an optical isomer of methamphetamine primarily used as a topical nasal decongestant. It is used to treat nasal congestion from allergies and the common cold. It was first used medically as decongestant beginning in 1958 and has been used for such purposes, primarily in the United States, since then.
Medical uses
Levmetamfetamine is used to treat nasal congestion related to the common cold and allergic rhinitis. It is available in the form of an inhaler containing 50mg total per inhaler and delivering between 0.04 and 0.15mg of the drug per inhalation. Inhalers with a total of 113mg levmetamfetamine were previously marketed in the United States, but the total amount was eventually reduced to 50mg.
Side effects
When the nasal decongestant is taken in excess, levmetamfetamine has potential side effects. These would be similar to those of other decongestants.
Pharmacology
Pharmacodynamics
Levmetamfetamine acts as a selective norepinephrine releasing agent. The potencies of levmetamfetamine, levoamphetamine, dextromethamphetamine, and dextroamphetamine in terms of norepinephrine release in vitro and in vivo in rats are all similar.
Conversely, whereas dextromethamphetamine and dextroamphetamine are relatively balanced releasers of dopamine and norepinephrine in vitro, levmetamfetamine is about 15- to 20-fold less potent in inducing dopamine release relative to norepinephrine release. Moreover, whereas levoamphetamine is about 3- to 5-fold less potent in terms of dopamine release than dextroamphetamine in vivo, levmetamfetamine is dramatically less potent than dextromethamphetamine and substantially less potent than levoamphetamine in this regard.
In accordance with the findings of catecholamine release studies, levmetamfetamine is 2- to 10-fold or more less potent than dextromethamphetamine in terms of psychostimulant-like effects in rodents. For comparison, levoamphetamine is only 1- to 4-fold less potent than dextroamphetamine in its stimulating and reinforcing effects in monkeys and humans.
The effects of levmetamfetamine are qualitatively distinct relative to those of racemic methamphetamine and dextromethamphetamine and it does not possess the same potential for euphoria or addiction that these drugs possess. In clinical studies, levmetamfetamine at oral doses of 1 to 10mg has been found not to affect subjective drug responses, heart rate, blood pressure, core temperature, electrocardiography, respiration rate, oxygen saturation, or other clinical parameters. As such, doses of levmetamfetamine of less than or equal to 10mg have no significant physiological or subjective effects. However, higher doses of levmetamfetamine, for instance 0.25 to 0.5mg/kg (mean doses of ~18–37mg) intravenously, have been reported to produce significant pharmacological effects, including increased heart rate and blood pressure, increased respiration rate, and subjective effects like intoxication and drug liking. On the other hand, in contrast to dextromethamphetamine, levmetamfetamine also produces subjective "bad" or aversive drug effects. Among the physiological effects of levmetamfetamine is vasoconstriction, which makes it useful for nasal decongestion.
For comparison to levmetamfetamine, 5 to 60mg oral doses of the related drug levoamphetamine have been used clinically and have been reported to produce significant pharmacological effects, for instance on wakefulness and mood.
In addition to its norepinephrine-releasing activity, levmetamfetamine is also an agonist of the trace amine-associated receptor 1 (TAAR1). Levmetamfetamine has also been found to act as a catecholaminergic activity enhancer (CAE), notably at much lower concentrations than its catecholamine releasing activity. It is 1- to 10-fold less potent than selegiline but is 3- to 5-fold more potent than dextromethamphetamine in this action. The CAE effects of such agents may be mediated by TAAR1 agonism.
Pharmacokinetics
Absorption
The bioavailability of levmetamfetamine is approximately 100%. The peak levels of levmetamfetamine range from 3.3 to 31.4ng/mL with single oral doses of 1 to 10mg and from 65.4 to 125.9ng/mL with single intravenous doses of 0.25 to 0.5mg/kg. The area-under-the-curve (AUC) levels of levmetamfetamine range from 73.0 to 694.7ng⋅h/mL with single oral doses of 1 to 10mg and from 1,190.7 to 2,368.1ng⋅h/mL with single intravenous doses of 0.25 to 0.5mg/kg.
Distribution
The volume of distribution of levmetamfetamine is 288.5 to 315.5L or 4.15 to 4.17L/kg.
Metabolism
The pharmacokinetics of levmetamfetamine generated as a metabolite from selegiline have been found to be significantly different in CYP2D6 poor metabolizers versus extensive metabolizers. Area-under-the-curve (AUC) levels of levmetamfetamine were 46% higher and its elimination half-life was 33% longer in CYP2D6 poor metabolizers compared to extensive metabolizers. These findings suggest that CYP2D6 may be significantly involved in the metabolism of levmetamfetamine.
Levmetamfetamine is metabolized into levoamphetamine in small amounts.
Elimination
Levmetamfetamine is excreted in urine 40.8 to 49.0% as unchanged levmetamfetamine and 2.1 to 3.3% as levoamphetamine.
The mean elimination half-life of levmetamfetamine ranges between 10.2 and 15.0hours. For comparison, the elimination half-life of dextromethamphetamine was around 10.2 to 10.7hours in the same studies. The clearance of levmetamfetamine is 15.5 to 19.1L/h or 0.221L/h⋅kg.
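A minimal sketch (not from the article) of what an elimination half-life implies under simple first-order kinetics; the 12-hour figure is merely an assumed point within the 10.2 to 15.0 hour range quoted above:

```python
def remaining_fraction(hours_elapsed, half_life_hours):
    """First-order elimination: fraction of drug remaining after a given time."""
    return 0.5 ** (hours_elapsed / half_life_hours)

half_life = 12.0   # assumed value within the reported 10.2-15.0 hour range
for t in (6, 12, 24, 48):
    print(f"after {t:>2} h: {remaining_fraction(t, half_life) * 100:.0f}% remaining")
```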
With selegiline at an oral dose of 10mg, levmetamfetamine and levoamphetamine are eliminated in urine and recovery of levmetamfetamine is 20 to 60% (or about 2–6mg) while that of levoamphetamine is 9 to 30% (or about 1–3mg).
Chemistry
Levmetamfetamine, also known as L-α,N-dimethyl-β-phenylethylamine or as L-N-methylamphetamine, is a substituted phenethylamine and amphetamine. It is the levorotatory enantiomer of methamphetamine. Racemic methamphetamine contains two optical isomers in equal amounts, dextromethamphetamine (the dextrorotatory enantiomer) and levmetamfetamine.
Detection in body fluids
Levmetamfetamine can register on urine drug tests as either methamphetamine, amphetamine, or both, depending on the subject's metabolism and dosage. Levmetamfetamine metabolizes completely into levoamphetamine after a period of time.
History
Methamphetamine, a racemic mixture of dextromethamphetamine and levmetamfetamine, was first discovered and synthesized in 1919. Methamphetamine was first introduced for medical use in 1938 in oral form under the brand name Pervitin in Germany. Over-the-counter nasal decongestant inhalers containing enantiopure levmetamfetamine, originally labeled with the chemical name l-desoxyephedrine, were first introduced in 1958 under the brand name Vicks Inhaler. By 1995, the brand name was changed to Vicks Vapor Inhaler. In 1998, the United States Food and Drug Administration (FDA) required that the chemical name on the labeling be changed from l-desoxyephedrine to levmetamfetamine.
Society and culture
Legal status
Levomethamphetamine is a controlled substance in the Philippines.
Recreational use
As of 2006, there were no studies demonstrating "drug liking" scores of oral levmetamfetamine that are similar to racemic methamphetamine or dextromethamphetamine in either recreational users or medicinal users. In any case, misuse of levmetamfetamine at high doses has been reported.
In recent years, tighter controls in Mexico on certain methamphetamine precursors like ephedrine and pseudoephedrine have led to a greater percentage of illicit methamphetamine from Mexican drug cartels consisting of a higher ratio of levmetamfetamine to dextromethamphetamine within batches of racemic methamphetamine.
Manufacturing
The manufacturing of levmetamfetamine products for therapeutic use is done according to government regulations and pharmacopeia monographs. The most recent change in Food and Drug Administration regulations for levmetamfetamine inhalers was in 1994, with the adoption of a final monograph.
Notes
References
Enantiopure drugs
Methamphetamine
Methamphetamines
Norepinephrine-dopamine releasing agents
Selegiline
Substituted amphetamines
Sympathomimetics
TAAR1 agonists
VMAT inhibitors | Levmetamfetamine | [
"Chemistry"
] | 2,176 | [
"Stereochemistry",
"Enantiopure drugs"
] |
1,600,616 | https://en.wikipedia.org/wiki/KvLQT1 | Kv7.1 (KvLQT1) is a potassium channel protein whose primary subunit in humans is encoded by the KCNQ1 gene. Its mutation causes Long QT syndrome. Kv7.1 is a voltage- and lipid-gated potassium channel present in the cell membranes of cardiac tissue and in inner ear neurons, among other tissues. In the cardiac cells, Kv7.1 mediates the IKs (or slow delayed rectifying K+) current that contributes to the repolarization of the cell, terminating the cardiac action potential and thereby the heart's contraction. It is a member of the KCNQ family of potassium channels.
Structure
KvLQT1 is made of six membrane-spanning domains S1-S6, two intracellular domains, and a pore loop. The KvLQT1 channel is made of four KCNQ1 subunits, which form the actual ion channel.
Function
This gene encodes a protein for a voltage-gated potassium channel required for the repolarization phase of the cardiac action potential. The gene product can form heteromultimers with two other potassium channel proteins, KCNE1 and KCNE3. The gene is located in a region of chromosome 11 that contains a large number of contiguous genes that are abnormally imprinted in cancer and the Beckwith-Wiedemann syndrome. Two alternative transcripts encoding distinct isoforms have been described.
Clinical significance
Mutations in the gene can lead to a defective protein and several forms of inherited arrhythmias, such as Long QT syndrome, which is a prolongation of the QT interval of heart repolarization; Short QT syndrome; and familial atrial fibrillation. KvLQT1 is also expressed in the pancreas, and KvLQT1 Long QT syndrome patients have been shown to have hyperinsulinemic hypoglycaemia following an oral glucose load. Currents arising from Kv7.1 in over-expression systems have never been recapitulated in native tissues - Kv7.1 is always found in native tissues with a modulatory subunit. In cardiac tissue, these subunits comprise KCNE1 and yotiao. Though physiologically irrelevant, homotetrameric Kv7.1 channels also display a unique form of C-type inactivation that reaches equilibrium quickly, allowing KvLQT1 currents to plateau. This is different from the inactivation seen in A-type currents, which causes rapid current decay.
Ligands
ML277: potent and selective channel activator
Interactions
KvLQT1 has been shown to interact with PRKACA, PPP1CA and AKAP9.
KvLQT1 can also associate with any of the five members of the KCNE family of proteins, but interactions with KCNE1, KCNE2, KCNE3 are the only interactions within this protein family that affect the human heart. KCNE2, KCNE4, and KCNE5 have been shown to have an inhibitory effect on the functionality of KvLQT1, while KCNE1 and KCNE3 are activators of KvLQT1. KvLQT1 can associate with KCNE1 and KCNE4 with the activation effects of KCNE1 overriding the inhibitory effects of KCNE4 on the KvLQT1 channel, and KvLQT1 will commonly associate with anywhere from two to four different KCNE proteins in order to be functional. However, KvLQT1 most commonly associates with KCNE1 and forms the KvLQT1/KCNE1 complex since it has only been seen to function in vivo when associated with another protein. KCNQ1 will form a heteromer with KCNE1 in order to slow its activation and enhance the current density at the plasma membrane of the neuron. In addition to associating with KCNE proteins, the N-terminal juxtamembranous domain of KvLQT1 can also associate with SGK1, which stimulates the slow delayed potassium rectifier current. Since SGK1 requires structural integrity to stimulate KvLQT1/KCNE1, any mutations present in the KvLQT1 protein can result in reduced stimulation of this channel by SGK1. General mutations in KvLQT1 have been known to cause a decrease in this slow delayed potassium rectifier current, longer cardiac action potentials, and a tendency to have tachyarrhythmias.
KvLQT1/KCNE1
KCNE1 (minK), can assemble with KvLQT1 to form a slow delayed potassium rectifier channel. KCNE1 slows the inactivation of KvLQT1 when the two proteins form a heteromeric complex, and the current amplitude is greatly increased compared to WT-KvLQT1 homotetrameric channels. KCNE1 associates with the pore region of KvLQT1, and its transmembrane domain contributes to the selectivity filter of this heteromeric channel complex. The alpha helix of the KCNE1 protein interacts with the pore domain S5/S6 and with the S4 domain of the KvLQT1 channel. This results in structural modifications of the voltage sensor and the selectivity filter of the KvLQT1 channel. Mutations in either the alpha subunit of this complex, KvLQT1 or the beta subunit, KCNE1, can lead to Long QT Syndrome or other cardiac rhythmic deformities. When associated with KCNE1, the KvLQT1 channel activates much more slowly and at a more positive membrane potential. It is believed that two KCNE1 proteins interact with a tetrameric KvLQT1 channel, since experimental data suggests that there are 4 alpha subunits and 2 beta subunits in this complex.
KvLQT1/KCNE1 channels are taken up from the plasma membrane through a RAB5-dependent mechanism, but inserted into the membrane by RAB11, a GTPase.
See also
Voltage-gated potassium channel
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Romano-Ward Syndrome
Ion channels
Proteins
Cardiac electrophysiology | KvLQT1 | [
"Chemistry"
] | 1,309 | [
"Biomolecules by chemical classification",
"Molecular biology",
"Proteins",
"Neurochemistry",
"Ion channels"
] |
1,600,670 | https://en.wikipedia.org/wiki/Astrology%20and%20the%20classical%20elements | Astrology has used the concept of classical elements from antiquity up until the present. In Western astrology and Sidereal astrology four elements are used: Fire, Earth, Air, and Water.
Western astrology
In Western tropical astrology, there are 12 astrological signs. Each of the four elements is associated with three signs of the Zodiac, which are always located exactly 120 degrees away from each other along the ecliptic and said to be in trine with one another. Most modern astrologers use the four classical elements extensively, (also known as triplicities), and indeed it is still viewed as a critical part of interpreting the astrological chart.
Beginning with the first sign Aries which is a Fire sign, the next in line Taurus is Earth, then to Gemini which is Air, and finally to Cancer which is Water. This cycle continues on twice more and ends with the twelfth and final astrological sign, Pisces. The elemental rulerships for the twelve astrological signs of the zodiac (according to Marcus Manilius) are summarised as follows:
Fire — 1 – Aries; 5 – Leo; 9 – Sagittarius – hot, dry, ardent
Earth — 2 – Taurus; 6 – Virgo; 10 – Capricorn – heavy, cold, dry
Air — 3 – Gemini; 7 – Libra; 11 – Aquarius – light, hot, wet
Water — 4 – Cancer; 8 – Scorpio; 12 – Pisces – cold, wet, soft
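A small sketch (not part of the article) of the repeating Fire-Earth-Air-Water cycle tabulated above, mapping each sign's position in the zodiac to its element:

```python
ELEMENTS = ["Fire", "Earth", "Air", "Water"]   # order of the repeating cycle
SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
         "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

# Each element recurs every fourth sign, so signs sharing an element sit
# 120 degrees apart on the ecliptic (4 signs x 30 degrees).
for i, sign in enumerate(SIGNS):
    print(f"{i + 1:>2} {sign:<11} {ELEMENTS[i % 4]}")
```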
Elements of the zodiac
Triplicity rulerships
In traditional astrology, each triplicity has several planetary rulers, which change with conditions of sect – that is, whether the chart is a day chart or a night chart. Triplicity rulerships are an important essential dignity – one of the several factors used by traditional astrologers to weigh strength, effectiveness, and integrity of each planet in a chart.
Triplicity rulerships (using the "Dorothean system") are as follows:
"Participating" rulers were not used by Ptolemy, as well as some subsequent astrologers in later traditions who followed his approach.
Triplicities by season
In ancient astrology, triplicities were more of a seasonal nature, so a season was given the qualities of an element, which means the signs associated with that season would be allocated to that element. The seasonal elements of ancient astrology are as follows:
Spring (wet becoming hot) – Air – Gemini, Libra, Aquarius
Summer (hot becoming dry) – Fire – Aries, Leo, Sagittarius
Autumn (dry becoming cold) – Earth – Taurus, Virgo, Capricorn
Winter (cold becoming wet) – Water – Cancer, Scorpio, Pisces
The seasonal qualities account for the differences in expression between signs of the same element. All the fire signs are by their nature hot and dry. However, the addition of the elemental qualities of the seasons results in differences between the fire signs. Aries being a Spring sign is wet (hot & dry, hot & wet), Leo being the midsummer sign gets a double dose of hot and dry and is the pure fire sign, and Sagittarius being an Autumnal sign is colder (hot & dry, cold & dry).
In the Southern Hemisphere the seasonal cycle is reversed.
This yields secondary and tertiary elements for each sign.
These associations are not given any great importance in modern astrology, although they are prominent in modern Western ceremonial magic and in reconstructionist neopagan systems such as neodruidism and Wicca.
Vedic astrology
Sidereal (Vedic) astrology shares the same system as Western astrology of linking zodiac signs to elements.
In addition, in Vedic thought each of the five planets is linked to an element (with space as the fifth). It was said in the Veda that everything emanated from the one basic vibration of "Om" or "Aum". From "Om" the five elemental vibrations emerged, representing the five different tattwas (or elements). The five planets represent these five vibrations – Jupiter for Space, Saturn for Air, Mars for Fire, Mercury for Earth, and Venus for Water.
Chinese astrology
In many fields of traditional Chinese theory, matter and its stages of developmental movement can be classified according to the Wu Xing:
Wood — ruler of Jupiter; green; east; spring
Fire — ruler of Mars; red; south; summer
Earth — ruler of Saturn; yellow; center; late summer
Metal — ruler of Venus; white; west; autumn
Water — ruler of Mercury; black; north; winter
That said, the essence of the Wu Xing is really about the notion of five stages, rather than about five types of material.
Explanatory notes
Ptolemy later modified the rulerships of Water triplicity, making Mars the ruler of the water triplicity for both day and night charts – and William Lilly concurred.
References
Classical elements
Esoteric cosmology | Astrology and the classical elements | [
"Astronomy"
] | 1,030 | [
"History of astrology",
"History of astronomy"
] |
1,600,718 | https://en.wikipedia.org/wiki/Iodine%E2%80%93starch%20test | The iodine–starch test is a chemical reaction that is used to test for the presence of starch or for iodine. The combination of starch and iodine is intensely blue-black.
The interaction between starch and the triiodide anion (I3−) is the basis for iodometry.
History and principles
The iodine–starch test was first described in 1814 by Jean-Jacques Colin and Henri-François Gaultier de Claubry, and independently by Friedrich Stromeyer the same year.
In 1937, Canadian-American biochemist Charles S. Hanes extensively investigated the action of amylases on starch and the changes in iodine coloration during starch degradation and proposed a spiral chain conformation for the starch molecule, suggesting that fragments with more than one complete coil of the spiral might be necessary for iodine coloration.
Karl Freudenberg et al., in 1939, building upon Hanes' helical model, proposed that the helical conformation of amylose creates a hydrophobic cavity lined with CH groups, which attracts iodine molecules and leads to a shift in iodine's absorption spectrum, explaining the characteristic blue color of the complex.
This model was subsequently confirmed by Robert E. Rundle and co-workers ca. 1943, who used X-ray diffraction and optical studies to provide experimental evidence for the linear arrangement of iodine molecules within the amylose helix.
Research in the mid-20th century began to highlight the importance of iodide anion (as opposed to neutral molecules) in the complex formation, particularly in aqueous solutions. Studies by Mukherjee and Bhattacharyya demonstrated in 1946 that varying potassium iodide concentrations affected the ratio of I- to I2 in the complex. Thoma and French in 1960 further emphasized the necessity of iodide for complex formation in aqueous media.
By the 1980s, the presence of polyiodide chains within the amylose helix became widely accepted. However, the precise composition/structure of these chains, including the balance between molecular iodine and various iodide anions, continues to be debated and investigated, with a 2022 article suggesting that they might alternate.
The triiodide anion instantly produces an intense blue-black colour upon contact with starch. The intensity of the colour decreases with increasing temperature and with the presence of water-miscible organic solvents such as ethanol. The test cannot be performed at very low pH due to the hydrolysis of the starch under these conditions. It is thought that the iodine–iodide mixture combines with the starch to form an infinite polyiodide homopolymer. This was rationalized through single crystal X-ray crystallography and comparative Raman spectroscopy.
Starch as an indicator
Starch is often used in chemistry as an indicator for redox titrations where triiodide is present. Starch forms a very dark blue-black complex with triiodide. However, the complex is not formed if only iodine or only iodide (I−) is present. The colour of the starch complex is so deep that it can be detected visually when the concentration of iodine is as low as 20 μM at 20 °C. During iodine titrations, concentrated iodine solutions must be reacted with some titrant, often thiosulfate, in order to remove most of the iodine before the starch is added. This is due to the insolubility of the starch–triiodide complex, which may prevent some of the iodine from reacting with the titrant. Close to the endpoint, the starch is added, and the titration process is resumed, taking into account the amount of thiosulfate added before the starch.
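As a rough illustration of the endpoint arithmetic described above, the sketch below estimates an unknown iodine (triiodide) concentration from the volume of thiosulfate titrant consumed, using the stoichiometry I3− + 2 S2O32− → 3 I− + S4O62− (two moles of thiosulfate per mole of triiodide). The function name and the sample numbers are illustrative assumptions, not values from the article:

```python
def iodine_concentration_molar(v_titrant_l: float, c_titrant_molar: float, v_sample_l: float) -> float:
    """Estimate the triiodide/iodine concentration of a sample titrated with thiosulfate.

    Stoichiometry: I3- + 2 S2O3^2- -> 3 I- + S4O6^2-, i.e. two moles of
    thiosulfate reduce one mole of triiodide.
    """
    moles_thiosulfate = v_titrant_l * c_titrant_molar
    moles_iodine = moles_thiosulfate / 2.0   # 2:1 mole ratio
    return moles_iodine / v_sample_l

# Example: 12.3 mL of 0.100 M Na2S2O3 consumed for a 25.0 mL sample
print(iodine_concentration_molar(0.0123, 0.100, 0.0250))  # ~0.0246 M
```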
The color change can be used to detect moisture or perspiration, as in the Minor test or starch–iodine test.
Starch is also useful in detecting the enzyme amylase, which breaks down starch into sugars. Many bacteria, such as Bacillus subtilis, can produce such an enzyme, which helps scientists identify unknown bacterial samples; the starch–iodine test is one of many tests needed to identify the exact bacterium. A positive test for bacteria with starch-hydrolysis capability (i.e. able to produce amylase) is the presence of a yellow zone around a colony when iodine is added to detect starch.
Medical use
Although the starch–iodine test is predominantly employed in the lab, recent assessments have shown potential for clinical use, such as confirming the diagnosis of Horner's syndrome. Hospitals with limited technical resources can use this diagnostic tool since it requires only readily attainable materials. To perform the test, the patient's skin is first dried with 70% alcohol, and the iodine solution is then applied. After the skin dries completely once more, it is dusted with a starch material. Inducing sweating causes the skin to turn dark blue. Physicians can then make a diagnosis if the test shows sweating of different intensities on the left and right sides of the body.
See also
Lugol's iodine
Counterfeit banknote detection pen
References
Further reading
Vogel's Textbook of Quantitative Chemical Analysis, 5th edition.
External links
How does starch indicate iodine? General Chemistry Online
Iodine test at Braukaiser
Titrations.info: Potentiometric titration--Solutions used in iodometric titrations
Biochemistry detection methods
Carbohydrate methods
Chemical tests
Laboratory techniques
Iodine
Polyhalides
Starch | Iodine–starch test | [
"Chemistry",
"Biology"
] | 1,156 | [
"Biochemistry methods",
"Biochemistry detection methods",
"Chemical tests",
"Carbohydrate chemistry",
"nan",
"Carbohydrate methods"
] |
1,600,866 | https://en.wikipedia.org/wiki/Jellium | Jellium, also known as the uniform electron gas (UEG) or homogeneous electron gas (HEG), is a quantum mechanical model of interacting electrons in a solid where the positive charges (i.e. atomic nuclei) are assumed to be uniformly distributed in space; the electron density is a uniform quantity as well in space. This model allows one to focus on the effects in solids that occur due to the quantum nature of electrons and their mutual repulsive interactions (due to like charge) without explicit introduction of the atomic lattice and structure making up a real material. Jellium is often used in solid-state physics as a simple model of delocalized electrons in a metal, where it can qualitatively reproduce features of real metals such as screening, plasmons, Wigner crystallization and Friedel oscillations.
At zero temperature, the properties of jellium depend solely upon the constant electronic density. This property lends it to a treatment within density functional theory; the formalism itself provides the basis for the local-density approximation to the exchange-correlation energy density functional.
The term jellium was coined by Conyers Herring in 1952, alluding to the "positive jelly" background, and the typical metallic behavior it displays.
Hamiltonian
The jellium model treats the electron-electron coupling rigorously. The artificial and structureless background charge interacts electrostatically with itself and the electrons. The jellium Hamiltonian for N electrons confined within a volume of space Ω, and with electronic density ρ(r) and (constant) background charge density n(R) = N/Ω is
where
Hel is the electronic Hamiltonian consisting of the kinetic and electron-electron repulsion terms:
Hback is the Hamiltonian of the positive background charge interacting electrostatically with itself:
Hel-back is the electron-background interaction Hamiltonian, again an electrostatic interaction:
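A sketch of the standard textbook form of these three terms (Gaussian units, with e the elementary charge, m the electron mass, and the background density n(R) = N/Ω as above) is:

$$
H_{\mathrm{el}} = \sum_{i=1}^{N} \frac{p_i^{2}}{2m} \;+\; \frac{1}{2}\sum_{i \neq j} \frac{e^{2}}{|\mathbf{r}_i - \mathbf{r}_j|},
$$
$$
H_{\mathrm{back}} = \frac{e^{2}}{2}\int_{\Omega}\! d\mathbf{R}\, d\mathbf{R}'\, \frac{n(\mathbf{R})\,n(\mathbf{R}')}{|\mathbf{R} - \mathbf{R}'|},
\qquad
H_{\mathrm{el\text{-}back}} = -\,e^{2}\sum_{i=1}^{N}\int_{\Omega}\! d\mathbf{R}\, \frac{n(\mathbf{R})}{|\mathbf{r}_i - \mathbf{R}|}.
$$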
Hback is a constant and, in the limit of an infinite volume, divergent along with Hel-back. The divergence is canceled by a term from the electron-electron coupling: the background interactions cancel and the system is dominated by the kinetic energy and coupling of the electrons. Such analysis is done in Fourier space; the interaction terms of the Hamiltonian which remain correspond to the Fourier expansion of the electron coupling for which q ≠ 0.
Contributions to the total energy
The traditional way to study the electron gas is to start with non-interacting electrons which are governed only by the kinetic energy part of the Hamiltonian, also called a Fermi gas. The kinetic energy per electron is given by
where EF is the Fermi energy, kF is the Fermi wave vector, and the last expression shows the dependence on the Wigner–Seitz radius rs, with energy measured in rydbergs; a0 is the Bohr radius. In what follows, rs is the normalized (dimensionless) value of the Wigner–Seitz radius, i.e. the radius measured in units of a0.
Without doing much work, one can guess that the electron-electron interactions will scale like the inverse of the average electron-electron separation and hence as 1/rs (since the Coulomb interaction goes like one over the distance between charges), so that if we view the interactions as a small correction to the kinetic energy, we are describing the limit of small rs (i.e. the 1/rs² kinetic term being larger than the 1/rs interaction term) and hence high electron density. Unfortunately, real metals typically have rs between 2 and 5, which means this picture needs serious revision.
The first correction to the free electron model for jellium is from the Fock exchange contribution to electron-electron interactions. Adding this in, one has a total energy of
where the negative term is due to exchange: exchange interactions lower the total energy. Higher-order corrections to the total energy are due to electron correlation, and if one decides to work in a series for small rs, one finds
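A sketch of this expansion for the energy per electron, in rydbergs, with the kinetic, exchange, and leading correlation terms in order; the numerical coefficients are the commonly cited values and are given here only as an illustrative sketch to be checked against the primary literature:

$$
\frac{E}{N} \;\approx\; \frac{2.21}{r_s^{2}} \;-\; \frac{0.916}{r_s} \;+\; 0.0622\,\ln r_s \;-\; 0.096 \;+\; \cdots \quad (\mathrm{Ry}).
$$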
The series is quite accurate for small rs but of dubious value for the rs values found in actual metals.
For the full range of rs, Chachiyo's correlation energy density can be used as the higher-order correction; it agrees quite well (on the order of milli-Hartree) with quantum Monte Carlo simulations.
Zero-temperature phase diagram of jellium in three and two dimensions
The physics of the zero-temperature phase behavior of jellium is driven by competition between the kinetic energy of the electrons and the electron-electron interaction energy. The kinetic-energy operator in the Hamiltonian scales as 1/rs², where rs is the Wigner–Seitz radius, whereas the interaction-energy operator scales as 1/rs. Hence the kinetic energy dominates at high density (small rs), while the interaction energy dominates at low density (large rs).
The limit of high density is where jellium most resembles a noninteracting free electron gas. To minimize the kinetic energy, the single-electron states are delocalized, in a state very close to the Slater determinant (non-interacting state) constructed from plane waves. Here the lowest-momentum plane-wave states are doubly occupied by spin-up and spin-down electrons, giving a paramagnetic Fermi fluid.
At lower densities, where the interaction energy is more important, it is energetically advantageous for the electron gas to spin-polarize (i.e., to have an imbalance in the number of spin-up and spin-down electrons), resulting in a ferromagnetic Fermi fluid. This phenomenon is known as itinerant ferromagnetism. At sufficiently low density, the kinetic-energy penalty resulting from the need to occupy higher-momentum plane-wave states is more than offset by the reduction in the interaction energy due to the fact that exchange effects keep indistinguishable electrons away from one another.
A further reduction in the interaction energy (at the expense of kinetic energy) can be achieved by localizing the electron orbitals. As a result, jellium at zero temperature at a sufficiently low density will form a so-called Wigner crystal, in which the single-particle orbitals are of approximately Gaussian form centered on crystal lattice sites. Once a Wigner crystal has formed, there may in principle be further phase transitions between different crystal structures and between different magnetic states for the Wigner crystals (e.g., antiferromagnetic to ferromagnetic spin configurations) as the density is lowered. When Wigner crystallization occurs, jellium acquires a band gap.
Within Hartree–Fock theory, the ferromagnetic fluid abruptly becomes more stable than the paramagnetic fluid at a density parameter of in three dimensions (3D) and in two dimensions (2D). However, according to Hartree–Fock theory, Wigner crystallization occurs at in 3D and in 2D, so that jellium would crystallise before itinerant ferromagnetism occurs. Furthermore, Hartree–Fock theory predicts exotic magnetic behavior, with the paramagnetic fluid being unstable to the formation of a spiral spin-density wave. Unfortunately, Hartree–Fock theory does not include any description of correlation effects, which are energetically important at all but the very highest densities, and so a more accurate level of theory is required to make quantitative statements about the phase diagram of jellium.
Quantum Monte Carlo (QMC) methods, which provide an explicit treatment of electron correlation effects, are generally agreed to provide the most accurate quantitative approach for determining the zero-temperature phase diagram of jellium. The first application of the diffusion Monte Carlo method was Ceperley and Alder's famous 1980 calculation of the zero-temperature phase diagram of 3D jellium. They calculated the paramagnetic-ferromagnetic fluid transition to occur at and Wigner crystallization (to a body-centered cubic crystal) to occur at . Subsequent QMC calculations have refined their phase diagram: there is a second-order transition from a paramagnetic fluid state to a partially spin-polarized fluid from to about ; and Wigner crystallization occurs at .
In 2D, QMC calculations indicate that the paramagnetic fluid to ferromagnetic fluid transition and Wigner crystallization occur at similar density parameters, in the range . The most recent QMC calculations indicate that there is no region of stability for a ferromagnetic fluid. Instead there is a transition from a paramagnetic fluid to a hexagonal Wigner crystal at . There is possibly a small region of stability for a (frustrated) antiferromagnetic Wigner crystal, before a further transition to a ferromagnetic crystal. The crystallization transition in 2D is not first order, so there must be a continuous series of transitions from fluid to crystal, perhaps involving striped crystal/fluid phases. Experimental results for a 2D hole gas in a GaAs/AlGaAs heterostructure (which, despite being clean, may not correspond exactly to the idealized jellium model) indicate a Wigner crystallization density of .
Applications
Jellium is the simplest model of interacting electrons. It is employed in the calculation of properties of metals, where the core electrons and the nuclei are modeled as the uniform positive background and the valence electrons are treated with full rigor. Semi-infinite jellium slabs are used to investigate surface properties such as work function and surface effects such as adsorption; near surfaces the electronic density varies in an oscillatory manner, decaying to a constant value in the bulk.
Within density functional theory, jellium is used in the construction of the local-density approximation, which in turn is a component of more sophisticated exchange-correlation energy functionals. From quantum Monte Carlo calculations of jellium, accurate values of the correlation energy density have been obtained for several values of the electronic density, which have been used to construct semi-empirical correlation functionals.
The jellium model has been applied to superatoms, metal clusters, octacarbonyl complexes, and used in nuclear physics.
See also
Free electron model — a model electron gas where the electrons do not interact with anything.
Nearly free electron model — a model electron gas where the electrons do not interact with each other, but do feel a (weak) potential from the atomic lattice.
References
Condensed matter physics
Density functional theory
Nuclear physics | Jellium | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,094 | [
"Density functional theory",
"Quantum chemistry",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Nuclear physics",
"Matter"
] |
1,600,941 | https://en.wikipedia.org/wiki/Swiss%20coordinate%20system | The Swiss coordinate system (or Swiss grid) is a geographic coordinate system used in Switzerland and Liechtenstein for maps and surveying by the Swiss Federal Office of Topography (Swisstopo).
A first coordinate system was introduced in 1903 under the name LV03 (Landesvermessung 1903, German for “land survey 1903”), based on the Mercator projection and the Bessel ellipsoid. With the advent of GPS technology, a new coordinate system was introduced in 1995 under the name LV95 (Landesvermessung 1995, German for “land survey 1995”) after a 7-year measurement campaign.
LV is translated as MN in English.
LV03
Introduced in 1903, this first geographic coordinate system rested upon the two dominant methodological pillars of geodesy and cartography at the time: the Bessel ellipsoid and the Mercator projection. Its measurements used the Bessel ellipsoid as an approximation of the Earth's shape, and its maps used the Mercator projection as a projection technique. Although not ideal, these approximations still offered a high level of precision in the case of Switzerland, due to the small size of its territory (41,285 km², at most about 350 km from east to west and 220 km from north to south).
The fundamental reference point of the LV03 coordinate system was the old observatory of Bern, nowadays the location of the Institute of Exact Sciences of Bern University, in downtown Bern (Sidlerstrasse 5 - 46°57'3.9" N, 7°26'19.1" E). The coordinates of this reference point were arbitrarily fixed at 600'000 m E / 200'000 m N – with the East coordinate (E) noted before the North coordinate (N), unlike in the traditional latitude / longitude coordinate system. In selecting the values of these reference coordinates, the intention of the Swiss Federal Office of Topography was to guarantee that every point of the Swiss territory be identified by positive coordinates. The reference coordinates thus needed to be large enough to allow for positive coordinates to be allocated to the southernmost and westernmost areas of Switzerland. This goal was largely met by the 600'000 m E / 200'000 m N reference coordinates, as the corresponding origin point (0 m E / 0 m N) was located in Southwest France, near Bordeaux. In addition, the ratio between the East (E) and North (N) coordinates of the reference point was also set to be sufficiently high to guarantee that E coordinates be bigger than N coordinates over the entire Swiss territory.
Examples of LV03 coordinates
- Rigi E=679520, N=212273
- Zürich-Seebach E=684592, N=252857
LV95
With the advent of GPS technology, it became clear over the course of the 1980s that the LV03 coordinate system was no longer in a position to meet the rapidly growing precision standards set by new technologies. For instance, a difference of several meters was discovered when comparing the performance of LV03 and GPS technology in the measurement of the distance between the westernmost and easternmost areas of Switzerland (Geneva and Lower Engadin). As a result, the Swiss Federal Office of Topography decided to launch a new land survey campaign in 1988, with the intention of gathering precise data for the development of a new coordinate system based on WGS84. This survey ended in 1995, which is the reason why it was officially called LV95 (Landesvermessung 1995, German for “land survey 1995”).
The new coordinate system was designed with two main goals in mind: significantly increasing the precision of geographic coordinates, while at the same time preserving the conceptual foundations of the old LV03 for the sake of continuity. As such, the LV95 system continues to provide coordinates in the same order (East (E) before North (N)), and continues to allocate positive coordinates to every point of the Swiss territory.
In order to nonetheless achieve a clear distinction between the two systems, an additional digit was added to the coordinates of LV95: any East coordinate (E) now starts with a 2, and any North coordinate (N) with a 1. Consequently, LV95 coordinates are given by pairs of 7-digit numbers, whereas LV03 used pairs of 6-digit numbers – for instance the coordinates (2 600 000m E / 1 200 000m N) in LV95 would be expressed as (600 000m E / 200 000m N) in LV03.
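As a minimal illustration of this renumbering (and only of the renumbering), the sketch below adds or removes the constant offsets of 2,000,000 m (East) and 1,000,000 m (North). The official LV03–LV95 transformation published by Swisstopo additionally corrects local survey distortions and should be used for any survey-grade conversion; the function names here are illustrative:

```python
def lv03_to_lv95(e_lv03_m: float, n_lv03_m: float) -> tuple[float, float]:
    """Renumber 6-digit LV03 coordinates as 7-digit LV95 coordinates (offset only)."""
    return e_lv03_m + 2_000_000.0, n_lv03_m + 1_000_000.0

def lv95_to_lv03(e_lv95_m: float, n_lv95_m: float) -> tuple[float, float]:
    """Inverse renumbering: strip the leading 2 (East) and 1 (North)."""
    return e_lv95_m - 2_000_000.0, n_lv95_m - 1_000_000.0

# Example: the LV03 reference point (old observatory of Bern)
print(lv03_to_lv95(600_000.0, 200_000.0))  # (2600000.0, 1200000.0)
```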
Another significant difference lies in the location of the fundamental reference point of the two systems. Under the LV95 system, coordinates are no longer calculated by referring to the old observatory of Bern (as was the case under the LV03 system), but instead to the Zimmerwald Observatory, located outside of Bern (approximately 10km to the South). Exact formulas used for the conversion of LV95 coordinates into latitude and longitude are provided by the Swiss Federal Office of Topography in its formal documentation of the LV95 system.
Although the new geographic coordinate system LV95 was introduced in 1995, it was only progressively brought to use by Swiss authorities, with the official deadline for its definitive implementation having been fixed for the year 2016. Nowadays the LV95 system has become the main geographic reference frame of various institutions and governmental agencies, such as the Federal Statistical Office, the Swiss Army and the Swiss Border Guard, as well as cantonal police corps, emergency services and cadastre offices. Likewise, the official National Maps of Switzerland are now also founded upon this new coordinate system.
See also
Swiss cartography
References
External links
Swisstopo: Swiss map projections
Map Projections of Switzerland
Converters CH1903 <-> WGS 84
NAVREF, Swisstopo
Umrechnung von Schweizer Landeskoordinaten in das WGS84-System, University of Potsdam
Online maps with search by Swiss coordinates:
Retorte.ch Koordinator - Google Maps mashup that works with Swiss coordinates
http://map.geo.admin.ch (The official Swiss geographic information system (GIS) including the complete range of the Swiss topographical maps)
Geography of Switzerland
Geographic coordinate systems | Swiss coordinate system | [
"Mathematics"
] | 1,317 | [
"Geographic coordinate systems",
"Coordinate systems"
] |
1,601,343 | https://en.wikipedia.org/wiki/Second%20Cambridge%20Catalogue%20of%20Radio%20Sources | The Second Cambridge Catalogue of Radio Sources (2C) was published in 1955 by John R Shakeshaft and colleagues. It comprised a list of 1936 sources between declinations −38° and +83°, giving their right ascension and declination (both in 1950.0 coordinates) and flux density. The observations were made with the Cambridge Interferometer at 81.5 MHz.
The data appeared to show a flux/number ('source counts') trend which precluded some cosmological models (such as the Steady-State):
For a uniform distribution of radio sources the slope of the cumulative distribution of log(number, N) versus log (power, S) would have been -1.5, but the Cambridge data apparently implied a (log(N),log(S)) slope of nearly -3.0.
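As a brief sketch of where the -1.5 benchmark comes from: for sources of fixed luminosity distributed uniformly in static Euclidean space, the number of sources brighter than a flux density S grows with the surveyed volume while the received flux falls off with the square of the distance R, so

$$
N(>S) \propto R^{3}, \qquad S \propto R^{-2} \;\;\Longrightarrow\;\; N(>S) \propto S^{-3/2},
$$

giving a cumulative (log(N), log(S)) slope of -1.5; a measured slope steeper than this implies an excess of faint (distant) sources over the uniform expectation.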
Unfortunately, this interpretation was premature, as a significant number of the sources listed were later found to be the product of 'confusion': the blending of several weaker sources in the lobes of the interferometer to produce the apparent effect of a single stronger source. Key data demonstrating this came from the then recently commissioned Mills Cross Telescope in Australia. However, a subsequent statistical analysis of the interferometer records by Hewish showed some aspects of the initial interpretation to have been broadly correct, with a corrected (log(N), log(S)) slope of nearly -1.8 derived once confusion was taken into account.
The survey was superseded by the much more reliable 3C and 3CR surveys. The 3C survey also used the Cambridge Interferometer, but at 159 MHz, which helped significantly reduce the 'confusion' (see above) in the later survey.
References
2 | Second Cambridge Catalogue of Radio Sources | [
"Astronomy"
] | 357 | [
"Astronomical catalogue stubs",
"Astronomy stubs"
] |
1,601,406 | https://en.wikipedia.org/wiki/Correlative | In grammar, a correlative is a word that is paired with another word with which it functions to perform a single function but from which it is separated in the sentence.
In English, examples of correlative pairs are both–and, either–or, neither–nor, the–the ("the more the better"), so–that ("it ate so much food that it burst"), and if–then.
In the Romance languages, the demonstrative pro-forms function as correlatives with the relative pro-forms, as autant–que in French; in English, demonstratives are not used in such constructions, which depend on the relative only: "I saw what you did", rather than *"I saw that, what you did".
See also
Correlative conjunction
Pro-form (namely section Table of correlatives)
Parts of speech | Correlative | [
"Technology"
] | 190 | [
"Parts of speech",
"Components"
] |
1,601,407 | https://en.wikipedia.org/wiki/Direct%20comparison%20test | In mathematics, the comparison test, sometimes called the direct comparison test to distinguish it from similar related tests (especially the limit comparison test), provides a way of deducing whether an infinite series or an improper integral converges or diverges by comparing the series or integral to one whose convergence properties are known.
For series
In calculus, the comparison test for series typically consists of a pair of statements about infinite series with non-negative (real-valued) terms:
If the infinite series ∑bn converges and 0 ≤ an ≤ bn for all sufficiently large n (that is, for all n > N for some fixed value N), then the infinite series ∑an also converges.
If the infinite series ∑bn diverges and 0 ≤ bn ≤ an for all sufficiently large n, then the infinite series ∑an also diverges.
Note that the series having larger terms is sometimes said to dominate (or eventually dominate) the series with smaller terms.
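As a standard worked example of the first statement (a textbook illustration, not taken from the original article): since 0 ≤ 1/(n² + n) ≤ 1/n² for all n ≥ 1 and ∑ 1/n² converges, the comparison test gives

$$
\sum_{n=1}^{\infty} \frac{1}{n^{2}+n} \;\text{ converges.}
$$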
Alternatively, the test may be stated in terms of absolute convergence, in which case it also applies to series with complex terms:
If the infinite series ∑bn is absolutely convergent and |an| ≤ |bn| for all sufficiently large n, then the infinite series ∑an is also absolutely convergent.
If the infinite series ∑bn is not absolutely convergent and |bn| ≤ |an| for all sufficiently large n, then the infinite series ∑an is also not absolutely convergent.
Note that in this last statement, the series ∑an could still be conditionally convergent; for real-valued series, this could happen if the terms an are not all nonnegative.
The second pair of statements is equivalent to the first in the case of real-valued series, because ∑an converges absolutely if and only if ∑|an|, a series with nonnegative terms, converges.
Proof
The proofs of all the statements given above are similar. Here is a proof of the third statement.
Let and be infinite series such that converges absolutely (thus converges), and without loss of generality assume that for all positive integers n. Consider the partial sums
Since converges absolutely, for some real number T. For all n,
is a nondecreasing sequence and is nonincreasing.
Given then both belong to the interval , whose length decreases to zero as goes to infinity.
This shows that is a Cauchy sequence, and so must converge to a limit. Therefore, is absolutely convergent.
For integrals
The comparison test for integrals may be stated as follows, assuming continuous real-valued functions f and g on [a, b) with b either +∞ or a real number at which f and g each have a vertical asymptote:
If the improper integral of g over [a, b) converges and 0 ≤ f(x) ≤ g(x) for a ≤ x < b, then the improper integral of f over [a, b) also converges, and its value is at most that of the integral of g.
If the improper integral of g over [a, b) diverges and 0 ≤ g(x) ≤ f(x) for a ≤ x < b, then the improper integral of f over [a, b) also diverges.
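An illustrative textbook example of the integral form: for x ≥ 1 one has 0 ≤ e^(−x²) ≤ e^(−x), and the integral of e^(−x) over [1, ∞) converges (to 1/e), so by comparison

$$
\int_{1}^{\infty} e^{-x^{2}}\,dx \;\text{ converges.}
$$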
Ratio comparison test
Another test for convergence of real-valued series, similar to both the direct comparison test above and the ratio test, is called the ratio comparison test:
If the infinite series converges and , , and for all sufficiently large n, then the infinite series also converges.
If the infinite series diverges and , , and for all sufficiently large n, then the infinite series also diverges.
See also
Convergence tests
Convergence (mathematics)
Dominated convergence theorem
Integral test for convergence
Limit comparison test
Monotone convergence theorem
Notes
References
Convergence tests | Direct comparison test | [
"Mathematics"
] | 657 | [
"Theorems in mathematical analysis",
"Convergence tests"
] |
1,601,457 | https://en.wikipedia.org/wiki/Solution%20concept | In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.
Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a refinement to narrow down the solutions. Each successive solution concept presented in the following improves on its predecessor by eliminating implausible equilibria in richer games.
Formal definition
Let Γ be the class of all games and, for each game G in Γ, let SG be the set of strategy profiles of G. A solution concept is an element of the direct product of the power sets of the SG; i.e., a function F such that F(G) ⊆ SG for all G in Γ.
Rationalizability and iterated dominance
In this solution concept, players are assumed to be rational and so strictly dominated strategies are eliminated from the set of strategies that might feasibly be played. A strategy is strictly dominated when there is some other strategy available to the player that always has a higher payoff, regardless of the strategies that the other players choose. (Strictly dominated strategies are also important in minimax game-tree search.) For example, in the (single period) prisoners' dilemma (shown below), cooperate is strictly dominated by defect for both players because either player is always better off playing defect, regardless of what his opponent does.
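A minimal sketch of detecting strictly dominated strategies by brute force, using a conventional illustrative choice of prisoners' dilemma payoffs (the numbers are assumptions, not taken from the article):

```python
# Payoffs as (row player, column player).
PAYOFFS = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}
STRATEGIES = ["cooperate", "defect"]

def u(player: int, own: str, other: str) -> float:
    """Payoff to `player` when playing `own` against the opponent's `other`."""
    return PAYOFFS[(own, other)][0] if player == 0 else PAYOFFS[(other, own)][1]

def strictly_dominated(player: int) -> list[str]:
    """Strategies beaten by some other strategy against every opponent choice."""
    return [s for s in STRATEGIES
            if any(all(u(player, t, o) > u(player, s, o) for o in STRATEGIES)
                   for t in STRATEGIES if t != s)]

print(strictly_dominated(0))  # ['cooperate'] -- defect strictly dominates
print(strictly_dominated(1))  # ['cooperate']
```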
Nash equilibrium
A Nash equilibrium is a strategy profile (a strategy profile specifies a strategy for every player, e.g. in the above prisoners' dilemma game (cooperate, defect) specifies that prisoner 1 plays cooperate and prisoner 2 plays defect) in which every strategy played by every agent (agent i) is a best response to every other strategy played by all the other opponents (agents j for every j≠i) . A strategy by a player is a best response to another player's strategy if there is no other strategy that could be played that would yield a higher pay-off in any situation in which the other player's strategy is played.
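Continuing with the same kind of illustrative payoff matrix, pure-strategy Nash equilibria of a two-player game can be found by brute force: keep every profile in which each player's strategy is a best response to the other's. This is a sketch only; the payoff numbers are assumptions:

```python
def pure_nash_equilibria(payoffs: dict, strategies: list[str]) -> list[tuple[str, str]]:
    """All pure-strategy profiles from which neither player gains by deviating alone."""
    result = []
    for r in strategies:
        for c in strategies:
            row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in strategies)
            col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in strategies)
            if row_best and col_best:
                result.append((r, c))
    return result

PD = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}
print(pure_nash_equilibria(PD, ["cooperate", "defect"]))  # [('defect', 'defect')]
```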
Backward induction
In some games, there are multiple Nash equilibria, but not all of them are realistic. In dynamic games, backward induction can be used to eliminate unrealistic Nash equilibria. Backward induction assumes that players are rational and will make the best decisions based on their future expectations. This eliminates noncredible threats, which are threats that a player would not carry out if they were ever called upon to do so.
For example, consider a dynamic game with an incumbent firm and a potential entrant to the industry. The incumbent has a monopoly and wants to maintain its market share. If the entrant enters, the incumbent can either fight or accommodate the entrant. If the incumbent accommodates, the entrant will enter and gain profit. If the incumbent fights, it will lower its prices, run the entrant out of business (incurring exit costs), and damage its own profits.
The best response for the incumbent if the entrant enters is to accommodate, and the best response for the entrant if the incumbent accommodates is to enter. This results in a Nash equilibrium. However, if the incumbent chooses to fight, the best response for the entrant is to not enter. If the entrant does not enter, it does not matter what the incumbent chooses to do. Hence, fight can be considered a best response for the incumbent if the entrant does not enter, resulting in another Nash equilibrium.
However, this second Nash equilibrium can be eliminated by backward induction because it relies on a noncredible threat from the incumbent. By the time the incumbent reaches the decision node where it can choose to fight, it would be irrational to do so because the entrant has already entered. Therefore, backward induction eliminates this unrealistic Nash equilibrium.
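A small sketch of backward induction applied to the entry game described above, with illustrative payoffs chosen so that accommodating beats fighting for the incumbent once entry has occurred (all numbers are assumptions for illustration):

```python
# Terminal payoffs (entrant, incumbent) of the sequential entry game.
PAYOFFS = {
    ("stay out", None):          (0, 4),   # incumbent keeps its monopoly
    ("enter",    "accommodate"): (2, 2),
    ("enter",    "fight"):       (-1, 1),  # a price war hurts both players
}

def backward_induction() -> tuple[str, str]:
    # Solve the last mover first: the incumbent's best reply after entry.
    incumbent_choice = max(("accommodate", "fight"),
                           key=lambda a: PAYOFFS[("enter", a)][1])
    # The entrant anticipates that reply when deciding whether to enter.
    value_if_enter = PAYOFFS[("enter", incumbent_choice)][0]
    entrant_choice = "enter" if value_if_enter > PAYOFFS[("stay out", None)][0] else "stay out"
    return entrant_choice, incumbent_choice

print(backward_induction())  # ('enter', 'accommodate') -- the threat to fight is not credible
```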
See also:
Monetary policy theory
Stackelberg competition
Subgame perfect Nash equilibrium
A generalization of backward induction is subgame perfection. Backward induction assumes that all future play will be rational. In subgame perfect equilibria, play in every subgame is rational (specifically a Nash equilibrium). Backward induction can only be used in terminating (finite) games of definite length and cannot be applied to games with imperfect information. In these cases, subgame perfection can be used. The eliminated Nash equilibrium described above is subgame imperfect because it is not a Nash equilibrium of the subgame that starts at the node reached once the entrant has entered.
Perfect Bayesian equilibrium
Sometimes subgame perfection does not impose a large enough restriction on unreasonable outcomes. For example, since subgames cannot cut through information sets, a game of imperfect information may have only one subgame – itself – and hence subgame perfection cannot be used to eliminate any Nash equilibria. A perfect Bayesian equilibrium (PBE) is a specification of players' strategies and beliefs about which node in the information set has been reached by the play of the game. A belief about a decision node is the probability that a particular player thinks that node is or will be in play (on the equilibrium path). In particular, the intuition of PBE is that it specifies player strategies that are rational given the player beliefs it specifies and the beliefs it specifies are consistent with the strategies it specifies.
In a Bayesian game a strategy determines what a player plays at every information set controlled by that player. The requirement that beliefs are consistent with strategies is something not specified by subgame perfection. Hence, PBE is a consistency condition on players' beliefs. Just as in a Nash equilibrium no player's strategy is strictly dominated, in a PBE, for any information set no player's strategy is strictly dominated beginning at that information set. That is, for every belief that the player could hold at that information set there is no strategy that yields a greater expected payoff for that player. Unlike the above solution concepts, no player's strategy is strictly dominated beginning at any information set even if it is off the equilibrium path. Thus in PBE, players cannot threaten to play strategies that are strictly dominated beginning at any information set off the equilibrium path.
The Bayesian in the name of this solution concept alludes to the fact that players update their beliefs according to Bayes' theorem. They calculate probabilities given what has already taken place in the game.
Forward induction
Forward induction is so called because just as backward induction assumes future play will be rational, forward induction assumes past play was rational. Where a player does not know what type another player is (i.e. there is imperfect and asymmetric information), that player may form a belief of what type that player is by observing that player's past actions. Hence the belief formed by that player of what the probability of the opponent being a certain type is based on the past play of that opponent being rational. A player may elect to signal his type through his actions.
Kohlberg and Mertens (1986) introduced the solution concept of Stable equilibrium, a refinement that satisfies forward induction. A counter-example was found where such a stable equilibrium did not satisfy backward induction. To resolve the problem Jean-François Mertens introduced what game theorists now call Mertens-stable equilibrium concept, probably the first solution concept satisfying both forward and backward induction.
Forward induction yields a unique solution for the burning money game.
See also
Extensive form game
Trembling hand equilibrium
"The Intuitive Criterion"
References
Harsanyi, J. (1973) Oddness of the number of equilibrium points: a new proof. International Journal of Game Theory 2:235–250.
Govindan, Srihari & Robert Wilson, 2008. "Refinements of Nash Equilibrium," The New Palgrave Dictionary of Economics, 2nd Edition.
Hines, W. G. S. (1987) Evolutionary stable strategies: a review of basic theory. Theoretical Population Biology 31:195–272.
Kohlberg, Elon & Jean-François Mertens, 1986. "On the Strategic Stability of Equilibria," Econometrica, Econometric Society, vol. 54(5), pages 1003-37, September.
Mertens, Jean-François, 1989. "Stable Equilibria - A reformulation. Part 1 Basic Definitions and Properties," Mathematics of Operations Research, Vol. 14, No. 4, Nov.
Noldeke, G. & Samuelson, L. (1993) An evolutionary analysis of backward and forward induction. Games & Economic Behaviour 5:425–454.
Maynard Smith, J. (1982) Evolution and the Theory of Games.
Selten, R. (1983) Evolutionary stability in extensive two-person games. Math. Soc. Sci. 5:269–363.
Selten, R. (1988) Evolutionary stability in extensive two-person games – correction and further development. Math. Soc. Sci. 16:223–266
Thomas, B. (1985a) On evolutionary stable sets. J. Math. Biol. 22:105–115.
Thomas, B. (1985b) Evolutionary stable sets in mixed-strategist models. Theor. Pop. Biol. 28:332–341
Game theory
Game theory equilibrium concepts | Solution concept | [
"Mathematics"
] | 1,908 | [
"Game theory",
"Game theory equilibrium concepts"
] |
1,601,592 | https://en.wikipedia.org/wiki/Bat-Signal | The Bat-Signal is a distress signal device appearing in American comic books published by DC Comics, as a means to summon the superhero Batman. It is a specially modified searchlight with a stylized emblem of a bat affixed to the light, allowing it to project a large bat symbol onto cloudy night skies over Gotham City.
The signal is used by the Gotham City Police Department as a method of contacting and summoning Batman in the event his help is needed, but also as a weapon of psychological intimidation to the numerous criminals of Gotham City.
It doubles as the primary logo for the Batman series of comic books, TV shows, and films.
To celebrate Batman's 80th anniversary, DC Comics and Warner Bros. lit the Bat-Signal in thirteen cities on September 21, 2019, starting in Melbourne and ending in Los Angeles.
Origins
The Bat-Signal first appeared in Detective Comics #60 (February 1942). The signal has several different origins in comics featuring post-Crisis continuity. It is introduced as a new tool after Batman's first encounter with the Joker in the 2005 series Batman: The Man Who Laughs, and also during the 1990 "Prey" storyline in Legends of the Dark Knight.
In the 2006 series Batman and the Mad Monk, Commissioner James Gordon initially uses a pager to contact Batman, but during a meeting with the superhero, Gordon throws it away, saying he prefers a more public means of contacting him. After Batman departs, Gordon looks out at the city and considers the exceptional view from his current position, hinting at the future creation of the Signal.
In the 1989 Batman film, Batman gives the signal to the Gotham police force, enabling them to call him when the city was in danger. In 2005's Batman Begins, then-lieutenant James Gordon installs the Bat-signal on the roof of the police department himself. The film suggests Gordon was inspired to create the signal after Batman left mobster Carmine Falcone chained across a spotlight after a confrontation at the docks, Falcone's silhouette on the spotlight vaguely resembling a bat.
On the 1992 television show Batman: The Animated Series, the signal is introduced in the episode "The Cape and Cowl Conspiracy", though a makeshift signal was used earlier in "Joker's Favor". In 2004's The Batman, Gordon invents it to summon Batman in "Night in the City", although the signal is also alluded to in an earlier episode.
Additional appearances
In Detective Comics #466 (1976), the villainous Signalman traps Batman inside the Bat-Signal device.
In issue #6 of the 1989 series Legends of the Dark Knight, a group of crime bosses projects the signal upside down to summon Batman to help them fight a killer they cannot defeat.
Catwoman uses the Bat-Signal in the 1996 special The Long Halloween.
In the 1999 miniseries Batman: Dark Victory, after Batman asks for The Riddler to offer his insight into the riddles of the new villain the Hangman, the Riddler uses the Signal to summon Batman after he's finished his analysis. Later in the series, the Hangman sneaks onto the roof of Police Headquarters and turns the Bat-Signal on to lure then-recently appointed Commissioner Gordon to the roof and try to kill him, but is thwarted when Two-Face cuts Gordon down.
During the 1993 Knightfall storyline, one of Bane's henchmen remarks that the Bat-Signal is a "stupid set-up", as it allows criminals to know where Batman is, or at least where he will be, and lets them keep track of his movements.
In the 1996 Halloween special comic series, Batman: Haunted Knight, Scarecrow alters the Bat-Signal to notify Batman that he has kidnapped Gordon. By adding an orange bulb and painting "eyes" on the signal, he turns the beam into a stylized Jack-o'-lantern image, with the bat symbol forming the mouth beneath two eyes.
At the beginning of the 1999 No Man's Land story arc in Batman, a junior officer creates an improvised Bat-Signal out of spare parts. Gordon smashes it to pieces as he is angry at Batman as he believes that the vigilante abandoned Gotham. Oracle also builds a small Bat-Signal to summon Batman.
In the 2002 comic book series Gotham Central, it is explained that Batman's existence is not officially recognized by the Gotham City authorities, and the police claim to Gotham citizens that the Bat-Signal is merely a method of using the Batman "urban legend" to intimidate Gotham's criminal underworld. Owing to the events in the "War Crimes" storyline, relations between Batman and the Gotham City Police Department under Commissioner Michael Akins are officially severed, and as a result, the Bat-Signal is removed from the roof of Gotham Central. Needing Batman's help later, Akins retrieves a spare Bat Signal for single use. This signal is a more sophisticated laser which paints a green bat symbol in the clouds and is more visible. This version of the signal is donated by Kord Industries (see the Blue Beetle). The laser signal is said to have been unused because the city council deems it an "inappropriate gift" (The characters are notably unimpressed by the more high-tech version).
In the 2006 series 52, The Question alters the traditional Bat-Signal to project a spray-painted question mark. In the One Year Later series, however, with the re-installation of Gordon as commissioner, relations with Batman improve. Upon Batman's return from one year of self-imposed exile, the Bat-Signal is activated once again.
In the "Lovers and Madmen" story arc from the 2006 series Batman Confidential, Batman sees the Bat-Signal and assumes Gordon is calling him to ask for his help. When he reaches the rooftop, however, he finds the Joker instead.
In the 2009 crossover event Blackest Night: Batman, Batman and Robin deal with resurrected zombies of their dead foes, some of which have attacked the GCPD Headquarters. When Black Lanterns attack the headquarters, the Bat-Signal shines in the sky, cracked and covered with two corpses surrounding the bat symbol. This prompts the Dynamic Duo to head over and help.
In the 2014 series Batman Eternal, the Bat-Signal is shattered by new Commissioner Jack Forbes as part of his campaign against Batman, Forbes acting as a patsy for Carmine Falcone as he seeks to undermine Batman's status in the city as part of a new plan by an unknown foe. After the storyline, Cluemaster – the true villain of the piece – ties Batman to the Bat-Signal before unmasking him and carving the bat symbol onto his chest, but Bruce manages to escape his bonds, the storyline concluding with a new signal on the roof of the GCPD as Gordon is released and Batman's reputation is redeemed.
During the Joker's 2015 attack on Gotham, Batman notes that his enemies have a pact that they will shine the Bat-Signal upside-down on the day he dies, with the Dark Knight using that plan to rally his other enemies to help him stop Joker's latest rampage, reasoning that none of them want the kind of destruction the Joker intends to unleash.
After Batman's apparent death fighting the Joker, the Powers Corporation, as part of a campaign to create a new Batman, create the 'Bat-Blimp', which includes a high-tech electromagnetic Bat-Signal projected down from the airborne blimp, often used to carry the new Batman into action. After Gordon is nearly killed by new villain Mister Bloom, the true Batman returns in a confrontation right next to the original, reactivated Bat-Signal, even hitting one of Bloom's minions with the metal bat in the Signal when it is shattered by an attack.
In the Elseworlds Batman & Dracula trilogy novel Batman: Crimson Mist, Gordon and Alfred use the Signal to summon the vampire Batman after he has killed Penguin's gang, wanting to establish the situation now that Batman has surrendered to his vampire instincts. Later, Two-Face and Killer Croc use the signal to draw Gordon and Alfred to the roof so that the two sides can discuss a possible alliance against the vampire Batman and his new assault on Gotham's criminals.
In other media
1949 Columbia serial
The Bat-Signal made its first on-screen appearance in the Batman and Robin serial by Columbia. In its first incarnation, it was simply a high-powered projector that was kept in Commissioner Gordon's office. When needed, he would simply wheel the Bat-Signal over to his office window and shine it directly to the sky. Though small, it was powerful enough to cast an image of the Bat symbol against the clouds.
1960s TV series
The Bat-Signal seldom appeared in the 1960s TV series, Commissioner Gordon generally contacting Batman using a dedicated phone line (the Batphone). However, the Bat-Signal was occasionally used (for instance, in the episode "The Sandman Cometh" when Bruce Wayne and Dick Grayson are away on a camping trip), whenever Batman needed to be summoned from the field. Its first appearance was in the pilot episode, "Hi Diddle Riddle". The animated background for the closing credits of the TV series depicted the Bat-signal in the night sky over Gotham City.
Gotham
A promotional website for the 2014 Gotham TV series on Fox.com, called "Gotham Chronicle", presented an online newspaper following recent events in Gotham. One of its stories stated that a floodlight had been built on top of the G.C.P.D. building, a reference to the future Bat-Signal being used by the police before it became a calling card for Batman, and indicated that the series would introduce the early uses of the Bat-Signal.
After the third-season finale, "Heroes Rise: Heavydirtysoul", Bruce Wayne is seen standing on a ledge overlooking the city as a searchlight gradually rises and picks out an area of the dark cloud that, when illuminated, looks like a bat.
In the finale of the fourth season, "A Dark Knight: No Man's Land", James Gordon has Lucius Fox activate the Floodlight on top of the G.C.P.D building. After the episode, Gordon tells Bruce Wayne that the signal is meant to be a symbol of hope, while both are looking up at the clouds illuminated by the Floodlight.
During No Man's Land in the fifth season, Gordon continues using the signal as a symbol of hope for the good people left in Gotham and later meets with Bruce at the Floodlight in "Year Zero". In the series finale "The Beginning...", Commissioner Gordon and Harvey Bullock re-ignite the searchlight to celebrate Bruce Wayne's return to Gotham after ten years. Alfred Pennyworth then arrives and informs them that Bruce is otherwise engaged and can not attend their meeting. However, having noticed the searchlight that illuminates the sky, the Dark Knight then appears on a building from across the street, watching Gordon, Bullock, and Alfred.
Arrowverse
In the CW series The Flash, a 'Flash signal' is created by Cisco, who claimed to have gotten the idea from "some comic book", which implies Batman does not exist on Arrowverse Earth One. However, Earth 38, the universe Supergirl takes place in, makes several references to him, solely as "Clark's (kind of) friend", and Oliver later refers to Bruce Wayne on Earth One.
In the second part of the Elseworlds crossover, the Bat-Signal is shown, though it seems to have been inactive for some time during Batman's disappearance.
In the pilot episode of Batwoman, Gotham City Mayor Michael Akins was planning to turn off the Bat-Signal forever due to Batman's disappearance. The Bat-Signal was later destroyed by Alice in the episode "Down Down Down". A new Bat-Signal was made in "Who Are You?" by Luke Fox.
Titans
The Bat-Signal appears in the season finale of Titans, titled "Dick Grayson", in a dream world created by Trigon.
The Penguin
The Penguin, a 2024 spin-off series to the 2022 film The Batman, ends its finale with the Bat-Signal shining in the sky over Gotham.
Live-action film
Burton/Schumacher series
In Tim Burton's 1989 film Batman, Batman gives the signal to the police as a gift so that they can summon him when he is needed after he defeats The Joker.
In Burton's 1992 sequel Batman Returns, Batman has mirrors stationed atop Wayne Manor that reflect the Bat-Signal through his window, alerting him to its presence in the night sky. The signal is used when Commissioner Gordon needs Batman's help when the Red Triangle Circus Gang attack Max Shreck during Christmas and appears again at the end of the film as a surviving Catwoman looks on.
In Joel Schumacher's 1995 sequel Batman Forever, the criminal psychologist Dr. Chase Meridian uses the Bat-Signal to call Batman, to seduce him. Batman is slightly peeved at this: "The Bat-Signal is not a beeper". Later, the Riddler alters the Bat-Signal by projecting a question mark into the sky with the Bat-symbol forming the dot at the base. (The Riddler in the comics uses a similar tactic in Batman: Dark Victory; after brokering a tentative alliance with Batman, the Riddler changes the signal, projecting a question mark into the sky to let Batman know that he has an answer for him). A music video for "Kiss from a Rose", also from Batman Forever, features singer Seal performing the song while standing near the Bat-Signal.
In Schumacher's 1997 film Batman & Robin, Poison Ivy alters the Bat-Signal by changing it to a "Robin-Signal" to lure Robin into a trap.
Nolan series
In Christopher Nolan's 2005 film Batman Begins, then-lieutenant James "Jim" Gordon finds the mobster Carmine Falcone strapped onto a searchlight in the docks of Gotham City, for the Gotham Police force to arrest him, left by Batman. Lieutenant Gordon then notices that Falcone's shadow is projected into the clouds of the night sky, similar to the silhouette of a bat. At the end of the film, the Bat-signal appears, as a searchlight that projects the shape of a bat, installed atop police headquarters as a means to contact Batman.
In the 2008 sequel The Dark Knight, as in Frank Miller's Batman: The Dark Knight Returns, Gordon uses the Bat-Signal to remind Gotham of Batman's presence. The signal proves to be very effective, with drug dealers and criminals becoming apprehensive at its very appearance. At the end of the film, after reluctantly agreeing to let Batman take the blame for the murders committed by Harvey Dent to preserve Dent's image as Gotham's hero, Gordon hesitantly destroys the signal using an axe in front of various members of the police force and the press.
In the 2012 film The Dark Knight Rises, the rusted remains of the destroyed Bat-Signal are still atop police headquarters. However, at the end of the film, with Batman declared dead, Gordon sees a restored Bat-Signal, providing hope that Batman has survived. (The signal itself is never used once in the film, however, making it the only live-action film about Batman where this occurs.)
DC Extended Universe
During 2014's SDCC, a teaser for Zack Snyder's Batman v Superman: Dawn of Justice was shown to the audience in Hall H. The teaser showed Batman in his armored Batsuit atop a building one rainy night in Gotham. Batman removes a sheet to reveal the Bat Signal and proceeds to turn it on. Their audience is shown the projected image of the Batman logo in the sky until a figure appears out of nowhere in its place. A close-up of the figure reveals it is Superman glaring down at Batman readying his heat vision, as Batman stares back at the Man of Steel.
In the actual film, the Bat-Signal is first referenced when Superman lands in front of the Batmobile, causing it to crash into an empty warehouse, Superman tears the car open to inform Batman not to respond the next time they shine his light in the sky. Later, believing Superman responsible for the bombing of Congress, Batman activates the Bat-Signal himself to draw Superman to Gotham to confront him, unaware that Lex Luthor is manipulating them both into combat so that Superman will either be killed by Batman's kryptonite spear or forever compromise his image by killing Batman to save his mother. During the battle, the Bat-Signal is destroyed when Superman throws Batman into it.
The Bat-Signal appears again in Justice League and its director's cut, with Gordon using it to call Batman along with Wonder Woman, The Flash, and Cyborg. It also appears again at the end.
The Batman (2022)
The Bat-Signal is a major plot point in the 2022 film The Batman, directed by Matt Reeves, who co-wrote the screenplay with Peter Craig. Unknown to anyone else, Gotham Police Lieutenant James Gordon flashes the signal in the sky from one of the old Gotham Renewal Project buildings as a way of contacting and meeting with Batman. Batman and Gordon use the signal to call each other to the location to meet and talk. In his opening monologue, Batman states that the signal also serves another purpose: spreading fear among Gotham's criminal element as a warning. Criminals and thugs are often frightened when they see the signal, thinking Batman is nearby, and abandon their plans and flee the scene, which, as Batman puts it, is an effective way of using fear as a tool, since he cannot be everywhere.
Animation
DC Animated Universe
In 1992's Batman: The Animated Series, the signal was built by Commissioner Gordon in "The Cape and Cowl Conspiracy". Barbara Gordon uses it to contact Batman in "Heart of Steel" when she believes that an impostor has replaced her father. At this meeting, the signal is partially destroyed when Batman is attacked by a Harvey Bullock duplicate, and Barbara uses Batman's grapple gun to pull the robot into the signal, electrocuting it. Likewise, the real Bullock uses the signal for the first time when reluctantly asking for Batman's help in discovering who is trying to kill him in "A Bullet for Bullock". The first use of a Bat-Signal of any kind in the series was in "Joker's Favor", where a man, forced to do a favor for the Joker at a dinner honoring Commissioner Gordon, uses a large bat model hanging from a crane, swinging it back and forth in front of a window to try to contact Batman.
In the 1993 film Batman: Mask of the Phantasm, Batman is being hunted by the police as a suspect in the recent murder of several gang lords (a crime committed by the Phantasm), and Bullock, under orders from Councilman Arthur Reeves, tries to use the Bat-Signal to lure him in. Batman, knowing that it is a trap, does not respond. It is also used at the end of the film to call Batman to action once again (after Batman was cleared of the murder charges).
The Bat-Signal is not used in the 1999 series Batman Beyond, save for one appearance, as Police Commissioner Barbara Gordon both has a direct line to the Batcave and is not as cooperative with the original Batman and his successor as her father was. The one appearance of the signal is in "Ascension", where Paxton Powers, the son of Derek Powers (Blight), has a small replica of it built to summon the new Batman, Terry McGinnis. Terry destroys it upon arrival, advising Paxton to "try e-mail", indicating that he regards the device as obsolete in his time.
In the 2002 web series Gotham Girls, Batgirl appears to push her father Commissioner Gordon onto the Bat-Signal, crushing it. It is revealed that he is merely a robotic replacement.
The Batman
In the episode "The Cat, the Bat, and the Ugly" of the animated TV series The Batman, Batman has just foiled a plot that The Penguin tried to pull on top of a lighthouse. After talking to Detective Yin, Batman is standing in front of the lighthouse light when the Bat Signal appears in the sky. In the second-season finale, "Night in the City" after newly inducted Commissioner Gordon finally agrees to ally with Batman; he begins using the Bat-Signal. After that, his "Batwave" alarm was rarely used.
The Lego Batman Movie
At the beginning of The Lego Batman Movie, Commissioner Gordon attempts to use the Bat-Signal to alert Batman, only for it to be egged by Egghead, disabling it. Later, Batman uses the Bat-Signal to project different versions of the symbol for Robin, Barbara, Alfred, and many of his other allies, summoning them to team up and defeat the Joker.
Batman: Gotham by Gaslight
In Batman: Gotham by Gaslight, when Selina Kyle is being pursued by Jack the Ripper in an empty fair, she uses her blood and a spotlight to create a makeshift Bat-Signal to attract Batman's attention, sketching a bat on the light and aiming it at the sky.
DC Super Hero Girls
In the DC Super Hero Girls animated short "#BatCatcher", Batgirl mistakenly believes she is summoned by the Bat-Signal when in reality the shadow is cast from a real bat inside her bedroom. In the episode "#FromBatToWorse", Batgirl tries to use a Bat-Signal flashlight to call Batman for help against Poison Ivy, but it doesn't work and Poison Ivy points out that, unlike Gotham City, there is no pollution in the skies of Metropolis for the Bat-Signal to shine against.
Harley Quinn
In the Harley Quinn episode "You're a Damn Good Cop, Jim Gordon", an overworked and depressed Commissioner Gordon starts excessively using the Bat-Signal to contact Batman for petty reasons, such as having someone to talk to about his failing marriage. Batman becomes so annoyed that he confiscates the Bat-Signal. By the end of the episode, they make amends and Batman restores it.
Batwheels: The Series
The Bat-Signal appears in the kid & family-friendly animated TV series, Batwheels. In the Season 2 episode, "Bat-Light Blow-Out",…
Video games
The Bat-Signal is also seen in DC Universe Online (2010), atop the GCPD 9th station in the East End of Gotham. It is part of an in-game feat for visiting places related to major DC Universe figures.
The Bat-Signal is seen in Batman: Arkham Asylum (2009) in the sky over Gotham City. During one of his Scarecrow-induced nightmares, Batman must sneak through the remains of Arkham and defeat a gigantic Scarecrow by aiming the Bat-Signal at him. The Bat-Signal is also used in Batman: Arkham City (2011) as a waypoint in the sky that hovers high above the location of the player's objective, and the original signal is located at the now-abandoned GCPD building as the subject of a Riddler Challenge. The use of the Bat-Signal as a waypoint continues in Batman: Arkham Origins (2013) and Batman: Arkham Knight (2015), though the signal itself appears only in the latter game. At the end of Arkham Knight, Batman initiates the Knightfall Protocol, which includes the destruction of the Bat-Signal via a built-in explosive added by Lucius Fox. In Batman: Arkham Shadow (2024), Batman, posing as Irving "Matches" Malone, sets the Bat-Signal alight with a Molotov cocktail in order to get himself sent to Blackgate Prison.
In Batman: The Telltale Series (2016), Gordon first uses the Bat-Signal in Episode 3, when he needs Batman's help because the police are stretched thin throughout the city.
See also
Bat phone
References
Fictional elements introduced in 1942
Searchlights
Fictional symbols
it:Batman#Batsegnale | Bat-Signal | [
"Mathematics"
] | 4,922 | [
"Symbols",
"Fictional symbols"
] |
1,601,611 | https://en.wikipedia.org/wiki/Tempering%20%28metallurgy%29 | Tempering is a process of heat treating, which is used to increase the toughness of iron-based alloys. Tempering is usually performed after hardening, to reduce some of the excess hardness, and is done by heating the metal to some temperature below the critical point for a certain period of time, then allowing it to cool in still air. The exact temperature determines the amount of hardness removed, and depends on both the specific composition of the alloy and on the desired properties in the finished product. For instance, very hard tools are often tempered at low temperatures, while springs are tempered at much higher temperatures.
Introduction
Tempering is a heat treatment technique applied to ferrous alloys, such as steel or cast iron, to achieve greater toughness by decreasing the hardness of the alloy. The reduction in hardness is usually accompanied by an increase in ductility, thereby decreasing the brittleness of the metal. Tempering is usually performed after quenching, which is rapid cooling of the metal to put it in its hardest state. Tempering is accomplished by controlled heating of the quenched workpiece to a temperature below its "lower critical temperature". This is also called the lower transformation temperature or lower arrest (A1) temperature: the temperature at which the crystalline phases of the alloy, called ferrite and cementite, begin combining to form a single-phase solid solution referred to as austenite. Heating above this temperature is avoided, so as not to destroy the very-hard, quenched microstructure, called martensite.
Precise control of time and temperature during the tempering process is crucial to achieve the desired balance of physical properties. Low tempering temperatures may only relieve the internal stresses, decreasing brittleness while maintaining a majority of the hardness. Higher tempering temperatures tend to produce a greater reduction in the hardness, sacrificing some yield strength and tensile strength for an increase in elasticity and plasticity. However, in some low alloy steels, containing other elements like chromium and molybdenum, tempering at low temperatures may produce an increase in hardness, while at higher temperatures the hardness will decrease. Many steels with high concentrations of these alloying elements behave like precipitation hardening alloys, which produces the opposite effects under the conditions found in quenching and tempering, and are referred to as maraging steels.
In carbon steels, tempering alters the size and distribution of carbides in the martensite, forming a microstructure called "tempered martensite". Tempering is also performed on normalized steels and cast irons, to increase ductility, machinability, and impact strength. Steel is usually tempered evenly, called "through tempering," producing a nearly uniform hardness, but it is sometimes heated unevenly, referred to as "differential tempering," producing a variation in hardness.
History
Tempering is an ancient heat-treating technique. The oldest known example of tempered martensite is a pick axe which was found in Galilee, dating from around 1200 to 1100 BC. The process was used throughout the ancient world, from Asia to Europe and Africa. Many different methods and cooling baths for quenching were attempted in ancient times, from quenching in urine or blood to quenching in metals such as mercury or lead, but the process of tempering has remained relatively unchanged over the ages. Tempering was often confused with quenching and, often, the term was used to describe both techniques. In 1889, Sir William Chandler Roberts-Austen wrote, "There is still so much confusion between the words 'temper,' 'tempering,' and 'hardening,' in the writings of even eminent authorities, that it is well to keep these old definitions carefully in mind. I shall employ the word tempering in the same sense as softening."
Terminology
In metallurgy, one may encounter many terms that have very specific meanings within the field, but may seem rather vague when viewed from the outside. Terms such as "hardness," "impact resistance," "toughness," and "strength" can carry many different connotations, making it sometimes difficult to discern the specific meaning. Some of the terms encountered, and their specific definitions are:
Strength – Resistance to permanent deformation and tearing. Strength, in metallurgy, is still a rather vague term, so is usually divided into yield strength (strength beyond which deformation becomes permanent), tensile strength (the ultimate tearing strength), shear strength (resistance to transverse, or cutting forces), and compressive strength (resistance to elastic shortening under a load).
Toughness – Resistance to fracture, as measured by the Charpy test. Toughness often increases as strength decreases, because a material that bends is less likely to break.
Hardness – A surface's resistance to scratching, abrasion, or indentation. In conventional metal alloys, there is a linear relation between indentation hardness and tensile strength, which eases the measurement of the latter (a rough numerical conversion is given after this list).
Brittleness – Brittleness describes a material's tendency to break before bending or deforming either elastically or plastically. Brittleness increases with decreased toughness, but is greatly affected by internal stresses as well.
Plasticity – The ability to mold, bend or deform in a manner that does not spontaneously return to its original shape. This is proportional to the ductility or malleability of the substance.
Elasticity – Also called flexibility, this is the ability to deform, bend, compress, or stretch and return to the original shape once the external stress is removed. Elasticity is inversely related to the Young's modulus of the material.
Impact resistance – Usually synonymous with high-strength toughness, it is the ability to resist shock-loading with minimal deformation.
Wear resistance – Usually synonymous with hardness, this is resistance to erosion, ablation, spalling, or galling.
Structural integrity – The ability to withstand a maximum-rated load while resisting fracture, resisting fatigue, and producing a minimal amount of flexing or deflection, to provide a maximum service life.
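As an illustration of the hardness–strength relation noted under "Hardness" above, a widely quoted rule of thumb for steels (an approximation supplied here for reference, not stated in the article) relates Brinell hardness HB to the ultimate tensile strength:

\[ \sigma_{\mathrm{UTS}}\ [\mathrm{MPa}] \approx 3.45 \times \mathrm{HB} \qquad \bigl(\text{equivalently } \sigma_{\mathrm{UTS}}\ [\mathrm{psi}] \approx 500 \times \mathrm{HB}\bigr). \]

For example, a steel measuring about 200 HB would be expected to have a tensile strength of roughly 690 MPa; the proportionality constant varies somewhat with alloy and microstructure, so the relation is a guide rather than an exact law.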
Carbon steel
Very few metals react to heat treatment in the same manner, or to the same extent, that carbon steel does, and carbon-steel heat-treating behavior can vary radically depending on alloying elements. Steel can be softened to a very malleable state through annealing, or it can be hardened to a state as hard and brittle as glass by quenching. However, in its hardened state, steel is usually far too brittle, lacking the fracture toughness to be useful for most applications. Tempering is a method used to decrease the hardness, thereby increasing the ductility of the quenched steel, to impart some springiness and malleability to the metal. This allows the metal to bend before breaking. Depending on how much temper is imparted to the steel, it may bend elastically (the steel returns to its original shape once the load is removed), or it may bend plastically (the steel does not return to its original shape, resulting in permanent deformation), before fracturing. Tempering is used to precisely balance the mechanical properties of the metal, such as shear strength, yield strength, hardness, ductility, and tensile strength, to achieve any number of a combination of properties, making the steel useful for a wide variety of applications. Tools such as hammers and wrenches require good resistance to abrasion, impact resistance, and resistance to deformation. Springs do not require as much wear resistance, but must deform elastically without breaking. Automotive parts tend to be a little less strong, but need to deform plastically before breaking.
Except in rare cases where maximum hardness or wear resistance is needed, such as the untempered steel used for files, quenched steel is almost always tempered to some degree. However, steel is sometimes annealed through a process called normalizing, leaving the steel only partially softened. Tempering is sometimes used on normalized steels to further soften it, increasing the malleability and machinability for easier metalworking. Tempering may also be used on welded steel, to relieve some of the stresses and excess hardness created in the heat affected zone around the weld.
Quenched steel
Tempering is most often performed on steel that has been heated above its upper critical (A3) temperature and then quickly cooled, in a process called quenching, using methods such as immersing the hot steel in water, oil, or forced-air. The quenched steel, being placed in or very near its hardest possible state, is then tempered to incrementally decrease the hardness to a point more suitable for the desired application. The hardness of the quenched steel depends on both cooling speed and on the composition of the alloy. Steel with a high carbon content will reach a much harder state than steel with a low carbon content. Likewise, tempering high-carbon steel to a certain temperature will produce steel that is considerably harder than low-carbon steel that is tempered at the same temperature. The amount of time held at the tempering temperature also has an effect. Tempering at a slightly elevated temperature for a shorter time may produce the same effect as tempering at a lower temperature for a longer time. Tempering times vary, depending on the carbon content, size, and desired application of the steel, but typically range from a few minutes to a few hours.
Tempering quenched steel at very low temperatures, between , will usually not have much effect other than a slight relief of some of the internal stresses and a decrease in brittleness. Tempering at higher temperatures, from , will produce a slight reduction in hardness, but will primarily relieve much of the internal stresses. In some steels with low alloy content, tempering in the range of causes a decrease in ductility and an increase in brittleness, and is referred to as the "tempered martensite embrittlement" (TME) range. Except in the case of blacksmithing, this range is usually avoided. Steel requiring more strength than toughness, such as tools, are usually not tempered above . Instead, a variation in hardness is usually produced by varying only the tempering time. When increased toughness is desired at the expense of strength, higher tempering temperatures, from , are used. Tempering at even higher temperatures, between , will produce excellent toughness, but at a serious reduction in strength and hardness. At , the steel may experience another stage of embrittlement, called "temper embrittlement" (TE), which occurs if the steel is held within the temperature range of temper embrittlement for too long. When heating above this temperature, the steel will usually not be held for any amount of time, and quickly cooled to avoid temper embrittlement.
Normalized steel
Steel that has been heated above its upper critical temperature and then cooled in standing air is called normalized steel. Normalized steel consists of pearlite, martensite, and sometimes bainite grains, mixed together within the microstructure. This produces steel that is much stronger than full-annealed steel, and much tougher than tempered quenched steel. However, added toughness is sometimes needed at a reduction in strength. Tempering provides a way to carefully decrease the hardness of the steel, thereby increasing the toughness to a more desirable point. Cast steel is often normalized rather than annealed, to decrease the amount of distortion that can occur. Tempering can further decrease the hardness, increasing the ductility to a point more like annealed steel. Tempering is often used on carbon steels, producing much the same results. The process, called "normalize and temper", is used frequently on steels such as 1045 carbon steel, or most other steels containing 0.35 to 0.55% carbon. These steels are usually tempered after normalizing, to increase the toughness and relieve internal stresses. This can make the metal more suitable for its intended use and easier to machine.
Welded steel
Steel that has been arc welded, gas welded, or welded in any other manner besides forge welded, is affected in a localized area by the heat from the welding process. This localized area, called the heat-affected zone (HAZ), consists of steel that varies considerably in hardness, from normalized steel to steel nearly as hard as quenched steel near the edge of this heat-affected zone. Thermal contraction from the uneven heating, solidification, and cooling creates internal stresses in the metal, both within and surrounding the weld. Tempering is sometimes used in place of stress relieving (even heating and cooling of the entire object to just below the A1 temperature) to both reduce the internal stresses and to decrease the brittleness around the weld. Localized tempering is often used on welds when the construction is too large, intricate, or otherwise too inconvenient to heat the entire object evenly. Tempering temperatures for this purpose are generally around and .
Quench and self-temper
Modern reinforcing bar of 500 MPa strength can be made from expensive microalloyed steel or by a quench and self-temper (QST) process. After the bar exits the final rolling pass, where the final shape of the bar is applied, the bar is then sprayed with water which quenches the outer surface of the bar. The bar speed and the amount of water are carefully controlled in order to leave the core of the bar unquenched. The hot core then tempers the already quenched outer part, leaving a bar with high strength but with a certain degree of ductility too.
Blacksmithing
Tempering was originally a process used and developed by blacksmiths (forgers of iron). The process was most likely developed by the Hittites of Anatolia (modern-day Turkey), in the twelfth or eleventh century BC. Without knowledge of metallurgy, tempering was originally devised through a trial-and-error method.
Because few methods of precisely measuring temperature existed until modern times, the temperature was usually judged by watching the tempering colors of the metal. Tempering often consisted of heating above a charcoal or coal forge, or by fire, so holding the work at exactly the right temperature for the correct amount of time was usually not possible. Tempering was usually performed by slowly, evenly overheating the metal, as judged by the color, and then immediately cooling, either in open air or by immersing it in water. This produced much the same effect as heating at the proper temperature for the right amount of time, and avoided embrittlement by tempering within a short time period. However, although tempering-color guides exist, this method of tempering usually requires a good amount of practice to perfect, because the final outcome depends on many factors, including the composition of the steel, the speed at which it was heated, the type of heat source (oxidizing or carburizing), the cooling rate, oil films or impurities on the surface, and many other circumstances which vary from smith to smith or even from job to job. The thickness of the steel also plays a role. With thicker items, it becomes easier to heat only the surface to the right temperature, before the heat can penetrate through. However, very thick items may not be able to harden all the way through during quenching.
Tempering colors
If steel has been freshly ground, sanded, or polished, it will form an oxide layer on its surface when heated. As the temperature of the steel is increased, the thickness of the iron oxide will also increase. Although iron oxide is not normally transparent, such thin layers do allow light to pass through, reflecting off both the upper and lower surfaces of the layer. This causes a phenomenon called thin-film interference, which produces colors on the surface. As the thickness of this layer increases with temperature, it causes the colors to change from a very light yellow, to brown, to purple, and then to blue. These colors appear at very precise temperatures and provide the blacksmith with a very accurate gauge for measuring the temperature. The various colors, their corresponding temperatures, and some of their uses are:
Faint-yellow – – gravers, razors, scrapers
Light-straw – – rock drills, reamers, metal-cutting saws
Dark-straw – – scribers, planer blades
Brown – – taps, dies, drill bits, hammers, cold chisels
Purple – – surgical tools, punches, stone carving tools
Dark blue – – screwdrivers, wrenches
Light blue – – springs, wood-cutting saws
Grey-blue – and higher – structural steel
For carbon steel, beyond the grey-blue color the iron oxide loses its transparency, and the temperature can no longer be judged in this way, although other alloys like stainless steel may produce a much broader range including golds, teals, and magentas. The layer will also increase in thickness as time passes, which is another reason overheating and immediate cooling is used. Steel in a tempering oven, held at for a long time, will begin to turn brown, purple, or blue, even though the temperature did not exceed that needed to produce a light-straw color. Oxidizing or carburizing heat sources may also affect the final result. The iron oxide layer, unlike rust, also protects the steel from corrosion through passivation.
Differential tempering
Differential tempering is a method of providing different amounts of temper to different parts of the steel. The method is often used in bladesmithing, for making knives and swords, to provide a very hard edge while softening the spine or center of the blade. This increased the toughness while maintaining a very hard, sharp, impact-resistant edge, helping to prevent breakage. This technique was more often found in Europe, as opposed to the differential hardening techniques more common in Asia, such as in Japanese swordsmithing.
Differential tempering consists of applying heat to only a portion of the blade, usually the spine, or the center of double-edged blades. For single-edged blades, the heat, often in the form of a flame or a red-hot bar, is applied to the spine of the blade only. The blade is then carefully watched as the tempering colors form and slowly creep toward the edge. The heat is then removed before the light-straw color reaches the edge. The colors will continue to move toward the edge for a short time after the heat is removed, so the smith typically removes the heat a little early, so that the pale yellow just reaches the edge, and travels no farther. A similar method is used for double-edged blades, but the heat source is applied to the center of the blade, allowing the colors to creep out toward each edge.
Interrupted quenching
Interrupted quenching methods are often referred to as tempering, although the processes are very different from traditional tempering. These methods consist of quenching to a specific temperature that is above the martensite start (Ms) temperature, and then holding at that temperature for extended amounts of time. Depending on the temperature and the amount of time, this allows either pure bainite to form, or holds off forming the martensite until much of the internal stresses relax. These methods are known as austempering and martempering.
Austempering
Austempering is a technique used to form pure bainite, a transitional microstructure found between pearlite and martensite. In normalizing, both upper and lower bainite are usually found mixed with pearlite. To avoid the formation of pearlite or martensite, the steel is quenched in a bath of molten metals or salts. This quickly cools the steel past the point where pearlite can form and into the bainite-forming range. The steel is then held at the bainite-forming temperature, beyond the point where the temperature reaches an equilibrium, until the bainite fully forms. The steel is then removed from the bath and allowed to air-cool, without the formation of either pearlite or martensite.
Depending on the holding temperature, austempering can produce either upper or lower bainite. Upper bainite is a laminate structure formed at temperatures typically above and is a much tougher microstructure. Lower bainite is a needle-like structure, produced at temperatures below 350 °C, and is stronger but much more brittle. In either case, austempering produces greater strength and toughness for a given hardness, which is determined mostly by composition rather than cooling speed, and reduced internal stresses which could lead to breakage. This produces steel with superior impact resistance. Modern punches and chisels are often austempered. Because austempering does not produce martensite, the steel does not require further tempering.
Martempering
Martempering is similar to austempering, in that the steel is quenched in a bath of molten metal or salts to quickly cool it past the pearlite-forming range. However, in martempering, the goal is to create martensite rather than bainite. The steel is quenched to a much lower temperature than is used for austempering; to just above the martensite start temperature. The metal is then held at this temperature until the temperature of the steel reaches an equilibrium. The steel is then removed from the bath before any bainite can form, and then is allowed to air-cool, turning it into martensite. The interruption in cooling allows much of the internal stresses to relax before the martensite forms, decreasing the brittleness of the steel. However, the martempered steel will usually need to undergo further tempering to adjust the hardness and toughness, except in rare cases where maximum hardness is needed but the accompanying brittleness is not. Modern files are often martempered.
Physical processes
Tempering involves a three-step process in which unstable martensite decomposes into ferrite and unstable carbides, and finally into stable cementite, forming various stages of a microstructure called tempered martensite. The martensite typically consists of laths (strips) or plates, sometimes appearing acicular (needle-like) or lenticular (lens-shaped). Depending on the carbon content, it also contains a certain amount of "retained austenite": crystals that are unable to transform into martensite, even after quenching below the martensite finish (Mf) temperature. An increase in alloying agents or carbon content causes an increase in retained austenite. Austenite has a much higher stacking-fault energy than martensite or pearlite, lowering the wear resistance and increasing the chances of galling, although some or most of the retained austenite can be transformed into martensite by cold and cryogenic treatments prior to tempering.
The martensite forms during a diffusionless transformation, in which the transformation occurs due to shear stresses created in the crystal lattices rather than by chemical changes that occur during precipitation. The shear stresses create many defects, or "dislocations," between the crystals, providing less-stressful areas for the carbon atoms to relocate. Upon heating, the carbon atoms first migrate to these defects and then begin forming unstable carbides. This reduces the amount of total martensite by changing some of it to ferrite. Further heating reduces the martensite even more, transforming the unstable carbides into stable cementite.
The first stage of tempering occurs between room temperature and . In the first stage, carbon precipitates into ε-carbon (Fe2.4C). In the second stage, occurring between and , the retained austenite transforms into a form of lower bainite containing ε-carbon rather than cementite (archaically referred to as "troostite"). The third stage occurs at and higher. In the third stage, ε-carbon precipitates into cementite, and the carbon content in the martensite decreases. If tempered at higher temperatures, between and , or for longer amounts of time, the martensite may become fully ferritic and the cementite may become coarser or more spherical. In spheroidized steel, the cementite network breaks apart and recedes into rods or spherical-shaped globules, and the steel becomes softer than annealed steel; nearly as soft as pure iron, making it very easy to form or machine.
Embrittlement
Embrittlement occurs during tempering when, through a specific temperature range, the steel experiences an increase in hardness and a reduction in ductility, as opposed to the normal decrease in hardness that occurs on either side of this range. The first type is called tempered martensite embrittlement (TME) or one-step embrittlement. The second is referred to as temper embrittlement (TE) or two-step embrittlement.
One-step embrittlement usually occurs in carbon steel at temperatures between and , and was historically referred to as "500 degree [Fahrenheit] embrittlement." This embrittlement occurs due to the precipitation of Widmanstätten needles or plates, made of cementite, in the interlath boundaries of the martensite. Impurities such as phosphorus, or alloying agents like manganese, may increase the embrittlement, or alter the temperature at which it occurs. This type of embrittlement is permanent, and can only be relieved by heating above the upper critical temperature and then quenching again. However, these microstructures usually require an hour or more to form, so are usually not a problem in the blacksmith method of tempering.
Two-step embrittlement typically occurs by aging the metal within a critical temperature range, or by slowly cooling it through that range. For carbon steel, this is typically between and , although impurities like phosphorus and sulfur increase the effect dramatically. This generally occurs because the impurities are able to migrate to the grain boundaries, creating weak spots in the structure. The embrittlement can often be avoided by quickly cooling the metal after tempering. Two-step embrittlement, however, is reversible: it can be eliminated by heating the steel above and then quickly cooling.
Alloy steels
Many elements are often alloyed with steel. The main purpose for alloying most elements with steel is to increase its hardenability and to decrease softening under temperature. Tool steels, for example, may have elements like chromium or vanadium added to increase both toughness and strength, which is necessary for things like wrenches and screwdrivers. On the other hand, drill bits and rotary files need to retain their hardness at high temperatures. Adding cobalt or molybdenum can cause the steel to retain its hardness, even at red-hot temperatures, forming high-speed steels. Often, small amounts of many different elements are added to the steel to give the desired properties, rather than just adding one or two.
Most alloying elements (solutes) have the benefit of not only increasing hardness, but also lowering both the martensite start temperature and the temperature at which austenite transforms into ferrite and cementite. During quenching, this allows a slower cooling rate, which allows items with thicker cross-sections to be hardened to greater depths than is possible in plain carbon steel, producing more uniformity in strength.
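One common way of quantifying how solutes depress the martensite start temperature is Andrews' empirical linear formula (an external rule of thumb quoted here for illustration, not taken from the article; compositions in weight percent):

\[ M_s\ (^{\circ}\mathrm{C}) \approx 539 - 423\,(\%\mathrm{C}) - 30.4\,(\%\mathrm{Mn}) - 17.7\,(\%\mathrm{Ni}) - 12.1\,(\%\mathrm{Cr}) - 7.5\,(\%\mathrm{Mo}). \]

Every compositional term is negative, so each of these additions lowers Ms, consistent with the slower permissible cooling rates and deeper hardening described above.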
Tempering methods for alloy steels may vary considerably, depending on the type and amount of elements added. In general, elements like manganese, nickel, silicon, and aluminum will remain dissolved in the ferrite during tempering while the carbon precipitates. When quenched, these solutes will usually produce an increase in hardness over plain carbon steel of the same carbon content. When hardened alloy-steels, containing moderate amounts of these elements, are tempered, the alloy will usually soften somewhat proportionately to carbon steel.
However, during tempering, elements like chromium, vanadium, and molybdenum precipitate with the carbon. If the steel contains fairly low concentrations of these elements, the softening of the steel can be retarded until much higher temperatures are reached, when compared to those needed for tempering carbon steel. This allows the steel to maintain its hardness in high-temperature or high-friction applications. However, this also requires very high temperatures during tempering, to achieve a reduction in hardness. If the steel contains large amounts of these elements, tempering may produce an increase in hardness until a specific temperature is reached, at which point the hardness will begin to decrease. For instance, molybdenum steels will typically reach their highest hardness around whereas vanadium steels will harden fully when tempered to around . When very large amounts of solutes are added, alloy steels may behave like precipitation-hardening alloys, which do not soften at all during tempering.
Cast iron
Cast iron comes in many types, depending on the carbon content. However, they are usually divided into grey and white cast iron, depending on the form that the carbides take. In grey cast iron, the carbon is mainly in the form of graphite, but in white cast iron, the carbon is usually in the form of cementite. Grey cast iron consists mainly of the microstructure called pearlite, mixed with graphite and sometimes ferrite. Grey cast iron is usually used as cast, with its properties being determined by its composition.
White cast iron is composed mostly of a microstructure called ledeburite mixed with pearlite. Ledeburite is very hard, making cast iron very brittle. If the white cast iron has a hypoeutectic composition, it is usually tempered to produce malleable or ductile cast iron. Two methods of tempering are used, called "white tempering" and "black tempering." The purpose of both tempering methods is to cause the cementite within the ledeburite to decompose, increasing the ductility.
White tempering
Malleable (porous) cast iron is manufactured by white tempering. White tempering is used to burn off excess carbon, by heating it for extended amounts of time in an oxidizing environment. The cast iron will usually be held at temperatures as high as for as long as 60 hours. The heating is followed by a slow cooling rate of around 10 °C (18 °F) per hour. The entire process may last 160 hours or more. This causes the cementite to decompose from the ledeburite, and then the carbon burns out through the surface of the metal, increasing the malleability of the cast iron.
Black tempering
Ductile (non-porous) cast iron (often called "black iron") is produced by black tempering. Unlike white tempering, black tempering is done in an inert gas environment, so that the decomposing carbon does not burn off. Instead, the decomposing carbon turns into a type of graphite called "temper graphite" or "flaky graphite," increasing the malleability of the metal. Tempering is usually performed at temperatures as high as for up to 20 hours. The tempering is followed by slow cooling through the lower critical temperature, over a period that may last from 50 to over 100 hours.
Precipitation hardening alloys
Precipitation-hardening alloys first came into use during the early 1900s. Most heat-treatable alloys fall into the category of precipitation-hardening alloys, including alloys of aluminum, magnesium, titanium, and nickel. Several high-alloy steels are also precipitation-hardening alloys. These alloys become softer than normal when quenched and then harden over time. For this reason, precipitation hardening is often referred to as "aging."
Although most precipitation-hardening alloys will harden at room temperature, some will only harden at elevated temperatures and, in others, the process can be sped up by aging at elevated temperatures. Aging at temperatures higher than room temperature is called "artificial aging". Although the method is similar to tempering, the term "tempering" is usually not used to describe artificial aging, because the physical processes (i.e., precipitation of intermetallic phases from a supersaturated alloy), the desired results (i.e., strengthening rather than softening), and the amount of time held at a given temperature are very different from tempering as used in carbon steel.
See also
Annealing (metallurgy)
Austempering
Precipitation strengthening
Tempered glass
References
Further reading
Manufacturing Processes Reference Guide by Robert H. Todd, Dell K. Allen, and Leo Alting pg. 410
External links
A thorough discussion of tempering processes
Webpage showing heating glow and tempering colors
Metal heat treatments | Tempering (metallurgy) | [
"Chemistry"
] | 6,776 | [
"Metallurgical processes",
"Metal heat treatments"
] |
1,601,765 | https://en.wikipedia.org/wiki/Intensional%20logic | Intensional logic is an approach to predicate logic that extends first-order logic, which has quantifiers that range over the individuals of a universe (extensions), by additional quantifiers that range over terms that may have such individuals as their value (intensions). The distinction between intensional and extensional entities is parallel to the distinction between sense and reference.
Overview
Logic is the study of proof and deduction as manifested in language (abstracting from any underlying psychological or biological processes). Logic is not a closed, completed science, and presumably it will never stop developing: logical analysis can penetrate the language to varying depths (sentences regarded as atomic, or split into predicates applied to individual terms, or even analyzed so as to reveal fine logical structures such as modal, temporal, dynamic, or epistemic ones).
In order to achieve its special goal, logic was forced to develop its own formal tools, most notably its own grammar, detached from simply making direct use of the underlying natural language. Functors (also known as function words) belong to the most important categories in logical grammar (along with basic categories like sentence and individual name): a functor can be regarded as an "incomplete" expression with argument places to fill in. If we fill these in with appropriate subexpressions, the resulting, fully completed expression can be regarded as an output. Thus, a functor acts like a function sign, taking input expressions and yielding a new output expression.
Semantics links expressions of language to the outside world, and logical semantics has developed its own structure as well. Semantic values can be attributed to expressions in basic categories: the reference of an individual name (the "designated" object named by it) is called its extension; as for sentences, their truth value is their extension.
As for functors, some of them are simpler than others: an extension can be attributed to them in a simple way. In the case of a so-called extensional functor we can, in a sense, abstract from the "material" part of its inputs and output, and regard the functor as a function turning the extensions of its inputs directly into the extension of its output. Of course, this presupposes that it can be done at all: that the extensions of the input expressions determine the extension of the resulting expression. Functors for which this assumption does not hold are called intensional.
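A minimal illustration of the distinction (the particular sentences and symbols are supplied here, not taken from the article): the one-place sentential functor of negation is extensional, while the functor "necessarily" (written □ below) is intensional. Writing ext(·) for the extension (truth value) of a sentence,

\[ \operatorname{ext}(\neg A) = f_{\neg}\bigl(\operatorname{ext}(A)\bigr), \]

so the truth value of ¬A is fully determined by the truth value of A. By contrast, the sentences "2 + 2 = 4" and "Paris is the capital of France" have the same extension (both are true), yet

\[ \Box(2+2=4) \text{ is true, while } \Box(\text{Paris is the capital of France}) \text{ is false,} \]

so no function of extensions alone can deliver the extension of □A; the functor must be evaluated on the intension of its argument.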
Natural languages abound with intensional functors; this can be illustrated by intensional statements. Extensional logic cannot reach inside such fine logical structures of the language; it stops at a coarser level. Attempts at such deep logical analysis have a long past: authors as early as Aristotle had already studied modal syllogisms. Gottlob Frege developed a kind of two-dimensional semantics: to resolve questions like those raised by intensional statements, Frege introduced a distinction between two semantic values: sentences (and individual terms) have both an extension and an intension. These semantic values can also be attributed to functors (except that intensional functors have only an intension).
As mentioned, the motivations for settling problems that today belong to intensional logic have a long past. As for attempts at formalization, the development of calculi often preceded the discovery of their corresponding formal semantics. Intensional logic is not alone in this: Gottlob Frege, too, accompanied his (extensional) calculus with detailed explanations of its semantic motivations, but the formal foundation of its semantics appeared only in the 20th century. Thus the history of intensional logic has sometimes repeated patterns seen earlier in the development of extensional logic.
There are some intensional logic systems that claim to fully analyze the common language:
Transparent intensional logic
Modal logic
Modal logic
Modal logic is historically the earliest area in the study of intensional logic, originally motivated by formalizing "necessity" and "possibility" (recently, this original motivation belongs to alethic logic, just one of the many branches of modal logic).
Modal logic can also be regarded as the simplest appearance of such studies: it extends extensional logic with just a few sentential functors. These are intensional, and they are interpreted (in the metarules of semantics) as quantifying over possible worlds. For example, the necessity operator □ (the "box"), when applied to a sentence A, says that □A is true at world i if and only if A is true at all worlds accessible from world i. The corresponding possibility operator ◇ (the "diamond"), when applied to A, asserts that ◇A is true at world i if and only if A is true at some world (at least one) accessible from world i. The exact semantic content of these assertions therefore depends crucially on the nature of the accessibility relation. For example, is world i accessible from itself? The answer to this question characterizes the precise nature of the system, and many such systems exist, answering moral and temporal questions (in a temporal system, the accessibility relation relates states or "instants", and only the future is accessible from a given moment; the necessity operator then corresponds to "at all future moments"). The operators are related to one another by dualities analogous to those relating the existential and universal quantifiers (for example, by analogues of De Morgan's laws): something is necessary if and only if its negation is not possible, i.e. inconsistent. Syntactically, the operators are not quantifiers; they do not bind variables, but govern whole sentences. This gives rise to the problem of referential opacity, i.e. the problem of quantifying over or "into" modal contexts. The operators appear in the grammar as sentential functors; they are called modal operators.
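The truth conditions just described can be stated compactly in standard Kripke-style notation (the symbols M for a model, w and v for worlds, and R for the accessibility relation are the usual ones, supplied here for reference rather than taken from the article):

\[ \mathcal{M}, w \models \Box A \iff \text{for every } v \text{ with } w\,R\,v,\ \mathcal{M}, v \models A \]
\[ \mathcal{M}, w \models \Diamond A \iff \text{there is some } v \text{ with } w\,R\,v \text{ such that } \mathcal{M}, v \models A \]

The duality mentioned above then takes the form of the equivalences

\[ \Box A \equiv \neg\Diamond\neg A, \qquad \Diamond A \equiv \neg\Box\neg A, \]

mirroring the De Morgan-style relationship between the universal and existential quantifiers.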
As mentioned, precursors of modal logic include Aristotle. Medieval scholarly discussions accompanied its development, for example about de re versus de dicto modalities: in modern terms, in the de re modality the modal functor is applied to an open sentence, and the variable is bound by a quantifier whose scope includes the whole intensional subterm.
Modern modal logic began with the work of Clarence Irving Lewis, which was motivated by the wish to establish the notion of strict implication. The possible-worlds approach enabled a more exact study of semantic questions; exact formalization resulted in Kripke semantics (developed by Saul Kripke, Jaakko Hintikka, and Stig Kanger).
Type-theoretical intensional logic
As early as 1951, Alonzo Church had developed an intensional calculus. Its semantic motivations were explained explicitly, though of course without the tools now used to establish the semantics of modal logic in a formal way, because those tools had not yet been invented: Church did not provide formal semantic definitions.
Later, the possible-worlds approach to semantics provided the tools for a comprehensive study of intensional semantics. Richard Montague was able to preserve the most important advantages of Church's intensional calculus in his own system. Unlike its forerunner, Montague grammar was built in a purely semantic way: a simpler treatment became possible, thanks to the new formal tools invented since Church's work.
See also
Extensionality
Frege–Church ontology
Kripke semantics
Temperature paradox
Relevance
Notes
References
Melvin Fitting (2004). First-order intensional logic. Annals of Pure and Applied Logic 127:171–193. The 2003 preprint is used in this article.
Melvin Fitting (2007). Intensional Logic. In the Stanford Encyclopedia of Philosophy.
External links
Non-classical logic
Philosophical logic
Predicate logic | Intensional logic | [
"Mathematics"
] | 1,691 | [
"Basic concepts in set theory",
"Predicate logic",
"Mathematical logic"
] |
1,601,810 | https://en.wikipedia.org/wiki/Adipoyl%20chloride | Adipoyl chloride (or adipoyl dichloride) is the organic compound with the formula (CH2CH2C(O)Cl)2. It is a colorless liquid. It reacts with water to give adipic acid.
It is prepared by treatment of adipic acid with thionyl chloride.
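A balanced equation for this preparation (standard stoichiometry for converting a carboxylic acid to an acyl chloride with thionyl chloride; the equation is written out here and is not quoted from the article):

\[ \mathrm{HO_2C(CH_2)_4CO_2H} + 2\,\mathrm{SOCl_2} \longrightarrow \mathrm{ClOC(CH_2)_4COCl} + 2\,\mathrm{SO_2} + 2\,\mathrm{HCl} \]

The gaseous by-products (sulfur dioxide and hydrogen chloride) escape the reaction mixture, which helps drive the conversion to completion.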
Adipoyl chloride reacts with hexamethylenediamine to form nylon 6,6.
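The polycondensation with hexamethylenediamine can be summarized by the idealized overall equation below (written in terms of the repeating unit; in practice, such as in the classroom "nylon rope trick", a base is usually present to take up the HCl by-product):

\[ n\,\mathrm{ClOC(CH_2)_4COCl} + n\,\mathrm{H_2N(CH_2)_6NH_2} \longrightarrow \bigl[\mathrm{-OC(CH_2)_4CO{-}NH(CH_2)_6NH-}\bigr]_n + 2n\,\mathrm{HCl} \]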
See also
Adipamide
Adiponitrile
References
External links
MSDS Safety data
Acyl chlorides
Monomers | Adipoyl chloride | [
"Chemistry",
"Materials_science"
] | 111 | [
"Monomers",
"Polymer chemistry"
] |
1,601,833 | https://en.wikipedia.org/wiki/Solution%20in%20radicals | A solution in radicals or algebraic solution is an expression of a solution of a polynomial equation that is algebraic, that is, relies only on addition, subtraction, multiplication, division, raising to integer powers, and extraction of th roots (square roots, cube roots, etc.).
A well-known example is the quadratic formula
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}, \]
which expresses the solutions of the quadratic equation
\[ ax^2 + bx + c = 0. \]
There exist algebraic solutions for cubic equations and quartic equations, which are more complicated than the quadratic formula. The Abel–Ruffini theorem, and, more generally, Galois theory, state that some quintic equations, such as
\[ x^5 - x + 1 = 0, \]
do not have any algebraic solution. The same is true for every higher degree. However, for any degree there are some polynomial equations that have algebraic solutions; for example, the equation \(x^{10} = 2\) can be solved as \(x = \pm\sqrt[10]{2}\). The eight other solutions are nonreal complex numbers, which are also algebraic and have the form \(x = \pm r\,\sqrt[10]{2}\), where \(r\) is a fifth root of unity, which can be expressed with two nested square roots. See Solvable quintics for various other examples in degree 5.
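To make the last remark concrete (a standard identity spelled out here, not quoted from the article), a primitive fifth root of unity can indeed be written with two nested square roots:

\[ \zeta_5 = e^{2\pi i/5} = \cos 72^{\circ} + i\sin 72^{\circ} = \frac{\sqrt{5}-1}{4} + i\,\frac{\sqrt{10+2\sqrt{5}}}{4}. \]

Substituting this root and its powers into \(\pm r\,\sqrt[10]{2}\) expresses all ten solutions of \(x^{10} = 2\) using only arithmetic operations and root extractions.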
Évariste Galois introduced a criterion allowing one to decide which equations are solvable in radicals. See Radical extension for the precise formulation of his result.
See also
Radical symbol
Solvable quintics
Solvable sextics
Solvable septics
References
Algebra
Equations | Solution in radicals | [
"Mathematics"
] | 266 | [
"Algebra stubs",
"Equations",
"Mathematical objects",
"Algebra"
] |