Dataset schema (one row per article; observed value ranges from the dataset viewer):
id: int64, 39 to 79M
url: string, lengths 31 to 227
text: string, lengths 6 to 334k
source: string, lengths 1 to 150
categories: list, lengths 1 to 6
token_count: int64, 3 to 71.8k
subcategories: list, lengths 0 to 30
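For working with rows of this shape, a minimal loading sketch using the Hugging Face datasets library follows; the dataset identifier below is a hypothetical placeholder (the real id is not given here), and the field names are taken from the schema above.

```python
from datasets import load_dataset

# "example-org/wiki-categorised" is a hypothetical placeholder id;
# substitute the actual dataset identifier.
ds = load_dataset("example-org/wiki-categorised", split="train")

# Keep only rows tagged with a given top-level category.
chem = ds.filter(lambda row: "Chemistry" in row["categories"])

# Inspect one record via the fields listed in the schema above.
row = chem[0]
print(row["url"], row["token_count"], row["subcategories"])
```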
2,987,576
https://en.wikipedia.org/wiki/Amineptine
Amineptine, formerly sold under the brand name Survector among others, is an atypical antidepressant of the tricyclic antidepressant (TCA) family. It acts as a selective and mixed dopamine reuptake inhibitor and releasing agent, and to a lesser extent as a norepinephrine reuptake inhibitor. Amineptine was developed by the French Society of Medical Research in the 1960s. Introduced in France in 1978 by the pharmaceutical company Servier, amineptine soon gained a reputation for abuse due to the short-lived but pleasant stimulant effect experienced by some patients. After its release into the European market, cases of hepatotoxicity emerged, some serious. This, along with the potential for abuse, led to the suspension of the French marketing authorization for Survector in 1999. Amineptine is illegal in both Germany and the United States.

Medical uses
Amineptine was approved in France for severe clinical depression of endogenous origin in 1978.

Contraindications
Chorea
Hypersensitivity: known hypersensitivity to amineptine, in particular a history of hepatitis after taking the product
MAO inhibitors

Precautions for use
Warnings and precautions before taking amineptine:
Breast feeding
Children less than 15 years of age
General anaesthesia: discontinue the drug 24 to 48 hours before anaesthesia
Official sports/Olympic Games: prohibited substance (Official Journal, 7 March 2000)
Pregnancy (first trimester)

Effects on the fetus
Information in humans is lacking; amineptine is non-teratogenic in rodents.

Side effects
Dermatological
Severe acne due to amineptine was first reported in 1988 by various authors (Grupper, Thioly-Bensoussan, Vexiau, Fiet, Puissant, Gourmel, Teillac, and Levigne, among others), simultaneously in the same issue of Annales de Dermatologie et de Vénéréologie and in the 12 March 1988 issue of The Lancet. A year later, Dr Martin-Ortega and colleagues in Barcelona, Spain reported a case of "acneiform eruption" in a 54-year-old woman whose intake of amineptine was described as "excessive". One year after that, Vexiau and colleagues reported six women who developed severe acne concentrated on the face, back, and thorax, one of whom never admitted to using amineptine; the severity varied with the dosage. Most of them were treated unsuccessfully with isotretinoin (Accutane) for about 18 months; two of the three who discontinued amineptine experienced a reduction in cutaneous symptoms, with the least affected patient going into remission.

Psychiatric
Psychomotor excitation can very rarely occur with this drug. Other psychiatric effects include insomnia, irritability, nervousness, and suicidal ideation, the last seen early in treatment owing to the lifting of psychomotor inhibition.

Abuse and dependence
The risk of addiction is low, but exists nonetheless. Between 1978 and 1988, there were 186 cases of amineptine addiction reported to the French Regional Centres of Pharmacovigilance; an analysis of 155 of those cases found that the patients were predominantly female and that two-thirds had known risk factors for addiction. However, a 1981 study of known opiate addicts and schizophrenia patients found no drug addiction in any of the subjects. In a 1990 study of eight amineptine dependence cases, gradual withdrawal of amineptine was achieved without problems in six people; in the other two, anxiety, psychomotor agitation, and/or bulimia appeared.

Withdrawal
Pharmacodependence is very common with amineptine compared with other antidepressants.
A variety of psychological symptoms can occur during withdrawal from amineptine, such as anxiety and agitation.

Cardiovascular
Very rarely: arterial hypotension, palpitations, vasomotor episodes.

Hepatic
Amineptine can rarely cause hepatitis, of the cytolytic or cholestatic varieties. Amineptine-induced hepatitis, which is sometimes preceded by a rash, is believed to be due to an immunoallergic reaction. It resolves upon discontinuation of the offending drug. The risk of developing it may or may not be genetically determined. Additionally, amineptine is known to rarely elevate transaminases, alkaline phosphatase, and bilirubin. Mixed hepatitis, which is very rare, generally occurs between the 15th and 30th day of treatment with amineptine. It is often preceded by sometimes intense abdominal pain, nausea, vomiting, or a rash; the jaundice is variable. The hepatitis is either of mixed type or with cholestatic predominance. In all reported cases, the outcome was favorable upon discontinuation of the drug. The mechanism is debated (immunoallergic and/or toxic). In Spain, circa 1994, a case associating acute pancreatitis and mixed hepatitis after three weeks of treatment was reported. Lazaros and colleagues at the Western Attica General Hospital in Athens, Greece reported two cases of drug-induced hepatitis after 18 and 15 days of treatment, respectively. One case of cytolytic hepatitis occurred after ingestion of only one tablet.

Gastrointestinal
Acute pancreatitis (very rare); one case associated acute pancreatitis with mixed hepatitis after three weeks of treatment.

Immunological
A case of anaphylactic shock has been reported in a woman who had been taking amineptine.

Pharmacology
Pharmacodynamics
Amineptine inhibits the reuptake of dopamine and, to a much lesser extent, of norepinephrine. In addition, it has been found to induce the release of dopamine. However, amineptine is much less efficacious as a dopamine releasing agent than D-amphetamine, and the drug appears to act predominantly as a dopamine reuptake inhibitor. In contrast to the case of dopamine, amineptine does not induce the release of norepinephrine, and hence acts purely as a norepinephrine reuptake inhibitor. Unlike other TCAs, amineptine interacts very weakly or not at all with serotonin, adrenergic, dopamine, histamine, and muscarinic acetylcholine receptors. The major metabolites of amineptine have activity similar to that of the parent compound, albeit with lower potency. No human data appear to be available for binding or inhibition of the monoamine transporters by amineptine.

Pharmacokinetics
Peak plasma levels of amineptine following a single 100 mg oral dose have been found to range between 277 and 2,215 ng/mL (818–6,544 nM), with a mean of 772 ng/mL (2,281 nM), whereas maximal plasma concentrations of its major metabolite ranged between 144 and 1,068 ng/mL (465–3,452 nM), with a mean of 471 ng/mL (1,522 nM). After a single 200 mg oral dose of amineptine, mean peak plasma levels of amineptine were around 750 to 940 ng/mL (2,216–2,777 nM), while those of its major metabolite were about 750 to 970 ng/mL (2,216–3,135 nM). The time to peak concentrations is about 1 hour for amineptine and 1.5 hours for its major metabolite. The elimination half-life of amineptine is about 0.80 to 1.0 hours and that of its major metabolite is about 1.5 to 2.5 hours. Due to their very short elimination half-lives, amineptine and its major metabolite do not accumulate significantly with repeated administration.
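The ng/mL and nM figures above are related by simple molar-mass arithmetic. The sketch below assumes a molecular weight for amineptine of about 338.5 g/mol, a value consistent with the conversions quoted above rather than stated in this text.

```python
# Convert plasma concentrations from ng/mL to nM given a molar mass.
MW_AMINEPTINE = 338.5  # g/mol (assumed; consistent with the figures above)

def ng_per_ml_to_nM(conc_ng_per_ml: float, mw_g_per_mol: float = MW_AMINEPTINE) -> float:
    # ng/mL equals ug/L; dividing by g/mol gives umol/L (uM); x1000 gives nM.
    return conc_ng_per_ml / mw_g_per_mol * 1000.0

for c in (277.0, 772.0, 2215.0):
    print(f"{c:7.0f} ng/mL  ->  {ng_per_ml_to_nM(c):7.0f} nM")
# Prints ~818, ~2281, ~6544 nM, matching the ranges quoted above.
```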
Society and culture
Brand names
Amineptine has been sold under a variety of brand names including Survector, Maneon, Directim, Neolior, Provector, and Viaspera.

Legal status
In July 2021, it was proposed that amineptine become a Schedule I controlled substance in the United States. This proposal was followed by the placement of amineptine into Schedule I.

Research
Wakefulness
Amineptine shows wakefulness-promoting effects in animals and might be useful in the treatment of narcolepsy.

References

Amines Carboxylic acids Dibenzocycloheptenes Dopamine releasing agents Hepatotoxins Laboratoires Servier Norepinephrine–dopamine reuptake inhibitors Stimulants Tricyclic antidepressants Wakefulness-promoting agents Withdrawn drugs
Amineptine
[ "Chemistry" ]
1,817
[ "Carboxylic acids", "Functional groups", "Drug safety", "Amines", "Bases (chemistry)", "Withdrawn drugs" ]
2,987,828
https://en.wikipedia.org/wiki/Copper%28II%29%20acetate
Copper(II) acetate, also referred to as cupric acetate, is the chemical compound with the formula Cu(OAc)2 where AcO− is acetate (CH3CO2−). The hydrated derivative, Cu2(OAc)4(H2O)2, which contains one molecule of water for each copper atom, is available commercially. Anhydrous copper(II) acetate is a dark green crystalline solid, whereas Cu2(OAc)4(H2O)2 is more bluish-green. Since ancient times, copper acetates of some form have been used as fungicides and green pigments. Today, copper acetates are used as reagents for the synthesis of various inorganic and organic compounds. Copper acetate, like all copper compounds, emits a blue-green glow in a flame.

Structure
Copper acetate hydrate adopts the paddle wheel structure seen also for related Rh(II) and Cr(II) tetraacetates. One oxygen atom on each acetate is bound to one copper atom at 1.97 Å (197 pm). Completing the coordination sphere are two water ligands, with Cu–O distances of 2.20 Å (220 pm). The two copper atoms are separated by only 2.62 Å (262 pm), which is close to the Cu–Cu separation in metallic copper. The two copper centers interact, resulting in a diminished magnetic moment such that at temperatures below 90 K, Cu2(OAc)4(H2O)2 is essentially diamagnetic. Cu2(OAc)4(H2O)2 was a critical step in the development of modern theories of antiferromagnetic exchange coupling, which ascribe its low-temperature diamagnetic behavior to cancellation of the two opposing spins on the adjacent copper atoms.

Synthesis
Copper(II) acetate is prepared industrially by heating copper(II) hydroxide or basic copper(II) carbonate with acetic acid.

Uses in chemical synthesis
Copper(II) acetate has found some use as an oxidizing agent in organic syntheses. In the Eglinton reaction, Cu2(OAc)4 is used to couple terminal alkynes to give a 1,3-diyne:
Cu2(OAc)4 + 2 RC≡CH → 2 CuOAc + RC≡C−C≡CR + 2 HOAc
The reaction proceeds via the intermediacy of copper(I) acetylides, which are then oxidized by the copper(II) acetate, releasing the acetylide radical. A related reaction involving copper acetylides is the synthesis of ynamines, terminal alkynes with amine groups, using Cu2(OAc)4. It has been used for the hydroamination of acrylonitrile. It is also an oxidising agent in Barfoed's test. It reacts with arsenic trioxide to form copper acetoarsenite, a powerful insecticide and fungicide called Paris green.

Related compounds
Heating a mixture of anhydrous copper(II) acetate and copper metal affords copper(I) acetate:
Cu + Cu(OAc)2 → 2 CuOAc
Unlike the copper(II) derivative, copper(I) acetate is colourless and diamagnetic. "Basic copper acetate" is prepared by neutralizing an aqueous solution of copper(II) acetate. The basic acetate is poorly soluble. This material is a component of verdigris, the blue-green substance that forms on copper during long exposures to the atmosphere.

Other uses
A mixture of copper acetate and ammonium chloride is used to chemically color copper with a bronze patina.

Mineralogy
The mineral hoganite is a naturally occurring form of copper(II) acetate. A related mineral, also containing calcium, is paceite. Both are very rare.

References

External links
Copper.org – Other Copper Compounds, 5 Feb. 2006
Infoplease.com – Paris green, 6 Feb. 2006
Verdigris – History and Synthesis, 6 Feb. 2006
Australian National Pollutant Inventory, 8 Aug. 2016
USA NIH National Center for Biotechnology Information, 8 Aug. 2016

Copper(II) compounds Acetates Oxidizing agents Catalysts
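The low-temperature diamagnetism described above is conventionally modelled with the Bleaney–Bowers expression for an exchange-coupled Cu(II) dimer, first worked out for copper acetate itself. The form below is a standard sketch (assuming the H = −2J S₁·S₂ convention, with J < 0 for antiferromagnetic coupling), not a formula quoted from this article.

```latex
% Bleaney--Bowers molar susceptibility of a Cu(II) dimer,
% exchange Hamiltonian H = -2J S_1 . S_2 (J < 0, antiferromagnetic):
\[
  \chi = \frac{2 N g^{2} \mu_{B}^{2}}{k_{B} T}
         \left[\, 3 + \exp\!\left( \frac{-2J}{k_{B} T} \right) \right]^{-1}
\]
% As T -> 0 with J < 0, the exponential diverges and chi -> 0: the spins
% pair into a singlet ground state, consistent with Cu2(OAc)4(H2O)2
% becoming essentially diamagnetic below ~90 K.
```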
Copper(II) acetate
[ "Chemistry" ]
893
[ "Catalysis", "Catalysts", "Redox", "Oxidizing agents", "Chemical kinetics" ]
2,987,843
https://en.wikipedia.org/wiki/Floquet%20theory
Floquet theory is a branch of the theory of ordinary differential equations relating to the class of solutions to periodic linear differential equations of the form
$$\dot{x} = A(t)\,x,$$
with $x(t) \in \mathbb{R}^n$ and $A(t)$ a piecewise continuous periodic function with period $T$, and it describes the stability of solutions.

The main theorem of Floquet theory, Floquet's theorem, due to Gaston Floquet (1883), gives a canonical form for each fundamental matrix solution of this common linear system. It gives a coordinate change $y = Q^{-1}(t)x$ with $Q(t + 2T) = Q(t)$ that transforms the periodic system to a traditional linear system with constant, real coefficients. When applied to physical systems with periodic potentials, such as crystals in condensed matter physics, the result is known as Bloch's theorem.

Note that the solutions of the linear differential equation form a vector space. A matrix $\phi(t)$ is called a fundamental matrix solution if its columns form a basis of the solution set. A matrix $\Phi(t)$ is called a principal fundamental matrix solution if all columns are linearly independent solutions and there exists $t_0$ such that $\Phi(t_0)$ is the identity. A principal fundamental matrix can be constructed from a fundamental matrix using $\Phi(t) = \phi(t)\,\phi^{-1}(t_0)$. The solution of the linear differential equation with the initial condition $x(0) = x_0$ is
$$x(t) = \phi(t)\,\phi^{-1}(0)\,x_0,$$
where $\phi(t)$ is any fundamental matrix solution.

Floquet's theorem
Let $\dot{x} = A(t)x$ be a linear first order differential equation, where $x(t)$ is a column vector of length $n$ and $A(t)$ an $n \times n$ periodic matrix with period $T$ (that is, $A(t + T) = A(t)$ for all real values of $t$). Let $\phi(t)$ be a fundamental matrix solution of this differential equation. Then, for all $t \in \mathbb{R}$,
$$\phi(t + T) = \phi(t)\,\phi^{-1}(0)\,\phi(T).$$
Here $\phi^{-1}(0)\,\phi(T)$ is known as the monodromy matrix. In addition, for each matrix $B$ (possibly complex) such that
$$e^{TB} = \phi^{-1}(0)\,\phi(T),$$
there is a periodic (period $T$) matrix function $t \mapsto P(t)$ such that
$$\phi(t) = P(t)\,e^{tB} \quad\text{for all } t \in \mathbb{R}.$$
Also, there is a real matrix $R$ and a real periodic (period-$2T$) matrix function $t \mapsto Q(t)$ such that
$$\phi(t) = Q(t)\,e^{tR} \quad\text{for all } t \in \mathbb{R}.$$
In the above, $B$, $P$, $Q$ and $R$ are $n \times n$ matrices.

Consequences and applications
This mapping $\phi(t) = Q(t)\,e^{tR}$ gives rise to a time-dependent change of coordinates ($y = Q^{-1}(t)\,x$), under which our original system becomes a linear system with real constant coefficients $\dot{y} = R\,y$. Since $Q(t)$ is continuous and periodic it must be bounded. Thus the stability of the zero solution for $y$ and $x$ is determined by the eigenvalues of $R$. The representation $\phi(t) = P(t)\,e^{tB}$ is called a Floquet normal form for the fundamental matrix $\phi(t)$. The eigenvalues of $e^{TB}$ are called the characteristic multipliers of the system. They are also the eigenvalues of the (linear) Poincaré map $x(t) \mapsto x(t + T)$. A Floquet exponent (sometimes called a characteristic exponent) is a complex $\mu$ such that $e^{\mu T}$ is a characteristic multiplier of the system. Notice that Floquet exponents are not unique, since $e^{(\mu + 2\pi i k/T)T} = e^{\mu T}$, where $k$ is an integer. The real parts of the Floquet exponents are called Lyapunov exponents. The zero solution is asymptotically stable if all Lyapunov exponents are negative, Lyapunov stable if the Lyapunov exponents are nonpositive, and unstable otherwise.

Floquet theory is very important for the study of dynamical systems, such as the Mathieu equation. Floquet theory shows stability in the Hill differential equation (introduced by George William Hill) approximating the motion of the moon as a harmonic oscillator in a periodic gravitational field. Bond softening and bond hardening in intense laser fields can be described in terms of solutions obtained from the Floquet theorem. Dynamics of strongly driven quantum systems are often examined using Floquet theory. In superconducting circuits, the Floquet framework has been leveraged to shed light on the quantum electrodynamics of drive-induced multiqubit interactions.

References
C. Chicone, Ordinary Differential Equations with Applications, Springer-Verlag, New York, 1999.
M.S.P. Eastham, The Spectral Theory of Periodic Differential Equations, Texts in Mathematics, Scottish Academic Press, Edinburgh, 1973.
Translations of Mathematical Monographs, 19, 294 p.
W. Magnus, S. Winkler, Hill's Equation, Dover-Phoenix Editions.
N.W. McLachlan, Theory and Application of Mathieu Functions, New York: Dover, 1964.

External links

Dynamical systems
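Since the monodromy matrix is just a fundamental matrix evaluated over one period, the characteristic multipliers can be computed numerically. Below is a minimal sketch using NumPy and SciPy for the Mathieu equation; the parameter values a = 1.0 and q = 0.2 are arbitrary choices for illustration, not values from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mathieu equation x'' + (a - 2 q cos(2t)) x = 0, written as a first-order
# system x' = v, v' = -(a - 2 q cos(2t)) x; its coefficient matrix has
# period T = pi.
a, q = 1.0, 0.2   # illustrative values (assumed)
T = np.pi

def rhs(t, y):
    x, v = y
    return [v, -(a - 2.0 * q * np.cos(2.0 * t)) * x]

# Build the monodromy matrix by integrating the columns of the principal
# fundamental matrix (identity at t = 0) over one period.
cols = []
for y0 in ([1.0, 0.0], [0.0, 1.0]):
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
    cols.append(sol.y[:, -1])
M = np.column_stack(cols)

multipliers = np.linalg.eigvals(M).astype(complex)  # characteristic multipliers
exponents = np.log(multipliers) / T                 # Floquet exponents (mod 2*pi*i/T)
print("multipliers:", multipliers)
print("Lyapunov exponents (real parts):", exponents.real)
```

The zero solution is stable when no multiplier lies outside the unit circle, matching the eigenvalue criterion described above.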
Floquet theory
[ "Physics", "Mathematics" ]
814
[ "Mathematical objects", "Differential equations", "Equations", "Mechanics", "Dynamical systems" ]
2,987,862
https://en.wikipedia.org/wiki/Global%20Biodiversity%20Information%20Facility
The Global Biodiversity Information Facility (GBIF) is an international organisation that focuses on making scientific data on biodiversity available via the Internet using web services. The data are provided by many institutions from around the world; GBIF's information architecture makes these data accessible and searchable through a single portal. Data available through the GBIF portal are primarily distribution data on plants, animals, fungi, and microbes for the world, and scientific names data.

The mission of the GBIF is to facilitate free and open access to biodiversity data worldwide to underpin sustainable development. Priorities, with an emphasis on promoting participation and working through partners, include mobilising biodiversity data, developing protocols and standards to ensure scientific integrity and interoperability, building an informatics architecture to allow the interlinking of diverse data types from disparate sources, promoting capacity building, and catalysing development of analytical tools for improved decision-making.

GBIF strives to form informatics linkages among digital data resources from across the spectrum of biological organisation, from genes to ecosystems, and to connect these to issues important to science, society and sustainability by using georeferencing and GIS tools. It works in partnership with other international organisations such as the Catalogue of Life partnership, Biodiversity Information Standards, the Consortium for the Barcode of Life (CBOL), the Encyclopedia of Life (EOL), and GEOSS. The biodiversity data available through GBIF has increased by more than 1,150% in the past decade, partially due to the participation of citizen scientists.

From 2002 to 2014, GBIF awarded a prestigious annual global award in the area of biodiversity informatics, the Ebbe Nielsen Prize, valued at €30,000. Since then, the GBIF Secretariat has presented two annual prizes: the GBIF Ebbe Nielsen Challenge and the Young Researchers Award.

See also
ABCD Schema
Atlas of Living Australia (ALA)
Australasian Virtual Herbarium (AVH)
Darwin Core
Global biodiversity
List of electronic Floras (for other online flora databases)

References

External links
Short description of GBIF
GBIF network
GBIF Data publishers

International environmental organizations Biodiversity Ecology organizations Biodiversity databases Online taxonomy databases International organizations based in Denmark
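GBIF's single-portal access is exposed through a public web-service API. The sketch below queries the occurrence search endpoint of GBIF's v1 REST API; the endpoint and its scientificName/limit parameters follow GBIF's published API, and the species name is just an example.

```python
import requests

# Occurrence search against the GBIF v1 web-service API.
API = "https://api.gbif.org/v1/occurrence/search"
params = {"scientificName": "Puma concolor", "limit": 5}

resp = requests.get(API, params=params, timeout=30)
resp.raise_for_status()
data = resp.json()

print("total matching records:", data["count"])
for rec in data["results"]:
    # Each record carries its publishing dataset plus a georeference,
    # reflecting the distribution-data focus described above.
    print(rec.get("datasetName"), rec.get("country"),
          rec.get("decimalLatitude"), rec.get("decimalLongitude"))
```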
Global Biodiversity Information Facility
[ "Biology", "Environmental_science" ]
441
[ "Biodiversity databases", "Environmental science databases", "Biodiversity" ]
2,987,931
https://en.wikipedia.org/wiki/Integrated%20Powerhead%20Demonstrator
The Integrated Powerhead Demonstrator (IPD) was a U.S. Air Force project in the 1990s and early 2000s, run by NASA and the Air Force Research Laboratory (AFRL), to develop a new rocket engine front-end ("powerhead", sometimes also termed a powerpack) that would utilize a full flow staged combustion cycle (FFSC). The prime contractors were Rocketdyne and Aerojet. The long-term design goal was to apply the advantages of FFSC to create a reusable engine with improved life, reliability and performance. The powerhead demonstrator project was to develop a demonstrator design of what could become the front-end for a future engine development project. No subsequent funding was made available by public policymakers, so no full engine design was ever completed. The turbines were also planned to feature hydrostatic bearings instead of traditional ball bearings.

History
On July 19, 2006, Rocketdyne announced that the demonstrator engine front-end had been operated at full capacity. According to NASA, the Integrated Powerhead Demonstrator project was the first of three potential phases of the Integrated High Payoff Rocket Propulsion Technology Program, which was aimed at demonstrating technologies that double the capability of state-of-the-art cryogenic booster engines. The project's goal in 2005 was to develop a full-flow, hydrogen-fueled, staged combustion rocket engine. In 2007, Northrop Grumman announced it had received an AFRL contract to design and test a turbopump for liquid hydrogen propellants that could be used for these engines. In 2013, NASA announced in a press release that the powerhead demo had achieved steady test performance at 100% power for the first time and had achieved 300 seconds of operation across 26 tests. Future engine development work beyond the powerhead demo was never funded by the US government, and neither Rocketdyne, nor later Aerojet Rocketdyne after a 2013 merger, chose to pursue such development with their own or other private funding.

References

External links
(Includes info on tech hurdles and development of IPD.)
(Test firing news with pictures.)
"Integrated Powerhead Demonstration (IPD)." Air Force Research Laboratory, Nov. 2004. Accessed 23 Apr. 2024.
"Integrated Powerhead Demonstration (IPD) 1." YouTube, 29 Dec. 2021.
"Integrated Powerhead Demonstration (IPD) 2." YouTube, 28 Dec. 2021.

Rocket engines using hydrogen propellant Rocketdyne engines Rocket engines using full flow staged combustion cycle Rocket engines of the United States
Integrated Powerhead Demonstrator
[ "Astronomy" ]
528
[ "Rocketry stubs", "Astronomy stubs" ]
2,987,943
https://en.wikipedia.org/wiki/Duffing%20equation
The Duffing equation (or Duffing oscillator), named after Georg Duffing (1861–1944), is a non-linear second-order differential equation used to model certain damped and driven oscillators. The equation is given by
$$\ddot{x} + \delta\dot{x} + \alpha x + \beta x^3 = \gamma\cos(\omega t),$$
where the (unknown) function $x = x(t)$ is the displacement at time $t$, $\dot{x}$ is the first derivative of $x$ with respect to time, i.e. velocity, and $\ddot{x}$ is the second time-derivative of $x$, i.e. acceleration. The numbers $\delta$, $\alpha$, $\beta$, $\gamma$ and $\omega$ are given constants. The equation describes the motion of a damped oscillator with a more complex potential than in simple harmonic motion (which corresponds to the case $\beta = \delta = 0$); in physical terms, it models, for example, an elastic pendulum whose spring's stiffness does not exactly obey Hooke's law. The Duffing equation is an example of a dynamical system that exhibits chaotic behavior. Moreover, the Duffing system presents in the frequency response the jump resonance phenomenon, a sort of frequency hysteresis behaviour.

Parameters
The parameters in the above equation are:
$\delta$ controls the amount of damping,
$\alpha$ controls the linear stiffness,
$\beta$ controls the amount of non-linearity in the restoring force; if $\beta = 0$, the Duffing equation describes a damped and driven simple harmonic oscillator,
$\gamma$ is the amplitude of the periodic driving force; if $\gamma = 0$, the system is without a driving force, and
$\omega$ is the angular frequency of the periodic driving force.

The Duffing equation can be seen as describing the oscillations of a mass attached to a nonlinear spring and a linear damper. The restoring force provided by the nonlinear spring is then $F = -\alpha x - \beta x^3$. When $\alpha > 0$ and $\beta > 0$, the spring is called a hardening spring. Conversely, for $\beta < 0$ it is a softening spring (still with $\alpha > 0$). Consequently, the adjectives hardening and softening are used with respect to the Duffing equation in general, dependent on the values of $\beta$ (and $\alpha$).

The number of parameters in the Duffing equation can be reduced by two through scaling (in accord with the Buckingham π theorem); e.g., the excursion $x$ and time $t$ can be scaled as $\tau = t\sqrt{\alpha}$ and $y = x\alpha/\gamma$, assuming $\alpha$ is positive (other scalings are possible for different ranges of the parameters, or for different emphasis in the problem studied). Then
$$\ddot{y} + 2\eta\,\dot{y} + y + \varepsilon\,y^3 = \cos(\sigma\tau),$$
where $\eta = \frac{\delta}{2\sqrt{\alpha}}$, $\varepsilon = \frac{\beta\gamma^2}{\alpha^3}$, and $\sigma = \frac{\omega}{\sqrt{\alpha}}$. The dots denote differentiation of $y(\tau)$ with respect to $\tau$. This shows that the solutions to the forced and damped Duffing equation can be described in terms of the three parameters ($\varepsilon$, $\eta$, and $\sigma$) and two initial conditions (i.e. for $y(\tau_0)$ and $\dot{y}(\tau_0)$).

Methods of solution
In general, the Duffing equation does not admit an exact symbolic solution. However, many approximate methods work well:
Expansion in a Fourier series may provide an equation of motion to arbitrary precision.
The $\beta x^3$ term, also called the Duffing term, can be approximated as small and the system treated as a perturbed simple harmonic oscillator.
The Frobenius method yields a complex but workable solution.
Any of the various numeric methods such as Euler's method and Runge–Kutta methods can be used.
The homotopy analysis method (HAM) has also been reported for obtaining approximate solutions of the Duffing equation, also for strong nonlinearity.
In the special case of the undamped ($\delta = 0$) and undriven ($\gamma = 0$) Duffing equation, an exact solution can be obtained using Jacobi's elliptic functions.

Boundedness of the solution for the unforced oscillator
Undamped oscillator
Multiplication of the undamped and unforced Duffing equation ($\gamma = \delta = 0$) with $\dot{x}$ gives
$$\dot{x}\left(\ddot{x} + \alpha x + \beta x^3\right) = 0
\;\Longrightarrow\;
\frac{d}{dt}\left[\tfrac{1}{2}\dot{x}^2 + \tfrac{1}{2}\alpha x^2 + \tfrac{1}{4}\beta x^4\right] = 0
\;\Longrightarrow\;
\tfrac{1}{2}\dot{x}^2 + \tfrac{1}{2}\alpha x^2 + \tfrac{1}{4}\beta x^4 = H,$$
with $H$ a constant. The value of $H$ is determined by the initial conditions $x(0)$ and $\dot{x}(0)$. The substitution $y = \dot{x}$ in $H$ shows that the system is Hamiltonian:
$$\dot{x} = +\frac{\partial H}{\partial y}, \qquad \dot{y} = -\frac{\partial H}{\partial x}, \qquad H = \tfrac{1}{2}y^2 + \tfrac{1}{2}\alpha x^2 + \tfrac{1}{4}\beta x^4.$$
When both $\alpha$ and $\beta$ are positive, the solution is bounded:
$$|x| \le \sqrt{2H/\alpha} \qquad\text{and}\qquad |\dot{x}| \le \sqrt{2H},$$
with the Hamiltonian $H$ being positive.
Damped oscillator
Similarly, the damped oscillator converges globally, by the Lyapunov function method, since
$$\frac{dH}{dt} = -\delta\,\dot{x}^2 \le 0$$
for damping $\delta > 0$. Without forcing, the damped Duffing oscillator will end up at (one of) its stable equilibrium point(s). The equilibrium points, stable and unstable, are at $\alpha x + \beta x^3 = 0$. If $\alpha > 0$ the stable equilibrium is at $x = 0$. If $\alpha < 0$ and $\beta > 0$ the stable equilibria are at $x = +\sqrt{-\alpha/\beta}$ and $x = -\sqrt{-\alpha/\beta}$.

Frequency response
The forced Duffing oscillator with cubic nonlinearity is described by the following ordinary differential equation:
$$\ddot{x} + \delta\dot{x} + \alpha x + \beta x^3 = \gamma\cos(\omega t).$$
The frequency response of this oscillator describes the amplitude $z$ of the steady state response of the equation (i.e. $x(t) = z\cos(\omega t + \varphi)$) at a given frequency of excitation $\omega$. For a linear oscillator with $\beta = 0$, the frequency response is also linear. However, for a nonzero cubic coefficient $\beta$, the frequency response becomes nonlinear. Depending on the type of nonlinearity, the Duffing oscillator can show hardening, softening or mixed hardening–softening frequency response. Anyway, using the homotopy analysis method or harmonic balance, one can derive a frequency response equation in the following form:
$$\left[\left(\omega^2 - \alpha - \tfrac{3}{4}\beta z^2\right)^2 + \left(\delta\omega\right)^2\right] z^2 = \gamma^2.$$
For the parameters of the Duffing equation, the above algebraic equation gives the steady state oscillation amplitude $z$ at a given excitation frequency.

Graphically solving for frequency response
We may graphically solve for $z^2$ as the intersection of two curves in the $(z^2, y)$ plane:
$$y = \left(\omega^2 - \alpha - \tfrac{3}{4}\beta z^2\right)^2 \qquad\text{and}\qquad y = \frac{\gamma^2}{z^2} - (\delta\omega)^2.$$
For fixed $\gamma$, $\delta$ and $\omega$, the second curve is a fixed hyperbola in the first quadrant. The first curve is a parabola in $z^2$ with shape $y = \tfrac{9}{16}\beta^2 (z^2)^2$, and apex at location $\left(\tfrac{4(\omega^2 - \alpha)}{3\beta},\, 0\right)$. If we fix $\beta$ and vary $\omega$, then the apex of the parabola moves along the $z^2$-axis. Graphically, then, we see that if $\beta$ is a large positive number, then as $\omega$ varies, the parabola intersects the hyperbola at one point, then three points, then one point again. Similarly we can analyze the case when $\beta$ is a large negative number.

Jumps
For certain ranges of the parameters in the Duffing equation, the frequency response may no longer be a single-valued function of the forcing frequency $\omega$. For a hardening spring oscillator ($\alpha > 0$ and large enough positive $\beta$) the frequency response overhangs to the high-frequency side, and to the low-frequency side for the softening spring oscillator ($\alpha > 0$ and $\beta < 0$). The lower overhanging side is unstable (the dashed-line parts in the figures of the frequency response) and cannot be realized for a sustained time. Consequently, the jump phenomenon shows up: when the angular frequency $\omega$ is slowly increased (with other parameters fixed), the response amplitude drops suddenly at A to B; if the frequency is then slowly decreased, the amplitude jumps up at C to D, thereafter following the upper branch of the frequency response. The jumps A–B and C–D do not coincide, so the system shows hysteresis depending on the frequency sweep direction.

Transition to chaos
The above analysis assumed that the base frequency response dominates (necessary for performing harmonic balance) and that higher frequency responses are negligible. This assumption fails to hold when the forcing is sufficiently strong. Higher order harmonics cannot be neglected, and the dynamics become chaotic. There are different possible transitions to chaos, most commonly by successive period doubling.

Examples
Some typical examples of the time series and phase portraits of the Duffing equation, showing the appearance of subharmonics through period-doubling bifurcation, as well as chaotic behavior, are shown in the figures below.
The examples show the forcing amplitude increasing from one case to the next, with the other parameters and the initial conditions held fixed. The red dots in the phase portraits are at times which are an integer multiple of the period of the forcing, $T = 2\pi/\omega$.

References
Citations
Bibliography

External links
Duffing oscillator on Scholarpedia
MathWorld page

Ordinary differential equations Chaotic maps Nonlinear systems Articles containing video clips
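The period-doubling route to chaos described above is straightforward to reproduce numerically. The sketch below integrates the forced Duffing equation with SciPy and samples the state once per forcing period (a Poincaré section); the parameter values are a commonly used chaotic set from the literature, assumed here rather than taken from this article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forced Duffing oscillator:
#   x'' + delta x' + alpha x + beta x^3 = gamma cos(omega t)
delta, alpha, beta = 0.3, -1.0, 1.0   # double-well potential
gamma, omega = 0.5, 1.2               # assumed chaotic-regime forcing
T = 2.0 * np.pi / omega               # forcing period

def rhs(t, y):
    x, v = y
    return [v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)]

# Integrate past the transient, then sample stroboscopically once per period.
n_skip, n_keep = 200, 2000
t_eval = np.arange(n_skip, n_skip + n_keep) * T
sol = solve_ivp(rhs, (0.0, t_eval[-1]), [1.0, 0.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-12)

poincare = sol.y  # rows are (x, xdot), one column per forcing period
print(poincare.shape)
# For a period-1 orbit all sampled points coincide; after period doubling
# they alternate between 2, 4, ... points; on a chaotic attractor they
# scatter over a fractal set.
```

Sweeping gamma upward while keeping the other parameters fixed reproduces the succession of subharmonics and chaos that the examples above describe.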
Duffing equation
[ "Mathematics" ]
1,522
[ "Functions and mappings", "Mathematical objects", "Nonlinear systems", "Mathematical relations", "Chaotic maps", "Dynamical systems" ]
2,987,957
https://en.wikipedia.org/wiki/Endosulfan
Endosulfan is an organochlorine insecticide and acaricide which acts by blocking the GABA-gated chloride channel of the insect (IRAC group 2A). It became highly controversial due to its acute toxicity, potential for bioaccumulation, and role as an endocrine disruptor. Because of its threats to human health and the environment, a global ban on the manufacture and use of endosulfan was negotiated under the Stockholm Convention in April 2011. The ban took effect in mid-2012, with certain uses exempted for five additional years. More than 80 countries, including the European Union, Australia, New Zealand, several West African nations, the United States, Brazil, and Canada, had already banned it or announced phase-outs by the time the Stockholm Convention ban was agreed upon. It is still used extensively in India and China despite laws against its use. It is also used in a few other countries. It is produced by the Israeli firm Makhteshim Agan and several manufacturers in India and China. On May 13, 2011, the Indian Supreme Court ordered a ban on the production and sale of endosulfan in India, pending further notice.

Uses
Endosulfan has been used in agriculture around the world to control insect pests including whiteflies, aphids, leafhoppers, Colorado potato beetles and cabbage worms. Due to its unique mode of action, it is useful in resistance management; however, as it is not specific, it can negatively impact populations of beneficial insects. It is considered to be moderately toxic to honey bees, and it is less toxic to bees than organophosphate insecticides.

Production
The World Health Organization estimated worldwide annual production to be about 9,000 tonnes (t) in the early 1980s. From 1980 to 1989, worldwide consumption averaged 10,500 tonnes per year, and during the 1990s use increased to 12,800 tonnes per year. Endosulfan is a derivative of hexachlorocyclopentadiene and is chemically similar to aldrin, chlordane, and heptachlor. Specifically, it is produced by the Diels-Alder reaction of hexachlorocyclopentadiene with cis-butene-1,4-diol and subsequent reaction of the adduct with thionyl chloride. Technical endosulfan is a 7:3 mixture of stereoisomers, designated α and β. α- and β-endosulfan are configurational isomers arising from the pyramidal stereochemistry of the tetravalent sulfur. α-Endosulfan is the more thermodynamically stable of the two; thus β-endosulfan irreversibly converts to the α form, although the conversion is slow.

History of commercialization and regulation
Early 1950s: Endosulfan was developed.
1954: Hoechst AG (now Sanofi) won USDA approval for the use of endosulfan in the United States.
2000: Home and garden use in the United States was terminated by agreement with the EPA.
2002: The U.S. Fish and Wildlife Service recommended that endosulfan registration be cancelled, and the EPA determined that endosulfan residues on food and in water pose unacceptable risks. The agency allowed endosulfan to stay on the US market, but imposed restrictions on its agricultural uses.
2007: International steps were taken to restrict the use and trade of endosulfan. It was recommended for inclusion in the Rotterdam Convention on Prior Informed Consent, and the European Union proposed its inclusion in the list of chemicals banned under the Stockholm Convention on Persistent Organic Pollutants. Such inclusion would ban all use and manufacture of endosulfan globally.
Meanwhile, the Canadian government announced that endosulfan was under consideration for phase-out, and Bayer CropScience voluntarily pulled its endosulfan products from the U.S. market but continued to sell the products elsewhere.
2008: In February, environmental, consumer, and farm labor groups including the Natural Resources Defense Council, Organic Consumers Association, and the United Farm Workers called on the U.S. EPA to ban endosulfan. In May, coalitions of scientists, environmental groups, and arctic tribes asked the EPA to cancel endosulfan, and in July a coalition of environmental and workers' groups filed a lawsuit against the EPA challenging its 2002 decision not to ban it. In October, the Review Committee of the Stockholm Convention moved endosulfan along in the procedure for listing under the treaty, while India blocked its addition to the Rotterdam Convention.
2009: The Stockholm Convention's Persistent Organic Pollutants Review Committee (POPRC) agreed that endosulfan is a persistent organic pollutant and that "global action is warranted", setting the stage for a global ban. New Zealand banned endosulfan.
2010: The POPRC nominated endosulfan to be added to the Stockholm Convention at the Conference of Parties (COP) in April 2011, which would result in a global ban. The EPA announced that the registration of endosulfan in the U.S. would be cancelled. Australia banned the use of the chemical.
2011: The Supreme Court of India banned the manufacture, sale, and use of the toxic pesticide endosulfan in India. The apex court said the ban would remain effective for eight weeks, during which an expert committee headed by the DG, ICMR, would give an interim report to the court about the harmful effects of the widely used pesticide.
2011: The Argentine National Service of Agri-Food Health and Quality (SENASA) decided on August 8 that the import of endosulfan into the South American country would be banned from July 1, 2012, and its commercialization and use from July 1, 2013. In the meantime, a reduced quantity could be imported and sold.

Health effects
Endosulfan is alleged by NGOs opposing pesticide usage to be responsible for many fatal pesticide poisoning incidents around the world. Endosulfan is also a xenoestrogen (a synthetic substance that imitates or enhances the effect of estrogens) and it can act as an endocrine disruptor, causing reproductive and developmental damage in both animals and humans. It has also been found to act as an aromatase inhibitor. Whether endosulfan can cause cancer is debated. With regard to consumers' intake of endosulfan from residues on food, the Food and Agriculture Organization of the United Nations has concluded that long-term exposure from food is unlikely to present a public health concern, but short-term exposure can exceed acute reference doses.

Toxicity
Endosulfan is acutely neurotoxic to both insects and mammals, including humans. The US EPA classifies it as Category I: "Highly Acutely Toxic" based on an LD50 value of 30 mg/kg for female rats, while the World Health Organization classifies it as Class II "Moderately Hazardous" based on a rat LD50 of 80 mg/kg. It is a GABA-gated chloride channel antagonist and a Ca2+, Mg2+ ATPase inhibitor; both targets are involved in the transfer of nerve impulses. Symptoms of acute poisoning include hyperactivity, tremors, convulsions, lack of coordination, staggering, difficulty breathing, nausea and vomiting, diarrhea, and, in severe cases, unconsciousness.
Doses as low as 35 mg/kg have been documented to cause death in humans, and many cases of sublethal poisoning have resulted in permanent brain damage. Farm workers with chronic endosulfan exposure are at risk of rashes and skin irritation. The EPA's acute reference dose for dietary exposure to endosulfan is 0.015 mg/kg for adults and 0.0015 mg/kg for children. For chronic dietary exposure, the EPA reference doses are 0.006 mg/(kg·day) and 0.0006 mg/(kg·day) for adults and children, respectively.

Endocrine disruption
Theo Colborn, an expert on endocrine disruption, lists endosulfan as a known endocrine disruptor, and both the EPA and the Agency for Toxic Substances and Disease Registry consider endosulfan to be a potential endocrine disruptor. Numerous in vitro studies have documented its potential to disrupt hormones, and animal studies have demonstrated its reproductive and developmental toxicity, especially among males. A number of studies have documented that it acts as an antiandrogen in animals. Endosulfan has been shown to affect crustacean molt cycles, important endocrine-controlled physiological processes essential for crustacean growth and reproduction. Environmentally relevant doses of endosulfan equal to the EPA's safe dose of 0.006 mg/kg/day have been found to affect gene expression in female rats similarly to the effects of estrogen. It is not known whether endosulfan is a human teratogen (an agent that causes birth defects), though it has significant teratogenic effects in laboratory rats. A 2009 assessment concluded that endocrine disruption in rats occurs only at endosulfan doses that cause neurotoxicity.

Reproductive and developmental effects
Some studies have documented that endosulfan can also affect human development. Researchers studying children from several villages in Kasargod District, Kerala, India, have linked endosulfan exposure to delays in sexual maturity among boys. Endosulfan was the only pesticide applied to cashew plantations in the villages for 20 years and had contaminated the village environment. The researchers compared the villagers to a control group of boys from a demographically similar village that lacked a history of endosulfan pollution. Relative to the control group, the exposed boys had high levels of endosulfan in their bodies, lower levels of testosterone, and delays in reaching sexual maturity. Birth defects of the male reproductive system, including cryptorchidism, were also more prevalent in the study group. The researchers concluded, "our study results suggest that endosulfan exposure in male children may delay sexual maturity and interfere with sex hormone synthesis." Increased incidences of cryptorchidism have been observed in other studies of endosulfan-exposed populations. A 2007 study by the California Department of Public Health found that women who lived near farm fields sprayed with endosulfan and the related organochloride pesticide dicofol during the first eight weeks of pregnancy were several times more likely to give birth to children with autism. However, a 2009 assessment concluded that the epidemiology and rodent studies that suggest male reproductive and autism effects are open to other interpretations, and that developmental or reproductive toxicity in rats occurs only at endosulfan doses that cause neurotoxicity.

Cancer
Endosulfan is not listed as a known, probable, or possible carcinogen by the EPA, IARC, or other agencies.
No epidemiological studies link exposure to endosulfan specifically to cancer in humans, but in vitro assays have shown that endosulfan can promote proliferation of human breast cancer cells. Evidence of carcinogenicity in animals is mixed. In a 2016 study by the Department of Biochemistry, Indian Institute of Science, Bangalore, published in Carcinogenesis, endosulfan was found to induce reactive oxygen species (ROS) in a concentration- and time-dependent manner, leading to double-stranded breaks in DNA, and was also found to favour subsequent erroneous DNA repair.

Environmental fate
Endosulfan is a ubiquitous environmental contaminant. The chemical is semivolatile and persistent to degradation processes in the environment. Endosulfan is subject to long-range atmospheric transport, i.e. it can travel long distances from where it is used. Thus, it occurs in many environmental compartments. For example, a 2008 report by the National Park Service found that endosulfan commonly contaminates air, water, plants, and fish of national parks in the US. Most of these parks are far from areas where endosulfan is used. Endosulfan has been found in remote locations such as the Arctic Ocean, as well as in the Antarctic atmosphere. The pesticide has also been detected in dust from the Sahara Desert collected in the Caribbean after being blown across the Atlantic Ocean. The compound has been shown to be one of the most abundant organochlorine pesticides in the global atmosphere. The compound breaks down into endosulfan sulfate, endosulfan diol, and endosulfan furan, all of which have structures similar to the parent compound and, according to the EPA, "are also of toxicological concern...The estimated half-lives for the combined toxic residues (endosulfan plus endosulfan sulfate) [range] from roughly 9 months to 6 years." In soils, endosulfan sulfate is often the dominant compound. The EPA concluded, "[b]ased on environmental fate laboratory studies, terrestrial field dissipation studies, available models, monitoring studies, and published literature, it can be concluded that endosulfan is a very persistent chemical which may stay in the environment for lengthy periods of time, particularly in acid media." The EPA also concluded, "[e]ndosulfan has relatively high potential to bioaccumulate in fish." It is also toxic to amphibians; low levels have been found to kill tadpoles. In 2009, the committee of scientific experts of the Stockholm Convention concluded, "endosulfan is likely, as a result of long range environmental transport, to lead to significant adverse human health and environmental effects such that global action is warranted." In May 2011, the Stockholm Convention committee approved the recommendation for elimination of production and use of endosulfan and its isomers worldwide, subject to certain exemptions. Overall, this will lead to its elimination from the global markets.

Status by region
India
Although it is classified as a yellow label (highly toxic) pesticide by the Central Insecticides Board, India is one of the largest producers and the largest consumer of endosulfan in the world. Of the total volume manufactured in India, three companies (Excel Crop Care, Hindustan Insecticides Ltd, and Coromandal Fertilizers) produce 4,500 tonnes annually for domestic use and another 4,000 tonnes for export. Endosulfan is widely used on most of the plantation crops in India.
The toxicity of endosulfan and the health issues attributed to its bioaccumulation came to media attention when health problems in the Kasargod District (the endosulfan tragedy in Kerala) were publicised. This inspired protests, and the pesticide was banned in Kerala as early as 2001, following a report by the National Institute of Occupational Health. At the Stockholm Convention on Persistent Organic Pollutants in 2011, when an international consensus arose for a global ban of the pesticide, India opposed the move due to pressure from the endosulfan-manufacturing companies. This inflamed the protests, and while India maintained its stance, the global conference decided on a global ban, for which India asked a remission of 10 years. Later, on a petition filed in the Supreme Court of India, the production, storage, sale and use of the pesticide were temporarily banned on 13 May 2011, and later permanently by the end of 2011.

The Karnataka government also banned the use of endosulfan with immediate effect. Briefing presspersons after the State Cabinet meeting, Minister for Higher Education V.S. Acharya said the Cabinet had discussed the harmful effects of endosulfan on the health of farmers and people living in rural areas. The government would invoke the provisions of the Insecticides Act, 1968 (a Central act) and write a letter to the Union Government about the ban. Minister for Energy, and Food and Civil Supplies Shobha Karandlaje, who had been spearheading a movement seeking a ban on endosulfan, said, "I am grateful to Chief Minister B.S. Yeddyurappa and members of the Cabinet for approving the ban."

Rajendra Singh Rana wrote a letter to Prime Minister Manmohan Singh demanding the withdrawal of the National Institute of Occupational Health (NIOH) study on endosulfan, titled "Report Of The Investigation Of Unusual Illness", concerning illness allegedly produced by endosulfan exposure in Padre village of Kasargod district in north Kerala. In his statement Mr. Rana said, "The NIOH report is flawed. I'm in complete agreement with what the workers have to say on this. In fact, I have already made representation to the Prime Minister and concerned Union Ministers of health and environment demanding immediate withdrawal of the report," as reported by The Economic Times and Outlook India. Mrs. Vibhavari Dave, a local leader and Member of Legislative Assembly (MLA) from Bhavnagar, Gujarat, voiced her concerns about the impact of a ban on endosulfan on the families and workers of Bhavnagar. She was part of the delegation, with Bhavnagar MP Rajendra Singh Rana, which submitted a memorandum to the district collector's office seeking withdrawal of the NIOH report that called for a ban on endosulfan.

The Pollution Control Board of the Government of Kerala prohibited the use of endosulfan in the state of Kerala on 10 November 2010. On February 18, 2011, the Karnataka government followed suit and suspended the use of endosulfan for a period of 60 days in the state. Indian Union Minister of Agriculture Sharad Pawar ruled out implementing a similar ban at the national level, despite the fact that endosulfan had been banned in 63 countries, including the European Union, Australia, and New Zealand.

The Government of Gujarat initiated a study in response to the workers' rally in Bhavnagar and representations made by Sishuvihar, an NGO based in Ahmadabad. The committee constituted for the study also included a former Deputy Director of NIOH, Ahmadabad.
The committee noted that the WHO, FAO, IARC and US EPA have indicated that endosulfan is not carcinogenic, not teratogenic, not mutagenic and not genotoxic. The highlight of the report is a farmer-exposure study based on analysis of farmers' blood for residues of endosulfan, which found no residues; this corroborates the absence of residues in worker-exposure studies.

The Supreme Court passed an interim order on May 13, 2011, in a writ petition filed by the Democratic Youth Federation of India (DYFI), a youth wing of the Communist Party of India (Marxist), against the backdrop of the incidents reported in Kasargode, Kerala, and banned the production, distribution and use of endosulfan in India because the pesticide has debilitating effects on humans and the environment. The Centre for Science and Environment (CSE) welcomed this order and called it a "resounding defeat" for the pesticide industry, which had been promoting this deadly toxin. A 2001 study by CSE had linked the aerial spraying of the pesticide with growing health disorders in Kasaragode. However, some scientists have called this study flawed. KM Sreekumar of the Padannakkad College of Agriculture in Kasargod and Prathapan KD of the Kerala Agricultural University claimed in a paper that the extensive spread of diseases in the area cannot be solely attributed to the use of endosulfan, and criticised the CSE for inflating the level of endosulfan reported in the blood.

In July 2012, the Government asked the Supreme Court to allow use of the pesticide in all states except Kerala and Karnataka, as these states were ready to use it for pest control, but the court did not consider this request. India will phase out all endosulfan use by 2017. On January 10, 2017, the Supreme Court ordered the state governments to release the remaining undisbursed compensation (five lakh rupees each) to all affected persons within three months.

KM Sreekumar and Prathapan KD (2013) of Kerala Agricultural University critically examined the epidemiological studies on health conducted by the Calicut Medical College, questioning with data the research design, health parameters, pesticide residue analysis, inconsistencies in the results, and conclusions of the study. A study by Embrandiri et al. was also examined. They criticised the CMC researchers for bringing out two different reports (one 15 pages, the other 55 pages) on the same subject, and for what they described as opportunistic use of scientific claims contrary to research ethics, and they narrated the adverse impact on the life of the people of Kasaragod of what they characterised as baseless propaganda about health effects caused by endosulfan. Sreekumar and Prathapan (2021) reviewed the literature on the toxicology of endosulfan, including the assessments of various pesticide-regulating agencies worldwide, and found that doses of endosulfan recommended for agricultural purposes did not cause any public health issue anywhere in the world. Their statistical analysis of the medical camp data and the primary data of the 2015 Kerala Disability Census did not indicate a higher prevalence of any of the health problems in the endosulfan-sprayed areas adjoining the cashew estates owned by the Plantation Corporation of Kerala, compared with unsprayed areas in the same Grama Panchayath in Kasaragod and elsewhere in Kerala.

New Zealand
Endosulfan was banned in New Zealand by the Environmental Risk Management Authority effective January 2009, after a concerted campaign by environmental groups and the Green Party.
Philippines
A shipment of about 10 tonnes of endosulfan was illegally stowed on the ill-fated MV Princess of the Stars, a ferry that sank off the waters of Romblon (Sibuyan Island), Philippines, during a storm in June 2008. Search, rescue, and salvage efforts were suspended when the endosulfan shipment was discovered, and blood samples from divers at the scene were sent to Malaysia for analysis. The Department of Health of the Philippines temporarily banned the consumption of fish caught in the area. Endosulfan is classified as a "Severe Marine Pollutant" by the International Maritime Dangerous Goods Code.

United States
In the United States, endosulfan is only registered for agricultural use, and these uses are being phased out. It has been used extensively on cotton, potatoes, tomatoes, and apples, according to the EPA. The EPA estimated substantial annual use from 1987 to 1997. The US exported endosulfan from 2001 to 2003, mostly to Latin America, but production and export have since stopped. In California, endosulfan contamination from the San Joaquin Valley has been implicated in the extirpation of the mountain yellow-legged frog from parts of the nearby Sierra Nevada. In Florida, levels of contamination in the Everglades and Biscayne Bay are high enough to pose a threat to some aquatic organisms. In 2007, the EPA announced it was re-reviewing the safety of endosulfan. The following year, the Pesticide Action Network and NRDC petitioned the EPA to ban endosulfan, and a coalition of environmental and labor groups sued the EPA seeking to overturn its 2002 decision not to ban it. In June 2010, the EPA announced it was negotiating a phaseout of all uses with the sole US manufacturer, Makhteshim Agan, and a complete ban on the compound. An official statement by Makhteshim Agan of North America (MANA) states, "From a scientific standpoint, MANA continues to disagree fundamentally with EPA's conclusions regarding endosulfan and believes that key uses are still eligible for re-registration." The statement adds, "However, given the fact that the endosulfan market is quite small and the cost of developing and submitting additional data high, we have decided to voluntarily negotiate an agreement with EPA that provides growers with an adequate time frame to find alternatives for the damaging insect pests currently controlled by endosulfan."

Australia
Australia banned endosulfan on October 12, 2010, with a two-year phase-out for stock of endosulfan-containing products. Australia had, in 2008, announced that endosulfan would not be banned. Citing New Zealand's ban, the Australian Greens called for "zero tolerance" of endosulfan residue on food.

Taiwan
US apples with endosulfan are now allowed to be exported to Taiwan, although the ROC government denied any US pressure on it.

Brazil
Brazil decreed a total ban on the substance from July 31, 2013, with imports of the product forbidden from July 31, 2011, the date from which national production and use began to be phased out gradually.

References

External links
CDC - NIOSH Pocket Guide to Chemical Hazards
2009 Environmental Justice Foundation report detailing impacts of endosulfan, highlighting why it should be banned globally
Resources on Endosulfan, India Environment Portal
Levels of endosulfan residues on food in the U.S.
Endosulfan Victims in Kerala
Protect Endosulfan Network: information about endosulfan from the Protect Endosulfan Network
State of endosulfan, Down To Earth
Interim report on endosulfan submitted by the expert committee to the Supreme Court of India, Aug 4, 2011
Weeping wombs of Kasaragod, Tehelka Magazine, Vol 8, Issue 18, dated 07 May 2011

Aromatase inhibitors Endocrine disruptors Convulsants Cyclopentenes Neurotoxins Nonsteroidal antiandrogens Organochloride insecticides Organosulfites Teratogens Xenoestrogens Persistent organic pollutants under the Stockholm Convention
Endosulfan
[ "Chemistry" ]
5,326
[ "Persistent organic pollutants under the Stockholm Convention", "Endocrine disruptors", "Neurochemistry", "Teratogens", "Neurotoxins" ]
2,987,970
https://en.wikipedia.org/wiki/List%20of%20Apple%20II%20application%20software
This is a list of Apple II applications including utilities and development tools. There is a separate List of Apple II games.

0–9
3D Art Graphics - 3D computer graphics software, a set of 3D computer graphics effects, written by Kazumasa Mitazawa and released in June 1978

A
A2Command - Norton Commander style file manager
ADTPro - telecom
Apple Writer - word processor
AppleWorks - integrated word processor, spreadsheet, and database suite (II & GS)
ASCII Express - telecom

B
Bank Street Writer - word processor

C
CatFur - file transfer / chat software for the APPLE-CAT modem
Cattlecar Galactica - Super Hi-Res Chess in its later, expanded version
Contiki - 8-bit text web browser
Copy II+ - copy and disk utilities
Crossword Magic - given clues and answers, automatically arranges the answers into a crossword grid

D
Dalton's Disk Disintegrator - disk archiver
Davex - Unix-type shell
Dazzle Draw - bitmap graphics editor
Design Your Own Home - home design (GS)
Disk Muncher - disk copy
Diversi Copy - disk copy (GS)
DOS.MASTER - DOS 3.3 -> ProDOS utility

E
Edisoft - text editor
EasyMailer
EasyWriter

F
Fantavision - vector graphics animation package

G
GEOS - integrated office suite
GNO/ME - Unix-type shell (GS)
GraphicEdge - business graphics for AppleWorks spreadsheets (II & GS & Mac)
Great American Probability Machine - first full-screen Apple II animations

L
Locksmith - copy and disk utilities
Logo - easy educational graphic programming language

M
Magic Window - one of the most popular Apple II word processors, by Artsci
Merlin 8 & 16 - assembler (II & GS)
Micro-DYNAMO - simulation software to build system dynamics models
MouseWrite and MouseWrite II - first mouse-based word processor for the Apple II (II & GS)

O
Omnis I, II, and III - database/file manager (II & GS)
Orbit, Orbit II and Orbit II Plus - satellite simulation software (II & GS), marketed by Paul F. Kisak and KKI
ORCA - programming language suite (II & GS)

P
Point2Point - computer-to-computer communications program for chat and file transmission (II)
PrintShop - sign, banner, and card maker (II & GS)
ProSel - disk and file utilities (II & GS)
ProTERM - telecom program and text editor
PublishIT - desktop publishing (versions 1–4)

R
Rendezvous - shuttle orbital simulation game

S
ShrinkIt - disk and file compressor and archiver (II & GS)
Spectrum Internet Suite - Internet tools and web browser (GS)
Super Hi-Res Chess - early game aimed at programmers and "power users"
SynthLAB - music composing software

T
TellStar - astronomy
Twilight II - Apple IIGS screensaver (GS)

V
VisiCalc - spreadsheet

W
Word Juggler - word processor
WordPerfect - word processor
WordStar - word processor

Z
Z-Link - telecom
Zardax - word processor
ZBASIC - language - Zedcor Systems

References

Apple II application software
List of Apple II application software
[ "Technology" ]
642
[ "Computing-related lists", "Lists of software" ]
2,988,093
https://en.wikipedia.org/wiki/Kodaira%20vanishing%20theorem
In mathematics, the Kodaira vanishing theorem is a basic result of complex manifold theory and complex algebraic geometry, describing general conditions under which sheaf cohomology groups with indices q > 0 are automatically zero. The implication for the group with index q = 0 is usually that its dimension (the number of independent global sections) coincides with a holomorphic Euler characteristic that can be computed using the Hirzebruch–Riemann–Roch theorem.

The complex analytic case
The statement of Kunihiko Kodaira's result is that if M is a compact Kähler manifold of complex dimension n, L any holomorphic line bundle on M that is positive, and $K_M$ is the canonical line bundle, then
$$H^q(M, K_M \otimes L) = 0$$
for q > 0. Here $K_M \otimes L$ stands for the tensor product of line bundles. By means of Serre duality, one also obtains the vanishing of $H^q(M, L^{-1})$ for q < n. There is a generalisation, the Kodaira–Nakano vanishing theorem, in which $K_M \otimes L \cong \Omega^n(L)$, where $\Omega^n(L)$ denotes the sheaf of holomorphic (n,0)-forms on M with values in L, is replaced by $\Omega^r(L)$, the sheaf of holomorphic (r,0)-forms with values in L. Then the cohomology group $H^q(M, \Omega^r(L))$ vanishes whenever q + r > n.

The algebraic case
The Kodaira vanishing theorem can be formulated within the language of algebraic geometry without any reference to transcendental methods such as Kähler metrics. Positivity of the line bundle L translates into the corresponding invertible sheaf being ample (i.e., some tensor power gives a projective embedding). The algebraic Kodaira–Akizuki–Nakano vanishing theorem is the following statement: if k is a field of characteristic zero, X is a smooth and projective k-scheme of dimension d, and L is an ample invertible sheaf on X, then
$$H^q(X, L \otimes \Omega^p_{X/k}) = 0 \quad\text{for } p + q > d,$$
where the $\Omega^p$ denote the sheaves of relative (algebraic) differential forms (see Kähler differential). Michel Raynaud showed that this result does not always hold over fields of characteristic p > 0, and that in particular it fails for Raynaud surfaces. Later authors gave counterexamples for singular varieties with non-log canonical singularities, as well as elementary counterexamples inspired by proper homogeneous spaces with non-reduced stabilizers.

Until 1987 the only known proof in characteristic zero was based on the complex analytic proof and the GAGA comparison theorems. However, in 1987 Pierre Deligne and Luc Illusie gave a purely algebraic proof of the vanishing theorem. Their proof is based on showing that the Hodge–de Rham spectral sequence for algebraic de Rham cohomology degenerates in degree 1. This is shown by lifting a corresponding more specific result from characteristic p > 0; the positive-characteristic result does not hold without limitations, but it can be lifted to provide the full result.

Consequences and applications
Historically, the Kodaira embedding theorem was derived with the help of the vanishing theorem. With application of Serre duality, the vanishing of various sheaf cohomology groups (usually related to the canonical line bundle) of curves and surfaces helps with the classification of complex manifolds, e.g. the Enriques–Kodaira classification.

See also
Kawamata–Viehweg vanishing theorem
Mumford vanishing theorem
Ramanujam vanishing theorem

References
Phillip Griffiths and Joseph Harris, Principles of Algebraic Geometry

Theorems in complex geometry Topological methods of algebraic geometry Theorems in algebraic geometry
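As a concrete instance of the vanishing statement (a standard textbook consequence, sketched here for illustration rather than quoted from the article), take X a smooth projective curve of genus g, where a line bundle is ample exactly when its degree is positive:

```latex
% Kodaira vanishing on a curve C of genus g (dimension d = 1):
% the canonical bundle K_C has degree 2g - 2, and L ample means deg L > 0.
\[
  H^{1}\!\left( C,\; K_{C} \otimes L \right) = 0
  \qquad \text{whenever } \deg L > 0 ,
\]
% equivalently H^1(C, M) = 0 for any line bundle M with deg M > 2g - 2,
% so Riemann--Roch computes the global sections exactly:
\[
  h^{0}(C, M) = \deg M - g + 1 .
\]
```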
Kodaira vanishing theorem
[ "Mathematics" ]
737
[ "Theorems in algebraic geometry", "Theorems in complex geometry", "Theorems in geometry" ]
2,988,185
https://en.wikipedia.org/wiki/Philopatry
Philopatry is the tendency of an organism to stay in or habitually return to a particular area. The causes of philopatry are numerous, but natal philopatry, where animals return to their birthplace to breed, may be the most common. The term derives from the Greek roots philo, "liking, loving" and patra, "fatherland", although in recent years the term has been applied to more than just the animal's birthplace. Recent usage refers to animals returning to the same area to breed despite not being born there, and to migratory species that demonstrate site fidelity: reusing stopovers, staging points, and wintering grounds. Known reasons for organisms to be philopatric include mating (reproduction), survival, migration, parental care, and access to resources. In most species of animals, individuals benefit from living in groups because, depending on the species, solitary individuals are more vulnerable to predation and more likely to have difficulty finding resources and food. Living in groups therefore increases an individual's chances of survival, which in turn supports finding resources and reproducing. Again depending on the species, returning to the birthplace, where conspecifics already occupy that territory, is often the more favorable option; the birthplace serves as a territory to which animals return for feeding and refuge, as with fish on a coral reef. In an animal behavior study conducted by Paul Greenwood, female mammals were found overall to be more likely philopatric, while male mammals are more likely to disperse; male birds are more likely to be philopatric, while females are more likely to disperse. Philopatry will favor the evolution of cooperative traits because the direction of the sex bias in dispersal follows from the particular mating system.

Breeding-site philopatry
One type of philopatry is breeding philopatry, or breeding-site fidelity, which involves an individual, pair, or colony returning to the same location to breed, year after year. An animal may live and reproduce in that area; although animals can reproduce elsewhere, breeding in the birth area can be associated with a longer lifespan. Among animals that are largely sedentary, breeding-site philopatry is common. It is advantageous to reuse a breeding site, as there may be territorial competition outside of the individual's home range, and since the area evidently meets the requirements of breeding. Such advantages are compounded among species that invest heavily in the construction of a nest or associated courtship area. For example, the megapodes (large, ground-dwelling birds such as the Australian malleefowl, Leipoa ocellata) construct a large mound of vegetation and soil or sand to lay their eggs in. Megapodes often reuse the same mound for many years, only abandoning it when it is damaged beyond repair, or due to disturbance. Nest fidelity is highly beneficial as reproducing is time- and energy-consuming (malleefowl will tend a mound for five to six months per year). In colonial seabirds, it has been shown that nest fidelity depends on multi-scale information, including the breeding success of the focal breeding pair, the average breeding success of the rest of the colony, and the interaction of these two scales. Breeding fidelity is also well documented among species that migrate or disperse after reaching maturity. Birds, in particular, that disperse as fledglings will take advantage of exceptional navigational skills to return to a previous site. 
Philopatric individuals exhibit learning behaviour, and do not return to a location in following years if a breeding attempt is unsuccessful. The evolutionary benefits of such learning are evident: individuals that risk searching for a better site will have higher fitness than those that persist with a poor site. Philopatry is not homogeneous within a species, with individuals far more likely to exhibit philopatry if the breeding habitat is isolated. Similarly, non-migratory populations are more likely to be philopatric than those that migrate. In species that exhibit lifelong monogamous pair bonds, even outside of the breeding season, there is no bias in the sex that is philopatric. However, among polygynous species that disperse (including those that find only a single mate per breeding season), there is a much higher rate of breeding-site philopatry in males than females among birds, and the opposite bias among mammals. Many possible explanations for this sex bias have been posited, with the earliest accepted hypothesis attributing the bias to intrasexual competition and territory choice. The most widely accepted hypothesis is that proposed by Greenwood (1980). Among birds, males invest highly in protecting resources – a territory – against other males. Over consecutive seasons, a male that returns to the same territory has higher fitness than one that is not philopatric. Females are free to disperse and assess males. Conversely, in mammals, the predominant mating system is one of matrilineal social organisation. Males generally invest little in the raising of offspring, and compete with each other for mates rather than resources. Thus, dispersing can result in reproductive enhancement, as greater access to females is available. On the other hand, the cost of dispersal to females is high, and thus they are philopatric. This hypothesis also applies to natal philopatry, but is primarily concerned with breeding-site fidelity. A more recent hypothesis builds on Greenwood's findings, suggesting that parental influence may play a large role. Because birds lay eggs, adult females are at risk of being cuckolded by their daughters, and thus would drive them out. On the other hand, young male mammals pose a threat to their dominant father, and so are driven to disperse while young.

Natal philopatry
This page discusses the evolutionary reasons for philopatry. For the mechanisms of philopatry, see Natal homing. Natal philopatry commonly refers to the return to the area the animal was born in, or to animals remaining in their natal territory. It is a form of breeding-site philopatry. The debate over the evolutionary causes remains unsettled. The outcomes of natal philopatry may be speciation and, in cases of non-dispersing animals, cooperative breeding. Natal philopatry is the most common form of philopatry in females because it decreases competition for mating and increases the rate of reproduction and the survival rate of offspring. Natal philopatry also leads to a kin-structured population, in which individuals within the population are more genetically related to one another than to individuals elsewhere in the species. This can also lead to inbreeding and a higher rate of natural and sexual selection within a population.

Evolutionary causes of philopatry
The exact causes for the evolution of natal philopatry are unknown. Two major hypotheses have been proposed. Shields (1982) suggested that philopatry was a way of ensuring inbreeding, in a hypothesis known as the optimal-inbreeding hypothesis. 
He argued that, since philopatry leads to the concentration of related individuals in their birth areas, and thus reduced genetic diversity, there must be some advantage to inbreeding – otherwise the process would have been evolutionarily detrimental and would not be so prevalent. The major beneficial outcome under this hypothesis is the protection of a local gene complex that is finely adapted to the local environment. Another proposed benefit is the reduction of the cost of meiosis and recombination events. Under this hypothesis, non-philopatric individuals would be maladapted, and over multi-generational time philopatry within a species could become fixed. Evidence for the optimal-inbreeding hypothesis is found in outbreeding depression. Outbreeding depression involves reduced fitness as a result of random mating, which occurs due to the breakdown of coadapted gene complexes by combining alleles that do not cross well with those from a different subpopulation. However, it is important to note that outbreeding depression becomes more detrimental the longer (temporally) that subpopulations have been separated, and that this hypothesis does not provide an initial mechanism for the evolution of natal philopatry. A second hypothesis explains the evolution of natal philopatry as a method of reducing the high costs of dispersal among offspring. A review of records of natal philopatry among passerine birds found that migrant species showed significantly less site fidelity than sedentary birds. Among migratory species, the cost of dispersal is paid either way. If the optimal-inbreeding hypothesis were correct, the benefits of inbreeding should result in philopatry among all species. Inbreeding depression is a phenomenon whereby deleterious alleles become fixed more easily within an inbreeding population. Inbreeding depression is demonstrably costly and is accepted by most scientists as a greater cost than that of outbreeding depression. Within a species, there has also been found to be variation in rates of philopatry, with migratory populations exhibiting low levels of philopatry – further suggesting that the ecological cost of dispersal, rather than genetic benefits of either inbreeding or outbreeding, is the driver of natal philopatry. A number of other hypotheses exist. One is that philopatry is a method, in migratory species, of ensuring that the sexes interact in breeding areas, and that breeding actually occurs. A second is that philopatry provides a much higher chance of breeding success. Strict habitat requirements – whether due to a precisely adapted genome or not – mean that individuals that return to a site are more familiar with it, and may have more success in either defending it or locating mates. This hypothesis does not specify whether philopatry is due to an innate behaviour in each individual, or to learning; however, it has been shown that, in most species, older individuals show higher site fidelity. Neither of these hypotheses is as widely accepted as the optimal-inbreeding or dispersal hypotheses, but their existence indicates that the evolutionary causes of natal philopatry have still not been conclusively demonstrated.

Consequences of philopatry
Speciation
A major outcome of multi-generational natal philopatry is genetic divergence and, ultimately, speciation. Without genetic exchange, geographically and reproductively isolated populations may undergo genetic drift. Such speciation is most evident on islands. 
For mobile island-breeding animals, finding a new breeding location may be beyond their means. In combination with a small population, as may occur due to recent colonisation or simply restricted space, genetic drift can occur on shorter timescales than is observable in mainland species. The high levels of endemism on islands have been attributed to these factors. Substantial evidence for speciation due to natal philopatry has been gathered in studies of island-nesting albatross. Genetic difference is most often detected in microsatellites in mitochondrial DNA. Animals that spend much of their time at sea, but which return to land to breed, exhibit high levels of natal philopatry and subsequent genetic drift between populations. Many species of albatross do not breed until 6–16 years of age. Between leaving their birth island and their return, they fly hundreds of thousands of kilometres. High levels of natal philopatry have been demonstrated via mark-recapture data. For example, more than 99% of Laysan albatross (Phoebastria immutabilis) in a study returned to exactly the same nest in consecutive years. Such site-specificity can lead to speciation, and the earliest stages of this process have been observed. The shy albatross (Thalassarche [cauta] cauta) was shown to have genetic differences in its microsatellites between three breeding colonies located off the coast of Tasmania. The differences are not currently sufficient to propose identifying the populations as distinct species; however, divergence is likely to continue without outbreeding. Not all isolated populations will show evidence of genetic drift. Genetic homogeneity can be attributed to one of two explanations, both of which indicate that natal philopatry is not absolute within a species. Firstly, a lack of divergence may be due to founder effects: individuals that start new populations carry the genes of their source population, and if only a short (in evolutionary timescales) period of time has passed, insufficient divergence may have occurred. For example, study of mitochondrial DNA microsatellites found no significant difference between colonies of black-browed albatross (T. melanophrys) on the Falkland Islands and Campbell Island, despite the sites being thousands of kilometres apart. Observational evidence of white-capped albatross (T. [cauta] steadi) making attempts to build nests on a south Atlantic island, where the species had never been previously recorded, demonstrates that range extension by roaming sub-adult birds is possible. Secondly, there may be sufficient gene exchange to prevent divergence. For example, isolated (yet geographically close) populations of the Buller's albatross (T. bulleri bulleri) have been shown to be genetically similar. This evidence has recently been supported for the first time by mark-recapture data, which showed that one bird marked on one of the two breeding islands was nesting on the other island. Due to the dispersal capabilities of albatross, distance between populations does not appear to be a determining factor in divergence. Actual speciation is likely to occur very slowly, as the selective pressures on the animals are the same for the vast majority of their lives, which is spent at sea. Small mutational changes in non-nuclear DNA that become fixed in small populations are likely to be the major driver of speciation. 
That there is minimal structural morphological difference between the genetically distinct populations is evidence for random genetic drift, rather than directional evolution due to natural selective pressure. Speciation through natal philopatry is a self-reinforcing process. Once genetic differences are sufficient, different species may be unable to interbreed to produce viable offspring. As a result, breeding could not occur anywhere except the natal island, strengthening philopatry and ultimately leading to even greater genetic divergence.

Cooperative breeding
Philopatric species that do not migrate may evolve to breed cooperatively. Kin selection, of which cooperative breeding is a form, explains how individual offspring provide care for further offspring produced by their relatives. Animals that are philopatric to birth sites have increased association with family members and, in situations where inclusive fitness is increased through cooperative breeding, may evolve such behaviour, as it confers evolutionary benefits on families that practise it. Inclusive fitness is the sum of all direct and indirect fitness, where direct fitness is defined as the amount of fitness gained through producing offspring. Indirect fitness is defined as the amount of fitness gained through aiding related individuals' offspring. Cooperative breeding is a hierarchical social system characterized by a dominant breeding pair surrounded by subordinate helpers. The dominant breeding pair and their helpers experience costs and benefits from using this system. Costs for helpers include a fitness reduction, increased territory defense, offspring guarding and an increased cost of growth. Benefits for helpers include a reduced chance of predation, increased foraging time, territory inheritance, improved environmental conditions and increased inclusive fitness. For the breeding pair, costs include increased mate guarding and suppression of subordinate mating. Breeders receive benefits as reductions in offspring care and territory maintenance. Their primary benefit is an increased reproductive rate and survival. Cooperative breeding causes the reproductive success of all sexually mature adults to be skewed towards one mating pair. This means the reproductive fitness of the group is held within a select few breeding members, and helpers have little to no reproductive fitness. With this system, breeders gain increased reproductive fitness, while helpers gain increased inclusive fitness. Cooperative breeding, like speciation, can become a self-reinforcing process for a species. If cooperative families have higher inclusive fitness than non-cooperative families, the trait will eventually become fixed in the population. Over time, this may lead to the evolution of obligate cooperative breeding, as exhibited by the Australian mudnesters and Australo-Papuan babblers. Obligate cooperative breeding requires natally philopatric offspring to assist in raising offspring – breeding is unsuccessful without such help.

Other variations
Migrating animals also exhibit philopatry to certain important areas on their route: staging areas, stop-overs, molting areas and wintering grounds. Philopatry is generally believed to help maintain the adaptation of a population to a very specific environment (i.e., if a set of genes has evolved in a specific area, individuals that fail to return to that area may do poorly elsewhere, so natural selection will favor those who exhibit fidelity). 
The level of philopatry varies within migratory families and species. The term is sometimes also applied to animals that live in nests but do not remain in them during an unfavorable season (e.g., the winter in the temperate zone, or the dry season in the tropics), and leave to find hiding places nearby to pass the inactive period (common in various bees and wasps); this is not migration in the usual sense, as the location of the hiding place is effectively random and unique (never located or revisited except by accident), though the navigation skills required to relocate the old nest site may be similar to those of migrating animals. See also Cooperative breeding Kin selection Natal homing Salmon run References Ethology Population genetics Evolutionary biology
Philopatry
[ "Biology" ]
3,560
[ "Evolutionary biology", "Ethology", "Behavior", "Behavioural sciences" ]
2,988,398
https://en.wikipedia.org/wiki/Capital%20improvement%20plan
A capital improvement plan (CIP), or capital improvement program, is a short-range plan, usually four to ten years, that identifies capital projects and equipment purchases, provides a planning schedule and identifies options for financing the plan. External links Capital Project Management, British Columbia Ministry of Education Government Finance Officers Association Committee on Economic Development and Capital Planning Urban planning
Capital improvement plan
[ "Engineering" ]
73
[ "Urban planning", "Architecture" ]
2,988,563
https://en.wikipedia.org/wiki/Composite%20measure
Composite measures in statistics and research design are measures of variables based on multiple data items. An example of a composite measure is an IQ test, which gives a single score based on a series of responses to various questions. Three common composite measures include:
indexes - measures that summarize and rank specific observations, usually on the ordinal scale;
scales - advanced indexes whose observations are further transformed (scaled) due to their logical or empirical relationships;
typologies - measures that classify observations in terms of their attributes on multiple variables, usually on a nominal scale.

Indexes versus scales
Indexes are often referred to as scales, but in fact not all indexes are scales. Whereas indexes are usually created by aggregating scores assigned to individual attributes of various variables, scales are more nuanced and take into account differences in intensity among the attributes of the same variable in question. Indexes and scales should both provide an ordinal ranking of cases on a given variable, though scales are usually more efficient at this. While indexes are based on a simple aggregation of indicators of a variable, scales are more advanced, and their calculations may be more complex, using, for example, scaling procedures such as the semantic differential.

Composite measure validation
A good composite measure will ensure that the indicators are independent of one another. It should also successfully predict other indicators of the variable.

References

Measurement
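The contrast between a simple additive index and a weighted scale can be made concrete in a few lines of code. The sketch below is illustrative only: the item names, scores, and weights are invented, and a real scale would use an empirically derived scaling procedure rather than arbitrary weights.

```python
# A minimal sketch contrasting an index with a scale, using
# four hypothetical Likert-type survey items scored 1-5.
responses = {"item_a": 4, "item_b": 5, "item_c": 3, "item_d": 4}

# An index aggregates the raw item scores directly.
index_score = sum(responses.values())

# A scale additionally weights items by their intensity; the weights
# here are invented stand-ins for an empirical scaling procedure.
weights = {"item_a": 1.0, "item_b": 2.0, "item_c": 1.5, "item_d": 1.0}
scale_score = sum(score * weights[item] for item, score in responses.items())

print(index_score)  # 16
print(scale_score)  # 22.5
```

Both numbers rank respondents ordinally, but the scale score reflects the differing intensity of the items, which is the distinction drawn above.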
Composite measure
[ "Physics", "Mathematics" ]
286
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
2,988,582
https://en.wikipedia.org/wiki/Bus%20analyzer
A bus analyzer is a type of protocol analysis tool, used for capturing and analyzing communication data across a specific interface bus, usually embedded in a hardware system. The bus analyzer functionality helps design, test and validation engineers to check, test, debug and validate their designs throughout the design cycles of a hardware-based product. It also helps in later phases of a product life cycle, in examining communication interoperability between systems and between components, and in clarifying hardware support concerns. A bus analyzer is designed for use with specific parallel or serial bus architectures. Though the term bus analyzer implies that a physical communication interface is being analyzed, it is sometimes used interchangeably with the terms protocol analyzer or packet analyzer, and may also be applied to analysis tools for wireless interfaces such as wireless LAN (like Wi-Fi) and PAN (like Bluetooth, Wireless USB), even though these technologies do not have a "wired" bus.

The bus analyzer monitors and captures the bus communication data, decodes and analyses it, and displays the data and analysis reports to the user. It is essentially a logic analyzer with some additional knowledge of the underlying bus traffic characteristics. One of the key differences between a bus analyzer and a logic analyzer is its ability to filter and extract only the relevant traffic that occurs on the analyzed bus. Some advanced logic analyzers offer data storage qualification options that also allow bus traffic to be filtered, enabling bus analyzer-like features.

Some key differentiators between bus and logic analyzers are:
1. Cost: Logic analyzers usually carry higher prices than bus analyzers. The converse of this fact is that a logic analyzer can be used with a variety of bus architectures, whereas a bus analyzer is only good with one architecture.
2. Targeted capabilities and preformatting of data: A bus analyzer can be designed to provide very specific context for data coming in from the bus. Analyzers for serial buses like USB, for example, take data that arrives as a serial stream of binary 1s and 0s and display it as logical packets differentiated by chirp, headers, payload and so on.
3. Ease of use: While a general-purpose logic analyzer may support multiple buses and interfaces, a bus analyzer is designed for a specific physical interface and usually allows the user to quickly connect the probing hardware to the bus under test, saving time and effort.

From a user's perspective, a (greatly) simplified viewpoint may be that developers who want the most complete and most targeted capabilities for a single bus architecture may be best served by a bus analyzer, while users who work with several protocols in parallel may be better served by a logic analyzer, which is less costly than several different bus analyzers and lets them learn a single user interface rather than several.

Analyzers are now available for virtually all existing computer and embedded bus standards and form factors, such as PCI Express, DDR, USB, PCI, CompactPCI, PMC, VMEbus, CANbus and LINbus. Bus analyzers are used in the avionics industry to analyze MIL-STD-1553, ARINC 429, AFDX, and other avionics databus protocols. Other bus analyzers are used in the mass storage industry to analyze popular data transfer protocols between computers and drives. These cover popular data buses like NVMe, SATA, SAS, ATA/PI and SCSI. 
These devices are typically connected in series between the host computer and the target drive, where they 'snoop' traffic on the bus, capture it and present it in human-readable format.

Bus and protocol exerciser
For many bus architectures like PCI Express, PCI, SAS, SATA, and USB, engineers also use a "bus exerciser" or "protocol exerciser". Such exercisers can emulate partial or full communication stacks which comply with the specific bus communication standard, thus allowing engineers to surgically control and generate bus traffic to test, debug and validate their designs. These devices make it possible to generate bad bus traffic as well as good, so that device error-recovery systems can be tested. They are also often used to verify compliance with the standard to ensure interoperability of devices, since they can reproduce known scenarios in a repeatable way. Exercisers are usually used in conjunction with analyzers, so the engineer gets full visibility of the communication data captured on the bus. Some exercisers are designed as stand-alone systems while others are combined into the same systems used for analysis.

See also
JTAG (boundary scan)

References

Computer buses
Electronic test equipment
Digital electronics
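The filtering that distinguishes a bus analyzer from a plain logic analyzer can be sketched in a few lines. The records, field names, and values below are invented for illustration; a real analyzer operates on transactions decoded by its capture hardware.

```python
# Minimal sketch of bus-analyzer-style filtering over captured traffic.
# Each record is one decoded bus transaction (fields are hypothetical).
capture = [
    {"time_us": 10, "op": "READ",  "lba": 0x1000, "status": "OK"},
    {"time_us": 12, "op": "WRITE", "lba": 0x2000, "status": "OK"},
    {"time_us": 15, "op": "READ",  "lba": 0x1008, "status": "ERROR"},
]

def filter_capture(records, op=None, status=None):
    """Keep only the transactions relevant to the question at hand."""
    return [
        r for r in records
        if (op is None or r["op"] == op)
        and (status is None or r["status"] == status)
    ]

# For example, show only failing reads, the way an analyzer's
# trigger/filter settings would isolate them on a busy bus.
for rec in filter_capture(capture, op="READ", status="ERROR"):
    print(rec)
```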
Bus analyzer
[ "Technology", "Engineering" ]
959
[ "Electronic engineering", "Electronic test equipment", "Measuring instruments", "Digital electronics" ]
2,988,743
https://en.wikipedia.org/wiki/Chromium%28II%29%20acetate
Chromium(II) acetate hydrate, also known as chromous acetate, is the coordination compound with the formula Cr2(CH3CO2)4(H2O)2. This formula is commonly abbreviated Cr2(OAc)4(H2O)2. This red-coloured compound features a quadruple bond. The preparation of chromous acetate once was a standard test of the synthetic skills of students due to its sensitivity to air and the dramatic colour changes that accompany its oxidation. It exists in dihydrate and anhydrous forms. Cr2(OAc)4(H2O)2 is a reddish diamagnetic powder, although diamond-shaped tabular crystals can be grown. Consistent with the fact that it is nonionic, Cr2(OAc)4(H2O)2 exhibits poor solubility in water and methanol.

Structure
The Cr2(OAc)4(H2O)2 molecule contains two atoms of chromium, two ligated molecules of water, and four acetate bridging ligands. The coordination environment around each chromium atom consists of four oxygen atoms (one from each acetate ligand) in a square, one water molecule (in an axial position), and the other chromium atom (opposite the water molecule), giving each chromium centre an octahedral geometry. The chromium atoms are joined by a quadruple bond, and the molecule has D4h symmetry (ignoring the position of the hydrogen atoms). The same basic structure is adopted by Rh2(OAc)4(H2O)2 and Cu2(OAc)4(H2O)2, although these species do not have such short M–M contacts. The quadruple bond between the two chromium atoms arises from the overlap of four d-orbitals on each metal with the same orbitals on the other metal: the dz2 orbitals overlap to give a sigma bonding component, the dxz and dyz orbitals overlap to give two pi bonding components, and the dxy orbitals give a delta bond. This quadruple bond is also confirmed by the low magnetic moment and the short Cr–Cr distance within the molecule of 236.2 ± 0.1 pm. The Cr–Cr distances are even shorter, 184 pm being the record, when the axial ligand is absent or the carboxylate is replaced with isoelectronic nitrogenous ligands.

History
Eugène-Melchior Péligot first reported a chromium(II) acetate in 1844. His material was apparently the dimeric Cr2(OAc)4(H2O)2. The unusual structure, as well as that of copper(II) acetate, was uncovered in 1951.

Preparation
The preparation usually begins with reduction of an aqueous solution of a Cr(III) compound using zinc. The resulting blue solution is treated with sodium acetate, which results in the rapid precipitation of chromous acetate as a bright red powder.
2 Cr3+ + Zn → 2 Cr2+ + Zn2+
2 Cr2+ + 4 OAc− + 2 H2O → Cr2(OAc)4(H2O)2
The synthesis of Cr2(OAc)4(H2O)2 has traditionally been used to test the synthetic skills and patience of inorganic laboratory students in universities, because the accidental introduction of a small amount of air into the apparatus is readily indicated by the discoloration of the otherwise bright red product. The anhydrous form of chromium(II) acetate, and also related chromium(II) carboxylates, can be prepared from chromocene:
4 RCO2H + 2 Cr(C5H5)2 → Cr2(O2CR)4 + 4 C5H6
This method provides anhydrous derivatives in a straightforward manner. Because it is so easily prepared, Cr2(OAc)4(H2O)2 is a starting material for other chromium(II) compounds. Also, many analogues have been prepared using other carboxylic acids in place of acetate and using different bases in place of the water.

Applications
Chromium(II) acetate has few practical applications. It has been used to dehalogenate organic compounds such as α-bromoketones and chlorohydrins. 
The reactions appear to proceed via 1e− steps, and rearrangement products are sometimes observed. Because the compound is a good reducing agent, it will reduce the O2 found in air and can be used as an oxygen scrubber. See also Chromium(III) acetate Chromium acetate hydroxide References Further reading External links http://www.molecules.org/coordcpds.html#Cr2OAc4H2O (outdated) http://wwwchem.uwimona.edu.jm/courses/chromium.pdf Chromium(II) compounds Acetates Coordination complexes Reducing agents Chemical compounds containing metal–metal bonds Chromium–oxygen compounds
Chromium(II) acetate
[ "Chemistry" ]
1,093
[ "Redox", "Reducing agents" ]
2,988,853
https://en.wikipedia.org/wiki/Northbound%20interface
In computer networking and computer architecture, a northbound interface of a component is an interface that allows the component to communicate with a higher-level component, using the latter component's southbound interface. The northbound interface conceptualizes the lower-level details (e.g., data or functions) used by, or in, the component, allowing the component to interface with higher-level layers. In architectural overviews, the northbound interface is normally drawn at the top of the component it is defined in; hence the name northbound interface. A southbound interface decomposes concepts into the technical details, mostly specific to a single component of the architecture. Southbound interfaces are drawn at the bottom of an architectural overview.

Typical use
A northbound interface is typically an output-only interface (as opposed to one that accepts user input) found in carrier-grade network and telecommunications network elements. The languages or protocols commonly used include SNMP and TL1. For example, a device that is capable of sending out syslog messages but that is not configurable by the user is said to implement a northbound interface. Other examples include SMASH, IPMI, WSMAN, and SOAP. The term is also important for software-defined networking (SDN), to facilitate communication between the physical devices, the SDN software, and applications running on the network.

References

Network architecture
Computer networking
Computer architecture
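The output-only character of a typical northbound interface can be illustrated with a toy publisher. This is a generic sketch, not any particular vendor's API; the class, method, and event names are invented.

```python
# Minimal sketch of an output-only northbound interface: a low-level
# network element pushes notifications up to whoever subscribes.
class NetworkElement:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        """Northbound interface: higher layers register to receive events."""
        self._subscribers.append(callback)

    def _notify(self, message):
        # Output-only: the element emits events; it accepts no commands here.
        for callback in self._subscribers:
            callback(message)

    def link_down(self, port):
        self._notify(f"LINK_DOWN port={port}")

# A management layer consumes the element's events through its own
# southbound side -- here, simply a list acting as an event log.
event_log = []
element = NetworkElement()
element.subscribe(event_log.append)
element.link_down(3)
print(event_log)  # ['LINK_DOWN port=3']
```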
Northbound interface
[ "Technology", "Engineering" ]
293
[ "Computer networking", "Computer engineering", "Computer architecture", "Network architecture", "Computer networks engineering", "Computer science stubs", "Computer science", "Computing stubs", "Computers" ]
2,988,884
https://en.wikipedia.org/wiki/Metamizole
Metamizole or dipyrone is a painkiller, spasm reliever, and fever reliever drug. It is most commonly given by mouth or by intravenous infusion. It belongs to the ampyrone sulfonate family of medicines and was patented in 1922. Metamizole is marketed under various trade names. It was first used medically in Germany under the brand name "Novalgin", later becoming widely known in Slavic nations and India under the name "Analgin". Sale of metamizole is restricted in some jurisdictions following studies in the 1970s which linked it to severe adverse effects, including agranulocytosis. Other studies have disputed this judgement, instead claiming that it is a safer drug than other painkillers. Metamizole is popular in many countries, where it is typically available as an over-the-counter medication.

Medical uses
Metamizole, with its potent analgesic (pain relief), antipyretic (fever reduction), and spasmolytic (relaxation of muscle contractions) properties, is utilized in the management of acute pain, fever, and pain caused by muscle spasms. It is primarily used for perioperative pain, acute injury, colic, cancer pain, other acute/chronic forms of pain, and high fever unresponsive to other agents. Metamizole also effectively manages biliary and intestinal colic-like pain, and reduces the spasm of the smooth muscle of the sphincter of Oddi.

Special populations
Its use in pregnancy is advised against, although animal studies are reassuring in that they show minimal risk of birth defects. Its use in the elderly and those with liver or kidney impairment is advised against, but if these groups of people must be treated, a lower dose and caution are usually advised. Its use during lactation is advised against, as it is excreted in breast milk.

Adverse effects
While metamizole is a relatively safe medication, it is not entirely devoid of adverse effects. Metamizole has a potential for blood-related toxicity (blood dyscrasias), but causes less kidney, cardiovascular, and gastrointestinal toxicity than non-steroidal anti-inflammatory drugs (NSAIDs). Like NSAIDs, it can trigger bronchospasm or anaphylaxis, especially in those with asthma. Serious side effects include agranulocytosis, aplastic anaemia, hypersensitivity reactions (like anaphylaxis and bronchospasm), and toxic epidermal necrolysis, and it may provoke acute attacks of porphyria, as it is chemically related to the sulfonamides. The relative risk for agranulocytosis appears to vary greatly according to the country in which the rate is estimated, and opinion on the risk is strongly divided. Genetics may play a significant role in metamizole sensitivity. It is suggested that some populations are more prone to suffer from metamizole-induced agranulocytosis than others. As an example, metamizole-related agranulocytosis seems to be more frequent in the British population than in the Spanish population. An assessment report by the European Medicines Agency remarked that "the potential to induce agranulocytosis may be associated with genetic characteristics of the population studied". A 2015 meta-analysis concluded that on the evidence available "for short-term use in the hospital setting, metamizole seems to be a safe choice when compared to other widely used analgesics", but that the "results were limited by the mediocre overall quality of the reports" analysed. A 2016 systematic review found that metamizole significantly increased the relative risk of upper gastrointestinal bleeding. 
A study by one of the manufacturers of the drug found the risk of agranulocytosis within the first week of treatment to be a in a million, versus in a million for diclofenac. The therapeutic effect of metamizole on intestinal colic is attributed to its analgesic properties, with no evidence of interference in small bowel or colon motility. Metamizole has potential for hepatotoxicity.

Contraindications
Previous hypersensitivity (such as agranulocytosis or anaphylaxis) to metamizole or any of the excipients (e.g. lactose) in the preparation used, acute porphyria, impaired haematopoiesis (such as due to treatment with chemotherapy agents), third trimester of pregnancy (potential for adverse effects in the newborn), lactation, children with a body weight below 16 kg, and a history of aspirin-induced asthma and other hypersensitivity reactions to analgesics. In 2018, the European Medicines Agency (EMA) reviewed the safety of metamizole and concluded it to be generally safe for the general population. However, they advised against its use in the third trimester of pregnancy or while breastfeeding, due to risks of renal impairment or premature closure of the ductus arteriosus in the fetus or infant.

Interactions
A clinically severe interaction has been identified between aspirin and metamizole for patients who regularly take aspirin to manage vascular disease: this interaction occurs due to steric hindrance by metamizole at the active aspirin binding site of COX-1. To manage this interaction, it is recommended to allow a delay between the intake of each of these drugs, with aspirin being taken at least 30 minutes before metamizole. Oral anticoagulants (blood thinners), lithium, captopril, triamterene and antihypertensives may also interact with metamizole, as other pyrazolones are known to interact adversely with these substances.

Overdose
It is considered fairly safe in overdose, but in these cases supportive measures are usually advised, as well as measures to limit absorption (such as activated charcoal) and accelerate excretion (such as haemodialysis).

Physicochemistry
Metamizole is a sulfonic acid and comes in calcium, sodium and magnesium salt forms. Its sodium salt monohydrate form is a white to almost-white crystalline powder that is unstable in the presence of light, highly soluble in water and ethanol but practically insoluble in dichloromethane.

Pharmacology
The precise mechanism of action of metamizole is unknown, although it is believed that metamizole generally exerts its action by inhibiting the COX-3 enzyme, which is responsible for the biosynthesis of prostaglandins in the central nervous system (CNS)—in the brain and spinal cord. Prostaglandins are lipid compounds that play a role in inflammation, pain, and fever. By inhibiting the COX-3 enzyme in the CNS, metamizole reduces the production of prostaglandins, thereby alleviating pain, reducing fever, and potentially lessening inflammation. Activation of the endocannabinoid and opioidergic systems is also thought to play a role in the analgesic effects of metamizole. Metamizole is classified as an atypical nonsteroidal anti-inflammatory drug (NSAID). Unlike typical NSAIDs, metamizole exhibits weak or no anti-inflammatory properties (at least in therapeutic doses), but possesses potent analgesic effects via its action in the CNS: this central action distinguishes it from other NSAIDs, which generally exert their effects peripherally. The inhibition of COX-1 and COX-2 by metamizole is less potent than the inhibition of these enzymes by traditional NSAIDs. 
Metamizole is metabolized in the liver, where it is converted into active metabolites through the process of N-demethylation. The mechanism of action of metamizole is believed to be exerted via its active metabolites, specifically arachidonoyl-4-methylaminoantipyrine (ARA-4-MAA) and arachidonoyl-4-aminoantipyrine (ARA-4-AA). This mechanism of action has been compared to that of paracetamol and its active arachidonic acid metabolite AM404. The CB1 receptor inverse agonist AM-251 was able to reduce the cataleptic response and thermal analgesia of metamizole. Another study found its antihyperalgesic effect reversed by the CB2 inverse agonist AM-630. Although it seems to inhibit fevers caused by prostaglandins, especially prostaglandin E2, metamizole appears to produce its therapeutic effects by means of its metabolites, especially N-methyl-4-aminoantipyrine (MAA) and 4-aminoantipyrine (AA), which form through the FAAH enzyme the conjugates arachidonoyl-4-methylaminoantipyrine (ARA-4-MAA) and arachidonoyl-4-aminoantipyrine (ARA-4-AA). Metamizole likely induces the CYP2B6 and CYP3A4 enzymes.

History
Ludwig Knorr was a student of Emil Fischer, who won the Nobel Prize for his work on purines and sugars, which included the discovery of phenylhydrazine. In the 1880s, Knorr was trying to make quinine derivatives from phenylhydrazine, and instead made a pyrazole derivative, which, after a methylation, he made into phenazone, also called antipyrine, which has been called "the 'mother' of all modern antipyretic analgesics." Sales of that drug exploded, and in the 1890s chemists at Teerfarbenfabrik Meister, Lucius & Co. (a precursor of Hoechst AG, which is now Sanofi) made another derivative called pyramidon, which was three times more active than antipyrine. In 1893, a derivative of antipyrine, aminopyrine, was made by Friedrich Stolz at Hoechst. Yet later, chemists at Hoechst made a derivative, melubrine (sodium antipyrine aminomethanesulfonate), which was introduced in 1913; finally, in 1920, metamizole was synthesized. Metamizole is a methyl derivative of melubrine and is also a more soluble prodrug of pyramidon. Metamizole was first marketed in Germany as "Novalgin" in 1922.

Society and culture
Legal status
Metamizole is banned in several countries, available by prescription in others (sometimes with strong warnings, sometimes without), and available over the counter in yet others. For example, approval was withdrawn in Sweden (1974), the US (1977), and India (2013; the ban was lifted in 2014). Although metamizole is banned in the US because of its risk of agranulocytosis, small surveys reported that 28% of Hispanics in Miami had possession of it, and 38% of Hispanics in San Diego reported some usage. There were unauthorized sales and use of metamizole in horses in the US. After reviewing trial data on its safety, the FDA approved it for treating fever in equines. Amid the opioid crisis, a study pointed out that the legal status of metamizole has a relation to the consumption of oxycodone, showing that the use of the two drugs was inversely correlated. Its use could be beneficial when adjusted for the addictive risk of opioids, especially under limited and controlled use of metamizole. A 2019 Israeli conference also justified the approved status as a preventive against opioid dependence, with metamizole being safer than most analgesics for renally impaired patients. Metamizole is the most sold medication in São Paulo, Brazil, accounting for 488 tons in 2016. 
Given this contrasting consumption compared to other countries, the Brazilian Health Regulatory Agency (ANVISA) convened an international panel to evaluate its safety in 2001, and the conclusion was that the benefits substantially outweighed the risks, and that imposing restrictions would lead to significant negative consequences for the population. It is also highly popular in Latin America overall. In 2022, in Brazil alone, over 215 million doses were administered. The Bulgarian pharmaceutical company Sopharma produces it under the brand Analgin, which, as of 2014, had been the top-selling analgesic in Bulgaria for over a decade. In Germany, the drug is the most commonly prescribed pain reliever. In 2012, headache accounted for 70% of its use in Indonesia. In 2018, investigators in Spain looked into Nolotil (as metamizole is known in Spain) after the deaths of several British people in Spain. A possible factor in these deaths might have been a side effect of metamizole that can cause agranulocytosis (a lowering of the white blood cell count).

Brand names
Metamizole is the international nonproprietary name, and in countries where it is marketed, it is available under many brand names. In Romania, metamizole is available as the original marketed pharmaceutical product by Zentiva as Algocalmin, as 500 mg immediate-release tablets. It is also available as an injection with 1 g of metamizole sodium dissolved in 2 ml of solvent. In Israel it is sold under the brand name "Optalgin", manufactured by Teva. In Czechia it is available under the brand name "Algifen Neo" in the form of drops containing 500 mg/ml of metamizole and 5 mg/ml of pitofenone, manufactured by Teva. It is known as Sulpyrin and Sulpyrine in South Korea and Japan.

Analgin
Analgin is a generic name used in the former USSR pharmacopeia, continuing in use in Slavic nations. A firm in Russia tried unsuccessfully in 2011 to claim the name as its trademark. In Bulgaria, Sopharma succeeded in registering Analgin as a trademark in 2004. Analgin is also the common term used in the Indian pharmacopeia.

References

Analgesics
Antipyretics
CYP3A4 inducers
Drugs with unknown mechanisms of action
Pyrazolones
Sulfonates
Withdrawn drugs
Metamizole
[ "Chemistry" ]
3,007
[ "Drug safety", "Withdrawn drugs" ]
2,988,986
https://en.wikipedia.org/wiki/Centration
In psychology, centration is the tendency to focus on one salient aspect of a situation and neglect other, possibly relevant aspects. Introduced by the Swiss psychologist Jean Piaget through his cognitive-developmental stage theory, centration is a behaviour often demonstrated in the preoperational stage. Piaget claimed that egocentrism, a common element responsible for preoperational children's unsystematic thinking, was causal to centration. Research on centration was primarily conducted by Piaget, through his conservation tasks, and contemporary researchers have expanded on his ideas.

Conservation tasks
Piaget used a number of tasks to test children's scientific thinking and reasoning, many of which specifically tested conservation. Conservation refers to the ability to determine that a certain quantity will remain the same despite adjustment of the container, shape, or apparent size. Other conservation tasks include conservation of number, substance, weight, volume, and length. Perhaps the most famous task indicative of centration is the conservation of liquids task. In one version, the child is shown two glasses, A1 and A2, that are filled to the same height. The child is asked if the two glasses contain the same amount of liquid, and the child almost always agrees that they do. Next, the experimenter pours the liquid from A2 to glass P, which is lower and wider. The child is then asked if the amount of liquid is still the same. At the preoperational stage, children will respond that the amount is not the same, with either the taller glass or the wider glass containing more liquid. Once the child has reached the concrete operational stage, however, the child will conclude the amount of liquid is still the same. Here, centration is demonstrated in the fact that the child pays attention to one aspect of the liquid, either the height or the width, and is unable to conserve because of it. With achievement of the concrete operational stage, the child is able to reason about the two dimensions simultaneously and recognize that a change in one dimension cancels out a change in the other. In the conservation of numbers task, Piaget gave children a row of egg cups and a bunch of eggs, placing them in rows of equal length, but not equal number. Piaget then asked the children to take just enough eggs to fill the cups, and when the children attempted to do so, they were surprised to find that they had too many or too few eggs. Again, centration is present here, where the child pays attention to the length of the rows and not the numbers within each row. Children demonstrated conservation of weight and length through a similar task. In this one, children were shown two balls of Playdoh that were equal in size. When asked whether they were the same or not, all children answered that yes, they were. Afterwards, Piaget rolled one of the balls into a longer string and asked the same question: "Which one is bigger?". Children who experienced centration focused on the length of the newly shaped Playdoh, or the width of the old Playdoh, and often said that one or the other was bigger. Those children who were able to focus on both dimensions, length and width, were able to say that both clumps of Playdoh were still the same size.

Egocentrism
Piaget believed that in each period of development, a deficit in cognitive thinking could be attributed to the concept of egocentrism. 
Egocentrism, then, refers to the inability to distinguish one's own perspective from that of others, but does not necessarily imply selfishness or conceit. In speech, children are egocentric when they consider matters only from their own perspective. For example, a young egocentric boy might want to buy his mother a toy car for her birthday. This would not be a selfish act, as he would be getting her a present, but it would be an action that did not take into account the fact that the mother might not like the car. The child would assume that his mother would be thinking the same thing as himself, and would therefore love to receive a toy car as a gift. Animism – the attribution of life to physical objects – also stems from egocentrism; children assume that everything functions just as they do. As long as children are egocentric, they fail to realize the extent to which each person has private, subjective experiences. In terms of moral reasoning, young children regard rules from one perspective, as absolutes handed down from adults or authority figures. Just as the egocentric child views things from a single perspective, the child who fails to conserve focuses on only one aspect of the problem. For example, when water is poured from one glass into a shorter, broader one, the child 'centers' on a single striking dimension – the difference in height. The child cannot 'decenter' and consider two aspects of the situation at once. Centration, essentially, can be seen as a form of egocentrism in specific tasks involving scientific reasoning.

Perseveration
While centration is a general tendency for children within various cognitive tasks, perseveration, on the other hand, is centration in excess. Perseveration can be defined as the continual repetition of a particular response (such as a word, phrase, or gesture) despite the absence or cessation of a stimulus. It is usually caused by brain injury or other organic disorder. In a broader sense, perseveration is used to describe a wide range of functionless behaviours that arise from a failure of the brain either to inhibit prepotent responses or to allow its usual progress to a different behavior. This includes impairment in set shifting and task switching in social and other contexts. Perseveration and centration are connected, in that centration is a basis for perseveration, but perseveration itself is seen to be a symptom of injury. While perseveration is more of an issue in adults, centration is a deficit in children's thinking that can be overcome more easily, through typical developmental gains.

Decentration
Children generally achieve conservation of liquids at about 7 years. When they do so, they are entering the stage of concrete operations. Overcoming centration can be seen in three main forms. First, the child might use the identity argument – that you haven't added or taken any away, so it has to be the same. Second, the argument of compensation might be used, where the child states that the tallness of the one glass and the wideness of the other glass cancel each other out. Third, an inversion reasoning is possible, where the child might suggest they are still the same because you can pour water from the wide glass back into the tall glass to create two equal-looking glasses once again. Underlying these arguments are logical operations – mental actions that are reversible. Since these are mental actions, the child does not actually need to perform or have seen the transformations they are talking about. 
Piaget argued that children overcome centration and master conservation spontaneously. The crucial moment comes when the child is in a state of internal contradiction. This is shown when the child first says that one glass has more because it's taller, then says the other has more because it is wider, and then becomes confused. Once this internal contradiction is resolved by the child themselves, by taking into account multiple aspects of the problem, they decenter and move up to the concrete operational stage. Multitasking, seen through cognitive flexibility and set-shifting, requires decentration so that attention may be shifted between multiple salient objects or situations. As well, decentration is essential to reading and math skills, in order for children to move beyond the individual letters and to the words and meanings presented.

Other research
As shown earlier, the aspect of quantitative understanding that most interested Piaget was the child's ability to conserve quantities in the face of perceptual change. Later studies have not disproved Piaget's contention that a full understanding of conservation is a concrete operational achievement. Recent work does suggest, however, that there may be earlier, partial forms of understanding that were missed in his studies. Investigators have simplified conservation tasks in various ways. They have reduced the usual verbal demands, for example, by allowing the child to pick candies to eat or juice to drink rather than answer questions about "same" or "more." Or they have made the context for the question more natural and familiar by embedding the task within an ongoing game. Although such changes do not eliminate the non-conservation error completely, they often result in improved performance by supposedly preoperational 4- and 5-year-olds. Indeed, in simple situations, even 3-year-olds can demonstrate some knowledge of the invariance of number. A study by Rochel Gelman provides a nice example. In her study, the 3-year-old participants first played a game in which they learned, over a series of trials, that a plate with three toy mice affixed to it was a "winner" and a plate with two toy mice was a "loser." Then, in a critical test trial, the three-mice plate was surreptitiously transformed while hidden. In some cases, the length of the row was changed; in other cases one of the mice was removed. The children were unfazed by the change in length, continuing to treat the plate as a winner. An actual change in number, however, was responded to quite differently, eliciting search behaviours and various attempts at an explanation. The children thus showed a recognition that number, at least in this situation, should remain invariant. One should note, however, that studies purporting to show earlier competence on conservation tasks have themselves been criticized. In particular, these critiques suggest that methodological changes in the early competence studies may bias younger children to conserve due to lower-level mechanisms. Children's completion of these tasks, therefore, may be due more to perceptual mechanisms than to cognitive mechanisms of true conservation and an understanding of invariance. Thus, children may simply be sensitive to discriminating the deletion or addition of information, rather than conserving information across changes in the display.

See also
Conservation (psychology)

References

Developmental psychology
Centration
[ "Biology" ]
2,103
[ "Behavioural sciences", "Behavior", "Developmental psychology" ]
2,989,154
https://en.wikipedia.org/wiki/LinuxTLE
LinuxTLE is a discontinued Thai Linux distribution based on Ubuntu and developed by the Thailand National Electronics and Computer Technology Center (NECTEC). TLE stands for Thai Language Extension, as it was originally a Thai extension for Red Hat Linux. The pronunciation "talay" is a homophone of the Thai word ทะเล (the sea).

References

External links
OpenTLE/LinuxTLE website (Thai)
DistroWatch Weekly, Issue 79, 13 December 2004 (Featured distribution of the week)

Discontinued Ubuntu derivatives
Language-specific Linux distributions
State-sponsored Linux distributions
Linux distributions
Thai-language computing
LinuxTLE
[ "Technology" ]
131
[ "Natural language and computing", "Language-specific Linux distributions" ]
2,989,225
https://en.wikipedia.org/wiki/Butyl%20acetate
n-Butyl acetate is an organic compound with the formula CH3CO2(CH2)3CH3. A colorless, flammable liquid, it is the ester derived from n-butanol and acetic acid. It is found in many types of fruit, where it imparts characteristic flavors and has a sweet smell of banana or apple. It is used as an industrial solvent. The other three isomers (four, including stereoisomers) of butyl acetate are isobutyl acetate, tert-butyl acetate, and sec-butyl acetate (two enantiomers).

Production and use
Butyl acetate is commonly manufactured by the Fischer esterification of butanol (or its isomer, to make an isomer of butyl acetate) and acetic acid in the presence of sulfuric acid:
CH3CO2H + C4H9OH → CH3CO2C4H9 + H2O
Butyl acetate is mainly used as a solvent for coatings and inks. It is a component of fingernail polish.

Occurrence in nature
Apples, especially of the 'Red Delicious' variety, are flavored in part by this chemical. The alarm pheromones emitted by the Koschevnikov gland of honey bees contain butyl acetate.

References

External links
Ethylene and other chemicals in fruit
Material Safety Data Sheet
CDC - NIOSH Pocket Guide to Chemical Hazards

Ester solvents
Flavors
Acetate esters
Commodity chemicals
Sweet-smelling chemicals
Butyl esters
Butyl acetate
[ "Chemistry" ]
288
[ "Commodity chemicals", "Products of chemical industry" ]
2,989,336
https://en.wikipedia.org/wiki/DBc
dBc (decibels relative to the carrier) is the power ratio of a signal to a carrier signal, expressed in decibels. For example, phase noise is expressed in dBc/Hz at a given frequency offset from the carrier. dBc can also be used as a measurement of Spurious-Free Dynamic Range (SFDR) between the desired signal and unwanted spurious outputs resulting from the use of signal converters such as a digital-to-analog converter or a frequency mixer. If the dBc figure is positive, then the relative signal strength is greater than the carrier signal strength. If the dBc figure is negative, then the relative signal strength is less than the carrier signal strength. Although the decibel (dB) is permitted for use alongside SI units, the dBc is not. Example Suppose a carrier (reference) signal has a power of , and a noise signal has a power of . The power of the reference signal expressed in decibels is: The power of the noise expressed in decibels is: The dBc difference between the noise signal and the reference signal is then calculated as follows: It is also possible to compute the dBc level of the noise signal with respect to the reference signal directly, as the logarithm of their ratio, as follows: . References External links Encyclopedia of Laser Physics and Technology Units of measurement Radio frequency propagation Telecommunications engineering Logarithmic scales of measurement
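In code, the conversion is a one-liner; the sketch below (with invented, illustrative power values) computes a level in dBc from two powers expressed in the same linear unit:

import math

def dbc(signal_power: float, carrier_power: float) -> float:
    # Level of a signal relative to the carrier, in dBc.
    # Both powers must be in the same linear unit (e.g., watts).
    return 10 * math.log10(signal_power / carrier_power)

# Hypothetical example: a 1 W carrier with a 1 mW spur
print(dbc(0.001, 1.0))  # ~ -30.0: the spur sits 30 dB below the carrier
print(dbc(2.0, 1.0))    # ~ +3.0: a signal stronger than the carrier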
DBc
[ "Physics", "Mathematics", "Engineering" ]
280
[ "Physical phenomena", "Telecommunications engineering", "Spectrum (physical sciences)", "Physical quantities", "Radio frequency propagation", "Quantity", "Electromagnetic spectrum", "Waves", "Logarithmic scales of measurement", "Electrical engineering", "Units of measurement" ]
2,989,523
https://en.wikipedia.org/wiki/Secure%20environment
In computing, a secure environment is any system which implements the controlled storage and use of information. In the event of computing data loss, a secure environment is used to protect personal or confidential data. Often, secure environments employ cryptography as a means to protect information. Some secure environments employ cryptographic hashing, simply to verify that the information has not been altered since it was last modified. See also Data recovery Cleanroom Mandatory access control (MAC) Trusted computing Homomorphic encryption Computer security
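As a concrete illustration of the hashing idea, the minimal sketch below (an invented example, not tied to any particular secure environment product) records a SHA-256 digest of some data and later checks that a stored copy still matches it:

import hashlib

def sha256_hex(data: bytes) -> str:
    # Return the SHA-256 hex digest of the given bytes.
    return hashlib.sha256(data).hexdigest()

# Record a digest while the data is known to be good...
original = b"quarterly report, final version"
recorded = sha256_hex(original)

# ...and verify later that the stored copy has not been altered.
stored = b"quarterly report, FINAL version"  # tampered copy (illustrative)
print(recorded == sha256_hex(stored))  # False: the data was modified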
Secure environment
[ "Technology" ]
99
[ "Computing stubs", "Computer science", "Computer science stubs" ]
2,989,638
https://en.wikipedia.org/wiki/Stephen%20Paget
Stephen Paget (17 July 1855 – 8 May 1926) was an English surgeon and pro-vivisection campaigner. On the basis of the works of Fuchs (see below), he proposed the "seed and soil" theory of metastasis, which claims that the distribution of cancers is not coincidental. He was the son of the distinguished surgeon and pathologist Sir James Paget. Biography Paget was born on 17 July 1855 at Cavendish Square, London. He was the fifth child and fourth son of surgeon and pathologist Sir James Paget (1814–1899). Paget was educated at Shrewsbury School. He matriculated at Christ Church, Oxford in 1874, graduating B.A. in 1878 (M.A. 1886). He was a student at St Bartholomew's Hospital and obtained the F.R.C.S. in 1885. He was elected assistant surgeon to the Metropolitan Hospital and was surgeon at West London Hospital. He was surgeon to the Throat and Ear Department at Middlesex Hospital. Paget died in Limpsfield on 8 May 1926. Proposed theory Paget has long been credited with proposing the seed and soil theory of metastasis, though in his paper "The Distribution Of Secondary Growths In Cancer Of The Breast" he states "…the chief advocate of this theory of the relation between the embolus and the tissues which receive it is Fuchs…". Ernst Fuchs (1851–1930), an Austrian ophthalmologist, physician and researcher, however, does not refer to the phenomenon as "seed and soil", but defines it as a "predisposition" of an organ to be the recipient of specific growths. In his paper, Paget presents and analyzes 735 fatal cases of breast cancer, complete with autopsy, as well as many other cancer cases from the literature, and argues that the distribution of metastases cannot be due to chance, concluding that although "the best work in pathology of cancer is done by those who… are studying the nature of the seed…" [the cancer cell], the "observations of the properties of the soil" [the secondary organ] "may also be useful..." Approbation of Louis Pasteur In addition to other publications, he wrote a book about Louis Pasteur titled Pasteur and After Pasteur. Pasteur's life is discussed from his early years through his accomplishments. Paget wrote the book in memoriam of Pasteur's life, and in the preface he states, "It has been arranged to publish this manual on September 28th, the day of Pasteur's death. That is a day which all physicians and surgeons -- and not they alone -- ought to mark on their calendars; and it falls this year with special significance to us, now that his country and ours are fighting side by side to bring back the world's peace." Vivisection After his retirement from medical practice in 1910, Paget devoted much time to justifying vivisection. He was secretary of the Research Defence Society. He authored The Case Against Anti-Vivisection in 1904 and was the editor of For and Against Experiments on Animals, 1912. Paget was heavily criticized by anti-vivisectionists. In 1962, Archibald Hill noted that "Stephen Paget's death indeed was claimed by anti-vivisectionists as a direct consequence of their prayers." Criticism of Christian Science In 1909, Paget authored a book, The Faith and Works of Christian Science, which exposed the fallacies, inconsistencies, and dangers of Christian Science. Selected publications The Case Against Anti-Vivisection. 1904. Ambroise Paré and His Times, 1510-1590 (1897) Confessio Medici. 1908. The Faith and Works of Christian Science. 1909. For and Against Experiments on Animals. 1912. 
References External links 1855 births 1926 deaths Alumni of Christ Church, Oxford Critics of Christian Science English surgeons People educated at Shrewsbury School Vivisection activists Younger sons of baronets
Stephen Paget
[ "Chemistry" ]
823
[ "Vivisection activists", "Vivisection" ]
6,968,234
https://en.wikipedia.org/wiki/Sybase%20iAnywhere
Sybase iAnywhere is a subsidiary of Sybase specializing in mobile computing, management and security and enterprise database software. SQL Anywhere, formerly known as SQL Anywhere Studio or Adaptive Server Anywhere (ASA), is the company's flagship relational database management system (RDBMS). SQL Anywhere powers popular applications such as Intuit, Inc.'s QuickBooks, and the devices of 140,000 census workers during the 2010 United States Census. The product's customers include Brinks, Kodak, Pepsi Bottling Group (PBG), MICROS Systems, Inc. and the United States Navy. Sybase iAnywhere mobility products include Sybase Unwired Platform, a platform for mobile enterprise application development. It combines tooling and integration with standard development environments. Afaria provides mobile device management and security capabilities to ensure that mobile data and devices are up-to-date, reliable and secure. Afaria is currently being used by Novartis and United Utilities among others. iAnywhere Mobile Office, formerly known as OneBridge, is specifically designed to securely extend email and business processes to wireless devices. RFID Anywhere is a software platform designed to simplify radio frequency identification (RFID) projects, including the development, deployment and management of highly distributed, multi-site networks. Through the 2006 acquisition of Extended Systems, Inc., Sybase iAnywhere is now providing wireless connectivity, device management and data synchronization software. Its software development kits (SDKs) for Bluetooth, IrDA, OMA (Open Mobile Alliance) Device Management and OMA Data Synchronization protocols are used by cellphone and automobile manufacturers worldwide in original equipment manufacturer (OEM) and original design manufacturer (ODM) applications. XTNDConnect PC, available for OEM/ODM applications, as well as for direct purchase, is a software application based on this technology that helps millions of consumers sync their mobile phones and devices with PC applications. History Watcom International Corporation was founded in 1981 in Waterloo, Ontario, Canada. Watcom produced a variety of tools, including the well-known Watcom C compiler introduced in 1988. In 1994 Powersoft acquired Watcom, merging with Sybase just one year later. In 2000, Sybase iAnywhere was formed as a wholly owned subsidiary consisting of the former mobile and embedded computing division. iAnywhere played an important role in Sybase's Unwired Enterprise strategy, which focuses on managing and mobilizing information from the data center to the point of action. In 2010 Sybase (along with iAnywhere) was acquired by SAP and is now known as Sybase, an SAP Company. Products Database Advantage Database Server (ADS): A client/server back-end for shared, networked, standalone, mobile and Internet database applications. Advantage provides both ISAM table-based and SQL based data access. SQL Anywhere: A data management and enterprise data synchronization package for development of applications for mobile, remote and small to medium-sized business environments. Management and security Afaria: Allows companies to centrally manage and secure technology, mobile data and devices used at the front lines of business. RemoteWare: Retail polling, file transfer, and content distribution to move sales data between retail or hospitality sites and the headquarters of an organization. Enterprise mobile Sybase Unwired Platform: A platform for mobile enterprise application development. 
It provides a set of services that allow customers to mobilize data and business processes for enterprises using a variety of mobile devices. iAnywhere Mobile Office: Secure mobile email and business process mobilization; combines infrastructure support with enhanced on-device security, usability and performance. M-Business Anywhere: A scalable platform for delivering Web-based applications and content to mobile devices. Supports both secure wireless and off-line access to information. M-Business Anywhere is the technology behind the AvantGo internet service. Answers Anywhere: A middleware platform with context understanding and natural language capabilities, allowing end-users to ask for information and transactions in their own words, and in any language, using a wireless phone, a handheld PDA, a customized console, or a desktop computer. See also IvanAnywhere SQL Anywhere References External links iAnywhere.com (no longer functions) Sybase.com (redirects to SAP.com) SAP SE Mobile computers Radio-frequency identification
Sybase iAnywhere
[ "Engineering" ]
915
[ "Radio-frequency identification", "Radio electronics" ]
6,968,451
https://en.wikipedia.org/wiki/Concept%20learning
Concept learning, also known as category learning, concept attainment, and concept formation, is defined by Bruner, Goodnow, & Austin (1956) as "the search for and testing of attributes that can be used to distinguish exemplars from non exemplars of various categories". More simply put, concepts are the mental categories that help us classify objects, events, or ideas, building on the understanding that each object, event, or idea has a set of common relevant features. Thus, concept learning is a strategy which requires a learner to compare and contrast groups or categories that contain concept-relevant features with groups or categories that do not contain concept-relevant features. Concept attainment requires the following five categories: the definition of task; the nature of the examples encountered; the nature of validation procedures; the consequences of specific categorizations; and the nature of imposed restrictions. In a concept learning task, a human classifies objects by being shown a set of example objects along with their class labels. The learner simplifies what has been observed by condensing it in the form of an example. This simplified version of what has been learned is then applied to future examples. Concept learning may be simple or complex because learning takes place over many areas. When a concept is difficult, it is less likely that the learner will be able to simplify, and therefore will be less likely to learn. Colloquially, the task is known as learning from examples. Most theories of concept learning are based on the storage of exemplars and avoid summarization or overt abstraction of any kind. In machine learning, this theory can be applied in training computer programs. Concept learning: Inferring a Boolean-valued function from training examples of its input and output. A concept is an idea of something formed by combining all its features or attributes which construct the given concept. Every concept has two components: Attributes: features that one must look for to decide whether a data instance is a positive one of the concept. A rule: denotes what conjunction of constraints on the attributes will qualify as a positive instance of the concept (a minimal worked example appears below). Types of concepts Concept learning must be distinguished from learning by reciting something from memory (recall) or discriminating between two things that differ (discrimination). However, these issues are closely related, since memory recall of facts could be considered a "trivial" conceptual process where prior exemplars representing the concept are invariant. Similarly, while discrimination is not the same as initial concept learning, discrimination processes are involved in refining concepts by means of the repeated presentation of exemplars. Concept attainment is rooted in inductive learning. So, when designing a curriculum or learning through this method, comparing like and unlike examples is key in defining the characteristics of a topic. Concrete or perceptual concepts vs abstract concepts Concrete concepts are objects that can be perceived by personal sensations and perceptions. These are objects like chairs and dogs where personal interactions occur with them and create a concept. Concepts become more concrete as the word we use to associate with it has a perceivable entity. According to Paivio's dual-coding theory, concrete concepts are the ones that are remembered more easily from their perceptual memory codes. 
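Returning to the machine-learning formulation above (inferring a Boolean-valued function as a conjunction of attribute constraints), the sketch below is a minimal, illustrative implementation in the version-space tradition, the classic Find-S procedure; the attributes and training data are invented for the example:

def find_s(examples):
    # Find-S: the most specific conjunctive hypothesis covering
    # all positive examples. "?" means any value is acceptable.
    hypothesis = None
    for attributes, is_positive in examples:
        if not is_positive:
            continue  # Find-S ignores negative examples
        if hypothesis is None:
            hypothesis = list(attributes)  # copy the first positive example
        else:
            for i, value in enumerate(attributes):
                if hypothesis[i] != value:
                    hypothesis[i] = "?"  # generalize the conflicting constraint
    return hypothesis

# Hypothetical training data: (color, size, shape) -> positive instance?
data = [
    (("red", "small", "round"), True),
    (("red", "large", "round"), True),
    (("blue", "small", "round"), False),
]
print(find_s(data))  # ['red', '?', 'round']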
Evidence has shown that when words are heard, they are associated with a concrete concept and re-enact any previous interaction with the word within the sensorimotor system. Examples of concrete concepts in learning are early educational math concepts like adding and subtracting. Abstract concepts are words and ideas that deal with emotions, personality traits and events. Terms like "fantasy" or "cold" have a more abstract concept within them. Every person has their own, ever-changing personal definition of abstract concepts. For example, cold could mean the physical temperature of the surrounding area or it could define the action and personality of another person. While within concrete concepts there is still a level of abstractness, concrete and abstract concepts can be seen on a scale. Some ideas like chair and dog are more cut and dry in their perceptions, but concepts like cold and fantasy can be seen in a more obscure way. Examples of abstract concept learning are topics like religion and ethics. Abstract-concept learning involves comparing stimuli on the basis of a rule (e.g., identity, difference, oddity, greater than, addition, subtraction) even when the stimulus is novel. Abstract-concept learning has three criteria that rule out alternative explanations and establish the novelty of the stimuli. First, the transfer stimuli have to be novel to the individual; that is, they need to be new stimuli to the individual. Second, there is no replication of the transfer stimuli. Third, to have a full abstract learning experience, there has to be an equal amount of baseline performance and transfer performance. Binder, Westbury, McKiernan, Possing, and Medler (2005) used fMRI to scan individuals' brains as they made lexical decisions on abstract and concrete concepts. Abstract concepts elicited greater activation in the left precentral gyrus, left inferior frontal gyrus and sulcus, and left superior temporal gyrus, whereas concrete concepts elicited greater activation in bilateral angular gyri, the right middle temporal gyrus, the left middle frontal gyrus, bilateral posterior cingulate gyri, and bilateral precunei. In 1986 Allan Paivio hypothesized the dual coding theory, which states that both verbal and visual information is used to represent information. When thinking of the concept "dog", thoughts of both the word dog and an image of a dog occur. Dual coding theory assumes that abstract concepts involve the verbal semantic system and concrete concepts are additionally involved with the visual imagery system. Defined (or relational) and associated concepts Relational and associated concepts are words, ideas and thoughts that are connected in some form. Relational concepts are connected by a universal definition. Common relational terms are up-down, left-right, and food-dinner. These ideas are learned in our early childhood and are important for children to understand. These concepts are integral to our understanding and reasoning in conservation tasks. Relational terms that are verbs and prepositions have a large influence on how objects are understood. These terms are more likely to create a larger understanding of the object, and they are able to cross over to other languages. Associated concepts are connected by the individual's past and own perception. 
Associative concept learning (also called functional concept learning) involves categorizing stimuli into appropriate categories based on a common response or outcome, regardless of perceptual similarity. It associates thoughts and ideas with other thoughts and ideas that are understood by a few people or by the individual alone. An example of this is in elementary school, when learning the compass directions North, East, South and West. Teachers have used "Never Eat Soggy Waffles" and "Never Eat Sour Worms", and students were able to create their own version to help them learn the directions. Complex concepts Constructs such as a schema and a script are examples of complex concepts. A schema is an organization of smaller concepts (or features) and is revised by situational information to assist in comprehension. A script, on the other hand, is a list of actions that a person follows in order to complete a desired goal. An example of a script would be the process of buying a CD. There are several actions that must occur before the actual act of purchasing the CD, and a script provides a sequence of the necessary actions and the proper order of these actions in order to be successful in purchasing the CD. Concept attainment learning plan development Concept attainment in education and learning is an active learning method. Therefore, learning plans, methods, and goals can be chosen to implement concept attainment. In David Perkins's work Knowledge as Design, his four questions outline learning plan questions: 1) What are the critical attributes of the concept? 2) What are the purposes of the concept? 3) What are model cases of the concept? 4) What are the arguments for learning the concept? Bias in concept attainment Concept learning has been historically studied with deep influences from the goals and functions that concepts are assumed to have. Research has investigated how the function of concepts influences the learning process, which focuses on the external function. Focusing on different models for concept attainment research would expand studies in this field. When reading articles and studies, noticing potential bias and qualifying the resource is required in this topic. Inductive learning and ML conflict with concept learning In general, the theoretical issues underlying concept learning for machine learning are those underlying induction. These issues are addressed in many diverse publications, including literature on subjects like Version Spaces, Statistical Learning Theory, PAC Learning, Information Theory, and Algorithmic Information Theory. Some of the broad theoretical ideas are also discussed by Watanabe (1969, 1985), Solomonoff (1964a, 1964b), and Rendell (1986); see the reference list below. Modern psychological theories It is difficult to make any general statements about human (or animal) concept learning without already assuming a particular psychological theory of concept learning. Although the classical views of concepts and concept learning in philosophy speak of a process of abstraction, data compression, simplification, and summarization, currently popular psychological theories of concept learning diverge on all these basic points. The history of psychology has seen the rise and fall of many theories about concept learning. Classical conditioning (as defined by Pavlov) created the earliest experimental technique. Reinforcement learning as described by Watson and elaborated by Clark Hull created a lasting paradigm in behavioral psychology. 
Cognitive psychology emphasized a computer and information flow metaphor for concept formation. Neural network models of concept formation and the structure of knowledge have opened powerful hierarchical models of knowledge organization such as George Miller's WordNet. Neural networks are based on computational models of learning using factor analysis or convolution. Neural networks also are open to neuroscience and psychophysiological models of learning following Karl Lashley and Donald Hebb. Rule-based Rule-based theories of concept learning began with cognitive psychology and early computer models of learning that might be implemented in a high-level computer language with computational statements such as if:then production rules. They take as input classification data and a rule-based theory, which are the result of a rule-based learner, with the hope of producing a more accurate model of the data (Hekenaho 1997). The majority of rule-based models that have been developed are heuristic, meaning that rational analyses have not been provided and the models are not related to statistical approaches to induction. A rational analysis for rule-based models could presume that concepts are represented as rules, and would then ask to what degree of belief a rational agent should be in agreement with each rule, with some observed examples provided (Goodman, Griffiths, Feldman, and Tenenbaum). Rule-based theories of concept learning are focused more on perceptual learning and less on definition learning. Rules can be used in learning when the stimuli are confusable, as opposed to simple. When rules are used in learning, decisions are made based on properties alone and rely on simple criteria that do not require a lot of memory (Rouder and Ratcliff, 2006). Example of rule-based theory: "A radiologist using rule-based categorization would observe whether specific properties of an X-ray image meet certain criteria; for example, is there an extreme difference in brightness in a suspicious region relative to other regions? A decision is then based on this property alone." (see Rouder and Ratcliff 2006) Prototype The prototype view of concept learning holds that people abstract out the central tendency (or prototype) of the examples experienced and use this as a basis for their categorization decisions. The prototype view of concept learning holds that people categorize based on one or more central examples of a given category followed by a penumbra of decreasingly typical examples. This implies that people do not categorize based on a list of things that all correspond to a definition, but rather on a hierarchical inventory based on semantic similarity to the central example(s). Exemplar Exemplar theory is the storage of specific instances (exemplars), with new objects evaluated only with respect to how closely they resemble specific known members (and nonmembers) of the category. This theory hypothesizes that learners store examples verbatim. This theory views concept learning as highly simplistic. Only individual properties are represented. These individual properties are not abstract and they do not create rules. An example of what exemplar theory might look like is, "water is wet". It is simply known that some (or one, or all) stored examples of water have the property wet. 
Exemplar-based theories have become more empirically popular over the years, with some evidence suggesting that human learners use exemplar-based strategies only in early learning, forming prototypes and generalizations later in life. An important result of exemplar models in the psychology literature has been a de-emphasis of complexity in concept learning. One of the best known exemplar theories of concept learning is the Generalized Context Model (GCM); a toy GCM-style computation is sketched below. A problem with exemplar theory is that exemplar models critically depend on two measures: similarity between exemplars, and having a rule to determine group membership. Sometimes it is difficult to attain or distinguish these measures. Multiple-prototype More recently, cognitive psychologists have begun to explore the idea that the prototype and exemplar models form two extremes. It has been suggested that people are able to form a multiple prototype representation, besides the two extreme representations. For example, consider the category 'spoon'. There are two distinct subgroups or conceptual clusters: spoons tend to be either large and wooden, or small and made of metal. The prototypical spoon would then be a medium-size object made of a mixture of metal and wood, which is clearly an unrealistic proposal. A more natural representation of the category 'spoon' would instead consist of multiple (at least two) prototypes, one for each cluster. A number of different proposals have been made in this regard (Anderson, 1991; Griffiths, Canini, Sanborn & Navarro, 2007; Love, Medin & Gureckis, 2004; Vanpaemel & Storms, 2008). These models can be regarded as providing a compromise between exemplar and prototype models. Explanation-based The basic idea of explanation-based learning suggests that a new concept is acquired by experiencing examples of it and forming a basic outline. Put simply, by observing or receiving the qualities of a thing, the mind forms a concept which possesses and is identified by those qualities. The original theory, proposed by Mitchell, Keller, and Kedar-Cabelli in 1986 and called explanation-based generalization, is that learning occurs through progressive generalizing. This theory was first developed to program machines to learn. When applied to human cognition, it translates as follows: the mind actively separates information that applies to more than one thing and enters it into a broader description of a category of things. This is done by identifying sufficient conditions for something to fit in a category, similar to schematizing. The revised model revolves around the integration of four mental processes – generalization, chunking, operationalization, and analogy. Generalization is the process by which the characteristics of a concept which are fundamental to it are recognized and labeled. For example, birds have feathers and wings. Anything with feathers and wings will be identified as 'bird'. When information is grouped mentally, whether by similarity or relatedness, the group is called a chunk. Chunks can vary in size from a single item with parts or many items with many parts. A concept is operationalized when the mind is able to actively recognize examples of it by characteristics and label it appropriately. Analogy is the recognition of similarities among potential examples. This particular theory of concept learning is relatively new and more research is being conducted to test it. 
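Returning to the exemplar view, the toy sketch below shows a Generalized-Context-Model-style computation (a simplified illustration with invented stimuli and a hand-picked sensitivity parameter, not the full GCM): stored exemplars are kept verbatim, similarity decays exponentially with distance, and category probabilities come from summed similarity.

import math

def gcm_probability(probe, exemplars, c=2.0):
    # Probability that the probe belongs to each category, from summed
    # similarity to stored exemplars; c is the sensitivity parameter.
    summed = {}
    for features, category in exemplars:
        distance = sum(abs(p - f) for p, f in zip(probe, features))
        summed[category] = summed.get(category, 0.0) + math.exp(-c * distance)
    total = sum(summed.values())
    return {cat: s / total for cat, s in summed.items()}

# Hypothetical exemplars: two-feature stimuli stored verbatim with labels
memory = [((0.1, 0.2), "A"), ((0.2, 0.1), "A"), ((0.9, 0.8), "B")]
print(gcm_probability((0.15, 0.15), memory))  # mostly category "A"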
Bayesian Taking a mathematical approach to concept learning, Bayesian theories propose that the human mind produces probabilities for a certain concept definition, based on examples it has seen of that concept. The prior probability of a hypothesis discourages definitions that are overly specific, while the likelihood of a hypothesis ensures the definition is not too broad. For example, say a child is shown three horses by a parent and told these are called "horses"; she needs to work out exactly what the adult means by this word. She is much more likely to define the word "horses" as referring to either this type of animal or all animals, rather than an oddly specific example like "all horses except Clydesdales", which would be an unnatural concept. Meanwhile, the likelihood of "horses" meaning "all animals" when the three animals shown are all very similar is low. The hypothesis that the word "horse" refers to all animals of this species is the most likely of the three possible definitions, as it has both a reasonable prior probability and a reasonable likelihood given the examples (a numerical sketch of this example appears below). Bayes' theorem is important because it provides a powerful tool for understanding, manipulating and controlling data that takes a larger view not limited to data analysis alone. The approach is subjective, and this requires the assessment of prior probabilities, making it also very complex. However, if Bayesians show that the accumulated evidence and the application of Bayes' law are sufficient, the work will overcome the subjectivity of the inputs involved. Bayesian inference can be used for any honestly collected data and has a major advantage because of its scientific focus. One model that incorporates the Bayesian theory of concept learning is the ACT-R model, developed by John R. Anderson. The ACT-R model is a programming language that defines the basic cognitive and perceptual operations that enable the human mind by producing a step-by-step simulation of human behavior. This theory exploits the idea that each task humans perform consists of a series of discrete operations. The model has been applied to learning and memory, higher level cognition, natural language, perception and attention, human-computer interaction, education, and computer generated forces. In addition to John R. Anderson, Joshua Tenenbaum has been a contributor to the field of concept learning; he studied the computational basis of human learning and inference using behavioral testing of adults, children, and machines, drawing on Bayesian statistics and probability theory, but also on geometry, graph theory, and linear algebra. Tenenbaum is working to achieve a better understanding of human learning in computational terms and trying to build computational systems that come closer to the capacities of human learners. Component display theory M. D. Merrill's component display theory (CDT) is a cognitive matrix that focuses on the interaction between two dimensions: the level of performance expected from the learner and the types of content of the material to be learned. Merrill classifies a learner's level of performance as: find, use, remember, and material content as: facts, concepts, procedures, and principles. The theory also calls upon four primary presentation forms and several other secondary presentation forms. The primary presentation forms include: rules, examples, recall, and practice. Secondary presentation forms include: prerequisites, objectives, helps, mnemonics, and feedback. 
A complete lesson includes a combination of primary and secondary presentation forms, but the most effective combination varies from learner to learner and also from concept to concept. Another significant aspect of the CDT model is that it allows for the learner to control the instructional strategies used and adapt them to meet his or her own learning style and preference. A major goal of this model was to reduce three common errors in concept formation: over-generalization, under-generalization and misconception. See also Sample exclusion dimension Notes and references Further reading Cognitive psychology Educational psychology Learning theory (education)
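As a numerical footnote to the Bayesian section above, the horse example can be sketched in a few lines. The snippet below is a minimal illustration (the hypothesis sizes and priors are invented) of Bayes' rule with a "size principle" likelihood, under which each observed example has probability 1/|h| under a hypothesis h covering |h| things:

# Hypothetical hypothesis space for the word "horses":
# (name, number of things the hypothesis covers, prior probability)
hypotheses = [
    ("this species of animal", 100, 0.50),
    ("all animals", 10000, 0.45),
    ("all horses except Clydesdales", 95, 0.05),  # unnatural, so low prior
]

n_examples = 3  # the child saw three horses, consistent with every hypothesis

posterior = {}
for name, size, prior in hypotheses:
    likelihood = (1.0 / size) ** n_examples  # the size principle
    posterior[name] = prior * likelihood

total = sum(posterior.values())
for name, p in posterior.items():
    print(name, round(p / total, 4))
# "this species of animal" wins: a reasonable prior AND a high likelihood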
Concept learning
[ "Biology" ]
4,084
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
6,968,491
https://en.wikipedia.org/wiki/Busy-hour%20call%20attempts
In telecommunications, busy-hour call attempts (BHCA) is a teletraffic engineering measurement used to evaluate and plan capacity for telephone networks. BHCA is the number of telephone calls attempted in the sliding 60-minute period during which the maximum total traffic load occurs in a given 24-hour period; the higher the BHCA, the higher the stress on the network processors. BHCA is not to be confused with busy-hour call completion (BHCC), which measures the throughput capacity of the network. If a bottleneck in the network exists with a capacity lower than the estimated BHCA, then congestion will occur, resulting in many failed calls and customer dissatisfaction. BHCA is usually used when planning telephone switching capacities and frequently goes side by side with the Erlang unit capacity calculation. As an example, a telephone exchange with a capacity of one million BHCA is estimated to handle 250,000 subscribers. The overall calculation is more complex, however, and involves accounting for available circuits, desired blocking rates, and the Erlang capacity allocated to each subscriber. Determination The busy hour is determined by fitting a horizontal line segment equivalent to one hour under the traffic load curve about the peak load point. If the service time interval is less than 60 minutes, the busy hour is the 60-minute interval that contains the service time interval. In cases where more than one busy hour occurs in a 24-hour period, i.e., when saturation occurs, the busy hour or hours most applicable to the particular situation are used. References See also Call volume (telecommunications) Calls per second Teletraffic Telecommunications engineering
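Finding the busy hour in raw data amounts to a sliding-window maximum over call-attempt timestamps. The sketch below is one illustrative way to do it (the trace values are invented); it scans attempt times, in seconds from midnight, and reports the 60-minute window containing the most attempts:

from bisect import bisect_left

def busy_hour(attempt_times):
    # Return (window_start, attempts) for the sliding 60-minute window
    # with the most call attempts; times are seconds into the day.
    attempt_times = sorted(attempt_times)
    best_start, best_count = 0.0, 0
    for i, start in enumerate(attempt_times):
        # count attempts in the window [start, start + 3600)
        count = bisect_left(attempt_times, start + 3600) - i
        if count > best_count:
            best_start, best_count = start, count
    return best_start, best_count

# Hypothetical trace: attempts clustered around 10:00 (36000 s)
trace = [35000, 36000, 36100, 36500, 37000, 39000, 50000]
print(busy_hour(trace))  # (35000, 5)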
Busy-hour call attempts
[ "Engineering" ]
341
[ "Electrical engineering", "Telecommunications engineering" ]
6,968,798
https://en.wikipedia.org/wiki/Headway
Headway is the distance or duration between vehicles in a transit system. The minimum headway is the shortest such distance or time achievable by a system without a reduction in the speed of vehicles. The precise definition varies depending on the application, but it is most commonly measured as the distance from the tip (front end) of one vehicle to the tip of the next one behind it. It can be expressed as the distance between vehicles, or as the time it will take for the trailing vehicle to cover that distance. A "shorter" headway signifies closer spacing between the vehicles. Airplanes operate with headways measured in hours or days, freight trains and commuter rail systems might have headways measured in parts of an hour, metro and light rail systems operate with headways on the order of 90 seconds to 20 minutes, and vehicles on a freeway can have as little as 2 seconds headway between them. Headway is a key input in calculating the overall route capacity of any transit system. A system that requires large headways has more empty space than passenger capacity, which lowers the total number of passengers or cargo quantity being transported for a given length of line (railroad or highway, for instance). In this case, the capacity has to be improved through the use of larger vehicles. On the other end of the scale, a system with short headways, like cars on a freeway, can offer relatively large capacities even though the vehicles carry few passengers. The term is most often applied to rail transport and bus transport, where low headways are often needed to move large numbers of people in mass transit railways and bus rapid transit systems. A lower headway requires more infrastructure, making lower headways expensive to achieve. Modern large cities require passenger rail systems with tremendous capacity, and low headways allow passenger demand to be met in all but the busiest cities. Newer signalling systems and moving block controls have significantly reduced headways in modern systems compared to the same lines only a few years ago. In principle, automated personal rapid transit systems and automobile platoons could reduce headways to as little as fractions of a second. Description Different measures There are a number of different ways to measure and express the same concept, the distance between vehicles. The differences are largely due to historical development in different countries or fields. The term developed from railway use, where the distance between the trains was very great compared to the length of the train itself. Measuring headway from the front of one train to the front of the next was simple and consistent with timetable scheduling of trains, but constraining tip-to-tip headway does not always ensure safety. In the case of a metro system, train lengths are uniformly short and the headway allowed for stopping is much longer, so tip-to-tip headway may be used with a minor safety factor. Where vehicle sizes vary and may be longer than their stopping distances or spacing, as with freight trains and highway applications, tip-to-tail measurements are more common. The units of measure also vary. The most common terminology is to use the time of passing from one vehicle to the next, which closely mirrors the way the headways were measured in the past. A timer is started when one train passes a point, and then measures time until the next one passes, giving the tip-to-tip time. 
This same measure can also be expressed in terms of vehicles-per-hour, which is used on the Moscow Metro for instance. Distance measurements are somewhat common in non-train applications, like vehicles on a road, but time measurements are common here as well. Railway examples Train movements in most rail systems are tightly controlled by railway signalling systems. In many railways, drivers are given instructions on speeds and routes through the rail network. Trains can only accelerate and decelerate relatively slowly, so stopping from anything but low speeds requires several hundred metres or even more. The track distance required to stop is often much longer than the range of the driver's vision. If the track ahead is obstructed, for example if a train is stopped there, then the train behind it will probably see it far too late to avoid a collision. Signalling systems serve to provide drivers with information on the state of the track ahead, so that a collision may be avoided. A side effect of this important safety function is that the headway of any rail system is effectively determined by the structure of the signalling system, and particularly the spacing between signals and the amount of information that can be provided in the signal. Rail system headways can be calculated from the signalling system. In practice there are a variety of different methods of keeping trains apart, some of which are manual, such as train order working or systems involving telegraphs, and others which rely entirely on signalling infrastructure to regulate train movements. Manual systems of working trains are common in areas with low numbers of train movements, and headways are more often discussed in the context of non-manual systems. For automatic block signalling (ABS), the headway is measured in minutes, and calculated from the time from the passage of a train to when the signalling system returns to full clear (proceed). It is not normally measured tip to tip. An ABS system divides the track into block sections, into which only one train can enter at a time. Commonly trains are kept two to three block sections apart, depending on how the signalling system is designed, and so the length of the block section will often determine the headway. Relying on visual contact to avoid collision (such as during shunting) is done only at low speeds, such as 40 km/h. A key safety factor of train operations is to space the trains out by at least the full stopping distance, the "brick-wall stop" criterion. In order to signal the trains in time to allow them to stop, the railways placed workmen on the lines who timed the passing of a train, and then signalled any following trains if a certain elapsed time had not passed. This is why train headways are normally measured as tip-to-tip times, because the clock was reset as the engine passed the workman. As remote signalling systems were invented, the workmen were replaced with signal towers at set locations along the track. This broke the track into a series of block sections between the towers. Trains were not allowed to enter a section until the signal said it was clear. This had the side-effect of limiting the maximum speed of the trains to the speed where they could stop in the distance of one block section. This was an important consideration for the Advanced Passenger Train in the United Kingdom, where the lengths of block sections limited speeds and demanded a new braking system be developed. There is no perfect block-section size for the block-control approach. 
Longer sections, using as few signals as possible, are advantageous because signals are expensive and are points of failure, and they allow higher speeds because the trains have more room to stop. On the other hand, they also increase the headway, and thus reduce the overall capacity of the line. These needs have to be balanced on a case-by-case basis. Other examples In the case of automobile traffic, the key consideration in braking performance is the user's reaction time. Unlike the train case, the stopping distance is generally much shorter than the spotting distance. That means that the driver will be matching their speed to the vehicle in front before they reach it, eliminating the "brick-wall" effect. Widely used numbers are that a car traveling at 60 mph will require about 225 feet to stop, a distance it will cover in just under 6 seconds. Nevertheless, highway travel often occurs with considerable safety with tip-to-tail headways on the order of 2 seconds. That's because the user's reaction time is about 1.5 seconds, so 2 seconds allows for a slight overlap that makes up for any difference in braking performance between the two cars. Various personal rapid transit systems in the 1970s considerably reduced the headways compared to earlier rail systems. Under computer control, reaction times can be reduced to fractions of a second. Whether traditional headway regulations should apply to PRT and car train technology is debatable. In the case of the Cabinentaxi system developed in Germany, headways were set to 1.9 seconds because the developers were forced to adhere to the brick-wall criterion. In experiments, they demonstrated headways on the order of half of a second. In 2017, in the UK, 66% of cars and Light Commercial Vehicles, and 60% of motorcycles left the recommended two-second gap between themselves and other vehicles. Low-headway systems Headway spacing is selected by various safety criteria, but the basic concept remains the same – leave enough time for the vehicle to safely stop behind the vehicle in front of it. The "safely stop" criterion has a non-obvious solution, however; if a vehicle follows immediately behind the one in front, the vehicle in front simply cannot stop quickly enough to damage the vehicle behind it. An example would be a conventional train, where the vehicles are held together and have only a few millimetres of "play" in the couplings. Even when the locomotive applies emergency braking, the cars following do not suffer any damage because they quickly close the gap in the couplings before the speed difference can build up. There have been many experiments with automated driving systems that follow this logic and greatly decrease headways to tenths or hundredths of a second in order to improve safety. Today, modern CBTC railway signalling systems are able to significantly reduce headway between trains in operation. Using automated "car follower" cruise control systems, vehicles can be formed into platoons (or flocks) that approximate the capacity of conventional trains. These systems were first employed as part of personal rapid transit research, and later used conventional cars with autopilot-like systems. Paris Métro Line 14 runs with headways as low as 85 seconds, while several lines of the Moscow Metro have peak hour headways of 90 seconds. Headway and route capacity Route capacity is defined by three figures: the number of passengers (or weight of cargo) per vehicle, the maximum safe speed of the vehicles, and the number of vehicles per unit time. 
Since the headway factors into two of the three inputs, it is a primary consideration in capacity calculations. The headway, in turn, is defined by the braking performance, or some external factor based on it, like block sizes. Following the methods in Anderson: Minimum safe headway The minimum safe headway measured tip-to-tail is defined by the braking performance: T_min = t_r + k(V/2)(1/a_f − 1/a_l), where: T_min is the minimum safe headway, in seconds; V is the speed of the vehicles; t_r is the reaction time, the maximum time it takes for a following vehicle to detect a malfunction in the leader and to fully apply the emergency brakes; a_f is the minimum braking deceleration of the follower; a_l is the maximum braking deceleration of the leader (for brick-wall considerations, a_l is infinite and this term is eliminated); and k is an arbitrary safety factor, greater than or equal to 1. The tip-to-tip headway is simply the tip-to-tail headway plus the length of the vehicle, expressed in time: T = T_min + L/V, where: T is the time for vehicle and headway to pass a point and L is the vehicle length. Capacity The vehicular capacity of a single lane of vehicles is simply the inverse of the tip-to-tip headway. This is most often expressed in vehicles per hour: n = 3600/T, where: n is the number of vehicles per hour and T is the tip-to-tip headway, in seconds. The passenger capacity of the lane is simply the product of vehicle capacity and the passenger capacity of the vehicles: p = 3600P/T, where: p is the number of passengers per hour and P is the maximum passenger capacity per vehicle. Examples Consider these examples: 1) freeway traffic, per lane: 100 km/h (~28 m/s) speeds, 4 passengers per vehicle, 4 meter vehicle length, 2.5 m/s^2 braking (1/4 g), 2 second reaction time, brick-wall stop, safety factor k of 1.5; T ≈ 10.5 seconds; p = 7,200 passengers per hour if 4 people per car and 2 seconds headway is assumed, or 342 passengers per hour if 1 person per car and 10.5 seconds headway is assumed. The headway used in reality is much less than 10.5 seconds, since the brick-wall principle is not used on freeways. In reality, 1.5 persons per car and 2 seconds headway can be assumed, giving 1800 cars or 2700 passengers per lane and hour. For comparison, Marin County, California (near San Francisco) reports that peak flow on the three-lane Highway 101 is about 7,200 vehicles per hour. This is about the same number of passengers per lane. Notwithstanding these formulas, it is widely known that reducing headway increases the risk of collision in standard private automobile settings and is often referred to as tailgating. 2) metro system, per line: 40 km/h (~11 m/s) speeds, 1000 passengers, 100 meter vehicle length, 0.5 m/s^2 braking, 2 second reaction time, brick-wall stop, safety factor k of 1.5; T ≈ 28 seconds; p = 130,000 passengers per hour. Note that most signalling systems used on metros place an artificial limit on headway that is not dependent on braking performance. Also the time needed for station stops limits the headway. Using a typical figure of 2 minutes (120 seconds): p = 30,000 passengers per hour. Since the headway of a metro is constrained by signalling considerations, not vehicle performance, reductions in headway through improved signalling have a direct impact on passenger capacity. For this reason, the London Underground system has spent a considerable amount of money on upgrading the SSR Network, Jubilee and Central lines with new CBTC signalling to reduce the headway from about 3 minutes to 1, while preparing for the 2012 Olympics. 
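These formulas drop straight into code. The sketch below uses the expressions as reconstructed above (so it should be checked against Anderson's originals before being relied upon) and reproduces the freeway and metro numbers before turning to the third example:

def min_headway(v, t_r, a_f, a_l=float("inf"), k=1.0, length=0.0):
    # Tip-to-tip minimum headway in seconds.
    # v: speed (m/s); t_r: reaction time (s); a_f: follower's minimum
    # braking deceleration (m/s^2); a_l: leader's maximum deceleration
    # (infinite for a brick-wall stop); k: safety factor; length: vehicle length (m).
    tip_to_tail = t_r + k * (v / 2) * (1 / a_f - 1 / a_l)
    return tip_to_tail + length / v

def passengers_per_hour(headway_s, passengers_per_vehicle):
    return 3600 / headway_s * passengers_per_vehicle

h_car = min_headway(v=27.8, t_r=2, a_f=2.5, k=1.5, length=4)
print(round(h_car, 1))                        # ~10.5 s
print(round(passengers_per_hour(h_car, 1)))   # ~343 passengers per hour

h_metro = min_headway(v=11.1, t_r=2, a_f=0.5, k=1.5, length=100)
print(round(h_metro))                             # ~28 s
print(round(passengers_per_hour(h_metro, 1000)))  # ~130,000 passengers per hour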
3) automated personal rapid transit system: 30 km/h (~8 m/s) speeds, 3 passengers, 3 meter vehicle length, 2.5 m/s^2 braking (1/4 g), 0.01 second reaction time, allowance for brake failure on the lead vehicle (1 m/s^2 slowing, but 2.5 m/s^2 if the lead vehicle brakes normally), and a safety factor k of 1.1; T ≈ 0.39 seconds; p = 28,000 passengers per hour. This number is similar to the ones proposed by the Cabinentaxi system, although they predicted that actual use would be much lower. Although PRTs have less passenger seating and lower speeds, their shorter headways dramatically improve passenger capacity. However, these systems are often constrained by brick-wall considerations for legal reasons, which limits their performance to a car-like 2 seconds. In this case: p = 5,400 passengers per hour. Headways and ridership Headways have an enormous impact on ridership levels above a certain critical waiting time. Following Boyle, the effect of changes in headway is directly proportional to changes in ridership by a simple conversion factor of 1.5. That is, if a headway is reduced from 12 to 10 minutes, the average rider wait time will decrease by 1 minute, the overall trip time by the same one minute, so the ridership increase will be on the order of 1 x 1.5 + 1, or about 2.5%. See also Ceder for an extensive discussion. References Notes Bibliography John Edward Anderson, "Transit Systems Theory", Lexington Books, 1978 John Edward Anderson, "The Capacity of a Personal Rapid Transit System", 13 May 1997 Daniel Boyle, "Fixed Route Transit Ridership Forecasting and Service Planning Methods", Synthesis of Transit Practice, Volume 66 (2006), Transportation Research Board, Jon Carnegie, Alan Voorhees and Paul Hoffman, "Viability of Personal Rapid Transit In New Jersey", February 2007 Avishai Ceder, "Public transit planning and operation: theory, modelling and practice", Butterworth-Heinemann, 2007, Tom Parkinson and Ian Fisher, "Rail Transit Capacity", Transportation Research Board, 1996, Rail technologies Public transport Temporal rates Transportation planning Scheduling (transportation)
Headway
[ "Physics" ]
3,289
[ "Temporal quantities", "Temporal rates", "Physical quantities" ]
6,968,894
https://en.wikipedia.org/wiki/Ultra%20sheer
Ultra sheer refers to very light deniers of stockings or pantyhose, usually 10 or less. The denier of a stocking refers to the thickness of the nylon yarn used in the fabric. The greater the denier, the more durable the material and less prone to tearing (or "getting a run"). Ultra sheer stockings have a very light transparency and a high sheen. References See also Net (textile) See-through clothing Lingerie Hosiery Transparent materials
Ultra sheer
[ "Physics" ]
99
[ "Physical phenomena", "Optical phenomena", "Materials", "Transparent materials", "Matter" ]
6,968,975
https://en.wikipedia.org/wiki/Kirkwood%20approximation
The Kirkwood superposition approximation was introduced in 1935 by John G. Kirkwood as a means of representing a discrete probability distribution. The Kirkwood approximation for a discrete probability density function P(x1, x2, …, xn) is given by P′(x1, x2, …, xn) = ∏_{i=1}^{n−1} [p_i]^((−1)^(n−1−i)), where p_i is the product of probabilities over all subsets of variables of size i in the variable set {x1, x2, …, xn}. This kind of formula has been considered by Watanabe (1960) and, according to Watanabe, also by Robert Fano. For the three-variable case, it reduces to simply P′(x1, x2, x3) = p(x1, x2) p(x1, x3) p(x2, x3) / [p(x1) p(x2) p(x3)]. The Kirkwood approximation does not generally produce a valid probability distribution (the normalization condition is violated). Watanabe claims that for this reason informational expressions of this type are not meaningful, and indeed there has been very little written about the properties of this measure. The Kirkwood approximation is the probabilistic counterpart of the interaction information. Judea Pearl (1988 §3.2.4) indicates that an expression of this type can be exact in the case of a decomposable model, that is, a probability distribution that admits a graph structure whose cliques form a tree. In such cases, the numerator contains the product of the intra-clique joint distributions and the denominator contains the product of the clique intersection distributions. References Jakulin, A. & Bratko, I. (2004), Quantifying and visualizing attribute interactions: An approach based on entropy, Journal of Machine Learning Research, (submitted) pp. 38–43. Discrete distributions Statistical approximations
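A quick numerical check makes both points concrete. The script below (an illustrative example with an arbitrary joint distribution over three binary variables, not from the source) builds the three-variable approximation from the pairwise and single-variable marginals and shows that it need not sum to 1:

import itertools
import random

random.seed(0)

# An arbitrary joint distribution over three binary variables
states = list(itertools.product([0, 1], repeat=3))
weights = [random.random() for _ in states]
total = sum(weights)
p = {s: w / total for s, w in zip(states, weights)}

def marginal(indices):
    # Marginal distribution over the given variable indices.
    m = {}
    for s, prob in p.items():
        key = tuple(s[i] for i in indices)
        m[key] = m.get(key, 0.0) + prob
    return m

p12, p13, p23 = marginal([0, 1]), marginal([0, 2]), marginal([1, 2])
p1, p2, p3 = marginal([0]), marginal([1]), marginal([2])

def kirkwood(s):
    # Three-variable Kirkwood superposition approximation.
    num = p12[(s[0], s[1])] * p13[(s[0], s[2])] * p23[(s[1], s[2])]
    den = p1[(s[0],)] * p2[(s[1],)] * p3[(s[2],)]
    return num / den

print(sum(kirkwood(s) for s in states))  # generally not exactly 1.0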
Kirkwood approximation
[ "Mathematics" ]
308
[ "Statistical approximations", "Mathematical relations", "Approximations" ]
6,969,110
https://en.wikipedia.org/wiki/Citrinitas
Citrinitas, or sometimes xanthosis, is a term given by alchemists to "yellowness." It is one of the four major stages of the alchemical magnum opus. In alchemical philosophy, citrinitas stood for the dawning of the "solar light" inherent in one's being, and that the reflective "lunar or soul light" was no longer necessary. The other three alchemical stages were nigredo (blackness), albedo (whiteness), and rubedo (redness). Psychologist Carl Jung is credited with interpreting the alchemical process as analogous to modern-day psychoanalysis. In the Jungian archetypal schema, nigredo is the Shadow; albedo refers to the anima and animus (contrasexual soul images); citrinitas is the wise old man (or woman) archetype; and rubedo is the Self archetype which has achieved wholeness. References Nigel Hamilton (1985), The Alchemical Process of Transformation C. G. Jung, Psychology and Alchemy 2nd. ed. (Transl. by R. F. C. Hull) E. J. Holmyard, Alchemy New York. Dower Publications. 1990 Notes Alchemical processes
Citrinitas
[ "Chemistry" ]
264
[ "Alchemical processes" ]
6,969,587
https://en.wikipedia.org/wiki/Small%20intensely%20fluorescent%20cell
Small intensely fluorescent cells (SIF cells) are the interneurons of the sympathetic ganglia (postganglionic neurons) of the sympathetic division of the autonomic nervous system (ANS). The neurotransmitter for these cells is dopamine. They are a neural crest derivative and share a common sympathoadrenal precursor cell with sympathetic neurons and chromaffin cells (adrenal medulla). Although an autonomic ganglion is the site where preganglionic fibers synapse on postganglionic neurons, the presence of small interneurons has been recognized. These cells exhibit catecholamine fluorescence and are referred to as small intensely fluorescent (SIF) cells. In some ganglia, these interneurons receive preganglionic cholinergic fibers and may modulate ganglionic transmission. In other ganglia, they receive collateral branches and may serve some integrative function. Many SIF cells contain dopamine, which is thought to be their transmitter. References Further reading Cell biology Autonomic nervous system
Small intensely fluorescent cell
[ "Biology" ]
227
[ "Cell biology" ]
6,969,727
https://en.wikipedia.org/wiki/High-dimensional%20model%20representation
High-dimensional model representation is a finite expansion for a given multivariable function. The expansion was first described by Ilya M. Sobol as f(x1, …, xn) = f_0 + Σ_i f_i(x_i) + Σ_{i<j} f_ij(x_i, x_j) + … + f_{12…n}(x1, …, xn). The method used to determine the right hand side functions is given in Sobol's paper. A review can be found here: High Dimensional Model Representation (HDMR): Concepts and Applications. The underlying logic behind the HDMR is to express all variable interactions in a system in a hierarchical order. For instance, f_0 represents the mean response of the model f. It can be considered as measuring what is left from the model after stripping down all variable effects. The univariate functions f_i(x_i), however, represent the "individual" contributions of the variables. For instance, f_1(x_1) is the portion of the model that can be controlled only by the variable x1. For this reason, there cannot be any constant in f_1(x_1), because all constants are expressed in f_0. Going further into higher interactions, the next step is the bivariate functions f_ij(x_i, x_j), which represent the cooperative effect of variables x_i and x_j together. Similar logic applies here: the bivariate functions do not contain univariate functions or constants, as that violates the construction logic of HDMR. As we go into higher interactions, the number of interaction terms increases, and at last we reach the residual term representing the contribution only when all variables act together. HDMR as an Approximation The hierarchical representation model of HDMR brings an advantage if one needs to replace an existing model with a simpler one usually containing only univariate or bivariate terms. If the target model does not contain higher levels of variable interaction, this approach can yield good approximations with the additional advantage of providing a clearer view of variable interactions. See also Variance-based sensitivity analysis Volterra series References Functions and mappings
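To make the hierarchy concrete, the sketch below is a minimal Monte Carlo illustration with an invented test function (it follows the same averaging logic as the expansion, but is not Sobol's own determination procedure): it estimates f0 as the mean response and a first-order term as a conditional average minus f0, for a toy model on the unit cube:

import random

random.seed(1)

def f(x1, x2, x3):
    # Toy model: additive parts plus one pairwise interaction.
    return 1.0 + 2.0 * x1 + x2 * x3

N = 200000
samples = [(random.random(), random.random(), random.random()) for _ in range(N)]

# f0: the mean response over the whole input domain
f0 = sum(f(*s) for s in samples) / N
print(round(f0, 2))  # ~2.25, i.e. 1 + 2*0.5 + 0.5*0.5

# f1(x1): the average over the other variables, minus f0
def f1(x1, n=50000):
    mean = sum(f(x1, random.random(), random.random()) for _ in range(n)) / n
    return mean - f0

print(round(f1(0.0), 2))  # ~ -1.0, matching 2*x1 - 1 at x1 = 0
print(round(f1(1.0), 2))  # ~ +1.0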
High-dimensional model representation
[ "Mathematics" ]
358
[ "Mathematical analysis", "Functions and mappings", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations" ]
6,970,644
https://en.wikipedia.org/wiki/Phono%20input
Phono input is a set of input jacks, usually mini jacks or RCA connectors, located on the rear panel of a preamp, mixer or amplifier, especially on early radio sets, to which a phonograph or turntable is attached. Modern phono cartridges give a very low-level output signal, on the order of a few millivolts, which the circuitry amplifies and equalizes. Phonograph recordings are made with the high frequencies boosted and the low frequencies attenuated; during playback these frequency response changes are reversed. This reduces background noise, including clicks or pops, and also conserves the amount of physical space needed for each groove, by reducing the size of the larger low-frequency undulations. This is accomplished in the amplifier with a phono input that incorporates standardized RIAA equalization circuitry. Through at least the 1980s, the phono input was widely available on consumer stereo equipment—even some larger boomboxes had one. By the 2000s only very sophisticated and expensive stereo receivers retained the phono input, since most users were expected to use digital sources such as CD or satellite radio. Some newer low-cost turntables include built-in amplifiers to produce line-level (one volt) outputs; devices are available that perform this conversion for use with computers, and older amplifiers or radio receivers can also be used. Nearly all DJ mixers have two or more phono inputs, together with two or more one-volt line inputs that also use RCA connectors. This phono input, designed for the millivolt signal from an unamplified turntable, should not be confused with the modern standard one-volt line input and output that also uses RCA connectors and is found on video cameras, recorders and similar modern equipment. References Audio engineering Audiovisual connectors
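As a numerical sketch of that standardized curve, the RIAA playback de-emphasis can be computed from its three published time constants (3180 µs, 318 µs, 75 µs); the snippet below is illustrative and normalizes the gain to 0 dB at 1 kHz:

```python
import numpy as np

T1, T2, T3 = 3180e-6, 318e-6, 75e-6  # RIAA time constants in seconds

def riaa_playback_db(f):
    # Playback transfer function: poles at T1 and T3, zero at T2.
    s = 2j * np.pi * f
    h = (1 + s * T2) / ((1 + s * T1) * (1 + s * T3))
    return 20 * np.log10(abs(h))

ref = riaa_playback_db(1000.0)  # normalize to 0 dB at 1 kHz
for f in (20, 100, 1000, 10_000, 20_000):
    print(f"{f:>6} Hz: {riaa_playback_db(f) - ref:+6.1f} dB")
```

The printout shows the roughly +19 dB of bass boost at 20 Hz and roughly 20 dB of treble cut at 20 kHz that the phono stage applies on playback.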
Phono input
[ "Engineering" ]
373
[ "Electrical engineering", "Audio engineering" ]
6,970,787
https://en.wikipedia.org/wiki/Autler%E2%80%93Townes%20effect
In spectroscopy, the Autler–Townes effect (also known as the AC Stark effect) is a dynamical Stark effect corresponding to the case when an oscillating electric field (e.g., that of a laser) is tuned in resonance (or close) to the transition frequency of a given spectral line, resulting in a change of the shape of the absorption/emission spectra of that spectral line. The AC Stark effect was discovered in 1955 by American physicists Stanley Autler and Charles Townes. It is the AC equivalent of the static Stark effect, which splits the spectral lines of atoms and molecules in a constant electric field. Compared to its DC counterpart, the AC Stark effect is computationally more complex. While generally referring to atomic spectral shifts due to AC fields at any (single) frequency, the effect is more pronounced when the field frequency is close to that of a natural atomic or molecular dipole transition. In this case, the alternating field has the effect of splitting the two bare transition states into doublets or "dressed states" that are separated by the Rabi frequency. Alternatively, this can be described as a Rabi oscillation between the bare states, which are no longer eigenstates of the atom–field Hamiltonian. The resulting fluorescence spectrum is known as a Mollow triplet. The AC Stark splitting is integral to several phenomena in quantum optics, such as electromagnetically induced transparency and Sisyphus cooling. Vacuum Rabi oscillations have also been described as a manifestation of the AC Stark effect from atomic coupling to the vacuum field. History The AC Stark effect was discovered in 1955 by American physicists Stanley Autler and Charles Townes while at Columbia University and Lincoln Labs at the Massachusetts Institute of Technology. Before the availability of lasers, the AC Stark effect was observed with radio frequency sources. Autler and Townes' original observation of the effect used a radio frequency source tuned to 12.78 and 38.28 MHz, corresponding to the separation between two doublet microwave absorption lines of OCS. The notion of quasi-energy in treating the general AC Stark effect was later developed by Nikishov and Ritus in 1964 and onward. This more general method of approaching the problem developed into the "dressed atom" model describing the interaction between lasers and atoms. Prior to the 1970s there were various conflicting predictions concerning the fluorescence spectra of atoms due to the AC Stark effect at optical frequencies. In 1974 the observation of Mollow triplets verified the form of the AC Stark effect using visible light. General semiclassical approach In a semiclassical model where the electromagnetic field is treated classically, a system of charges in a monochromatic electromagnetic field has a Hamiltonian that can be written as: $H(t) = \sum_j \frac{1}{2m_j}\left[\mathbf{p}_j - \frac{q_j}{c}\mathbf{A}(\mathbf{r}_j, t)\right]^2 + V(\mathbf{r})$, where $\mathbf{r}_j$, $\mathbf{p}_j$, $m_j$, and $q_j$ are respectively the position, momentum, mass, and charge of the $j$-th particle, and $c$ is the speed of light. The vector potential of the field, $\mathbf{A}$, satisfies $\mathbf{A}(t + \tau) = \mathbf{A}(t)$ with period $\tau = 2\pi/\omega$. The Hamiltonian is thus also periodic: $H(t + \tau) = H(t)$. Now, the Schrödinger equation under a periodic Hamiltonian, $i\hbar\,\partial_t\,\psi(\mathbf{r}, t) = H(t)\,\psi(\mathbf{r}, t)$, is a linear homogeneous differential equation with periodic coefficients, where $\mathbf{r}$ here represents all coordinates. Floquet's theorem guarantees that the solutions to an equation of this form can be written as $\psi(\mathbf{r}, t) = e^{-iE_b t/\hbar}\,\phi(\mathbf{r}, t)$. Here, $E_b$ is the "bare" energy for no coupling to the electromagnetic field, and $\phi(\mathbf{r}, t)$ has the same time-periodicity as the Hamiltonian, $\phi(\mathbf{r}, t) = \phi(\mathbf{r}, t + \tau)$, with $\omega$ the angular frequency of the field.
Because of its periodicity, it is often further useful to expand $\phi(\mathbf{r}, t)$ in a Fourier series, obtaining $\phi(\mathbf{r}, t) = \sum_{k=-\infty}^{\infty} e^{-ik\omega t}\,\phi_k(\mathbf{r})$, or $\psi(\mathbf{r}, t) = \sum_{k=-\infty}^{\infty} e^{-i(E_b + k\hbar\omega)t/\hbar}\,\phi_k(\mathbf{r})$, where $\omega$ is the angular frequency of the laser field. The solution for the joint particle-field system is, therefore, a linear combination of stationary states of energy $E_b + k\hbar\omega$, which is known as a quasi-energy state, and the new set of energies is called the spectrum of quasi-harmonics. Unlike the DC Stark effect, where perturbation theory is useful in a general case of atoms with infinite bound states, obtaining even a limited spectrum of shifted energies for the AC Stark effect is difficult in all but simple models, although calculations for systems such as the hydrogen atom have been done. Examples General expressions for AC Stark shifts must usually be calculated numerically and tend to provide little insight. However, there are important individual examples of the effect that are informative. Analytical solutions in these specific cases are usually obtained assuming the detuning is small compared to a characteristic frequency of the radiating system. Two level atom dressing An atom driven by an electric field of frequency $\omega$ close to an atomic transition frequency $\omega_0$ (that is, when the detuning $\Delta = \omega - \omega_0$ satisfies $|\Delta| \ll \omega_0$) can be approximated as a two level quantum system, since the off-resonance states have low occupation probability. The Hamiltonian can be divided into the bare atom term plus a term for the interaction with the field as $H = H_0 + H_{\mathrm{int}}$. In an appropriate rotating frame, and making the rotating wave approximation, this reduces to $H = -\hbar\Delta\,|e\rangle\langle e| + \frac{\hbar\Omega}{2}\left(|e\rangle\langle g| + |g\rangle\langle e|\right)$, where $\Omega$ is the Rabi frequency and $|g\rangle$, $|e\rangle$ are the strongly coupled bare atom states. The energy eigenvalues are $E_\pm = -\frac{\hbar\Delta}{2} \pm \frac{\hbar}{2}\sqrt{\Delta^2 + \Omega^2}$, and for small detuning, $E_\pm \approx \pm\frac{\hbar\Omega}{2}$. The eigenstates of the atom-field system, or dressed states, are dubbed $|+\rangle$ and $|-\rangle$. The result of the AC field on the atom is thus to shift the strongly coupled bare atom energy eigenstates into two states $|+\rangle$ and $|-\rangle$ which are now separated by $\hbar\sqrt{\Delta^2 + \Omega^2}$. Evidence of this shift is apparent in the atom's absorption spectrum, which shows two peaks around the bare transition frequency, separated by the Rabi splitting (Autler-Townes splitting). The modified absorption spectrum can be obtained by a pump-probe experiment, wherein a strong pump laser drives the bare transition while a weaker probe laser sweeps for a second transition between a third atomic state and the dressed states. Another consequence of the AC Stark splitting here is the appearance of Mollow triplets, a triple peaked fluorescence profile. Historically an important confirmation of Rabi flopping, they were first predicted by Mollow in 1969 and confirmed experimentally in the 1970s. Optical Dipole Trap (Far-Off-Resonance Trap) For ultracold atom experiments utilizing the optical dipole force from the AC Stark shift, the light is usually linearly polarized to avoid the splitting of different magnetic substates with different $m_F$, and the light frequency is often far detuned from the atomic transition to avoid heating the atoms from photon-atom scattering; in turn, the intensity of the light field (i.e. AC electric field) is typically high to compensate for the large detuning. Typically, we have $|\Delta| \gg \Gamma$, where the atomic transition has a natural linewidth $\Gamma$ and a saturation intensity $I_{\mathrm{sat}} = \frac{\pi h c \Gamma}{3\lambda^3}$, with $\lambda$ the transition wavelength. Note the above expression for saturation intensity does not apply to all cases. For example, the above applies for the D2 line transition of Li-6, but not the D1 line, which obeys a different sum rule in calculating the oscillator strength. As a result, the D1 line has a saturation intensity 3 times larger than the D2 line.
However, when the detuning from these two lines is much larger than the fine-structure splitting, the overall saturation intensity takes the value of the D2 line. In the case where the light's detuning is comparable to the fine-structure splitting but still much larger than the hyperfine splitting, the D2 line contributes twice as much dipole potential as the D1 line. The optical dipole potential is therefore: $U_{\mathrm{dip}} = \frac{\hbar\Omega^2}{4\Delta} = \frac{\hbar\Gamma^2}{8\Delta}\frac{I}{I_{\mathrm{sat}}} = -\frac{1}{2\epsilon_0 c}\operatorname{Re}(\alpha)\,I$. Here, the Rabi frequency $\Omega$ is related to the (dimensionless) saturation parameter $s = I/I_{\mathrm{sat}}$ by $\Omega^2 = s\,\Gamma^2/2$, and $\operatorname{Re}(\alpha)$ is the real part of the complex polarizability of the atom, with its imaginary counterpart representing the dissipative optical scattering force. The factor of 1/2 takes into account that the dipole moment is an induced, not a permanent one. When $|\Delta| \ll \omega_0$, the rotating wave approximation applies, and the counter-rotating term proportional to $1/(\omega_0 + \omega)$ can be omitted; however, in some cases, the ODT light is so far detuned that the counter-rotating term must be included in calculations, as well as contributions from adjacent atomic transitions with appreciable linewidth $\Gamma$. Note that the natural linewidth $\Gamma$ here is in radians per second, and is the inverse of the excited-state lifetime $\tau$. This is the principle of operation for the Optical Dipole Trap (ODT, also known as Far Off Resonance Trap, FORT), in which case the light is red-detuned ($\Delta < 0$). When blue-detuned ($\Delta > 0$), the light beam provides a potential bump/barrier instead. The optical dipole potential is often expressed in terms of the recoil energy, which is the kinetic energy imparted to an atom initially at rest by "recoil" during the spontaneous emission of a photon: $E_{\mathrm{rec}} = \frac{\hbar^2 k^2}{2m}$, where $k$ is the wavevector of the ODT light. The recoil energy, along with the related recoil frequency $\omega_{\mathrm{rec}} = E_{\mathrm{rec}}/\hbar$, are crucial parameters in understanding the dynamics of atoms in light fields, especially in the context of atom optics and momentum transfer. In applications that utilize the optical dipole force, it is common practice to use a far-off-resonance light frequency. This is because a smaller detuning would increase the photon-atom scattering rate much faster than it increases the dipole potential energy, leading to undesirable heating of the atoms. Quantitatively, the scattering rate is given by: $\Gamma_{\mathrm{sc}} = \frac{\Gamma}{\hbar\Delta}\,U_{\mathrm{dip}} \propto \frac{I}{\Delta^2}$. Adiabatic elimination In a quantum system with three (or more) states, where a transition from one level, $|1\rangle$, to another, $|2\rangle$, can be driven by an AC field, but $|2\rangle$ only decays to states other than $|1\rangle$, the dissipative influence of the spontaneous decay can be eliminated. This is achieved by increasing the AC Stark shift on $|1\rangle$ through large detuning and raising the intensity of the driving field. Adiabatic elimination has been used to create comparatively stable effective two level systems in Rydberg atoms, which are of interest for qubit manipulations in quantum computing. Electromagnetically induced transparency Electromagnetically induced transparency (EIT), which gives some materials a small transparent area within an absorption line, can be thought of as a combination of Autler-Townes splitting and Fano interference, although the distinction may be difficult to determine experimentally. While both Autler-Townes splitting and EIT can produce a transparent window in an absorption band, EIT refers to a window that maintains transparency in a weak pump field, and thus requires Fano interference. Because Autler-Townes splitting will wash out Fano interference at stronger fields, a smooth transition between the two effects is evident in materials exhibiting EIT.
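To make the two-level dressing concrete, the rotating-frame Hamiltonian above can be diagonalized numerically. The minimal Python sketch below (illustrative parameter values, units with $\hbar = 1$) reproduces the dressed-state splitting $\sqrt{\Delta^2 + \Omega^2}$:

```python
import numpy as np

def dressed_energies(delta, rabi):
    # Two-level RWA Hamiltonian in the rotating frame (hbar = 1):
    # basis {|g>, |e>}, detuning delta, Rabi frequency rabi.
    h = np.array([[0.0,        rabi / 2.0],
                  [rabi / 2.0, -delta    ]])
    return np.linalg.eigvalsh(h)  # ascending: E_minus, E_plus

rabi = 1.0
for delta in (-3.0, -1.0, 0.0, 1.0, 3.0):
    e_lo, e_hi = dressed_energies(delta, rabi)
    print(f"delta={delta:+.1f}  E-={e_lo:+.3f}  E+={e_hi:+.3f}  "
          f"split={e_hi - e_lo:.3f}")  # equals sqrt(delta^2 + rabi^2)
```

On resonance (delta = 0) the splitting equals the Rabi frequency, which is the Autler-Townes doublet separation discussed above.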
See also Stark effect Stark spectroscopy Electromagnetically induced transparency Fano interference Rabi cycle References Further reading Cohen-Tannoudji et al., Quantum Mechanics, Vol 2, p 1358, trans. S. R. Hemley et al., Hermann, Paris 1977 Atomic physics Quantum optics
Autler–Townes effect
[ "Physics", "Chemistry" ]
2,168
[ "Quantum optics", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
6,971,691
https://en.wikipedia.org/wiki/Jessica%20Mink
Jessica Mink (formerly Douglas John Mink) is an American software developer and a data archivist at the Center for Astrophysics Harvard & Smithsonian. She was part of the team that discovered the rings around the planet Uranus. Early life and career Mink was born in Lincoln, Nebraska, in 1951 and graduated from Dundee Community High School in 1969. She earned an S.B. degree (1973) and an S.M. degree (1974) in Planetary Science from the Massachusetts Institute of Technology (MIT). She worked at Cornell University from 1976 to 1979 as an astronomical software developer. It was during this time that she was part of the team that discovered the rings around Uranus. Within the team she was responsible for the data reduction software and the data analysis. After working at Cornell she moved back to MIT, where she did work that contributed to the discovery of the rings of Neptune. She has written a number of commonly used software packages for astrophysics, including WCSTools and RVSAO. Despite not having a PhD, Mink is a member of the American Astronomical Society and the International Astronomical Union. Personal life Mink is an avid cyclist. She has served as an officer and director of the Massachusetts Bicycle Coalition and has been the route planner for the Massachusetts portion of the East Coast Greenway since 1991. Mink is a transgender woman, and she publicly came out in 2011 at the age of 60. She has since spoken out about her experiences transitioning. She was also featured in two articles about the experiences of transitioning in a professional environment. She was a co-organizer of the 2015 Inclusive Astronomy conference at Vanderbilt University. Mink lives in Massachusetts and has a daughter. References External links Jessica Mink's Homepage 1951 births Living people People from Lincoln, Nebraska Harvard University staff Massachusetts Institute of Technology alumni LGBTQ people from Nebraska American LGBTQ scientists American transgender women Transgender scientists American planetary scientists American women planetary scientists LGBTQ astronomers
Jessica Mink
[ "Astronomy" ]
402
[ "Astronomers", "LGBTQ astronomers" ]
6,972,416
https://en.wikipedia.org/wiki/Shift%20register%20lookup%20table
A shift register lookup table, also shift register LUT or SRL, refers to a component in digital circuitry. It is essentially a shift register of variable length. The length of the SRL is set by driving address pins high or low, and can be changed dynamically if necessary. The SRL component is used in FPGA devices, where it can serve as a programmable delay element. See also Lookup table Shift register References Digital electronics Electronic engineering Digital systems Logic gates Computer memory Digital registers
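The behaviour is easy to model in software. A minimal Python sketch of a 16-stage SRL follows; it is a behavioural illustration only, not the exact timing of any vendor primitive:

```python
from collections import deque

class ShiftRegisterLUT:
    """16-stage shift register whose output taps the stage selected
    by a 4-bit address, giving a dynamically adjustable delay."""
    def __init__(self, depth=16):
        self.stages = deque([0] * depth, maxlen=depth)

    def clock(self, data_in, addr):
        self.stages.appendleft(data_in)  # shift on the clock edge
        return self.stages[addr]         # tap chosen by address pins

srl = ShiftRegisterLUT()
bits = [1, 0, 1, 1, 0, 0, 1, 0]
# With addr=2 each output equals the input from two clocks earlier.
print([srl.clock(b, addr=2) for b in bits])  # [0, 0, 1, 0, 1, 1, 0, 0]
```

Changing addr between clocks changes the delay on the fly, which is how the SRL serves as a programmable delay element.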
Shift register lookup table
[ "Technology", "Engineering" ]
106
[ "Computer engineering", "Digital electronics", "Digital systems", "Information systems", "Electronic engineering", "Electrical engineering" ]
6,973,036
https://en.wikipedia.org/wiki/List%20of%20Sun%20Microsystems%20employees
Sun Microsystems, from its inception in 1982 to its acquisition by Oracle Corporation in 2010, became known for being "something of a farm system for Silicon Valley." It had a number of employees credited with notable achievements before, during or after their tenure there. A Brian Aker, MySQL Director of Technology Ken Arnold, Sun Microsystems Laboratories, co-author of The Java Programming Language B Carol Bartz, head of SunFed, Sun service and worldwide operations; Autodesk CEO, Yahoo! CEO Andy Bechtolsheim, Sun co-founder, systems designer and Silicon Valley investor Joshua Bloch, author of Effective Java Jeff Bonwick, creator of the slab allocator, vmem and ZFS Jon Bosak, chair of the original XML working group Steve Bourne, creator of the Bourne shell Tim Bray, Sun Director of Web Technologies David J. Brown, SUN workstation at Stanford; Solaris at Sun Paul Buchheit, engineer at Sun from May 1997 to August 1997; creator of Gmail C Bryan Cantrill, one of Technology Review's 2005 "Top 35 Young Innovators", co-inventor of DTrace Alfred Chuang, co-founder of BEA Systems Danny Cohen, co-creator of the Cohen–Sutherland line clipping algorithm; coined the computer terms "Big Endians" and "Little Endians" (Endianness) Bill Coleman, co-founder of BEA Systems Danese Cooper, open source specialist D James Duncan Davidson, creator of the Tomcat web container and the Ant build tool L. Peter Deutsch, founder of Aladdin Enterprises and creator of Ghostscript Whitfield Diffie, Chief Security Officer, co-inventor of public-key cryptography Robert Drost, one of Technology Review's 2004 "Top 100 Young Innovators" F Dan Farmer, computer security researcher Marc Fleury, creator of the JBoss application server Ned Freed, email systems researcher, co-author of several MIME RFCs G Richard P. Gabriel, Lisp expert and founder of Lucid, Inc.
John Gage, Chief Researcher and former Science Officer; first Sun salesman John Gilmore, co-founder of the Electronic Frontier Foundation and Cygnus Solutions Gary Ginstling, music industry executive James Gosling, co-inventor of Java; creator of the NeWS networked extensible window system; author of the first (proprietary) Unix implementation of the Emacs text editor Todd Greanier, software architect, author and instructor Brendan Gregg, author of DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X and FreeBSD and Systems Performance: Enterprise and the Cloud J Kim Jones, Vice President of Global Education, Government and Health Sciences; CEO of Sun UK from 2007; CEO of Curriki Bill Joy, Sun co-founder and architect of BSD Unix; author of the vi text editor K Vinod Khosla, Sun co-founder and Silicon Valley investor L Susan Landau, mathematician and cybersecurity expert Adam Leventhal, co-inventor of DTrace Peter van der Linden, former manager of the kernel group, author of numerous Java and C books M Chris Malachowsky, co-founder of NVIDIA Clark Masters, EVP of Enterprise Systems and "father of the E10K"; President of SunFed Craig McClanahan, creator of the Apache Struts framework and architect of Tomcat's servlet container, Catalina Scott McNealy, co-founder and Chairman of the Board of Sun; CEO from 1984 to 2006 Larry McVoy, CEO of BitMover Björn Michaelsen, Director at The Document Foundation Mårten Mickos, CEO of MySQL AB from 2001 until its acquisition by Sun in 2008 Jim Mitchell, Vice President and Sun Fellow Ian Murdock, Vice President of Developer and Community Marketing, founder of Debian N Satya Nadella, CEO of Microsoft Patrick Naughton, co-creator of Java Jakob Nielsen, web-design usability authority Peter Norvig, Director of Research, Google O John Ousterhout, inventor of the Tcl scripting language P Greg Papadopoulos, Executive Vice President and CTO Radia Perlman, sometimes known as the "Mother of the Internet" Simon Phipps, Chief Open Source Officer Kim Polese, prominent dot-com era executive Curtis Priem, co-founder of NVIDIA R George Reyes, former CFO of Google, Inc. David S. H. Rosenthal, early X Window System developer and original designer of the ICCCM Wayne Rosing, project lead for the Apple Lisa; Sun hardware development manager and manager of Sun Labs S Bob Scheifler, leader of X Window System development from 1984 to 1996 Eric Schmidt, former Sun Chief Technology Officer, chairman and former CEO of Google, Inc., and co-developer of lex Jonathan I. Schwartz, former Sun President and CEO Ed Scott, co-founder of BEA Systems Mike Shapiro, co-inventor of DTrace Bob Sproull, computer graphics pioneer Guy L. Steele, Jr., co-inventor of the Scheme programming language and member of IEEE standards committees of many programming languages Bert Sutherland, manager of Sun Labs, Xerox PARC, BBN Computer Science Division Ivan Sutherland, computer graphics pioneer T Bruce Tognazzini, computer usability consultant Marc Tremblay, microprocessor architect and Sun's employee with the most awarded patents Bud Tribble, former VP of software development at NeXT, VP of software technology at Apple W Jim Waldo, lead architect of Jini Michael Widenius, original author of MySQL Y William Yeager, software architect, inventor of the multi-protocol router Z Ed Zander, former president of Sun Microsystems; former CEO of Motorola References Lists of people by company Computing-related lists Employees San Francisco Bay Area-related lists Employees by company
List of Sun Microsystems employees
[ "Technology" ]
1,195
[ "Computing-related lists" ]
6,973,644
https://en.wikipedia.org/wiki/440%20%28number%29
440 (four hundred [and] forty) is the natural number following 439 and preceding 441. In mathematics 440 has the factorization 2³ × 5 × 11. 440 is: Even The sum of the first 17 prime numbers A Harshad number An abundant number A happy number In science A440 (pitch standard) widely used for the musical note A4, the A above Middle C, often used for tuning instruments References Integers
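The listed properties are easy to verify computationally; the short Python check below is illustrative:

```python
def is_happy(n):
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

primes, k = [], 2
while len(primes) < 17:            # first 17 primes: 2, 3, ..., 59
    if all(k % p for p in primes):
        primes.append(k)
    k += 1

print(sum(primes))                                    # 440
print(440 % sum(map(int, "440")) == 0)                # Harshad: True
print(sum(d for d in range(1, 440) if 440 % d == 0))  # 640 > 440: abundant
print(is_happy(440))                                  # True
```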
440 (number)
[ "Mathematics" ]
82
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
6,973,815
https://en.wikipedia.org/wiki/Avonite
Avonite Surfaces is an acrylic solid surface material brand. External links "Slabs of Color", Popular Science, June 1989 "High-Tech Countertop: How to build a new countertop with the latest material", Popular Mechanics, April 1990 Cabinets and Countertops, Taunton Press, 2006 "A Solid History: Reviewing 40 Years Of Solid Surface", Surface Fabrication, November 2007 The professional practice of architectural detailing, Wiley, 1987 Building materials Kitchen countertops
Avonite
[ "Physics", "Engineering" ]
95
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
6,974,596
https://en.wikipedia.org/wiki/Land%20cover
Land cover is the physical material at the land surface of Earth. Land covers include flora, concrete, built structures, bare ground, and temporary water. Earth cover is the expression used by ecologist Frederick Edward Clements; its closest modern equivalent is vegetation. The expression continues to be used by the United States Bureau of Land Management. There are two primary methods for capturing information on land cover: field survey and analysis of remotely sensed imagery. Land change models can be built from these types of data to assess changes in land cover over time. One of the major land cover issues (as with all natural resource inventories) is that every survey defines similarly named categories in different ways. For instance, there are many definitions of "forest"—sometimes within the same organisation—that may or may not incorporate a number of different forest features (e.g., stand height, canopy cover, strip width, inclusion of grasses, and rates of growth for timber production). Areas without trees may be classified as forest cover "if the intention is to re-plant" (UK and Ireland), while areas with many trees may not be labelled as forest "if the trees are not growing fast enough" (Norway and Finland). Distinction from "land use" "Land cover" is distinct from "land use", despite the two terms often being used interchangeably. Land use is a description of how people utilize the land and of socio-economic activity. Urban and agricultural land uses are two of the most commonly known land use classes. At any one point or place, there may be multiple and alternate land uses, the specification of which may have a political dimension. The origins of the "land cover/land use" couplet and the implications of their confusion are discussed in Fisher et al. (2005). Types The land cover statistics of the Food and Agriculture Organization (FAO) distinguish 14 classes. Mapping Land cover change detection using remote sensing and geospatial data provides baseline information for assessing the climate change impacts on habitats and biodiversity, as well as natural resources, in the target areas. Land cover change detection and mapping is a key component of interdisciplinary land change science, which uses it to determine the consequences of land change on climate. Application of land cover mapping Local and regional planning Disaster management Vulnerability and Risk Assessments Ecological management Monitoring the effects of climate change Wildlife management Alternative landscape futures and conservation Environmental forecasting Environmental impact assessment See also Geo-Wiki Land change modeling Pedosphere Cryosphere Hydrosphere References Further reading Ivan Balenovic et al. (2015). "Quality assessment of high density digital surface model over different land cover classes". Periodicum Biologorum, Vol. 117, No. 4, pp. 459–470. External links Global land cover maps for 2015 with a spatial resolution of 100 metres based on data from the Copernicus programme Annual Regional Land Cover Monitoring System for Hindu Kush Himalaya with a spatial resolution of 30 metres based on Landsat images Biogeography Natural resources
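A common building block of land cover change detection is the transition (cross-tabulation) matrix between two classified rasters. The Python sketch below is illustrative; the class codes and tiny rasters are invented for the example:

```python
import numpy as np

classes = {0: "water", 1: "forest", 2: "urban"}
t1 = np.array([[1, 1, 0], [1, 2, 0], [1, 1, 0]])  # earlier date
t2 = np.array([[1, 2, 0], [2, 2, 0], [1, 2, 0]])  # later date

k = len(classes)
transition = np.zeros((k, k), dtype=int)
np.add.at(transition, (t1.ravel(), t2.ravel()), 1)

print(transition)  # rows: class at t1; columns: class at t2
changed = transition.sum() - np.trace(transition)
print(f"{changed / t1.size:.0%} of pixels changed class")
```

Off-diagonal cells count changed pixels (here, forest converting to urban), the kind of baseline information used to assess impacts on habitats and natural resources.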
Land cover
[ "Biology" ]
615
[ "Biogeography" ]
6,974,619
https://en.wikipedia.org/wiki/Incurred%20but%20not%20reported
In insurance, incurred but not reported (IBNR) claims represent the amount owed by an insurer to all valid claimants who have had a covered loss but have not yet reported it. Since the insurer knows neither how many of these losses have occurred, nor the severity of each loss, IBNR is necessarily an estimate. The sum of IBNR losses plus reported losses yields an estimate of the total eventual liabilities the insurer will cover, known as ultimate losses. IBNR and IBNER The term "IBNR" is sometimes ambiguous, as it is not always clear whether it includes development on reported claims. Pure IBNR refers only to unreported claims, not to any development on reported claims. Incurred but not enough reported (IBNER), in contrast, refers to development on reported claims. For example, when a claim is first reported, a $100 payment might be made, and a $900 case reserve might be established, for a total initial reported amount of $1000. However, the claim may later settle for a larger amount, resulting in $2000 of payments from the insurer to the claimant before the claim is closed. The estimated amount of this future development on reported claims is known as IBNER. In some cases, the term "IBNR" refers only to pure IBNR; in other cases, it is understood to be the sum of pure IBNR and IBNER. Methods of estimation Actuarial loss reserving methods, including the chain-ladder method, the Bornhuetter–Ferguson method, the expected claims technique, and others, are used to estimate IBNR and, hence, ultimate losses. Since the implementation of Solvency II, stochastic claims reserving methods have become more common. See also Loss reserving Actuarial science References Actuarial science
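As an illustration of how such estimates are produced, the sketch below applies the chain-ladder method to a small, invented cumulative run-off triangle; note that the resulting reserve covers total future development, i.e. pure IBNR together with IBNER:

```python
# Cumulative paid claims: rows are accident years, columns are
# development years (figures are illustrative only).
tri = [
    [1000.0, 1500.0, 1650.0, 1700.0],
    [1100.0, 1700.0, 1870.0],
    [1200.0, 1800.0],
    [1300.0],
]

n = len(tri)
# Volume-weighted development factors between successive columns.
factors = []
for j in range(n - 1):
    rows = [r for r in tri if len(r) > j + 1]
    factors.append(sum(r[j + 1] for r in rows) / sum(r[j] for r in rows))

# Project each accident year to ultimate; the reserve is the
# projected ultimate minus the latest reported cumulative amount.
for i, row in enumerate(tri):
    ult = row[-1]
    for j in range(len(row) - 1, n - 1):
        ult *= factors[j]
    print(f"year {i}: reported={row[-1]:7.0f}  "
          f"ultimate={ult:7.0f}  reserve={ult - row[-1]:6.0f}")
```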
Incurred but not reported
[ "Mathematics" ]
364
[ "Applied mathematics", "Actuarial science" ]
6,974,695
https://en.wikipedia.org/wiki/RAD%20Group
RAD Group is a number of independent companies that develop, manufacture and market solutions for diverse segments of the networking and telecommunications industry. Each company operates independently, without a holding company, but is guided by the group founders under a collective strategic umbrella. Companies share technology, engage in joint marketing activities and benefit from a common management structure. Four RAD Group companies are traded on NASDAQ in the U.S.: Ceragon Networks, Radware, RADCOM, and Silicom. The others are privately held by the Group's founders and several venture capital firms. History The RAD Group was founded by brothers Yehuda (1942–2024) and Zohar (1949–2023) Zisapel, in Tel Aviv, Israel. Both brothers studied electrical engineering at the Technion – Israel Institute of Technology. Yehuda started his career in the 1960s working for Motorola Israel but in 1973 decided to start his own business importing and distributing computer networking equipment, a company called Bitcom. Later Yehuda parted company with his initial business partner and started a new company, Bynet. The company's main business was distributing Codex Corporation products, and the company soon became a market leader in Israel. In 1977 Codex Corporation was acquired by Motorola, but due to its success Bynet maintained the distribution rights for its products; however, in 1981 Motorola decided not to renew the distribution agreement with Bynet, and Codex Corporation began to sell in Israel directly. The experience of losing the distribution rights of Codex made Yehuda realize that his business should never rely on one product line, and in 1981 he asked his brother, Zohar, to join him at Bynet to start working on the development of their own products. They started a new company in a corner of the Bynet offices and gave it the name RAD Data Communications, RAD being the acronym of Research And Development. RAD's first successful product was a miniature (by 1980s standards) computer modem. By 1985, RAD's annual revenues reached US$5.5 million. RAD Data Communications is now the largest company in the RAD Group. In 1985, RAD provided initial funding and support to entrepreneur Benny Hanigal to start LANNET Data Communications, which developed a pioneering Ethernet switch, one of the first to offer Ethernet switching over simple twisted pair telephone cables rather than expensive coaxial cables. In 1991 LANNET had an initial public offering on NASDAQ, but in 1995, as its market was consolidating, it was decided to merge with Madge Networks, in a deal valuing LANNET at US$300 million. By the end of 1995, the merged Madge-LANNET had 1,400 employees and achieved revenues of more than $400 million, but throughout 1996–1997 there were disagreements about strategy. Benny Hanigal left the company and joined the Israeli venture capital fund Star Ventures. In late 1997 Madge Networks spun off its Ethernet division into a separate subsidiary, once again named LANNET, and then sold it to Lucent Technologies for $117 million in July 1998. During the 1990s the RAD Group was involved in establishing 12 different technology companies. Some became publicly listed companies on NASDAQ and some were later sold to other companies. The group typically takes a similar approach to starting new ventures: a business idea of an entrepreneur (an existing company employee or an outsider) or from the company's management team forms the basis of a start-up.
Initial funding is provided by the company together with other venture capital funds. In this way, companies such as RADCOM, established in 1990 with funding from the Star Ventures and Pitango venture capital funds, and Radvision, founded in 1992 by Ami Amir and Eli Doron with external funding from the Evergreen and Clal venture capital funds as well as from Siemens, were created. Importance According to research conducted by Prof. Shmuel Ellis, Chair of the Management Department at Tel Aviv University's Faculty of Management, together with Prof. Israel Drori of the School of Business Administration at the College of Management Academic Studies and Prof. Zur Shapira, Chair of the Management and Organizations Department at New York University, the RAD Group has been "the most fertile ground" for creating Israeli entrepreneurs, having produced 56 "serial entrepreneurs" who established more than one start-up each. RAD Group "graduates" were responsible for the establishment of a total of 111 significant hi-tech initiatives. Awards and recognition RAD Group companies have won many awards, including from Business Red Herring, the Fierce Innovation Cybersecurity Award, the Internet Telephony Conference Best-in-Show Award, the Network Virtualization Industry Award, Telecom Asia's Reader's Choice Award, multiple Carrier Ethernet Awards, and Editor's Choice Awards from industry magazines such as Network Computing. The RAD Group also sponsors Protocols.com, a leading site for network and computer science information and reference materials. Zohar Zisapel has been called the "Bill Gates" of Israel. RAD Group was ranked 14th on the list of "The 29 Best Business Ideas in the World" by Business 2.0 magazine in August 2005. RAD Group Companies The RAD Group currently consists of 10 companies, four of which are traded on the Nasdaq stock market. The group's total revenue in 2016 was $1.328 billion.
Bynet - system integrator, established 1973 RAD Data Communications - access solutions for carriers and corporate networks, established 1981 Silicom Connectivity Solutions - hi-end adapters for servers and security appliances, established 1987 RADCOM - providers of monitoring, analysis and troubleshooting systems for Next Generation networks, established 1991 Ceragon Networks - wireless broadband, established 1996 (as Giganet) Radware - intelligent application switching, established 1997 Radwin - broadband wireless solutions, established 1997 PacketLight Networks - DWDM/OTN solutions for fiber optic networks and data center interconnect, transporting high rate data, storage, video, and voice applications, with optional Layer-1 encryption, established 2000 Radiflow - cyber-security solutions for ICS/SCADA, established 2009 SecurityDAM - detection and mitigation of Distributed Denial of Service (DDoS) attacks, established 2012 Former members Former members of the RAD Group include: LANNET - sold to Madge Networks in 1995 and then to Lucent in 1998 RADLINX - sold to VocalTec in 1998 Armon Networking - sold to Bay Networks in 1996 RADNET - sold to Siemens AG and Newbridge Networks in 1997 RADLAN - sold to Marvell in 2003 RND - sold to USR Electronics in 2003 RiT Technologies - sold to Stis Coman Corporation in 2008 SANRAD - sold to OCZ Technology in 2012 Radvision - sold to Avaya in 2012 RadView Software See also List of Israeli companies quoted on the Nasdaq Science and technology in Israel Silicon Wadi Economy of Israel References External links RAD Group Home Page Technology companies of Israel Computer hardware companies Companies listed on the Nasdaq Telecommunications equipment vendors Networking hardware companies Electronics companies of Israel Manufacturing companies based in Tel Aviv
RAD Group
[ "Technology" ]
1,468
[ "Computer hardware companies", "Computers" ]
6,974,837
https://en.wikipedia.org/wiki/Systems%20Biology%20Ontology
The Systems Biology Ontology (SBO) is a set of controlled, relational vocabularies of terms commonly used in systems biology, and in particular in computational modeling. Motivation The rise of systems biology, seeking to comprehend biological processes as a whole, highlighted the need not only to develop corresponding quantitative models but also to create standards allowing their exchange and integration. This concern drove the community to design common data formats, such as SBML and CellML. SBML is now largely accepted and used in the field. However, as important as the definition of a common syntax is, it is also necessary to make clear the semantics of models. SBO provides a way to annotate models with terms that make explicit how their components should be interpreted and used, across the large body of models employed in computational systems biology. The development of SBO was first discussed at the 9th SBML Forum Meeting in Heidelberg on October 14–15, 2004. During the forum, Pedro Mendes mentioned that modellers possessed a lot of knowledge that was necessary to understand the model and, more importantly, to simulate it, but this knowledge was not encoded in SBML. Nicolas Le Novère proposed to create a controlled vocabulary to store the content of Pedro Mendes' mind before he wandered out of the community. The development of the ontology was announced more officially in a message from Le Novère to Michael Hucka and Andrew Finney on October 19. Structure SBO is currently made up of seven different vocabularies: systems description parameter (catalytic constant, thermodynamic temperature...) participant role (substrate, product, catalyst...) modelling framework (discrete, continuous...) mathematical expression (mass-action rate law, Hill-type rate law...) occurring entity representation (biochemical process, molecular or genetic interaction...) physical entity representation (transporter, physical compartment, observable...) metadata representation (annotation) Resources To curate and maintain SBO, a dedicated resource has been developed and the public interface of the SBO browser can be accessed at http://www.ebi.ac.uk/sbo. A relational database management system (MySQL) at the back-end is accessed through a web interface based on Java Server Pages (JSP) and JavaBeans. Its content is encoded in UTF-8, therefore supporting a large set of characters in the definitions of terms. Distributed curation is made possible by using a custom-tailored locking system allowing concurrent access. This system allows a continuous update of the ontology with immediate availability and suppresses merging problems. Several export formats (OBO flat file, SBO-XML and OWL) are generated daily or on request and can be downloaded from the web interface. To allow programmatic access to the resource, Web Services have been implemented based on Apache Axis for the communication layer and Castor for the validation. The libraries, full documentation, samples and tutorial are available online. The SourceForge project can be accessed at http://sourceforge.net/projects/sbo/. SBO and SBML Since Level 2 Version 2, SBML provides a mechanism to annotate model components with SBO terms, thereby increasing the semantics of the model beyond the sole topology of interaction and mathematical expression. Modelling tools such as SBMLsqueezer interpret SBO terms to augment the mathematics in the SBML file.
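As a sketch of that mechanism, the snippet below tags an SBML kinetic law with an SBO term using only the Python standard library; the model fragment is invented for illustration, and SBO:0000012 ("mass action rate law") is used as an example term:

```python
import xml.etree.ElementTree as ET

SBML_NS = "http://www.sbml.org/sbml/level2/version2"
ET.register_namespace("", SBML_NS)

doc = ET.fromstring(f"""
<sbml xmlns="{SBML_NS}" level="2" version="2">
  <model id="example">
    <listOfReactions>
      <reaction id="r1"><kineticLaw/></reaction>
    </listOfReactions>
  </model>
</sbml>""")

# Attach an SBO term via the sboTerm attribute introduced in
# SBML Level 2 Version 2.
for law in doc.iter(f"{{{SBML_NS}}}kineticLaw"):
    law.set("sboTerm", "SBO:0000012")

print(ET.tostring(doc, encoding="unicode"))
```

In practice a dedicated library such as libSBML would normally be used instead of raw XML handling; the point here is only that the annotation is a single attribute on the model component.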
Simulation tools can check the consistency of a rate law, convert reactions from one modelling framework to another (e.g., continuous to discrete), or distinguish between identical mathematical expressions based on different assumptions (e.g., Michaelis–Menten vs. Briggs–Haldane). To add missing SBO terms to models, software such as SBOannotator can be used. Other tools such as semanticSBML can use the SBO annotation to integrate individual models into a larger one. The use of SBO is not restricted to the development of models. Resources providing quantitative experimental information such as SABIO Reaction Kinetics will be able to annotate the parameters (what they mean exactly, how they were calculated) and determine relationships between them. SBO and SBGN All the graphical symbols used in the SBGN languages are associated with an SBO term. This makes it possible, for instance, to help generate SBGN maps from SBML models. SBO and BioPAX The Systems Biology Pathway Exchange (SBPAX) allows SBO terms to be added to Biological Pathway Exchange (BioPAX). This links BioPAX to information useful for modelling, especially by adding the quantitative descriptions described by SBO. Organization of SBO development SBO is built collaboratively by the Computational Neurobiology Group (Nicolas Le Novère, EMBL-EBI, United Kingdom) and the SBMLTeam (Michael Hucka, Caltech, USA). Funding for SBO SBO has benefited from the funds of the European Molecular Biology Laboratory and the National Institute of General Medical Sciences. References External links www.biomodels.net Bioinformatics Free biosimulation software Genetics Molecular biology Proteins Science and technology in Cambridgeshire South Cambridgeshire District Systems biology
Systems Biology Ontology
[ "Chemistry", "Engineering", "Biology" ]
1,068
[ "Biomolecules by chemical classification", "Biological engineering", "Genetics", "Bioinformatics", "Molecular biology", "Biochemistry", "Proteins", "Systems biology" ]
6,974,982
https://en.wikipedia.org/wiki/Sailing%20stones
Sailing stones (also called sliding rocks, walking rocks, rolling stones, and moving rocks) are part of the geological phenomenon in which rocks move and inscribe long tracks along a smooth valley floor without animal intervention. The movement of the rocks occurs when large, thin sheets of ice floating on an ephemeral winter pond move and break up due to wind. Trails of sliding rocks have been observed and studied in various locations, including Little Bonnie Claire Playa, in Nevada, and most famously at Racetrack Playa, Death Valley National Park, California, where the number and length of tracks are notable. Description The Racetrack's stones speckle the playa floor, predominantly in the southern portion. Historical accounts identify some stones far from shore, yet most of the stones are found relatively close to their respective originating outcrops. Three lithologic types are identified: syenite, found most abundantly on the west side of the playa; dolomite, subrounded blue-gray stones with white bands; black dolomite, the most common type, found almost always in angular joint blocks or slivers. This dolomite composes nearly all stones found in the southern half of the playa, and originates at a steep promontory paralleling the east shore at the south end of the playa. Intrusive igneous rock originates from adjacent slopes (most of those being tan-colored feldspar-rich syenite). Tracks vary considerably in length and width and are typically very shallow, and the moving stones likewise span a range of sizes. Stones with rough bottoms leave straight striated tracks, while those with smooth bottoms tend to wander. Stones sometimes turn over, exposing another edge to the ground and leaving a different track in the stone's wake. Trails differ in both direction and length. Rocks that start next to each other may travel parallel for a time, before one abruptly changes direction to the left, right, or even back to the direction from which it came. Trail length also varies – two similarly sized and shaped rocks may travel uniformly, then one could move ahead or stop in its track. A balance of specific conditions is thought to be needed for stones to move: A flooded surface A thin layer of clay Wind Ice floes Warming temperatures causing ice breakup Research history At Racetrack Playa, these tracks have been studied since the early 1900s, yet the origins of stone movement were not confirmed and remained the subject of research for which several hypotheses existed. However, as of August 2014, timelapse video footage of rocks moving has been published, showing the rocks moving at high wind speeds within the flow of thin, melting sheets of ice. The scientists have thus identified the cause of the moving stones to be ice shove. Early investigation The first documented account of the sliding rock phenomenon dates to 1915, when a prospector named Joseph Crook from Fallon, Nevada, visited the Racetrack Playa site. In the following years, the Racetrack sparked interest from geologists Jim McAllister and Allen Agnew, who mapped the bedrock of the area in 1948 and published the earliest report about the sliding rocks in a Geological Society of America Bulletin. Their publication gave a brief description of the playa furrows and scrapers, stating that no exact measurements had been taken and suggesting that furrows were the remnants of scrapers propelled by strong gusts of wind – such as the variable winds that produce dust-devils – over a muddy playa floor.
Controversy over the origin of the furrows prompted the search for the occurrence of similar phenomena at other locations. Such a location was found at Little Bonnie Claire Playa in Nye County, Nevada, and the phenomenon was studied there, as well. Naturalists from the National Park Service later wrote more detailed descriptions and Life magazine featured a set of photographs from the Racetrack. In 1952, a National Park Service Ranger named Louis G. Kirk recorded detailed observations of furrow length, width, and general course. He sought simply to investigate and record evidence of the moving rock phenomenon, not to hypothesize or create an extensive scientific report. Speculation about how the stones move started at this time. Various and sometimes idiosyncratic possible explanations have been put forward over the years that have ranged from the supernatural to the complex. Most hypotheses favored by interested geologists posit that strong winds when the mud is wet are at least in part responsible. Some stones weigh as much as a human, which some researchers, such as geologist George M. Stanley, who published a paper on the topic in 1955, felt was too heavy for the area's winds to move. After extensive track mapping and research on rotation of the tracks in relation to ice floe rotation, Stanley maintained that ice sheets around the stones either help to catch the wind or that ice floes initiate rock movement. Progress in the 1970s Bob Sharp and Dwight Carey started a Racetrack stone movement monitoring program in May 1968. Eventually, 30 stones with fresh tracks were labeled and stakes were used to mark their locations. Each stone was given a name and changes in the stones' positions were recorded over a seven-year period. Sharp and Carey also tested the ice floe hypothesis by corralling selected stones. A corral was made around a track-making stone, with seven rebar segments placed at intervals around it. If a sheet of ice around the stones either increased wind-catching surface area or helped move the stones by dragging them along in ice floes, then the rebar should at least slow down and deflect the movement. Neither appeared to occur; the stone barely missed a rebar as it moved to the northwest out of the corral in the first winter. Two heavier stones were placed in the corral at the same time; one moved five years later in the same direction as the first, but its companion did not move during the study period. This indicated that if ice played a part in stone movement, then ice collars around stones must be small. Ten of the initial 30 stones moved in the first winter, with Mary Ann (stone A) covering the longest distance. Two of the next six monitored winters also had multiple stones move. No stones were confirmed to have moved in the summer, and in some winters, none or only a few stones moved. In the end, all but two of the monitored stones moved during the seven-year study. Nancy (stone H) was the smallest monitored stone. It also moved the longest cumulative distance and made the greatest single-winter movement. The largest stone to move was far heavier. Karen (stone J), a still larger block of dolomite, did not move during the monitoring period. The stone may have created its long, straight and old track from momentum gained from its initial fall onto the wet playa. However, Karen disappeared sometime before May 1994, possibly during the unusually wet winter of 1992 to 1993.
Removal by artificial means is considered unlikely due to the lack of associated damage to the playa that a truck and winch would have caused. A possible sighting of Karen was made in 1994, some distance from the playa. Karen was rediscovered by San Jose geologist Paula Messina in 1996. Continued research in the 1990s Professor John Reid led six research students from Hampshire College and the University of Massachusetts Amherst in a follow-up study in 1995. They found highly congruent trails from stones that moved in the late 1980s and during the winter of 1992–93. At least some stones were proved beyond a reasonable doubt to have been moved in large ice floes. Physical evidence included swaths of lineated areas that could only have been created by moving thin sheets of ice. Consequently, both wind alone and wind in conjunction with ice floes are thought to be motive forces. Physicists Bacon et al., studying the phenomenon in 1996 and informed by studies at Owens Dry Lake Playa, discovered that winds blowing on playa surfaces can be compressed and intensified because of a playa's smooth, flat surfaces. They also found that boundary layers (the region just above ground where winds are slower due to ground drag) on these surfaces can be remarkably thin. As a result, stones just a few centimeters high feel the full force of ambient winds and their gusts, which can reach high speeds in winter storms. Such gusts are thought to be the initiating force, while momentum and sustained winds keep the stones moving, possibly as fast as a moderate run. Wind and ice together are the favored hypothesis for these sliding rocks. In "Surface Processes and Landforms", Don J. Easterbrook notes that the lack of parallelism between some rock paths could be explained by degenerating ice floes producing alternate routes. Though the ice breaks up into smaller blocks, it is still necessary for the rocks to slide. 21st-century developments Further understanding of the geologic processes at work in Racetrack Playa goes hand in hand with technological development. In 2009, development of inexpensive time-lapse digital cameras allowed the capturing of transient meteorological phenomena including dust devils and playa flooding. These cameras were aimed at capturing various stages of the previously mentioned phenomena, though discussion of the sliding stones ensued. The developers of photographic technology describe the difficulty of capturing the Racetrack's stealthy rocks, as movements occur only about once every three years and, they believed, last about 10 seconds. Their next identified advancement was wind-triggered imagery, vastly reducing the ten million seconds of nontransit time they had to sift through. It was postulated that small rafts of ice form around the rocks and the rocks are buoyantly floated off the soft bed, thus reducing the reaction and friction forces at the bed. Since this effect depends on reducing friction, and not on increasing the wind drag, these ice cakes need not have a particularly large surface area if the ice is adequately thick, as the minimal friction allows the rocks to be moved by arbitrarily light winds. Reinforcing the "ice raft" theory, a research study pointed out narrowing trails, intermittent springs, and trail ends having no rocks. The study identified that water drained from higher areas into the playa while ice covered the intermittent lake.
This suggests that this water buoyantly lifts the ice floes with embedded rocks until friction with the playa bed is reduced sufficiently for wind to move them and cause the observed tracks. The study also analyses an artificial ditch intended to prevent visitors from driving on the playa, and concludes that it may interfere with rock sliding. In 2020, based on a fossil slab bearing dinosaur footprints, NASA researchers ruled out microbial mats and wind-generated water waves as potential causes of the stones' movement. Explanation News articles reported the mystery solved when researchers observed rock movements using GPS and time-lapse photography. The largest rock movement the research team witnessed and documented occurred on December 20, 2013, and involved more than 60 rocks, with some rocks moving up to 224 metres (245 yards) between December 2013 and January 2014 in multiple movement events. These observations contradicted earlier hypotheses of strong winds or thick ice floating rocks off the surface. Instead, rocks move when large ice sheets a few millimeters thick floating in an ephemeral winter pond start to break up during sunny mornings. These thin floating ice panels, frozen during cold winter nights, are driven by light winds and shove rocks at up to 5 m/min (0.3 km/h; 0.2 mph). Some GPS-measured moves lasted up to 16 minutes, and a number of stones moved more than five times during the existence of the playa pond in the winter of 2013–14. Possible influence of climate change Because rock movement relies on a rare set of circumstances (the usually dry playa being flooded and the water then freezing), drier winters and warmer winter nights would cause such circumstances to occur less often. A statistical study by Ralph Lorenz and Brian Jackson examining published reports of rock movements suggested (with 4:1 odds) an apparent decline in movement between the 1960s–1990s and the 21st century. Theft and vandalism of rocks On May 30, 2013, the Los Angeles Times reported that park officials were looking into the theft of several of the rocks from the Death Valley National Park. In August 2016, tire tracks were left in the playa by someone driving around it illegally. A photographer visiting in September also noted the initials 'D' and 'K' newly carved into one of the rocks. Although reports at the time suggested investigators had identified a suspect, the vandal had not been identified in March 2018, when a team of volunteers cleaned the tire tracks from the Racetrack using gardening tools and water.
Smith.edu: "The Mystery of the Rocks on the Racetrack at Death Valley" – by Lena Fletcher and Anne Nester. Physics Forums.com: "The Sliding Rock Phenomenon" – online discussion. Earth Surface Dynamics Discussions: "Trail formation by ice-shoved 'sailing stones' observed at Racetrack Playa, Death Valley National Park" YouTube: Moving Rocks of Death Valley's Racetrack Playa – video by Brian Dunning. Fox News.com: Why Are Death Valley's Rocks Moving Themselves? – by Philip Schewe. Plosone.org: "Sliding Rocks on Racetrack Playa, Death Valley National Park: First Observation of Rocks in Motion" Death Valley National Park Death Valley Natural history of the Mojave Desert Rock formations of California Rock formations of Nevada Rocks Stones
Sailing stones
[ "Physics" ]
2,958
[ "Stones", "Rocks", "Physical objects", "Matter" ]
6,976,021
https://en.wikipedia.org/wiki/Operation%20Chastity
Operation Chastity was a World War II plan by the Allies to seize Quiberon Bay, France, enabling the construction of an artificial harbor to support Allied operations in Northern France in 1944. The artificial harbor was not developed, as the US VIII Corps failed to capture German-held areas that threatened the port. By the end of August 1944, Allied forces had captured all of Brittany except for the critical areas, preventing the further development of the operation. Following the capture of Antwerp and its port facilities in early September 1944, Operation Chastity was officially cancelled on 7 September. The non-completion and eventual cancellation of Operation Chastity exacerbated strains on the Allied logistical system, may have prevented an Allied victory over Germany in 1944, and has been described as "the critical error of World War II", although other historians believe that priority was rightly given to the pursuit of the routed German forces. Background One of the primary concerns when the Allies were planning Operation Overlord was the acquisition of deep-water ports. This was because the vital factor in its success was logistics, particularly American logistics, as the Twelfth United States Army Group would be operating further from the English Channel coast than the British 21st Army Group. The Overlord plan envisaged the Allies establishing a secure lodgement west of the Seine and north of the Loire. From this base, the Allies would advance to Paris and then Germany, once sufficient forces and supplies were available. The Overlord plan assumed that in the first weeks after D-Day, the Allied forces would be supported over the invasion beaches, and through two Mulberry harbours that would be assembled - "A" at Omaha Beach and "B" at Gold Beach. However, in the longer term and in order to handle the large quantities of reinforcements and supplies needed for the campaign, quayside discharge of Liberty ships would be necessary. Therefore, one of the first objectives of the Overlord plan was to seize the port of Cherbourg. After this foothold was secured, the most important single strategic objective would be the capture and development of major ports. Seven ports in Normandy were expected to be captured and opened in the first four weeks after D-Day: Isigny-sur-Mer, Cherbourg, Grandcamp-Maisy, Saint-Vaast-la-Hougue, Barfleur, Granville and Saint-Malo. Except for Cherbourg, all were small and tidal, making Cherbourg the only major port supplying the Allied force. However, even Cherbourg was planned to develop a capacity of no more than 8,000 or 9,000 tons per day, while the minor ports were intended only as a stopgap. The inadequate port capacity in Normandy meant that the Brittany ports would have to play the key logistical role. In fact, the overall success of Operation Overlord was predicated on organizing Brittany as the principal supply base for US forces, and the importance of Brittany in the Overlord plan "can hardly be exaggerated". Therefore, the Overlord plan called for Brittany to be isolated and its ports captured. Only after the ports were seized would operations towards Paris and Germany begin. However, all the ports were expected to have facilities destroyed, harbors blocked and approaches mined by the Germans, requiring extensive repairs before they could be used efficiently.
Therefore, the planners concluded that the logistical capacity of the captured French ports would not be adequate to support the Allied liberation of France, and a new port had to be built somewhere in Normandy or Brittany. Plan The solution to the Allies' logistical problem was devised in April 1944 and given the name Operation Chastity. It involved constructing a brand-new, deep-water port at Quiberon Bay. Quiberon Bay, a large estuary with four small ports, between Lorient and Saint-Nazaire on the southwest coast of Brittany, was sheltered by the Quiberon peninsula and a line of small islands, and the larger Belle Île. The Auray river had scoured a long, deep pool with nearly vertical sides near the port of Locmariaquer. Operation Chastity planned to construct floating piers in the pool, allowing large ships to tie up alongside, with causeways to carry cargo and troops to the shore. There would be room for offloading five ships simultaneously, providing a capability of 2,500 tons of supplies per day directly onto vehicles. A further 7,500 tons per day could be offloaded using lighters carrying supplies directly to the shore from thirty further ships moored in the pool. The key advantage of Operation Chastity would be the ability to off-load Liberty ships sailing directly from the United States. A further advantage would be access to the relatively undamaged rail network outside the Normandy region, once a spur line and marshalling yard were constructed. The beaches of Quiberon Bay would also allow the unloading of LSTs at low tide. Other attractive features of Operation Chastity were the sheltered anchorage in Quiberon Bay, that the port required only a fraction of the labor and materials committed to the Mulberry ports, and that it used standard components and available equipment. However, the approaches to Quiberon Bay were covered by German coastal artillery at Lorient and on Belle Île, and in the view of the planners, unless Brest, Lorient and Belle Île were captured, shipping cargo via Quiberon Bay would be "impossible due to naval interference." Quiberon Bay was scheduled to be captured by D+40, with Brest and Lorient to fall by D+50. Planners thought that by D+60, British forces would be supported by the Mulberry ports, while American forces would be supported from Cherbourg, Saint-Malo, and Quiberon Bay. Approval of operation The final plan envisaged the capture of Lorient, Brest, St. Malo and Quiberon Bay, where the new port would be developed. Troops travelling direct from the United States were to disembark at Brest, and Quiberon Bay was to be developed into a major supply port. Between them, the four ports were expected to have a capacity of about 17,500 tons/day, with Quiberon Bay expected to land 10,000 tons/day. Supreme Headquarters Allied Expeditionary Force (SHAEF) approved Operation Chastity on 22 April 1944. As it offered the greatest potential to solve the Allies' logistical problems, it was given the highest priority, and US Third Army was given the objective of capturing Brittany. The addition of Operation Chastity was the final major revision to the invasion plan. Events The Omaha Beach Mulberry was abandoned after it was damaged by a storm on 19–22 June; however, this loss was mitigated because the amount of supplies landed directly over the beach was far in excess of expectations. Although Cherbourg was captured on 27 June, the port had been destroyed and required rebuilding, resulting in it receiving only a trickle of supplies by the end of July.
In early July, an alternative to capturing Brittany was proposed: moving eastward to surround and defeat German forces west of the Seine, with supplies coming from captured ports on the Seine. Although the planners could see the advantages of the scheme, they insisted that US logistics depended on the Brittany ports and that the Seine ports could not replace them. Therefore, both General Eisenhower and General Montgomery continued to stress the necessity of capturing the Brittany ports, recognizing that without them Allied logistics would be inadequate. The capture of the Quiberon Bay area was deemed important enough that consideration was given to a combined airborne and amphibious operation, Operation Hands Up, to capture the area; but as the operation was deemed risky, it was agreed that it would be attempted only if the advance into Brittany was delayed into September. Advance to Quiberon Bay By 1 August, after the success of Operation Cobra, Major General Troy H. Middleton's US VIII Corps of the US Third Army was advancing into Brittany, and the US 4th Armored Division, led by Major General John S. Wood, was thrusting south-westward from Pontaubault toward Rennes. On 2 August, the 4th Armored Division was assembled north of Rennes. At this point, Wood proposed blocking the base of the Brittany peninsula at Angers rather than Quiberon, preparatory to moving his division eastward towards Chartres. After sending Middleton his proposal on the morning of 3 August, and anticipating no objection, Wood ordered his plan executed. That afternoon, Middleton instructed Wood to "Secure Rennes before you continue", implying consent for the alteration. On 4 August, Middleton ordered the division to secure a line along the Vilaine River from Rennes to the coast, particularly the bridges at Redon and La Roche-Bernard, but with forces oriented eastward. This left the 4th Armored Division approximately ten miles away from Quiberon Bay, even though it faced little opposition between there and its objective. These dispositions were countermanded by Third Army's chief of staff, Major General Hugh J. Gaffey, who ordered the 4th Armored Division to push "the bulk of the division to the west and southwest to the Quiberon area, including the towns of Vannes and Lorient in accordance with the Army plan." The same day, Lieutenant General George S. Patton, commander of US Third Army, wrote in his diary, "Wood got bull headed and turned east after passing Rennes, and we had to turn him back on his objectives, which are Vannes and Lorient, but his overenthusiasm wasted a day." Third Army's orders were complied with, and by 5 August the US 4th Armored Division had reached the base of the Quiberon peninsula. Disorganized German forces were retreating into Lorient, Saint-Nazaire and up the Quiberon peninsula. Failure to secure critical locations By 9 August, the 4th Armored Division had captured Rennes and was probing Lorient's defenses, but reported that they were too strong to be quickly captured. However, the German commander, General der Artillerie Wilhelm Fahrmbacher, later stated that had the Americans attacked Lorient in force between 6 and 9 August, they would probably have succeeded. The 4th Armored Division contained the German forces in Lorient and the Quiberon peninsula until 13 August, when it passed from the control of the VIII Corps, Patton turning it eastward without capturing its key objectives and without replacing the armor with infantry.
Meanwhile, the rest of VIII Corps, along with other Allied forces, had liberated much of the rest of Brittany, with the Battle of Saint-Malo ending with the surrender of the city's defenders on 2 September. The Battle for Brest started on 25 August. However, the other critical location, Belle Île, had not been cleared by late August. Cancellation and aftermath By the end of August, all of Brittany except for the fortified areas of Brest, Lorient, Saint-Nazaire and the Quiberon peninsula had been cleared. However, none of Operation Chastity's prerequisite locations had been captured, and their defenses were fully operational. Therefore, development of Quiberon Bay could not commence. On 3 September, SHAEF ordered a general pursuit of German forces towards Germany regardless of the lack of the required logistical capacity. The Allied armies advanced swiftly, but the available port capacity supporting the pursuit was only about half of what was required. On 4 September, the British seized Antwerp with its port facilities intact. On 7 September, SHAEF cancelled the Quiberon Bay plan, and two days later, Eisenhower determined that none of the Brittany ports were needed. However, Antwerp was inland, up the Scheldt estuary, which was heavily mined, and the approaches were still in German hands. II Canadian Corps fought the Battle of the Scheldt from 2 October to 8 November, and the first Allied ships docked in Antwerp on 28 November. A lack of supplies during the September to November period limited the Allies' ability to exploit the German collapse, and by December, the Allied drive towards Germany had come to a standstill. Brest's defenders surrendered on 18 September 1944, but the port facilities were totally destroyed and were not brought back into operation. Lorient was never captured, surrendering only after the end of the war in Europe. Debate on Operation Chastity The wisdom of the decision to abandon Operation Chastity has been the subject of debate, but the preceding failure to seize Quiberon Bay has been overlooked by many historians. Negative opinions Those who regard Operation Chastity as a missed opportunity state that if the port on Quiberon Bay had been established, it would have given the 12th Army Group a high-capacity supply base with direct rail lines to the east, solving their logistical problems. For example, Norman R. Denny of the U.S. Army Command and General Staff College stated that the logistics shortfalls that plagued the Allied campaign in Europe, due to the failure to implement Operation Chastity, "helped eliminate" the possibility of Germany surrendering in 1944. He further argued that the actions of Wood, Middleton, Patton, Lieutenant General Omar Bradley and Eisenhower made the advance into Brittany futile and endangered the Allied drive towards Germany. In particular, he argued that Wood's failure to capture the Quiberon peninsula was "tragic". Similarly, Lieutenant Colonel Harold L. Mack, of the Communications Zone staff, described the failure to implement Operation Chastity as "the critical error of World War II". Mack places the blame for failing to capture Quiberon Bay primarily on Wood, who "had set his heart on participating in the main drive for Paris, where he could achieve fame and glory" and only half-heartedly carried out his orders, but accuses all Wood's superiors in the chain of command of failing to appreciate the "supreme need of taking Quiberon Bay". Patrick S.
Williams argues that SHAEF gambled that devoting the bulk of US Third Army to outflanking the German force in the Falaise pocket would allow a decisive victory over Germany to be attained before lack of logistics stopped the Allied advance; that a larger force should have been devoted to seizing the Brittany ports; and that the decision to cancel Operation Chastity exacerbated the strain on an already fragile Allied logistical system. He states that the failure to implement Operation Chastity "may have prevented an Allied victory over Germany in 1944." Positive opinions Those who disagree that Operation Chastity was a missed opportunity believe that priority was rightly given to the encirclement and subsequent pursuit of the routed German forces. They further contend that the limiting logistical factor was not a lack of supplies but the difficulty of transporting them to a front that was rapidly moving east, and that Operation Chastity would not have greatly improved the transport situation. For example, historian Basil Liddell Hart said, "American spearheads could have driven eastward unopposed. But the Allied High Command threw away the best chance of exploiting this great opportunity by sticking to the outdated pre-invasion programme, in which a westward move to capture Brittany ports was to be the next step." Historian Russell Weigley regarded the commitment to Brittany as wasteful of resources better spent supporting the drive to the east. Martin van Creveld argues that although Patton and his subordinates "ignored plans" and "refused to be tied down by logisticians' tables", their rapid eastwards advance threatened to cut off the German forces, precipitating their rout from France. He also suggests that the Allied planners were overly pessimistic about the consumption of supplies and the capabilities of the logistics system. Additionally, the Allies possessed more motor transport than any other army and were operating in summer, over a good road network, without enemy air interdiction and amongst a friendly population. After the war, Wood stood by his attempt to turn his division east instead of taking Quiberon Bay and Lorient, stating, "We were forced to adhere to the original plan... It was one of the colossally stupid decisions of the war." See also Allied advance from Paris to the Rhine American logistics in the Normandy campaign Atlantic pockets – French, Belgian and Dutch ports fortified to deny their capacity to the Allies British logistics in the Normandy campaign Broad front versus narrow front controversy in World War II Operation Kinetic – Naval operation in support of the advance into Brittany References Notes Bibliography Operation Overlord Coastal construction Cancelled military operations of World War II Cancelled military operations involving the United States Military logistics of the United States
Operation Chastity
[ "Engineering" ]
3,337
[ "Construction", "Coastal construction" ]
6,976,526
https://en.wikipedia.org/wiki/CD155
CD155 (cluster of differentiation 155), also known as the poliovirus receptor, is a protein that in humans is encoded by the PVR gene. It is a transmembrane protein that is involved in forming junctions between neighboring cells. It is also the molecule that poliovirus uses to enter cells. The gene is specific to primates. Function CD155 is a Type I transmembrane glycoprotein in the immunoglobulin superfamily. Its normal cellular function is in the establishment of intercellular adherens junctions between epithelial cells. The external domain mediates cell attachment to the extracellular matrix molecule vitronectin, while its intracellular domain interacts with the dynein light chain Tctex-1/DYNLT1. The role of CD155 in the immune system is unclear, though it may be involved in intestinal humoral immune responses. Subsequent data have suggested that CD155 may also be used to positively select MHC-independent T cells in the thymus. Polio Commonly known as the Poliovirus Receptor (PVR), the protein serves as a cellular receptor for poliovirus in the first step of poliovirus replication. Transgenic mice that express the PVR gene have been constructed in order to study polio experimentally. Structure CD155 is a transmembrane protein with three extracellular immunoglobulin-like domains, D1-D3, of which D1 is recognized by the virus. Low-resolution structures of CD155 complexed with poliovirus have been obtained using electron microscopy, while high-resolution structures of the D1 and D2 domains of the CD155 ectodomain were solved by X-ray crystallography. References External links Further reading Clusters of differentiation Glycoproteins Polio
CD155
[ "Chemistry" ]
380
[ "Glycoproteins", "Glycobiology" ]
6,976,689
https://en.wikipedia.org/wiki/Significant%20wave%20height
In physical oceanography, the significant wave height (SWH, HTSGW or Hs) is defined traditionally as the mean wave height (trough to crest) of the highest third of the waves (H1/3). It is usually defined as four times the standard deviation of the surface elevation – or equivalently as four times the square root of the zeroth-order moment (area) of the wave spectrum. The symbol Hm0 is usually used for that latter definition. The significant wave height (Hs) may thus refer to Hm0 or H1/3; the difference in magnitude between the two definitions is only a few percent. SWH is used to characterize sea state, including winds and swell. Origin and definition The original definition resulted from work by the oceanographer Walter Munk during World War II. The significant wave height was intended to mathematically express the height estimated by a "trained observer". It is commonly used as a measure of the height of ocean waves. Time domain definition Significant wave height H1/3, or Hs or Hsig, as determined in the time domain, directly from the time series of the surface elevation, is defined as the average height of that one-third of the N measured waves having the greatest heights: $H_{1/3} = \frac{1}{N/3} \sum_{m=1}^{N/3} H_m$, where $H_m$ represents the individual wave heights, sorted into descending order of height as m increases from 1 to N. Only the highest one-third is used, since this corresponds best with visual observations of experienced mariners, whose vision apparently focuses on the higher waves. Frequency domain definition Significant wave height Hm0, defined in the frequency domain, is used both for measured and forecasted wave variance spectra. Most easily, it is defined in terms of the variance m0 or standard deviation ση of the surface elevation: $H_{m_0} = 4 \sqrt{m_0} = 4 \sigma_\eta$, where m0, the zeroth moment of the variance spectrum, is obtained by integration of the variance spectrum. In case of a measurement, the standard deviation ση is the easiest and most accurate statistic to be used. Another wave-height statistic in common usage is the root-mean-square (or RMS) wave height Hrms, defined as $H_\text{rms} = \sqrt{\frac{1}{N} \sum_{m=1}^{N} H_m^2}$, with Hm again denoting the individual wave heights in a certain time series. Statistical distribution of the heights of individual waves Significant wave height, scientifically represented as Hs or Hsig, is an important parameter for the statistical distribution of ocean waves. The most common waves are lower in height than Hs. This implies that encountering the significant wave is not too frequent. However, statistically, it is possible to encounter a wave that is much higher than the significant wave. Generally, the statistical distribution of the individual wave heights is well approximated by a Rayleigh distribution, under which the probability that an individual wave height exceeds a value H is $\exp[-2(H/H_s)^2]$. It follows that, statistically: 1 in 10 waves will be larger than 1.07 Hs; 1 in 100 will be larger than 1.52 Hs; 1 in 1000 will be larger than 1.86 Hs. This implies that one might encounter a wave that is roughly double the significant wave height. However, in rapidly changing conditions, the disparity between the significant wave height and the largest individual waves might be even larger. Other statistics Other statistical measures of the wave height are also widely used. The RMS wave height, which is defined as the square root of the average of the squares of all wave heights, is approximately equal to Hs divided by 1.4. For example, according to the Irish Marine Institute: "… at midnight on 9/12/2007 a record significant wave height was recorded of 17.2m at with [sic] a period of 14 seconds."
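Both definitions are straightforward to compute from a measured elevation record. The following is a minimal sketch in Python with NumPy, assuming a synthetic sea surface (all numeric values are illustrative assumptions, not data from this article): it segments the record at zero down-crossings to estimate H1/3 and uses four times the standard deviation for Hm0, following the definitions above.

```python
import numpy as np

def significant_wave_heights(eta):
    """Estimate H1/3 (time domain) and Hm0 (spectral) from a surface
    elevation record eta; an illustrative sketch, not a validated tool."""
    eta = np.asarray(eta, dtype=float)
    eta = eta - eta.mean()                  # elevation about the mean level

    # Time domain: split the record at zero down-crossings, then take the
    # trough-to-crest height of each individual wave.
    down = np.where((eta[:-1] >= 0) & (eta[1:] < 0))[0]
    heights = np.array([eta[a:b].max() - eta[a:b].min()
                        for a, b in zip(down[:-1], down[1:])])
    top_third = np.sort(heights)[::-1][:max(1, heights.size // 3)]
    h13 = top_third.mean()                  # mean of the highest one-third

    # Frequency domain: m0 equals the variance of the elevation, so
    # Hm0 = 4*sqrt(m0) = 4 * standard deviation.
    hm0 = 4.0 * eta.std()
    return h13, hm0

# Synthetic sea surface: random-phase sinusoids (illustrative values only).
rng = np.random.default_rng(0)
t = np.arange(0.0, 3600.0, 0.25)            # one hour sampled at 4 Hz
eta = sum(a * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
          for a, f in [(1.0, 0.08), (0.6, 0.11), (0.3, 0.16)])
h13, hm0 = significant_wave_heights(eta)
print(f"H1/3 = {h13:.2f} m, Hm0 = {hm0:.2f} m")
```

For a narrow-banded, Rayleigh-like sea the two estimates should agree to within a few percent, consistent with the remark above.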
Measurement Although most measuring devices estimate the significant wave height from a wave spectrum, satellite radar altimeters are unique in measuring the significant wave height directly, thanks to the different return times from wave crests and troughs within the area illuminated by the radar. The highest significant wave height measured from a satellite was recorded during a North Atlantic storm in 2011. Weather forecasts The World Meteorological Organization stipulates that certain countries are responsible for providing weather forecasts for the world's oceans. These respective countries' meteorological offices are called Regional Specialized Meteorological Centers, or RSMCs. In their weather products, they give ocean wave height forecasts in significant wave height. In the United States, NOAA's National Weather Service is the RSMC for a portion of the North Atlantic and a portion of the North Pacific. The Ocean Prediction Center and the Tropical Prediction Center's Tropical Analysis and Forecast Branch (TAFB) issue these forecasts. RSMCs use wind-wave models as tools to help predict the sea conditions. In the U.S., NOAA's Wavewatch III model is used heavily. Generalization to wave systems A significant wave height is also defined similarly, from the wave spectrum, for the different systems that make up the sea. We then have a significant wave height for the wind sea or for a particular swell. See also Ocean Prediction Center Rogue wave: a wave of over twice the significant wave height Sea state Notes External links Current global map of significant wave height and period NOAA Wavewatch III NWS Environmental Modeling Center Envirtech solid state payload for directional waves measurement Naval architecture Physical oceanography Shipbuilding Water waves
Significant wave height
[ "Physics", "Chemistry", "Engineering" ]
1,041
[ "Naval architecture", "Physical phenomena", "Applied and interdisciplinary physics", "Water waves", "Shipbuilding", "Waves", "Physical oceanography", "Marine engineering", "Fluid dynamics" ]
6,976,885
https://en.wikipedia.org/wiki/Defence%20School%20of%20Aeronautical%20Engineering
The Defence School of Aeronautical Engineering (DSAE) is a Defence Training Establishment (DTE) of the British Ministry of Defence. It was formed on 1 April 2004 and provides training for aircraft engineering officers and tradesmen across the three British armed forces. The school comprises a headquarters, No. 1 School of Technical Training and the Aerosystems Engineer and Management Training School (now No. 2 School of Technical Training), all based at RAF Cosford, and the Royal Naval Air Engineering and Survival Equipment School (RNAESS) at Gosport, with elements also based at RAF Cranwell and MOD St. Athan (No. 4 School of Technical Training). History The school was formed on 1 April 2004 as the Defence College of Aeronautical Engineering (DCAE) and was one of five federated defence colleges formed after the Defence Training Review. In 2012, it joined three other technical training colleges under a combined organisation, the Defence College of Technical Training, and reverted in title to being a Defence School. On 17 January 2007, Secretary of State for Defence Des Browne announced that Metrix UK, a joint venture between Qinetiq and Land Securities, had been selected as preferred bidder for Package One of Defence training. This would have located all aeronautical engineering training for all three services at MOD St Athan in 2017. The project was terminated in 2010 as part of the Strategic Defence and Security Review, undertaken by the Conservative-Liberal Democrat coalition government. Constituent elements The school comprises a headquarters and four affiliated schools. Headquarters The DSAE headquarters is based at RAF Cosford in Shropshire. The school reports to the Defence College of Technical Training (DCTT) which, in turn, is part of the Royal Air Force's No. 22 Group. Between 2004 and 2009 the station at Cosford was known as DCAE Cosford. No. 1 School of Technical Training The RAF's No. 1 School of Technical Training is based at RAF Cosford and provides RAF personnel with mechanical, avionics, weapons and survival equipment training. The school trains around 2,000 students per year. No. 2 School of Technical Training Professional and management training is provided to RAF personnel by the Aerosystems Engineer and Management Training School (AE&MTS), based at RAF Cosford. Royal Naval Air Engineering and Survival Equipment School Based at Gosport in Hampshire, the Royal Navy's Air Engineering and Survival Equipment School provides aeronautical engineering and survival equipment training to Royal Navy personnel. The school is divided into six elements – a headquarters, 764 Initial Training Squadron, the Advanced Training Group, the Common Training Group, the Specialist Training Group and the Training Support Group. School of Army Aeronautical Engineering Based at MOD Lyneham in Wiltshire, the Army's aviation engineering school delivers aeronautical engineering training to British Army personnel in the Royal Electrical and Mechanical Engineers (REME). SAAE trains potential aeronautical Technicians, Supervisors, Artificers and Engineering Officers for frontline Joint Helicopter Command roles in order to sustain REME Aviation. References External links DCAE Website DCAE Cranwell No.
22 Training Group Aeronautical engineering schools Engineering education in the United Kingdom Military training establishments of the United Kingdom Military units and formations established in 2004 Organisations based in Shropshire Organisations based in Lincolnshire Aeronautics organizations Defence agencies of the United Kingdom 2004 establishments in the United Kingdom
Defence School of Aeronautical Engineering
[ "Engineering" ]
663
[ "Aeronautics organizations", "Aeronautical engineering schools", "Engineering universities and colleges" ]
6,977,160
https://en.wikipedia.org/wiki/Tetraxenonogold%28II%29
Tetraxenonogold(II), gold tetraxenide(II) or AuXe42+ is a cationic complex consisting of a central gold atom surrounded by four xenon atoms. It is a covalent complex with a square planar configuration of atoms. The complex is found in the compound AuXe4(Sb2F11)2 (tetraxenonogold(II) undecafluorodiantimonate). This compound, which exists in triclinic and tetragonal crystal modifications, has its AuXe42+ ion stabilised by interactions with the fluorine atoms of the counterions. The Au−Xe bond lengths were determined by X-ray crystallography. Tetraxenonogold(II) is unusual in that it is a coordination complex of xenon, which is only weakly basic. It is also unusual in that it contains gold in the +2 oxidation state. It can be produced by reduction of AuF3 by xenon in the presence of fluoroantimonic acid. The salt crystallises at low temperature. Four xenon atoms bond with the gold(II) ion to make this complex. It was the first described compound between a noble gas and a noble metal. It was first described in 2000 by Konrad Seppelt and Stefan Seidel. Several related compounds containing gold(III)–xenon and gold(I)–xenon bonds have since been isolated. A compound containing a mercury–xenon bond, [HgXe]2+[Sb2F11]–[SbF6]– (xenonomercury(II) undecafluorodiantimonate hexafluoroantimonate), has also been isolated. References Cations Gold compounds Xenon compounds
Tetraxenonogold(II)
[ "Physics", "Chemistry" ]
371
[ "Matter", "Inorganic compounds", "Inorganic compound stubs", "Cations", "Ions" ]
6,977,357
https://en.wikipedia.org/wiki/Reboiler
Reboilers are heat exchangers typically used to provide heat to the bottom of industrial distillation columns. They boil the liquid from the bottom of a distillation column to generate vapors which are returned to the column to drive the distillation separation. The heat supplied to the column by the reboiler at the bottom of the column is removed by the condenser at the top of the column. Proper reboiler operation is vital to effective distillation. In a typical classical distillation column, all the vapor driving the separation comes from the reboiler. The reboiler receives a liquid stream from the column bottom and may partially or completely vaporize that stream. Steam usually provides the heat required for the vaporization. Types of reboilers The most critical element of reboiler design is the selection of the proper type of reboiler for a specific service. Most reboilers are of the shell and tube heat exchanger type, and normally steam is used as the heat source in such reboilers. However, other heat transfer fluids like hot oil or Dowtherm (TM) may be used. Fuel-fired furnaces may also be used as reboilers in some cases. Commonly used heat exchanger type reboilers are: Kettle type reboilers Kettle reboilers (Image 1) are very simple and reliable. They are similar to shell and tube heat exchangers. They may require pumping of the column bottoms liquid into the kettle, or there may be sufficient liquid head to deliver the liquid into the reboiler. In this reboiler type, steam flows through the tube bundle and exits as condensate. The liquid from the bottom of the tower, commonly called the bottoms, flows through the shell side. There is a retaining wall or overflow weir separating the tube bundle from the reboiler section where the residual reboiled liquid (called the bottoms product) is withdrawn, so that the tube bundle is kept covered with liquid, reducing the amount of low-boiling compounds in the bottoms product. Thermosyphon reboilers Thermosyphon reboilers (Image 2) do not require pumping of the column bottoms liquid into the reboiler. Natural circulation is obtained by using the density difference between the reboiler inlet column bottoms liquid and the reboiler outlet liquid-vapor mixture to provide sufficient liquid head to deliver the tower bottoms into the reboiler. Thermosyphon reboilers (also known as calandrias) are more complex than kettle reboilers and require more attention from the plant operators. There are many types of thermosyphon reboilers, including vertical, horizontal, once-through and recirculating. Fired reboiler Fired heaters (Image 3), also known as furnaces, may be used as a distillation column reboiler. A pump is required to circulate the column bottoms through the heat transfer tubes in the furnace's convection and radiant sections. The heat source for the fired heater reboiler may be either fuel gas or fuel oil. Forced circulation reboilers A forced circulation reboiler (Image 4) uses a pump to circulate the column bottoms liquid through the reboiler. This is useful if the reboiler must be located far from the column, or if the bottoms product is extremely viscous. Some fluids are temperature sensitive, such as those subject to polymerization by contact with high temperature heat transfer tube walls. High liquid recirculation rates are used to reduce tube wall temperatures, thereby reducing polymerization on the tube and associated fouling.
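As a rough illustration of the energy balance common to all of these reboiler types, the following sketch estimates the heat duty for a required boil-up rate and the steam flow that must condense to supply it. This is a minimal Python sketch under assumed, illustrative property values; the latent heats are placeholders, not design data.

```python
def reboiler_duty_kw(boilup_kg_s, latent_heat_process_kj_kg):
    """Heat duty Q = V * dHvap needed to generate the boil-up vapor (kW)."""
    return boilup_kg_s * latent_heat_process_kj_kg

def steam_demand_kg_s(duty_kw, latent_heat_steam_kj_kg=2100.0):
    """Saturated steam flow that must condense to supply the duty.
    2100 kJ/kg is a rough, assumed figure for low-pressure steam."""
    return duty_kw / latent_heat_steam_kj_kg

# Example: 5 kg/s of boil-up for a liquid with dHvap ~ 850 kJ/kg (assumed).
q = reboiler_duty_kw(5.0, 850.0)
print(f"duty = {q:.0f} kW, steam = {steam_demand_kg_s(q):.2f} kg/s")
```

In practice, a design would also account for sensible heating, fouling margins and the exchanger area implied by the available temperature difference; the sketch shows only the first-pass arithmetic.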
See also Vaporization Boiling Further reading External links Column reboilers Sketches and discussion of various reboiler types Heat exchangers Distillation Industrial equipment
Reboiler
[ "Chemistry", "Engineering" ]
820
[ "Separation processes", "Chemical equipment", "Distillation", "Heat exchangers", "nan" ]
6,978,503
https://en.wikipedia.org/wiki/Omega%20Fornacis
Omega Fornacis, which is Latinized from ω Fornacis, is a wide binary star system in the southern constellation of Fornax. It has a blue-white hue and is faintly visible to the naked eye as a fifth-magnitude star. The system lies at a distance of approximately 470 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +10 km/s. The dual nature of this system was discovered in 1836 by John Herschel. As of 2013, the two components were observed at a position angle of 246°. The magnitude 4.95 primary, designated component A, is a chemically peculiar B-type main-sequence star with a stellar classification of B9V. It has 3.4 times the Sun's mass and is radiating around 268 times the luminosity of the Sun from its photosphere at an effective temperature of 10,910 K. Component B, the magnitude 7.71 secondary, is an A-type main-sequence star with a class of A3V. It is smaller than the primary, but has a higher projected rotational velocity. References B-type main-sequence stars A-type main-sequence stars Chemically peculiar stars Binary stars Fornax Fornacis, Omega Durchmusterung objects 016046 011918 0749
Omega Fornacis
[ "Astronomy" ]
288
[ "Fornax", "Constellations" ]
6,978,562
https://en.wikipedia.org/wiki/Choquequirao
Choquequirao (possibly from Quechua chuqi metal, k'iraw crib, cot) is an Incan site in southern Peru, similar in structure and architecture to Machu Picchu. The ruins are buildings and terraces at levels above and below Sunch'u Pata, the truncated hill top. The hilltop was anciently leveled and ringed with stones to create a 30 by 50 m platform. Choquequirao lies in the spurs of the Vilcabamba mountain range, in the Mollepata district, Anta province of the Cusco Region. The complex is 1,800 hectares, of which 30–40% is excavated. The site overlooks the canyon of the Apurimac River. The site is reached by a two-day hike from outside Cusco. Choquequirao topped Lonely Planet's Best in Travel 2017 list of top regions. History Choquequirao is a 15th- and 16th-century settlement associated with the Inca Empire, or more correctly Tahuantinsuyo. The site had two major growth stages. This could be explained if Pachacuti founded Choquequirao and his son, Tupac Inca Yupanqui, remodeled and extended it after becoming the Sapa Inca. Choquequirao is located in the area considered to be Pachacuti's estate, which includes the areas around the rivers Amaybamba, Urubamba, Vilcabamba, Victos, and Apurímac. Other sites in this area are Sayhuite, Machu Picchu, Chachabamba (Chachapampa), Choquesuysuy (Chuqisuyuy), and Guamanmarca (Wamanmarka); all of which share similar architectural styles with Choquequirao. The architectural style of several important features appears to be of Chachapoya design, suggesting that Chachapoya workers were probably involved in the construction. This suggests that Tupaq Inka probably ordered the construction. Colonial documents also suggest that Tupac Inca ruled Choquequirao, since his great grandson, Tupa Sayri, claimed ownership of the site and neighboring lands during Spanish colonization. It was one of the last bastions of resistance and refuge of the Son of the Sun (the "Inca"), Manco Inca Yupanqui, who fled Cusco after his siege of the city failed in 1535. According to the Peruvian Tourism Office, "Choquequirao was probably one of the entrance check points to the Vilcabamba, and also an administrative hub serving political, social, and economic functions. Its urban design has followed the symbolic patterns of the imperial capital, with ritual places dedicated to Inti (the Incan sun god) and the ancestors, to the earth, water, and other divinities, with mansions for administrators and houses for artisans, warehouses, large dormitories or kallankas, and farming terraces belonging to the Inca or the local people. Spreading over 700 meters, the ceremonial area drops as much as 65 meters from the elevated areas to the main square." The city also played an important role as a link between the Amazon Jungle and the city of Cusco. Discovery According to Ethan Todras-Whitehill of the New York Times, Choquequirao's first non-Incan visitor was the explorer Juan Arias Díaz in 1710. The first written reference to the site, in 1768, was ignored at the time. The Prefect of the Province of Apurimac, J.J. Nuñez, encouraged Hiram Bingham to visit the 'Cradle of Gold', in order to discover any Incan treasure. Bingham was a delegate to the 1908 First Pan American Scientific Congress and was in Cusco at the time. Bingham decided to visit Choquequirao in 1909 to determine if it was Vilcapampa, the capital of the last four Incas.
He found three groups of buildings, mummified bodies, and places where dynamite had been used in the search for treasure. Visitors who had recorded their names included Count de Sartiges, Jose Maria Tejada and Marcelino Leon, 1834, Jose Benigno Samanez, Juan Manuel Rivas Plata and Mariana Cisneros, 1861, and three Almanzas, Pio Mogrovejo, and their treasure hunting workmen, 1885. However, Bingham decided it was merely a frontier fortress, and it tempted him to search further. Location and layout Choquequirao is situated at an elevation of 3,000 m above sea level on a southwest-facing spur of a glaciated peak above the Apurimac River. The region is characterized by mountain topography and covered with Amazonian flora and fauna. It is 98 km west of Cusco, in the Vilcabamba range. The complex covers 6 km2. Architecturally it is similar to Machu Picchu. The main structures, such as temples, huacas, elite residences, and fountain/bath systems, are concentrated around two plazas along the crest of the ridge, which encompass approximately 2 km2 and follow Inca urban design. There is also a conglomeration of common buildings clustered away from the plaza. Excavations and surface items suggest they were probably used for workshops and food preparation. Most buildings are well-preserved and well-restored; restoration continues. The terrain around the site was greatly modified. The central area of the site was leveled artificially and the surrounding hillsides were terraced to allow cultivation and small residential areas. The typical Inca terraces form the largest constructions on site. Many of the ceremonial structures are associated with water. There are two unusual temple wak'a sites that lie several hundred meters lower than the two plazas. These carefully crafted step terraces, descending a steep slope, are designed around water. The site also contains a number of ceremonial structures such as the large usnu built on a truncated hill, the Giant Staircase, and an aqueduct providing water to the water shrines. Sectors The archaeological complex of Choquequirao is divided into 12 sectors. While the contents of each sector are different, terraces used for various purposes are common throughout. It seems that most of the buildings here were either for ceremonial purposes, residences of the priests, or used to store food. Sector I is the highest and most northerly portion of the site. There are five buildings constructed on terraces at varying levels, a temple and a plaza, as well as a smaller plaza in the uppermost area of the sector. Two of the buildings appear to be qullqas (warehouses). The three long buildings, called kallankas, were likely priests' residences. Sector II is where a majority of the qullqanpatas, or depositories, are located. In one part of this sector there are 16 ceremonial platforms with canal routes in between that branch off from the main water way. Sector III is between the hanan (high) area and the urin (low) area of the complex and contains what is believed to be the Haucaypata (Hawkaypata), or main plaza. At the periphery of the plaza there are one-story and two-story buildings. To the north, there is a Sunturwasi and a single level kallanka likely used for ceremony. To the east are the buildings with two levels. The main plaza is discussed in more detail in the Ceremonial center section below. Sector IV is located in the southerly area of the complex, known as the urin zone.
The main building here has walls that were probably ceremonial in function, since one of them is known as the “wall of offerings to the ancestors”. Sector V is the location of the usnu, which is a hill leveled at the summit to form an oval platform used for ceremony. A small wall encircles the hill. From the platform, one can see the main plaza of sector III, the snow-capped mountains and the Apurímac River. Sector VI, south of the usnu in the urin area, contains the Wasi Kancha ("house yard"), also known as the priests' quarters. There are four terraces here that were used as ceremonial space. In the walls of the terraces there is a zigzagged design. Sector VII can be reached from the main plaza by pathway. Located on the east side of Choquequirao, this zone contains cultivation terraces that have markedly greater amplitude than all others throughout the complex. Sector VIII, on the western side of the complex, has 80 cultivation terraces divided into plots by water canals that stream down from the main plaza. In this zone, one will find the famous "Llamas del Sol". Sector IX contains general living quarters for groups of people, such as workers or families. The buildings are constructed on top of artificial platforms in circular and rectangular design, interconnected by stairways and narrow alleys. Sector X, called paraqtepata, has 18 terraced platforms that have irrigation canals running parallel to the stairs. Sector XI has 80 terraces used for cultivation, called phaqchayuq ("the one with a waterfall"), which are the most extensive in the entire complex. Also found here are small, quadrilateral enclosures with two levels used for both ceremony and living. Outside, there are three water fountains used for drinking and to supply the irrigation canals. Sector XII lies three hours away (on foot) from the upper part of the complex. Here there are 57 platforms with permanent irrigation systems. In the uppermost terraces there are buildings for ceremony and a pool of water fed by a spring. In the semicircular enclosures ceramic shards, stone tools and remains of bones have been found. Ceremonial center The ceremonial center of Choquequirao shares many features with other Inca ceremonial centers and pilgrimage sites, such as Isla del Sol, Quespiwanka (Qhispi Wank'a, palace of Huayna Capac), Machu Picchu/Llaqtapata, Tipon and Saywite. The long and treacherous route from Cusco to Choquequirao likely passed by Machu Picchu, leading onto the face of Machu Picchu Peak. From Llaqtapata, the path continued down into the Mollepata Valley, traversed the Yanama pass at about 4,670 m, and continued across the Rio Blanco, finally reaching Choquequirao from above after an estimated 7- to 10-day journey. The ceremonial center consists of a main platform and a lower plaza. Stone-lined channels carried ceremonial water, or chicha, to shrines and baths throughout the site. The main platform, unique in its size and prominence, limited ceremonial activity to royalty and the ministerial class. This is suggested by evidence that the only entrance to the platform was through a double-jamb doorway, which functioned to control access to the sacred space. Other features of the ceremonial center include structures that mark the direction of certain solar events, such as where the June and December solstice sun rises and sets. Located in the main platform, the Giant Stairway opens to the sunrise of the December solstice.
Measured at 25 meters long and 4.4 meters wide, this structure seems to have been purely ceremonial in function, since the stairs end abruptly partway down a hill, leading to nothing. Large boulders that rest upon the risers of the stairway become fully illuminated when the December solstice sun rises. Gary R. Ziegler and J. McKim Malville have postulated that when the boulders become illuminated, a wak'a is activated by its solar camaquen, a case similar to the illumination of the large stone of the Torreon at Machu Picchu. In the lower plaza a group of structures were found that appeared to be water shrines and baths. This interpretation is based on their strong resemblance to those at sector II of Llaqtapata and on the numerous water channels leading to that portion of the plaza. Overall, it seems as though the site was chosen, as Machu Picchu was, for its sacred geographical location, and was designed to facilitate ritual and ceremonial activity. Subsectors The area around Choquequirao contains several subsectors that have been associated with the Inca culture that thrived in Choquequirao, suggesting that the subsectors are most likely part of the site. Design, construction style, and cultural parallels support the view that these sectors were tightly intertwined with Choquequirao and the Inca at some point in their history. The lack of residential space in these sectors suggests that they were probably farming outposts of Choquequirao rather than an independent site. Due to differences in design and construction styles, it is believed that these sectors were built in three different phases. In keeping with Choquequirao's art style, the subsectors also contain multiple camelid artworks and ceremonial phaqchas closely associated with the Inca, especially Pachacuti's government. Materials All lithic materials utilized for the construction of the site and surrounding sectors were mined from the local quarries. Due to the metamorphic rock in the quarries of Choquequirao, superb masonry like that at Machu Picchu could not be obtained. Instead, the entrances and corners were shaped from quartzite, and the walls were made of ashlar and plastered with clay and then painted in a light orange color. Art Most of the rock art in Choquequirao is in the terraced area where cultivation occurred. Archaeologists have documented twenty-five semi-naturalistic figures on the terraces of sector VIII of Choquequirao. The rocks used to build the walls are dark schist, while the camelid images are of white calcocuarcita, a sandstone of quartz and carbonate. The camelid motifs vary between a maximum height of 1.94 m and a minimum of 1.25 m. In 2004, archaeologist Zenobio Valencia from the University of San Antonio Abad of Cusco found several camelid figurines made of white stones in a group of terraces in one sector of the archaeological site. One recent discovery, for example, uncovered a scene laid into the stone terraces in white quartzite depicting several llamas loaded with cargo standing by their handlers. Present on the uppermost terrace wall is a zigzag pattern of the same quartzite. This style of design is uniquely Chachapoya and not found in other sites of Inca construction, indicating that workers from Chachapoya may have been involved in the construction of Choquequirao. Access At present, the only way to reach Choquequirao is a hard hike. The common trailhead begins at the village of San Pedro de Cachora, which is approximately a 4-hour drive from Cusco, along the Cusco-Abancay route.
Another access point is from Huanipaca village, whose crossroad is located on the same Cusco-Abancay route, 4–5 km beyond the Cachora crossroad. Huanipaca offers a 15 km trail, less than half the distance of the Cachora trail (31 km). Over 5,000 people trekked to Choquequirao in 2013. From Choquequirao it is possible to continue hiking to Machu Picchu. Most treks range from 7 to 11 days, and involve going over the Yanama Pass, which at 4,668 m is the highest point on the trek. The construction of the cable car to Choquequirao has been declared a priority by the Apurímac Regional Government, which is set to receive 220 million Peruvian soles (US$82.7 million) to fund the project. It will reduce a two-day hike to a 15-minute cable car ride. Carlos Canales, president of the National Chamber of Tourism (Canatur), believes that in the first year of operation the Choquequirao cable car will receive 200,000 tourists, which will generate an income of US$4 million, with the average visitor paying US$20 per ticket. Photo gallery See also Inka Raqay Inka Wasi Iperu, tourist information and assistance Ñusta Hisp'ana Tourism in Peru Notes References Ziegler, Gary R., and J. McKim Malville. "Choquequirao, Topa Inca's Machu Picchu: a royal estate and ceremonial center". Proceedings of the International Astronomical Union, 2011, no. 278, pages 162–168. Ziegler, Gary R., and J. McKim Malville (2013). Machu Picchu's Sacred Sisters: Choquequirao and Llactapata; Astronomy, Symbolism and Sacred Geography in the Inca Heartland. Johnson Books, Boulder. Ziegler, Gary R. Beyond Machu Picchu; Lost City in the Clouds, Peruvian Times. https://www.peruviantimes.com/06/beyond-machu-picchu-choquequirao-lost-city-in-the-clouds/23519/ Lee, Vincent R. (1997). Inca Choqek'iraw: New Work at a Long Known Site. Cortez, CO: Sixpac Manco Publications. Choquequirao, Peru's Tourism Office, 2011 Trail to Choquequirao, El Comercio Newspaper, Lima, Peru, May 13, 2009, [Spanish] Cusco travel guide, September 5, 2011, [Spanish] Jones, Paul. Exciting News about the Choquequirao Cable Car. Totally Latin America S.A. Retrieved 7 December 2012. Salazar, Carla. Tramway planned for Machu Picchu's 'sister city'. AP Travel. Associated Press. Retrieved 31 August 2013. Choquequirao recibiría 600 mil viajeros en el 2018 con teleférico. El Comercio. Retrieved 7 December 2012. External links The Other Machu Picchu article on Choquequirao (The New York Times, June 3, 2007) Debate on the value of publicizing Choquequirao as a travel destination from the author of the New York Times article Archaeoastronomy Archaeological sites in Peru Former populated places in Peru Inca Ruins in Peru Populated places established in the 15th century Archaeological sites in Cusco Region Tourist attractions in Cusco Region 15th-century establishments in South America 1710 archaeological discoveries 1768 archaeological discoveries 1834 archaeological discoveries 1837 archaeological discoveries 1909 archaeological discoveries Hiking trails in Peru
Choquequirao
[ "Astronomy" ]
3,808
[ "Archaeoastronomy", "Astronomical sub-disciplines" ]
6,978,660
https://en.wikipedia.org/wiki/MINPACK
MINPACK is a library of FORTRAN subroutines for solving systems of nonlinear equations, or for the least-squares minimization of the residual of a set of linear or nonlinear equations. MINPACK, along with other similar libraries such as LINPACK and EISPACK, originated from the Mathematics and Computer Science (MCS) Division of Argonne National Laboratory. Written by Jorge Moré, Burt Garbow, and Ken Hillstrom, MINPACK is free and designed to be highly portable, robust and reliable. The quality of its implementation of the Levenberg–Marquardt algorithm is attested by Dennis and Schnabel. Five algorithmic paths each include a core subroutine and a driver routine. The algorithms proceed either from an analytic specification of the Jacobian matrix or directly from the problem functions. The paths include facilities for systems of equations with a banded Jacobian matrix, for least-squares problems with a large amount of data, and for checking the consistency of the Jacobian matrix with the functions. References J. J. Moré, B. S. Garbow, and K. E. Hillstrom, User Guide for MINPACK-1, Argonne National Laboratory Report ANL-80-74, Argonne, Ill., 1980. J. J. Moré, D. C. Sorensen, K. E. Hillstrom, and B. S. Garbow, The MINPACK Project, in Sources and Development of Mathematical Software, W. J. Cowell, ed., Prentice-Hall, pages 88–111, 1984. External links Netlib download site User Guide for MINPACK-1, Chapters 1 to 3, from J. J. Moré website User Guide for MINPACK-1, Chapter 4, from J. J. Moré website Fortran libraries Numerical software
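MINPACK's Levenberg–Marquardt routines remain in wide use through wrappers in other environments; for example, SciPy's scipy.optimize.leastsq function is documented as a wrapper around MINPACK's lmdif and lmder. The following is a minimal illustrative sketch of a nonlinear least-squares fit through that wrapper; the model and data are invented for the example, not taken from the MINPACK documentation.

```python
import numpy as np
from scipy.optimize import leastsq  # wraps MINPACK's lmdif/lmder

def residuals(params, x, y):
    """Residuals of the model y = a * exp(b * x); the routine minimizes
    the sum of their squares."""
    a, b = params
    return y - a * np.exp(b * x)

# Synthetic data for the fit (illustrative values only).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.3 * x) + 0.01 * rng.standard_normal(x.size)

params, info = leastsq(residuals, [1.0, -1.0], args=(x, y))
print(params, info)  # info in 1..4 indicates convergence, echoing MINPACK's flag
```

Because no Jacobian function is supplied here, the wrapper takes MINPACK's lmdif path, which approximates the Jacobian by forward differences; supplying an analytic Jacobian selects lmder instead, matching the two algorithmic paths described above.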
MINPACK
[ "Mathematics" ]
377
[ "Numerical software", "Mathematical software" ]
6,978,672
https://en.wikipedia.org/wiki/Comparability%20graph
In graph theory, a comparability graph is an undirected graph that connects pairs of elements that are comparable to each other in a partial order. Comparability graphs have also been called transitively orientable graphs, partially orderable graphs, containment graphs, and divisor graphs. An incomparability graph is an undirected graph that connects pairs of elements that are not comparable to each other in a partial order. Definitions and characterization For any strict partially ordered set (S, <), the comparability graph of (S, <) is the graph whose vertices are the elements of S and whose edges are those pairs {x, y} of elements such that x < y. That is, for a partially ordered set, take the directed acyclic graph, apply transitive closure, and remove orientation. Equivalently, a comparability graph is a graph that has a transitive orientation, an assignment of directions to the edges of the graph (i.e. an orientation of the graph) such that the adjacency relation of the resulting directed graph is transitive: whenever there exist directed edges (x, y) and (y, z), there must exist an edge (x, z). One can represent any finite partial order as a family of sets, such that x < y in the partial order whenever the set corresponding to x is a subset of the set corresponding to y. In this way, comparability graphs can be shown to be equivalent to containment graphs of set families; that is, a graph with a vertex for each set in the family and an edge between two sets whenever one is a subset of the other. Alternatively, one can represent the partial order by a family of integers, such that x < y whenever the integer corresponding to x is a divisor of the integer corresponding to y. Because of this construction, comparability graphs have also been called divisor graphs. Comparability graphs can be characterized as the graphs such that, for every generalized cycle (see below) of odd length, one can find an edge connecting two vertices that are at distance two in the cycle. Such an edge is called a triangular chord. In this context, a generalized cycle is defined to be a closed walk that uses each edge of the graph at most once in each direction. Comparability graphs can also be characterized by a list of forbidden induced subgraphs. Relation to other graph families Every complete graph is a comparability graph, the comparability graph of a total order. All acyclic orientations of a complete graph are transitive. Every bipartite graph is also a comparability graph. Orienting the edges of a bipartite graph from one side of the bipartition to the other results in a transitive orientation, corresponding to a partial order of height two. As has been observed, every comparability graph that is neither complete nor bipartite has a skew partition. The complement of any interval graph is a comparability graph. The comparability relation is called an interval order. Interval graphs are exactly the graphs that are chordal and that have comparability graph complements. A permutation graph is a containment graph on a set of intervals. Therefore, permutation graphs are another subclass of comparability graphs. The trivially perfect graphs are the comparability graphs of rooted trees. Cographs can be characterized as the comparability graphs of series-parallel partial orders; thus, cographs are also comparability graphs. Threshold graphs are another special kind of comparability graph. Every comparability graph is perfect.
The perfection of comparability graphs is Mirsky's theorem, and the perfection of their complements is Dilworth's theorem; these facts, together with the perfect graph theorem, can be used to prove Dilworth's theorem from Mirsky's theorem or vice versa. More specifically, comparability graphs are perfectly orderable graphs, a subclass of perfect graphs: a greedy coloring algorithm for a topological ordering of a transitive orientation of the graph will optimally color them. The complement of every comparability graph is a string graph. Algorithms A transitive orientation of a graph, if it exists, can be found in linear time. However, the algorithm for doing so will assign orientations to the edges of any graph, so to complete the task of testing whether a graph is a comparability graph, one must test whether the resulting orientation is transitive, a problem provably equivalent in complexity to matrix multiplication. Because comparability graphs are perfect, many problems that are hard on more general classes of graphs, including graph coloring and the independent set problem, can be solved in polynomial time for comparability graphs. See also Bound graph, a different graph defined from a partial order Notes References Graph families Order theory Perfect graphs
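As a concrete illustration of the divisor-graph construction and of transitivity testing, the sketch below (Python with NumPy; an illustrative example, not one of the linear-time algorithms cited above) orients the divisibility order on {1, ..., n} and verifies that the orientation is transitive using a boolean matrix product, echoing the connection to matrix multiplication noted above.

```python
import numpy as np
from itertools import combinations

def divisibility_orientation(n):
    """Directed adjacency matrix of the divisibility order on 1..n:
    an edge u -> v whenever u properly divides v."""
    a = np.zeros((n, n), dtype=bool)
    for u, v in combinations(range(1, n + 1), 2):
        if v % u == 0:
            a[u - 1, v - 1] = True
    return a

def is_transitive(a):
    """An orientation is transitive iff every two-step path u -> v -> w
    is accompanied by the edge u -> w: (A @ A) must imply A."""
    two_step = (a.astype(int) @ a.astype(int)) > 0
    return not np.any(two_step & ~a)

a = divisibility_orientation(12)
print(is_transitive(a))  # True: divisibility is a (strict) partial order
```

Forgetting the edge directions in the matrix produced here yields the comparability (divisor) graph of the order, matching the definition given earlier in the article.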
Comparability graph
[ "Mathematics" ]
972
[ "Order theory" ]
6,979,057
https://en.wikipedia.org/wiki/Dot%20Comedy
Dot Comedy is an American television series that aired on the American Broadcasting Company (ABC). It is notable for being a series that was canceled after only one episode. Premise Dot Comedy was an early attempt at bringing Internet humor to mass television audiences in the pre-broadband era; it premiered on ABC on December 8, 2000. The show was hosted by Annabelle Gurwitch, the Sklar Brothers, and Katie Puckrik. Adapted from a British show of the same name, the show featured a similar premise to America's Funniest Home Videos in that the hosts and audience react to ostensibly humorous content originating on websites. In addition, Puckrik would interview the creators of the web content presented. Viewers were also encouraged to submit their own web content, such as video, audio, and image files. The show was a co-production with the television channel Oxygen, and episodes were planned to air on Oxygen after being broadcast on ABC. The show replaced The Trouble with Normal on ABC, which had been canceled after five episodes as part of a troubled post-TGIF attempt to relaunch the night with adult-targeted sitcoms. Dot Comedy did even worse, being viewed by 4.1 million viewers in its only aired episode before also being canceled. The remaining four episodes never aired. Critical reception Bob Curtright of The Wichita Eagle gave the show a mixed review. He thought that the show had the potential to display humorous content on the Internet and give a platform through which content creators could gain exposure, but criticized the Sklar Brothers' hosting as "superfluous". References External links 2000 American television series debuts 2000 American television series endings Television series canceled after one episode 2000s American reality television series Television series by Carsey-Werner Productions Works about the Internet 2000s American video clip television series American Broadcasting Company reality television shows American English-language television shows
Dot Comedy
[ "Technology" ]
375
[ "Works about the Internet", "Works about computing" ]
7,134,874
https://en.wikipedia.org/wiki/Fondements%20de%20la%20G%C3%A9ometrie%20Alg%C3%A9brique
Fondements de la Géometrie Algébrique (FGA) is a book that collects the seminar notes of Alexander Grothendieck. It is an important source for his pioneering work on scheme theory, which laid the foundations for algebraic geometry in its modern technical development. The title is a translation of the title of André Weil's book Foundations of Algebraic Geometry. It contains material on descent theory and existence theorems, including that for the Hilbert scheme. The Technique de descente et théorèmes d'existence en géometrie algébrique is one series of seminars within FGA. As with the bulk of Grothendieck's work of the IHÉS period, duplicated notes were circulated, but the work was not published as a conventional book. Contents These are Séminaire Bourbaki notes, by number, from the years 1957 to 1962. Fondements de la géométrie algébrique. Commentaires [Séminaire Bourbaki, t. 14, 1961/62, Complément]; Théorème de dualité pour les faisceaux algébriques cohérents [Séminaire Bourbaki, t. 9, 1956/57, no. 149]; (coherent duality) Géométrie formelle et géométrie algébrique [Séminaire Bourbaki, t. 11, 1958/59, no. 182]; (formal geometry) Technique de descente et théorèmes d'existence en géométrie algébrique. I-VI I. Généralités. Descente par morphismes fidèlement plats [Séminaire Bourbaki, t. 12, 1959/60, no. 190]; II. Le théorème d'existence en théorie formelle des modules [Séminaire Bourbaki, t. 12, 1959/60, no. 195]; III. Préschémas quotients [Séminaire Bourbaki, t. 13, 1960/61, no. 212]; IV. Les schémas de Hilbert [Séminaire Bourbaki, t. 13, 1960/61, no. 221]; V. Les schémas de Picard. Théorèmes d'existence [Séminaire Bourbaki, t. 14, 1961/62, no. 232]; VI. Les schémas de Picard. Propriétés générales [Séminaire Bourbaki, t. 14, 1961/62, no. 236] See also Éléments de géométrie algébrique Séminaire de Géométrie Algébrique du Bois Marie References SGA, EGA, FGA By Mateo Carmona Algebraic geometry 1962 non-fiction books Mathematics books
Fondements de la Géometrie Algébrique
[ "Mathematics" ]
566
[ "Fields of abstract algebra", "Algebraic geometry" ]
7,135,149
https://en.wikipedia.org/wiki/Working%20terrier
A working terrier is a type of terrier dog bred and trained to hunt vermin, including badgers, foxes, rats, and other small mammals. This may require the working terrier to pursue the vermin into an underground warren. These working dog breeds are bred primarily neither for the show ring nor as companion dogs; rather, they are valued for their hunting ability, endurance, and gameness. Working terriers provide utility on farms, for pest control, and in organized hunting activities. A terrierman leads a pack of terriers when they are working. According to the Oxford English Dictionary, the name "terrier" dates back to 1410 in the writings of Edward of Norwich, 2nd Duke of York (1373–1415). The word terrier in Old French derives from the Latin "terra", which means "earth". The term terrier, meaning "earth dog" or "dog of the earth", was used in the Middle Ages in connection with the dog's role of burrowing into the ground in pursuit of quarry, and it eventually became the name of this group of hunting dogs. With the growth in popularity of fox hunting in Britain in the 18th and 19th centuries, terriers were extensively bred to follow the red fox, as well as the Eurasian badger, into their burrows. This is referred to as "terrier work" or "going to ground". The purpose of the terrier is to locate the burrow of the prey animal, and then either intimidate it into leaving its burrow or hold the prey still so it can be killed or captured. Working terriers can be no wider than the animals they hunt (chest circumference or "span" less than 35 cm) in order to fit into the burrows and still have room to maneuver. Terrier work has been condemned by British animal welfare organizations such as the League Against Cruel Sports, the International Fund for Animal Welfare, and the Royal Society for the Prevention of Cruelty to Animals because it can lead to underground fighting between animals and cause serious injuries. The British National Working Terrier Federation denies that underground fighting is an issue, arguing that the terrier's role is to locate, bark, and flush out the hunted animals, not to attack them. Hunting below ground with terriers is largely illegal in Britain under the Hunting Act 2004, unless conducted under strict conditions intended to protect game birds. Hunting with working terriers for rats is legal in the United Kingdom. Terrier work is legal in Australia, Canada, South Africa, the United States and much of continental Europe. Requirements of a working terrier The primary characteristic of a working terrier is its active employment by an owner or handler; its status is determined by its work rather than by its breed alone. The most critical physical attribute for a working terrier is a small chest, which facilitates navigation through narrow underground tunnels. The optimal chest size varies depending on the dimensions of the tunnel system; generally, smaller dogs are more effective as they can reach their quarry without the need for extensive digging and without significant fatigue. Larger dogs often face difficulties in maneuvering through tight bends in tunnels, requiring frequent intervention from handlers to continue, such as digging the dog out. Furthermore, if a dog digs to a point where the tunnel narrows significantly, it may need to move the excavated dirt behind itself to advance. This can lead to a condition where the dog is trapped by the accumulated soil, making it extremely challenging to escape the tunnel without assistance, particularly if it is unable to turn around. 
With two animals underground, a flow of air must be maintained to avoid asphyxiation. The more space a dog takes up in a burrow, the more the airflow will be constricted. In addition, a small dog has better maneuverability and can more easily avoid being bitten. Because of this, small dogs often receive less injury underground than larger dogs, which are more likely to find themselves jammed in a den pipe, face-to-face with the prey, and unable to move forward or backward. Other important requirements of a working terrier are essential gameness, a good nose, and the ability to problem-solve to avoid coming to harm underground. Terrier work as vermin control A wide variety of game is hunted below ground with terriers, including red foxes, groundhogs (also known as woodchucks), raccoons, opossums, nutria (also known as coypu), and European and American badgers. According to a 1994 survey by the British Association for Shooting and Conservation, 9% of foxes killed by UK gamekeepers were killed following the use of terriers. Terrier work is not a very efficient way of hunting vermin, though in 2006, some 258 members of the Royal College of Veterinary Surgeons argued that it is a comparatively humane way to reduce fox numbers and is quite selective. Because of these characteristics, terrier work is considered an ideal way to control certain nuisance wildlife in farm country. The inefficiency of terrier work means that, unlike poisons and traps, there is no danger that a species can be wiped out over a large area and little chance that an adult will be killed with unseen young still in the den. Though inefficient, a team of terriers, when coupled with an enthusiastic digger, can control red foxes, raccoons, and groundhogs on small farms where their presence might be a problem for chickens, geese, wild bird populations, and crop production. Because terrier work is selective, animals can be dispatched, or else relocated to nearby farms, forests, or waste areas where they will do no harm. Early history Terrier work as it is known today began with the rise of the Enclosure Movement in the late 18th century in England. With enclosure, people were moved off the land and into cities and towns, and sheep and other livestock were moved into newly walled, hedged, and fenced fields. Vast expanses of enclosed open spaces proved perfect for mounted fox hunting, a sport that had arrived in the UK from France in the late 17th century. The first mounted fox hunts were described by Sir Walter Scott, who also described the first working terriers in the UK. The first true breed of working terrier that bears a resemblance to what we see in the field today is the Jack Russell Terrier. The Jack Russell Terrier is named after the Reverend John Russell, whose long life (1795–1883) encompasses the entire early history of mounted fox hunts in the UK, and who is credited with breeding the first fox-working white-bodied terrier used in the field today. With the rise of the Enclosure Movement in the late 18th and early 19th centuries came the control of sires and the rapid improvement of livestock herds. As breeds were improved, livestock shows were held to display these improvements. From these livestock shows grew the first dog shows. The first dog show appeared in the UK in 1859, the same year that Charles Darwin's Origin of Species was first published. 
Both Darwin's book and the first dog show drew much of their inspiration from the rapid "speciation" of new livestock breeds that had first begun with Robert Bakewell's efforts to control sire selection. If livestock breeds could be rapidly "improved" through controlled breeding, clearly the same thing could be done with dogs. Between 1800 and 1865, the number of dog breeds in the UK climbed from 15 to over 50, and it increased significantly further with the creation of the Kennel Club in 1873. Rootstock In the world of working terriers, there are but two roots: colored dogs from the north (Scotland) and white dogs from the south (England and Wales). From these two roots spring a variety of Kennel Club dogs and every type of working terrier commonly found in the field today. The "Fell Terrier" is the original non-pedigree colored working dog of the north. From this diverse gene pool have sprung the Kennel Club Welsh Terrier, the Lakeland Terrier, and the Border Terrier. Today, only the Border Terrier is occasionally found in the field. This is not to say the working Fell Terrier has disappeared—it still exists by that name among working terrier enthusiasts. Today's working Fell Terrier may be brown, black, red, or black and tan, and may be smooth, wire, or broken-coated. The dog may be called a Fell Terrier, a "working Lakeland", or a "Patterdale Terrier". A German variety of the Fell Terrier is called the "Jagdterrier", but the standard for this dog is on the large side, and, as a consequence, it is most useful in large pipes, artificial earths, or when it has been bred down to a 12–13 in size. From the southern part of England have come the white fox-working dogs, whose origins are the same as those of the Jack Russell Terrier. Kennel Club breeds derived from these mainly white-coated dogs include the Smooth Fox Terrier, the Wire Fox Terrier, the Sealyham Terrier, and most recently, the Parson Russell Terrier and Russell Terrier. None of these Kennel Club breeds are commonly found working in the field today. The absence of white-bodied working dogs in the Kennel Club does not mean that white fox-working dogs have disappeared. The working Jack Russell Terrier is still as common as ever, presenting itself in a variety of coats (smooth, broken, and wire-coated or rough), sizes (10 inches to 15 inches tall, with most working dogs sized 10 to 13 inches tall), and coat colors (from pure white to 49% colored with tan and black markings). There is even a new type of working Jack Russell, the "Plummer Terrier", first created by Brian Plummer in the 1970s. Tools and technique The tools used for terrier work have essentially remained unchanged for more than 400 years: a small-chested and game working terrier, a good round-point shovel, a digging bar, a brush hook to clear away hedges and brambles, fox nets, water for the dog and digger, a snare to remove the quarry, or a gun or blunt instrument to dispatch it. The only "modern" piece of equipment found in a terrierman's kit that would look foreign to a terrierman from the late 18th century is an electronic radio collar used to help locate the dog underground and speed the dig. Locator collars have greatly increased the safety of dogs when underground. Controversy in the UK Terrier work has come under criticism from animal welfare groups in the UK, particularly in connection with fox hunting, where terriers may be used to flush out a fox who has gone underground. 
This may lead to terriers attacking the foxes rather than flushing them out, thus prolonging the death of the fox. The League Against Cruel Sports states that distressing and prolonged deaths occur during the digging out or flushing out of foxes, and serious injuries can be sustained by dogs. The league notes that terriers and terriermen often accompany hunts which claim to be legally trail hunting, but are in fact hunting foxes. Organizations such as the League Against Cruel Sports have produced a range of reports on the working terrier. In 1994, Alan Williams, the Labour MP for Swansea West, proposed a private member's bill, the Protection of Dogs Bill, seeking to ban the activity, but it was not banned in the UK until the passage of the Hunting Act 2004. The Act outlaws terrier work unless it complies with a number of strict conditions designed for gamekeepers. See also Legislation on hunting with dogs Working animal Further reading Glover, John. (2014). Ratting With Terriers. Suffolk, England. Skycat Publications. References General references Burns, Patrick. American Working Terriers, 2005. Chapman, Eddie. "The Working Jack Russell Terrier," 1994. Fischer, John. "Vulpicide in South Nottinghamshire in 1865" MacDonald, David. "Running With the Fox." 1987. League Against Cruel Sports campaign against terrier work "Badger baiter banned after terriers hurt", Sheffield Today, 8 August 2005 External links Terrierman.com Jack Russell Terrier Club of America Jack Russell Terrier Club Great Britain Teckel Club of North America Patterdale Terrier Club of America German hunting terrier American Working Terrier Association Border Terrier Show Results in the U.K. Dog types Hunting dogs Pest control Terriers Working dogs
Working terrier
[ "Biology" ]
2,560
[ "Pests (organism)", "Pest control" ]
7,135,901
https://en.wikipedia.org/wiki/Carpathite
Carpathite is a very rare hydrocarbon mineral, consisting of exceptionally pure coronene (C24H12), a polycyclic aromatic hydrocarbon. The name has been spelled karpatite, and the mineral was improperly renamed pendletonite. Discovery The mineral was first described in 1955 for an occurrence in Transcarpathian Oblast, Ukraine. It was named for the Carpathian Mountains. In 1967, unaware of the earlier description, Joseph Murdoch analyzed and described a specimen from the Picacho Peak area of San Benito County, California and named it "pendletonite". Structure Carpathite has the same crystal structure as pure coronene. The molecules are planar and lie in two sets with roughly perpendicular orientations. Molecules in the same set are parallel and partially offset, with planes 0.3463 nm apart. That is slightly larger than the inter-layer distance of graphite (0.335 nm), and much larger than the C-C bond lengths within the molecule (about 0.14 nm). This "corrugated layer" structure is highly resistant to intercalation, which apparently explains the purity of the mineral. Occurrence At the Ukrainian discovery locality, carpathite occurs in cavities at the contact zone of a diorite intrusion into argillite, and is associated with idrialite, amorphous organic material, calcite, barite, quartz, cinnabar, and metacinnabar. It has also been reported in the Presov Region of the Slovak Republic and in the Kamchatka Oblast in Russia. In the California location, it occurs in centimeter-size veins, associated (and somewhat contemporaneous) with quartz and cinnabar, in a silicified matrix. Crystals are up to 10 × 1 × 1 mm. Carbon isotope ratios and the morphology of the deposit indicate that the coronene was produced from organic matter in oceanic sediment, thermally decomposed, purified through hydrothermal transportation and chemical reactions, and deposited below 250 °C, after the other minerals in the intrusion. References Organic minerals Monoclinic minerals Minerals in space group 14 Minerals described in 1955
Carpathite
[ "Chemistry" ]
445
[ "Organic compounds", "Organic minerals" ]
7,136,270
https://en.wikipedia.org/wiki/Ares%20%28missile%29
The Ares was a proposed intercontinental ballistic missile (ICBM) derived from the Titan II missile. It was a single-stage rocket with a high-performance engine to increase the rocket's specific impulse. Both Aerojet and Rocketdyne carried out engine design studies for the project, but Ares was ultimately cancelled in favour of solid-fuel ICBMs, which were safer to store and could be launched with much less notice. The drawbacks of liquid fuel drove the cancellation: liquid-fuelled missiles required extensive protection from corrosion within their silos, and the liquid propellant proposed for the Ares missiles was more expensive to maintain. Solid fuel, as used by the Minuteman II, was more reliable and less expensive, which made the transition to solid-fuelled missiles the easier choice and sealed the cancellation of the Ares series. Specification Sources: Payload: 4,000 kg (8,800 lb) Payload Orbit: 160 km Height: 30.00 m (98.00 ft) Diameter: 3.00 m (9.80 ft) Width: 3.00 m (9.80 ft) Mass: 150,000 kg (330,000 lb) References Abandoned military rocket and missile projects of the United States Intercontinental ballistic missiles of the United States Titan (rocket family) Single-stage-to-orbit
Ares (missile)
[ "Astronomy" ]
282
[ "Rocketry stubs", "Astronomy stubs" ]
7,136,284
https://en.wikipedia.org/wiki/Duplication%20and%20elimination%20matrices
In mathematics, especially in linear algebra and matrix theory, the duplication matrix and the elimination matrix are linear transformations used for transforming half-vectorizations of matrices into vectorizations or (respectively) vice versa. Duplication matrix The duplication matrix $D_n$ is the unique $n^2 \times \frac{n(n+1)}{2}$ matrix which, for any $n \times n$ symmetric matrix $A$, transforms $\operatorname{vech}(A)$ into $\operatorname{vec}(A)$: $D_n \operatorname{vech}(A) = \operatorname{vec}(A)$. For the $2 \times 2$ symmetric matrix $A = \begin{pmatrix} a & b \\ b & d \end{pmatrix}$, this transformation reads $D_2 (a, b, d)^T = (a, b, b, d)^T$, so that $D_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. The explicit formula for calculating the duplication matrix for an $n \times n$ matrix is $D_n^T = \sum_{i \geq j} u_{ij} \left( \operatorname{vec} T_{ij} \right)^T$, where $u_{ij}$ is a unit vector of order $\frac{n(n+1)}{2}$ having the value 1 in the position $(j-1)n + i - \frac{1}{2}j(j-1)$ and 0 elsewhere, and $T_{ij}$ is an $n \times n$ matrix with 1 in positions $(i,j)$ and $(j,i)$ and zero elsewhere. Here is a C++ function using Armadillo (C++ library):

arma::mat duplication_matrix(const int &n) {
    arma::mat out((n*(n+1))/2, n*n, arma::fill::zeros);
    for (int j = 0; j < n; ++j) {
        for (int i = j; i < n; ++i) {
            arma::vec u((n*(n+1))/2, arma::fill::zeros);
            u(j*n+i-((j+1)*j)/2) = 1.0;
            arma::mat T(n,n, arma::fill::zeros);
            T(i,j) = 1.0;
            T(j,i) = 1.0;
            out += u * arma::trans(arma::vectorise(T));
        }
    }
    return out.t();
}

Elimination matrix An elimination matrix $L_n$ is an $\frac{n(n+1)}{2} \times n^2$ matrix which, for any $n \times n$ matrix $A$, transforms $\operatorname{vec}(A)$ into $\operatorname{vech}(A)$: $L_n \operatorname{vec}(A) = \operatorname{vech}(A)$. By the explicit (constructive) definition given by Magnus and Neudecker, the $\frac{n(n+1)}{2}$ by $n^2$ elimination matrix is given by $L_n = \sum_{i \geq j} u_{ij} \operatorname{vec}(E_{ij})^T = \sum_{i \geq j} \left( u_{ij} \otimes e_j^T \otimes e_i^T \right)$, where $e_i$ is a unit vector whose $i$-th element is one and zeros elsewhere, and $E_{ij} = e_i e_j^T$. Here is a C++ function using Armadillo (C++ library):

arma::mat elimination_matrix(const int &n) {
    arma::mat out((n*(n+1))/2, n*n, arma::fill::zeros);
    for (int j = 0; j < n; ++j) {
        arma::rowvec e_j(n, arma::fill::zeros);
        e_j(j) = 1.0;
        for (int i = j; i < n; ++i) {
            arma::vec u((n*(n+1))/2, arma::fill::zeros);
            u(j*n+i-((j+1)*j)/2) = 1.0;
            arma::rowvec e_i(n, arma::fill::zeros);
            e_i(i) = 1.0;
            out += arma::kron(u, arma::kron(e_j, e_i));
        }
    }
    return out;
}

For the $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, one choice for this transformation is given by $L_2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$, so that $L_2 \operatorname{vec}(A) = L_2 (a, c, b, d)^T = (a, c, d)^T = \operatorname{vech}(A)$. References Jan R. Magnus and Heinz Neudecker (1988), Matrix Differential Calculus with Applications in Statistics and Econometrics, Wiley. Jan R. Magnus (1988), Linear Structures, Oxford University Press. Matrices
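A quick consistency check of the two functions above is straightforward (an illustrative sketch, not part of the original text; it assumes the two functions above are in scope, and the vech() helper and the random test matrix are introduced only for the demonstration):

#include <armadillo>
#include <iostream>

// Half-vectorization helper: stack the columns of the lower-triangular part.
arma::vec vech(const arma::mat &A) {
    const int n = (int)A.n_rows;
    arma::vec out((n * (n + 1)) / 2);
    int k = 0;
    for (int j = 0; j < n; ++j)
        for (int i = j; i < n; ++i)
            out(k++) = A(i, j);
    return out;
}

int main() {
    const int n = 3;
    // Random symmetric matrix built by reflecting a lower-triangular draw.
    arma::mat A = arma::symmatl(arma::randu<arma::mat>(n, n));
    // Both residuals should be exactly zero.
    double r1 = arma::abs(duplication_matrix(n) * vech(A) - arma::vectorise(A)).max();
    double r2 = arma::abs(elimination_matrix(n) * arma::vectorise(A) - vech(A)).max();
    std::cout << "duplication residual: " << r1
              << ", elimination residual: " << r2 << std::endl;
    return 0;
}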
Duplication and elimination matrices
[ "Mathematics" ]
729
[ "Matrices (mathematics)", "Mathematical objects" ]
7,136,636
https://en.wikipedia.org/wiki/Surfactin
Surfactin is a cyclic lipopeptide, commonly used as an antibiotic for its capacity as a surfactant. It is an amphiphile capable of withstanding hydrophilic and hydrophobic environments. The Gram-positive bacterial species Bacillus subtilis produces surfactin for its antibiotic effects against competitors. Surfactin showcases antibacterial, antiviral, antifungal, and hemolytic effects. Structure and Synthesis The structure consists of a peptide loop of seven amino acids (L-glutamic acid, L-leucine, D-leucine, L-valine, L-aspartic acid, D-leucine, and L-leucine) and a β-hydroxy fatty acid of variable length, thirteen to fifteen carbon atoms long. The glutamic acid and aspartic acid residues give the ring its hydrophilic character, as well as its negative charge. Conversely, the valine residue extends down, facing the fatty acid chain, to form a major hydrophobic domain. Below critical micellar concentrations (CMCs), the fatty acid tail can extend freely into solution; above them, it participates in hydrophobic interactions within micelles. This antibiotic is synthesized by a linear nonribosomal peptide synthetase, surfactin synthetase. In solution, it has a characteristic "horse saddle" conformation that explains its large spectrum of biological activity. Physical properties Surface tension Surfactin, like other surfactants, affects the surface tension of liquids in which it is dissolved. It can lower water's surface tension from 72 mN/m to 27 mN/m at concentrations as low as 20 μM. Surfactin accomplishes this effect by occupying the intermolecular space between water molecules, decreasing the attractive forces between adjacent water molecules, mainly hydrogen bonds, to increase the solution's fluidity. This property makes surfactin and other surfactants useful as detergents and soaps. Molecular mechanisms There are three prevailing hypotheses for how surfactin works. Cation-carrier effect The cation-carrier effect is characterized by surfactin's ability to drive monovalent and divalent cations through an organic barrier. The two acidic residues aspartate and glutamate form a "claw" to stabilize divalent cations, such as the Ca2+ ions used as an assembly template for the formation of micelles. When surfactin penetrates the outer sheet, its fatty acid chain interacts with the acyl chains of the phospholipids, orienting its headgroup toward the phospholipids' polar heads. Attachment of a cation causes the complex to cross the bilipidic layer using flippase enzymes. The headgroup aligns itself with the phospholipids of the inner sheet and the fatty acid chain interacts with the phospholipids' acyl chains. The cation is then delivered into the intracellular medium. Pore-forming effect The pore-forming (ion channel) effect is characterized by the formation of cationic channels. It requires surfactin to self-associate inside the membrane, since a single molecule cannot span the cellular membrane. Under a hypothesis focused on uncharged membranes with minimal activation energy required to cross between inner and outer leaflets, molecular self-assembly would form a channel structure. Detergent effect The detergent effect draws on surfactin's ability to insert its fatty acid chain into the phospholipid layer, disorganizing the cell membrane to increase its permeability. Insertion of several surfactin molecules into the membrane can lead, through self-association influenced by the hydrophobicity of the fatty acid chains, to the formation of mixed micelles and ultimately to solubilization of the bilayer. 
Biological properties Antibacterial and antiviral properties Surfactin is a broad-spectrum antibiotic with detergent-like activity, increasing the permeability of cell membranes in all bacteria, regardless of their Gram stain classification. The minimum inhibitory concentration (MIC) of surfactin is between 12 and 50 μg/ml. Surfactin is also capable of degrading viral envelope lipids and forming ion channels in the inner capsid, with experimental evidence showing inhibition of HIV and HSV. However, surfactin can only degrade viruses when they are outside of host cells. Furthermore, when the environment is packed with proteins and lipids, surfactin faces a buffer effect that lowers its antiviral activity. Toxicity Surfactin has non-specific cytotoxicity, causing lysis through disruption of the phospholipid bilayer present in all cells. When injected into humans as an intravascular antibiotic at concentrations of 40–80 μM or above, surfactin has hemolytic effects. See also Mycosubtilin References Colloidal chemistry Cleaning product components Antibiotics Lipopeptides Non-ionic surfactants Polypeptide antibiotics
Surfactin
[ "Chemistry", "Technology", "Biology" ]
1,043
[ "Colloidal chemistry", "Biotechnology products", "Surface science", "Colloids", "Antibiotics", "Biocides", "Components", "Cleaning product components" ]
7,136,985
https://en.wikipedia.org/wiki/Vectorization%20%28mathematics%29
In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a vector. Specifically, the vectorization of an $m \times n$ matrix $A$, denoted $\operatorname{vec}(A)$, is the $mn \times 1$ column vector obtained by stacking the columns of the matrix $A$ on top of one another: $\operatorname{vec}(A) = [a_{1,1}, \ldots, a_{m,1}, a_{1,2}, \ldots, a_{m,2}, \ldots, a_{1,n}, \ldots, a_{m,n}]^T$. Here, $a_{i,j}$ represents the element in the $i$-th row and $j$-th column of $A$, and the superscript $T$ denotes the transpose. Vectorization expresses, through coordinates, the isomorphism $\mathbf{R}^{m \times n} \cong \mathbf{R}^{mn}$ between these (i.e., of matrices and vectors) as vector spaces. For example, for the 2×2 matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the vectorization is $\operatorname{vec}(A) = [a, c, b, d]^T$. The connection between the vectorization of $A$ and the vectorization of its transpose is given by the commutation matrix. Compatibility with Kronecker products The vectorization is frequently used together with the Kronecker product to express matrix multiplication as a linear transformation on matrices. In particular, $\operatorname{vec}(ABC) = (C^T \otimes A) \operatorname{vec}(B)$ for matrices $A$, $B$, and $C$ of dimensions $k \times l$, $l \times m$, and $m \times n$. For example, if $\operatorname{ad}_A(X) = AX - XA$ (the adjoint endomorphism of the Lie algebra of all $n \times n$ matrices with complex entries), then $\operatorname{vec}(\operatorname{ad}_A(X)) = (I_n \otimes A - A^T \otimes I_n) \operatorname{vec}(X)$, where $I_n$ is the $n \times n$ identity matrix. There are two other useful formulations: $\operatorname{vec}(ABC) = (I_n \otimes AB) \operatorname{vec}(C) = (C^T B^T \otimes I_k) \operatorname{vec}(A)$. More generally, it has been shown that vectorization is a self-adjunction in the monoidal closed structure of any category of matrices. Compatibility with Hadamard products Vectorization is an algebra homomorphism from the space of $n \times n$ matrices with the Hadamard (entrywise) product to $\mathbf{C}^{n^2}$ with its Hadamard product: $\operatorname{vec}(A \circ B) = \operatorname{vec}(A) \circ \operatorname{vec}(B)$. Compatibility with inner products Vectorization is a unitary transformation from the space of $n \times n$ matrices with the Frobenius (or Hilbert–Schmidt) inner product to $\mathbf{C}^{n^2}$: $\operatorname{tr}(A^\dagger B) = \operatorname{vec}(A)^\dagger \operatorname{vec}(B)$, where the superscript $\dagger$ denotes the conjugate transpose. Vectorization as a linear sum The matrix vectorization operation can be written in terms of a linear sum. Let $X$ be an $m \times n$ matrix that we want to vectorize, and let $e_i$ be the $i$-th canonical basis vector for the $n$-dimensional space, that is $e_i = [0, \ldots, 0, 1, 0, \ldots, 0]^T$. Let $B_i$ be an $mn \times m$ block matrix defined as follows: $B_i$ consists of $n$ block matrices of size $m \times m$, stacked column-wise, and all these matrices are all-zero except for the $i$-th one, which is an $m \times m$ identity matrix $I_m$. Then the vectorized version of $X$ can be expressed as follows: $\operatorname{vec}(X) = \sum_{i=1}^n B_i X e_i$. Multiplication of $X$ by $e_i$ extracts the $i$-th column, while multiplication by $B_i$ puts it into the desired position in the final vector. Alternatively, the linear sum can be expressed using the Kronecker product: $\operatorname{vec}(X) = \sum_{i=1}^n e_i \otimes X e_i$. Half-vectorization For a symmetric matrix $A$, the vector $\operatorname{vec}(A)$ contains more information than is strictly necessary, since the matrix is completely determined by the symmetry together with the lower triangular portion, that is, the entries on and below the main diagonal. For such matrices, the half-vectorization is sometimes more useful than the vectorization. The half-vectorization, $\operatorname{vech}(A)$, of a symmetric $n \times n$ matrix $A$ is the $\frac{n(n+1)}{2} \times 1$ column vector obtained by vectorizing only the lower triangular part of $A$: $\operatorname{vech}(A) = [a_{1,1}, \ldots, a_{n,1}, a_{2,2}, \ldots, a_{n,2}, \ldots, a_{n-1,n-1}, a_{n,n-1}, a_{n,n}]^T$. For example, for the 2×2 matrix $A = \begin{pmatrix} a & b \\ b & d \end{pmatrix}$, the half-vectorization is $\operatorname{vech}(A) = [a, b, d]^T$. There exist unique matrices transforming the half-vectorization of a matrix to its vectorization and vice versa called, respectively, the duplication matrix and the elimination matrix. Programming language Programming languages that implement matrices may have easy means for vectorization. In Matlab/GNU Octave a matrix A can be vectorized by A(:). GNU Octave also allows vectorization and half-vectorization with vec(A) and vech(A) respectively. Julia has the vec(A) function as well. 
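In C++, the Kronecker identity above is easy to verify numerically with the Armadillo library, whose vectorise() stacks columns exactly as vec does (an illustrative sketch; the matrix dimensions and random seed are arbitrary choices, not from the article):

#include <armadillo>
#include <iostream>

// Numeric check of vec(A B C) = (C^T kron A) vec(B).
int main() {
    arma::arma_rng::set_seed(42);
    arma::mat A = arma::randu<arma::mat>(2, 3);  // k x l
    arma::mat B = arma::randu<arma::mat>(3, 4);  // l x m
    arma::mat C = arma::randu<arma::mat>(4, 5);  // m x n
    arma::vec lhs = arma::vectorise(A * B * C);
    arma::vec rhs = arma::kron(C.t(), A) * arma::vectorise(B);
    std::cout << "max abs difference: " << arma::abs(lhs - rhs).max() << std::endl;
    return 0;
}

The printed difference should be zero up to floating-point rounding.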
In Python NumPy arrays implement the flatten method, while in R the desired effect can be achieved via the c() or as.vector() functions. In R, function vec() of package 'ks' allows vectorization and function vech() implemented in both packages 'ks' and 'sn' allows half-vectorization. Applications Vectorization is used in matrix calculus and its applications in establishing e.g., moments of random vectors and matrices, asymptotics, as well as Jacobian and Hessian matrices. It is also used in local sensitivity and statistical diagnostics. Notes See also Duplication and elimination matrices Voigt notation Packed storage matrix Column-major order Matricization References Linear algebra Matrices
Vectorization (mathematics)
[ "Mathematics" ]
916
[ "Matrices (mathematics)", "Linear algebra", "Mathematical objects", "Algebra" ]
7,137,420
https://en.wikipedia.org/wiki/QuteMol
QuteMol is an open-source, interactive, molecular visualization system. QuteMol utilizes the capabilities of modern GPUs through OpenGL shaders to offer an array of innovative visual effects. QuteMol's visualization techniques are aimed at improving clarity and providing an easier understanding of the 3D shape and structure of large molecules or complex proteins. Features include: Real Time ambient occlusion Depth Aware Silhouette Enhancement Ball and Sticks, space-filling and Liquorice visualization modes High resolution antialiased snapshots for creating publication quality renderings Interactive rendering of large molecules and proteins (100k atoms) Standard Protein Data Bank input. See also Molecular graphics Molecular modeling on GPU List of free and open-source software packages References External links Molecular modelling software Free science software Chemistry software for Linux Free 3D graphics software Free software programmed in C++ Software that uses wxWidgets
QuteMol
[ "Chemistry" ]
178
[ "Molecular modelling software", "Molecular physics", "Computational chemistry software", "Chemistry software", "Molecular modelling", "Chemistry software for Linux", "Molecular physics stubs" ]
7,137,808
https://en.wikipedia.org/wiki/Dystroglycan
Dystroglycan is a protein that in humans is encoded by the DAG1 gene. Dystroglycan is one of the dystrophin-associated glycoproteins; in Homo sapiens it is encoded by a 5.5 kb transcript on chromosome 3. There are two exons that are separated by a large intron. The spliced exons code for a protein product that is finally cleaved into two non-covalently associated subunits, α (N-terminal) and β (C-terminal). Function In skeletal muscle the dystroglycan complex works as a transmembrane linkage between the extracellular matrix and the cytoskeleton. α-Dystroglycan is extracellular and binds to merosin (α-2 laminin) in the basement membrane, while β-dystroglycan is a transmembrane protein and binds to dystrophin, which is a large rod-like cytoskeletal protein, absent in Duchenne muscular dystrophy patients. Dystrophin binds to intracellular actin cables. In this way, the dystroglycan complex, which links the extracellular matrix to the intracellular actin cables, is thought to provide structural integrity in muscle tissues. The dystroglycan complex is also known to serve as an agrin receptor in muscle, where it may regulate agrin-induced acetylcholine receptor clustering at the neuromuscular junction. There is also evidence which suggests a function of dystroglycan as a part of the signal transduction pathway, because it has been shown that Grb2, a mediator of the Ras-related signal pathway, can interact with the cytoplasmic domain of dystroglycan. Expression Dystroglycan is widely distributed in non-muscle tissues as well as in muscle tissues. During epithelial morphogenesis of the kidney, the dystroglycan complex has been shown to act as a receptor for the basement membrane. Dystroglycan expression in the Mus musculus brain and neural retina has also been reported. However, the physiological role of dystroglycan in non-muscle tissues remains unclear. In December 2022, the implications of abnormal dystroglycan expression and/or O-mannosylation for the pathogenesis of cancer were reviewed. Interactions Dystroglycan has been shown to interact with FYN, C-src tyrosine kinase, Src, NCK1, Grb2, Caveolin 3 and SHC1. See also Dystrophin-associated protein complex Actin-binding protein Agrin References Further reading External links Overview at sdbonline.org Glycoproteins Single-pass transmembrane proteins
Dystroglycan
[ "Chemistry" ]
600
[ "Glycoproteins", "Glycobiology" ]
7,137,951
https://en.wikipedia.org/wiki/Schlenk%20line
The Schlenk line (also vacuum gas manifold) is a commonly used chemistry apparatus developed by Wilhelm Schlenk. It consists of a dual manifold with several ports. One manifold is connected to a source of purified inert gas, while the other is connected to a vacuum pump. The inert-gas line is vented through an oil bubbler, while solvent vapors and gaseous reaction products are prevented from contaminating the vacuum pump by a liquid-nitrogen or dry-ice/acetone cold trap. Special stopcocks or Teflon taps allow vacuum or inert gas to be selected without the need for placing the sample on a separate line. Schlenk lines are useful for manipulating moisture- and air-sensitive compounds. The vacuum is used to remove air or other gases present in glassware connected to the line, and it often also removes the last traces of solvent from a sample. Vacuum and gas manifolds often have many ports and lines, and with care, it is possible for several reactions or operations to be run simultaneously under inert conditions. When the reagents are highly susceptible to oxidation, traces of oxygen may pose a problem; for the removal of oxygen below the ppm level, the inert gas needs to be purified by passing it through a deoxygenation catalyst. This is usually a column of copper(I) or manganese(II) oxide, which reacts with oxygen traces present in the inert gas. In other cases, a purge-cycle technique is often employed, where the closed reaction vessel connected to the line is filled with inert gas, evacuated with the vacuum, and then refilled. This process is repeated three or more times to make sure air is rigorously removed (a simple estimate of its effect appears below). Moisture can be removed by heating the reaction vessel with a heat gun. Techniques The main techniques associated with the use of a Schlenk line include: counterflow additions, where air-stable reagents are added to the reaction vessel against a flow of inert gas; the use of syringes and rubber septa to transfer liquids and solutions; cannula transfer, where liquids or solutions of air-sensitive reagents are transferred between different vessels stoppered with septa using a long thin tube known as a cannula. Liquid flow is supported by vacuum or inert-gas pressure. Glassware is usually connected by tightly fitting and greased ground glass joints. Round bends of glass tubing with ground glass joints may be used to adjust the orientation of various vessels. Glassware is purged of outside air using the purge-cycle technique. Dissolved air can be removed from solvents and reagents by a technique called "sparging", in which a cannula needle connected to the inert gas on the line is inserted into the vessel containing the solvent; this bubbles the inert gas through the solution, actively displacing dissolved gas molecules from the solvent. Filtration under inert conditions poses a special challenge. It is usually achieved using a "cannula filter". Classically, filtration is tackled with a Schlenk filter, which consists of a sintered glass funnel fitted with joints and stopcocks. By fitting the pre-dried funnel and receiving flask to the reaction flask against a flow of nitrogen, carefully inverting the set-up and turning on the vacuum appropriately, the filtration may be accomplished with minimal exposure to air. A glovebox is often used in conjunction with the Schlenk line for storing and reusing air- and moisture-sensitive solvents in a lab. 
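The effectiveness of the purge-cycle technique described above can be estimated with a simple idealized model (an assumption-laden sketch, not from the original text: it presumes perfect mixing and that the pump reaches its base pressure on every cycle). If each cycle evacuates the vessel from the fill pressure $p_{\text{fill}}$ down to a base pressure $p_{\text{base}}$ before refilling with inert gas, the fraction of the original air remaining after $n$ cycles is

$$f_n = \left( \frac{p_{\text{base}}}{p_{\text{fill}}} \right)^{n}.$$

For a rotary-vane pump reaching $p_{\text{base}} \approx 0.1\ \text{mbar}$ and refilling to $p_{\text{fill}} \approx 1000\ \text{mbar}$, three cycles leave $f_3 \approx (10^{-4})^3 = 10^{-12}$ of the original air, which is why three or more cycles are generally regarded as sufficient for rigorous air removal.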
Dangers The main dangers associated with the use of a Schlenk line are the risks of an implosion or explosion. An implosion can occur due to the use of vacuum and flaws in the glass apparatus. An explosion can occur due to the common use of liquid nitrogen in the cold trap, used to protect the vacuum pump from solvents. If a reasonable amount of air is allowed to enter the Schlenk line, liquid oxygen can condense into the cold trap as a pale blue liquid. An explosion may occur due to reaction of the liquid oxygen with any organic compounds also in the trap. Gallery See also Air-free technique gives a broad overview of methods including: Glovebox – used to manipulate air-sensitive (oxygen- or moisture-sensitive) chemicals. Schlenk flask – reaction vessel for handling air-sensitive compounds. Perkin triangle – used for the distillation of air-sensitive compounds. References Further reading "Handling Air-Sensitive Reagents" Sigma-Aldrich. External links Preparation of a Manganese oxide column for inert gas purification from oxygen traces Laboratory equipment Laboratory glassware Air-free techniques
Schlenk line
[ "Chemistry", "Engineering" ]
991
[ "Vacuum systems", "Air-free techniques" ]
7,137,955
https://en.wikipedia.org/wiki/Cortactin
Cortactin (from "cortical actin binding protein") is a monomeric protein located in the cytoplasm of cells that can be activated by external stimuli to promote polymerization and rearrangement of the actin cytoskeleton, especially the actin cortex around the cellular periphery. It is present in all cell types. When activated, it will recruit Arp2/3 complex proteins to existing actin microfilaments, facilitating and stabilizing nucleation sites for actin branching. Cortactin is important in promoting lamellipodia formation, invadopodia formation, cell migration, and endocytosis. Gene In humans, cortactin is encoded by the CTTN gene on chromosome 11. Structure Cortactin is a thin, elongated monomer that consists of an amino-terminal acidic (NTA) region; 37-residue-long segments that are highly conserved among cortactin proteins of all species and repeated up to 6.5 times in tandem (“cortactin repeats”); a proline-rich region; and an SH3 domain. This basic structure is highly conserved among all species that express cortactin. Activation and binding Cortactin is activated via phosphorylation, by tyrosine kinases or serine/threonine kinases, in response to extracellular signals like growth factors, adhesion sites, or pathogenic invasion of the epithelial layer. The SH3 domain of certain tyrosine kinases, such as the oncogene Src kinase, binds to cortactin's proline-rich region and phosphorylates it on Tyr421, Tyr466, and Tyr482. Once activated in this way, it can bind to filamentous actin (F-actin) with the fourth of its cortactin repeats. As the concentration of phosphorylated cortactin increases in specific regions within the cell, the monomers each begin to recruit an Arp2/3 complex to F-actin. It binds to Arp2/3 with an aspartic acid-aspartic acid-tryptophan (DDW) sequence in its NTA region, a motif that is often seen in other actin nucleation-promoting factors (NPFs). Certain serine/threonine kinases, such as ERK, can phosphorylate cortactin on Ser405 and Ser418 in the SH3 domain. Activated like this, it still associates with Arp2/3 and F-actin, but will also allow other actin NPFs, most importantly N-WASp (Neuronal Wiskott-Aldrich syndrome protein), to bind to the complex as well; when phosphorylated by tyrosine kinases, other NPFs are excluded. The ability of these other NPFs to bind the Arp2/3 complex while cortactin is also bound could come from new interactions with cortactin's SH3 domain, which is in a different conformation when phosphorylated by Ser/Thr kinases and thus may be more open to interactions with other NPFs. Having other NPFs bind to the Arp2/3 complex at the same time as cortactin may enhance nucleation site stability. Location and function in the cell Inactive cortactin diffuses throughout the cytoplasm, but upon phosphorylation, the protein begins to target certain areas in the cell. Cortactin-assisted Arp2/3-nucleated actin branches are most prominent in the actin cortex, around the periphery of the cell. A phosphorylated cortactin monomer binds to, activates, and stabilizes an Arp2/3 complex on preexisting F-actin, which provides a nucleation site for a new actin branch to form from the “mother” filament. Branches formed from cortactin-assisted nucleation sites are very stable; cortactin has been shown to inhibit debranching. Thus, polymerization and branching of actin is promoted in areas of the cell where cortactin is localized. 
Cortactin is very active in lamellipodia, protrusions of the cell membrane formed by actin polymerization and treadmilling that propel the cell along a surface as it migrates towards some target. Cortactin acts as a link between extracellular signals and lamellipodial “steering.” When a receptor tyrosine kinase on the cell membrane binds to an adhesion site, for example, cortactin will be phosphorylated locally to the area of binding, activate and recruit Arp2/3 to the actin cortex in that region, and thus stimulate cortical actin polymerization and movement of the cell in that direction. Macrophages, highly motile immune cells that engulf cellular debris and pathogens, are propelled by lamellipodia and identify/migrate toward a target via chemotaxis; thus, cortactin must also be activated by receptor kinases that pick up a large variety of chemical signals. Studies have implicated cortactin in both clathrin-mediated endocytosis and clathrin-independent endocytosis. In both kinds of endocytosis, it has long been known that actin localizes to sites of vesicle invagination and is a vital part of the endocytic pathway, but the actual mechanisms by which actin facilitates endocytosis are still unclear. Recently, however, it has been found that dynamin, the protein responsible for breaking the newly formed vesicular bud off the inside of the plasma membrane, can associate with the SH3 domain of cortactin. Since cortactin recruits the Arp2/3 complexes that lead to actin polymerization, this suggests that it may play an important part in linking vesicle formation to the as yet unknown functions actin has in endocytosis. Clinical significance Amplification of the genes encoding cortactin—in humans, EMS1—has been found to occur in certain tumors. Overexpression of cortactin can lead to highly-active lamellipodia in tumor cells, dubbed “invadopodia.” These cells are especially invasive and migratory, making them very dangerous, for they can easily spread cancer across the body into other tissues. Interactions Cortactin has been shown to interact with: ACTR3 ARPC2, CTNND1, FER, KCNA2, SHANK2, WASL, and WIPF1. See also actin gelsolin transferrin villin References Further reading External links Cell biology Genes on human chromosome 11
Cortactin
[ "Biology" ]
1,451
[ "Cell biology" ]
7,137,983
https://en.wikipedia.org/wiki/Enzyme%20catalysis
Enzyme catalysis is the increase in the rate of a process by an "enzyme", a biological molecule. Most enzymes are proteins, and most such processes are chemical reactions. Within the enzyme, catalysis generally occurs at a localized site, called the active site. Most enzymes are made predominantly of proteins, either a single protein chain or many such chains in a multi-subunit complex. Enzymes often also incorporate non-protein components, such as metal ions or specialized organic molecules known as cofactors (e.g. adenosine triphosphate). Many cofactors are vitamins, and their role as vitamins is directly linked to their use in the catalysis of biological processes within metabolism. Catalysis of biochemical reactions in the cell is vital since many but not all metabolically essential reactions have very low rates when uncatalysed. One driver of protein evolution is the optimization of such catalytic activities, although only the most crucial enzymes operate near catalytic efficiency limits, and many enzymes are far from optimal. Important factors in enzyme catalysis include general acid and base catalysis, orbital steering, entropic restriction, orientation effects (i.e. lock and key catalysis), as well as motional effects involving protein dynamics. Mechanisms of enzyme catalysis vary, but are all similar in principle to other types of chemical catalysis in that the crucial factor is a reduction of the energy barrier(s) separating the reactants (or substrates) from the products. The reduction of activation energy (Ea) increases the fraction of reactant molecules that can overcome this barrier and form the product. An important principle is that since enzymes only reduce energy barriers between products and reactants, they always catalyze reactions in both directions, and cannot drive a reaction forward or affect the equilibrium position – only the speed with which it is achieved. As with other catalysts, the enzyme is not consumed or changed by the reaction (as a substrate is) but is recycled such that a single enzyme performs many rounds of catalysis. Enzymes are often highly specific and act on only certain substrates. Some enzymes are absolutely specific, meaning that they act on only one substrate, while others show group specificity and can act on similar but not identical chemical groups, such as the peptide bond in different molecules. Many enzymes have stereochemical specificity and act on one stereoisomer but not another. Induced fit The classic model for the enzyme–substrate interaction is the induced fit model. This model proposes that the initial interaction between enzyme and substrate is relatively weak, but that these weak interactions rapidly induce conformational changes in the enzyme that strengthen binding. The advantages of the induced fit mechanism arise due to the stabilizing effect of strong enzyme binding. There are two different mechanisms of substrate binding: uniform binding, which has strong substrate binding, and differential binding, which has strong transition state binding. The stabilizing effect of uniform binding increases both substrate and transition state binding affinity, while differential binding increases only transition state binding affinity. Both are used by enzymes and have been evolutionarily chosen to minimize the activation energy of the reaction. 
Enzymes that are saturated, that is, have high-affinity substrate binding, require differential binding to reduce the energy of activation, whereas enzymes with small, unbound substrates may use either differential or uniform binding. These effects have led to most proteins using the differential binding mechanism to reduce the energy of activation, so most substrates have high affinity for the enzyme while in the transition state. Differential binding is carried out by the induced fit mechanism – the substrate first binds weakly, then the enzyme changes conformation, increasing the affinity to the transition state and stabilizing it, so reducing the activation energy to reach it. It is important to clarify, however, that the induced fit concept cannot be used to rationalize catalysis. That is, chemical catalysis is defined as the reduction of Ea‡ (when the system is already in the ES‡) relative to Ea‡ in the uncatalyzed reaction in water (without the enzyme). The induced fit only suggests that the barrier is lower in the closed form of the enzyme but does not tell us what the reason for the barrier reduction is. Induced fit may be beneficial to the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism. Mechanisms of an alternative reaction route These conformational changes also bring catalytic residues in the active site close to the chemical bonds in the substrate that will be altered in the reaction. After binding takes place, one or more mechanisms of catalysis lowers the energy of the reaction's transition state, by providing an alternative chemical pathway for the reaction. There are six possible mechanisms of "over the barrier" catalysis as well as a "through the barrier" mechanism: Proximity and orientation Enzyme-substrate interactions align the reactive chemical groups and hold them close together in an optimal geometry, which increases the rate of the reaction. This reduces the entropy of the reactants and thus makes addition or transfer reactions less unfavorable, since part of the overall reduction in entropy that occurs when two reactants become a single product has already been paid on binding. However, this is a general effect and is seen in non-addition or transfer reactions, where it occurs due to an increase in the "effective concentration" of the reagents. This is understood when considering how increases in concentration lead to increases in reaction rate: essentially, when the reactants are more concentrated, they collide more often and so react more often. In enzyme catalysis, the binding of the reagents to the enzyme restricts the conformational space of the reactants, holding them in the 'proper orientation' and close to each other, so that they collide more frequently, and with the correct geometry, to facilitate the desired reaction. The "effective concentration" is the concentration the reactant would have to be, free in solution, to experience the same collisional frequency. Often such theoretical effective concentrations are unphysical and impossible to realize in reality – which is a testament to the great catalytic power of many enzymes, with massive rate increases over the uncatalyzed state. However, the situation might be more complex, since modern computational studies have established that traditional examples of proximity effects cannot be related directly to enzyme entropic effects. Also, the original entropic proposal has been found to largely overestimate the contribution of orientation entropy to catalysis. 
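The relationship between barrier reduction and rate enhancement discussed above can be made quantitative with the standard transition-state-theory expression (a textbook relation, not taken from this article):

$$\frac{k_{\text{cat}}}{k_{\text{uncat}}} = \exp\!\left( \frac{\Delta G^{\ddagger}_{\text{uncat}} - \Delta G^{\ddagger}_{\text{cat}}}{RT} \right).$$

For example, lowering the activation barrier by 34 kJ/mol at 298 K gives $\exp\!\big(34{,}000 / (8.314 \times 298)\big) \approx 9 \times 10^{5}$, roughly a million-fold acceleration; conversely, each factor of 10 in rate enhancement corresponds to a barrier reduction of $RT \ln 10 \approx 5.7$ kJ/mol at room temperature.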
Proton donors or acceptors Proton donors and acceptors, i.e. acids and bases, may donate and accept protons in order to stabilize developing charges in the transition state. This is related to the overall principle of catalysis, that of reducing energy barriers, since in general transition states are high-energy states, and by stabilizing them this high energy is reduced, lowering the barrier. A key feature of enzyme catalysis over many non-biological catalysts is that both acid and base catalysis can be combined in the same reaction. In many abiotic systems, acids (large [H+]) or bases (large concentrations of H+ sinks, or species with electron pairs) can increase the rate of the reaction; but of course the environment can only have one overall pH (a measure of acidity or basicity (alkalinity)). However, since enzymes are large molecules, they can position both acid groups and basic groups in their active site to interact with their substrates, and employ both modes independent of the bulk pH. Often general acid or base catalysis is employed to activate nucleophile and/or electrophile groups, or to stabilize leaving groups. Many amino acids with acidic or basic groups are thus employed in the active site, such as glutamic and aspartic acid, histidine, cysteine, tyrosine, lysine and arginine, as well as serine and threonine. In addition, the peptide backbone, with carbonyl and amide N groups, is often employed. Cysteine and histidine are very commonly involved, since they both have a pKa close to neutral pH and can therefore both accept and donate protons. Many reaction mechanisms involving acid/base catalysis assume a substantially altered pKa. This alteration of pKa is possible through the local environment of the residue. pKa can also be influenced significantly by the surrounding environment, to the extent that residues which are basic in solution may act as proton donors, and vice versa. The modification of pKa's is purely part of the electrostatic mechanism. In the classic serine protease example, the catalytic effect is mainly associated with the reduction of the pKa of the oxyanion and the increase in the pKa of the histidine, while the proton transfer from the serine to the histidine is not catalyzed significantly, since it is not the rate-determining barrier. Note that the histidine conjugate acid acts as a general acid catalyst for the subsequent loss of the amine from a tetrahedral intermediate. Evidence supporting this proposed mechanism (Figure 4 in Ref. 13) has, however, been disputed. Electrostatic catalysis Stabilization of charged transition states can also be achieved by residues in the active site forming ionic bonds (or partial ionic charge interactions) with the intermediate. These bonds can either come from acidic or basic side chains found on amino acids such as lysine, arginine, aspartic acid or glutamic acid, or come from metal cofactors such as zinc. Metal ions are particularly effective and can reduce the pKa of water enough to make it an effective nucleophile. Systematic computer simulation studies established that electrostatic effects give, by far, the largest contribution to catalysis. They can increase the rate of reaction by a factor of up to $10^7$. In particular, it has been found that the enzyme provides an environment which is more polar than water, and that the ionic transition states are stabilized by fixed dipoles. This is very different from transition state stabilization in water, where the water molecules must pay with "reorganization energy" in order to stabilize ionic and charged states. 
Thus, the catalysis is associated with the fact that the enzyme's polar groups are preorganized. The magnitude of the electrostatic field exerted by an enzyme's active site has been shown to be highly correlated with the enzyme's catalytic rate enhancement. Binding of substrate usually excludes water from the active site, thereby lowering the local dielectric constant to that of an organic solvent. This strengthens the electrostatic interactions between the charged/polar substrates and the active sites. In addition, studies have shown that the charge distributions about the active sites are arranged so as to stabilize the transition states of the catalyzed reactions. In several enzymes, these charge distributions apparently serve to guide polar substrates toward their binding sites so that the rates of these enzymatic reactions are greater than their apparent diffusion-controlled limits. Covalent catalysis Covalent catalysis involves the substrate forming a transient covalent bond with residues in the enzyme active site or with a cofactor. This adds an additional covalent intermediate to the reaction, and helps to reduce the energy of later transition states of the reaction. The covalent bond must, at a later stage in the reaction, be broken to regenerate the enzyme. This mechanism is utilised by the catalytic triad of enzymes such as proteases like chymotrypsin and trypsin, where an acyl-enzyme intermediate is formed. An alternative mechanism is Schiff base formation using the free amine from a lysine residue, as seen in the enzyme aldolase during glycolysis. Some enzymes utilize non-amino acid cofactors such as pyridoxal phosphate (PLP) or thiamine pyrophosphate (TPP) to form covalent intermediates with reactant molecules. Such covalent intermediates function to reduce the energy of later transition states, similar to how covalent intermediates formed with active site amino acid residues allow stabilization, but the capabilities of cofactors allow enzymes to carry out reactions that amino acid side chains alone could not. Enzymes utilizing such cofactors include the PLP-dependent enzyme aspartate transaminase and the TPP-dependent enzyme pyruvate dehydrogenase. Rather than lowering the activation energy for a reaction pathway, covalent catalysis provides an alternative pathway for the reaction (via the covalent intermediate) and so is distinct from true catalysis. For example, the energetics of the covalent bond to the serine molecule in chymotrypsin should be compared to the well-understood covalent bond to the nucleophile in the uncatalyzed solution reaction. A true proposal of covalent catalysis (where the barrier is lower than the corresponding barrier in solution) would require, for example, a partial covalent bond to the transition state by an enzyme group (e.g., a very strong hydrogen bond), and such effects do not contribute significantly to catalysis. Metal ion catalysis A metal ion in the active site participates in catalysis by coordinating charge stabilization and shielding. Because of a metal's positive charge, only negative charges can be stabilized through metal ions. However, metal ions are advantageous in biological catalysis because they are not affected by changes in pH. Metal ions can also act to ionize water by acting as a Lewis acid. Metal ions may also be agents of oxidation and reduction. 
Bond strain This is the principal effect of induced fit binding, where the affinity of the enzyme to the transition state is greater than to the substrate itself. This induces structural rearrangements which strain substrate bonds into a position closer to the conformation of the transition state, so lowering the energy difference between the substrate and transition state and helping catalyze the reaction. However, the strain effect is, in fact, a ground state destabilization effect, rather than a transition state stabilization effect. Furthermore, enzymes are very flexible and cannot apply large strain effects. In addition to bond strain in the substrate, bond strain may also be induced within the enzyme itself to activate residues in the active site. Quantum tunneling These traditional "over the barrier" mechanisms have been challenged in some cases by models and observations of "through the barrier" mechanisms (quantum tunneling). Some enzymes operate with kinetics which are faster than what would be predicted by the classical ΔG‡. In "through the barrier" models, a proton or an electron can tunnel through activation barriers. Quantum tunneling for protons has been observed in tryptamine oxidation by aromatic amine dehydrogenase. Quantum tunneling does not appear to provide a major catalytic advantage, since the tunneling contributions are similar in the catalyzed and the uncatalyzed reactions in solution. However, the tunneling contribution (typically enhancing rate constants by a factor of ~1000 compared to the rate of reaction for the classical "over the barrier" route) is likely crucial to the viability of biological organisms. This emphasizes the general importance of tunneling reactions in biology. In 1971–1972, the first quantum-mechanical model of enzyme catalysis was formulated. Active enzyme The binding energy of the enzyme-substrate complex cannot be considered as an external energy which is necessary for substrate activation. The enzyme of high energy content may first transfer some specific energetic group X1 from the catalytic site of the enzyme to the final place of the first bound reactant; then another group X2 from the second bound reactant (or from the second group of the single reactant) must be transferred to the active site to finish substrate conversion to product and enzyme regeneration. We can present the whole enzymatic reaction as two coupled reactions: It may be seen from reaction () that the group X1 of the active enzyme appears in the product due to the possibility of an exchange reaction inside the enzyme, avoiding both electrostatic inhibition and repulsion of atoms. So we represent the active enzyme as a powerful reactant of the enzymatic reaction. Reaction () shows incomplete conversion of the substrate because its group X2 remains inside the enzyme. This approach had formerly been proposed as an idea relying on hypothetical extremely high enzymatic conversions (the catalytically perfect enzyme). The crucial point for the verification of the present approach is that the catalyst must be a complex of the enzyme with the transfer group of the reaction. This chemical aspect is supported by the well-studied mechanisms of several enzymatic reactions. Consider the reaction of peptide bond hydrolysis catalyzed by the pure protein α-chymotrypsin (an enzyme acting without a cofactor), which is a well-studied member of the serine protease family.
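The displayed equations for the two chemical steps discussed next did not survive extraction (hence the empty parentheses above and below); the sketch here is the standard textbook acylation/deacylation scheme for serine proteases, offered as a plausible reconstruction rather than the original source's exact notation.

\text{E–OH} + \text{P}_1\text{–CO–NH–P}_2 \longrightarrow \text{E–O–CO–P}_1 + \text{H}_2\text{N–P}_2 \quad (\text{acylation})
\text{E–O–CO–P}_1 + \text{H}_2\text{O} \longrightarrow \text{E–OH} + \text{P}_1\text{–COOH} \quad (\text{deacylation})

Here E–OH is the active-site serine of α-chymotrypsin; note that the proton originally on the enzyme leaves with the amine product in the first step, before hydrolysis, which is exactly the point the text goes on to make.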
We present the experimental results for this reaction as two chemical steps: where S1 is a polypeptide and P1 and P2 are products. The first chemical step () includes the formation of a covalent acyl-enzyme intermediate. The second step () is the deacylation step. It is important to note that the group H+, initially found on the enzyme but not in water, appears in the product before the step of hydrolysis; therefore it may be considered an additional group of the enzymatic reaction. Thus, reaction () shows that the enzyme acts as a powerful reactant of the reaction. According to the proposed concept, the H+ transport from the enzyme promotes the conversion of the first reactant, the breakdown of the first initial chemical bond (between groups P1 and P2). The step of hydrolysis leads to a breakdown of the second chemical bond and regeneration of the enzyme. The proposed chemical mechanism does not depend on the concentration of the substrates or products in the medium. However, a shift in their concentration mainly causes free energy changes in the first and final steps of reactions () and (), due to the changes in the free energy content of every molecule, whether S or P, in water solution. This approach is in accordance with the following mechanism of muscle contraction. The final step of ATP hydrolysis in skeletal muscle is the product release caused by the association of myosin heads with actin. The closing of the actin-binding cleft during the association reaction is structurally coupled with the opening of the nucleotide-binding pocket on the myosin active site. Notably, the final steps of ATP hydrolysis include the fast release of phosphate and the slow release of ADP. The release of a phosphate anion from the bound ADP anion into water solution may be considered an exergonic reaction because the phosphate anion has low molecular mass. Thus, we arrive at the conclusion that the primary release of the inorganic phosphate H2PO4− leads to transformation of a significant part of the free energy of ATP hydrolysis into the kinetic energy of the solvated phosphate, producing active streaming. This assumption of a local mechano-chemical transduction is in accord with Tirosh's mechanism of muscle contraction, where the muscle force derives from an integrated action of active streaming created by ATP hydrolysis. Examples of catalytic mechanisms In reality, most enzyme mechanisms involve a combination of several different types of catalysis. Triose phosphate isomerase Triose phosphate isomerase () catalyses the reversible interconversion of the two triose phosphate isomers dihydroxyacetone phosphate and D-glyceraldehyde 3-phosphate. Trypsin Trypsin () is a serine protease that cleaves protein substrates after lysine or arginine residues, using a catalytic triad to perform covalent catalysis and an oxyanion hole to stabilise charge build-up on the transition states. Aldolase Aldolase () catalyses the breakdown of fructose 1,6-bisphosphate (F-1,6-BP) into glyceraldehyde 3-phosphate and dihydroxyacetone phosphate (DHAP). Enzyme diffusivity The advent of single-molecule studies in the 2010s led to the observation that the movement of untethered enzymes increases with increasing substrate concentration and increasing reaction enthalpy. Subsequent observations suggest that this increase in diffusivity is driven by transient displacement of the enzyme's center of mass, resulting in a "recoil effect that propels the enzyme".
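The baseline against which the enhanced diffusion reported in these single-molecule studies is measured is ordinary Brownian motion, described by the Stokes–Einstein relation; the example radius below is a typical order-of-magnitude choice, not a value from the cited work.

D = \frac{k_B T}{6\pi \eta r}

For a globular enzyme of hydrodynamic radius r ≈ 5 nm in water (η ≈ 1 mPa·s) at 298 K this gives D ≈ 4×10⁻¹¹ m²/s, about 40 µm²/s; the substrate-dependent increases reported in the diffusivity experiments are modest fractional deviations on top of this baseline.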
Reaction similarity Similarity between enzymatic reactions (EC) can be calculated by using bond changes, reaction centres or substructure metrics (EC-BLAST). See also Catalytic triad Enzyme assay Enzyme inhibitor Enzyme kinetics Enzyme promiscuity Protein dynamics Pseudoenzymes, whose ubiquity despite their catalytic inactivity suggests omic implications Quantum tunnelling The Proteolysis Map Time resolved crystallography References Further reading External links Articles containing video clips
Enzyme catalysis
[ "Chemistry" ]
4,285
[ "Catalysis", "Chemical kinetics" ]
7,138,002
https://en.wikipedia.org/wiki/Sallie%20W.%20Chisholm
Sallie Watson "Penny" Chisholm (born 1947) is an American biological oceanographer at the Massachusetts Institute of Technology. She is an expert in the ecology and evolution of ocean microbes. Her research focuses particularly on the most abundant marine phytoplankton, Prochlorococcus, that she discovered in the 1980s with Rob Olson and other collaborators. She has a TED talk about their discovery and importance called "The tiny creature that secretly powers the planet". Early life and education Chisholm was born in Marquette, Michigan and graduated from Marquette Senior High School in 1965. She attended Skidmore College and earned a PhD from SUNY Albany in 1974. Following her Ph.D., she served as a post-doctoral researcher at the Scripps Institution of Oceanography from 1974 to 1976. Career Chisholm has been a faculty member at the Massachusetts Institute of Technology since 1976 and a visiting scientist at the Woods Hole Oceanographic Institution since 1978. Her research has focused on the ecology of marine phytoplankton. Chisholm's early work focused on the processes by which such plankton take up nutrients and the manner in which this affects their life cycle on diurnal time scales. This led her to begin using flow cytometry which can be used to measure the properties of individual cells. The application of flow cytometry to environmental samples led Chisholm and her collaborators (most notably Rob Olson and Heidi Sosik) to the discovery that small plankton (in particular Prochlorococcus and Synechococcus) accounted for a much more substantial part of marine productivity than had previously been realized. Previously, biological oceanographers had focused on silicaceous diatoms as being the most important phytoplankton, accounting for 10–20 gigatons of carbon uptake each year. Chisholm's work showed that an even larger amount of carbon was cycled through these small algae, which may also play an important role in the global nitrogen cycle. In recent years, Chisholm has played a visible role in opposing the use of iron fertilization as a technological fix for anthropogenic climate change. In 1994, Chisholm was one of 16 women faculty in the School of Science at MIT who drafted and co-signed a letter to the then-Dean of Science (now Chancellor of Berkeley) Robert Birgeneau, which started a campaign to highlight and challenge gender discrimination at MIT. Awards and honors Chisholm has been a member of the United States National Academy of Sciences (NAS) since 2003 and a fellow of the American Academy of Arts and Sciences since 1992. In January 2010, she was awarded the Alexander Agassiz Medal, for "pioneering studies of the dominant photosynthetic organisms in the sea and for integrating her results into a new understanding of the global ocean." She was a co-recipient in 2012 of the Ruth Patrick Award from the Association for the Sciences of Limnology and Oceanography. Chisholm received the National Medal of Science from President Barack Obama on February 1, 2013. In 2013, she was awarded the Ramon Margalef Prize in Ecology, "for being one of the most productive, charismatic and active researchers on biology and marine ecology". On May 24, 2018, she was awarded the Doctor of Science degree by Harvard University. In 2019 she received the Crafoord Prize in Biosciences, "for the discovery and pioneering studies of the most abundant photosynthesising organism on Earth, Prochlorococcus". This prize is considered equivalent to the Nobel Prize (for which there is no Biosciences category). 
Chisholm was honored at the Crafoord Prize Symposium in Biosciences at which 6 internationally prominent scientists spoke (in order of presentations): Alexandra Worden (GEOMAR Helmholtz Centre for Ocean Research Kiel, Germany), Corina Brussaard (NIOZ Royal Netherlands Institute for Sea Research, The Netherlands), Ramunas Stepanauskas (Bigelow Laboratory for Ocean Sciences, US), Rachel Foster (Stockholm University, Sweden), Francis M. Martin (INRA French National Institute for Agricultural Research, France) and David Karl (University of Hawaii, US). Select works See also Prochlorococcus Synechococcus Carbon cycle Global warming References External links Chisholm Lab at MIT Online Chisholm Lecture Video of Chisholm talking about her work, from the National Science & Technology Medals Foundation 1947 births Living people People from Marquette, Michigan American marine biologists American oceanographers American sustainability advocates American women biologists Skidmore College alumni University at Albany, SUNY alumni Massachusetts Institute of Technology School of Science faculty Members of the United States National Academy of Sciences Winners of the Ramon Margalef Prize in Ecology Fellows of the Ecological Society of America Biogeochemists American women oceanographers Fellows of the American Academy of Microbiology American women academics 20th-century American biologists 20th-century American earth scientists 20th-century American academics 20th-century American women scientists 21st-century American biologists 21st-century American earth scientists 21st-century American academics 21st-century American women scientists Biologists from Michigan Benjamin Franklin Medal (Franklin Institute) laureates
Sallie W. Chisholm
[ "Chemistry" ]
1,075
[ "Geochemists", "Biogeochemistry", "Biogeochemists" ]
7,138,038
https://en.wikipedia.org/wiki/Calsequestrin
Calsequestrin is a calcium-binding protein that acts as a calcium buffer within the sarcoplasmic reticulum. The protein helps hold calcium in the cisterna of the sarcoplasmic reticulum after a muscle contraction, even though the concentration of calcium in the sarcoplasmic reticulum is much higher than in the cytosol. It also helps the sarcoplasmic reticulum store an extraordinarily high amount of calcium ions. Each molecule of calsequestrin can bind 18 to 50 Ca2+ ions. Sequence analysis has suggested that calcium is not bound in distinct pockets via EF-hand motifs, but rather via presentation of a charged protein surface. Two forms of calsequestrin have been identified. The cardiac form Calsequestrin-2 (CASQ2) is present in cardiac and slow skeletal muscle, and the fast skeletal form Calsequestrin-1 (CASQ1) is found in fast skeletal muscle. The release of calsequestrin-bound calcium (through a calcium release channel) triggers muscle contraction. The active protein is not highly structured, more than 50% of it adopting a random coil conformation. When calcium binds, there is a structural change whereby the alpha-helical content of the protein increases from 3 to 11%. Both forms of calsequestrin are phosphorylated by casein kinase 2, but the cardiac form is phosphorylated more rapidly and to a higher degree. Calsequestrin is also secreted in the gut, where it deprives bacteria of calcium ions. Cardiac calsequestrin Cardiac calsequestrin (CASQ2) plays an integral role in cardiac regulation. Mutations in the cardiac calsequestrin gene have been associated with cardiac arrhythmia and sudden death. CASQ2 is thought to have a role in regulating cardiac excitation-contraction coupling and calcium-induced calcium release (CICR) in the heart, as overexpression of CASQ2 has been shown to substantially raise the magnitude of cell-averaged ICa-induced calcium transients and spontaneous calcium sparks in isolated heart cells. Furthermore, CASQ2 modulates the CICR mechanism by lengthening the process of functionally recharging the sarcoplasmic reticulum's calcium ion stores. A lack of or mutation in CASQ2 has been directly associated with catecholaminergic polymorphic ventricular tachycardia (CPVT). A mutation can have a significant effect if it disrupts the linear polymerization ability of CASQ2, which directly accounts for its high capacity to bind Ca2+. In addition, the hydrophobic core of domain II appears to be necessary for CASQ2's function, because a single amino acid mutation that disrupts this hydrophobic core directly leads to molecular aggregates, which are unable to respond to calcium ions. See also Catecholaminergic polymorphic ventricular tachycardia References Further reading External links GeneReviews/NCBI/NIH/UW entry on Catecholaminergic Polymorphic Ventricular Tachycardia Protein families Endoplasmic reticulum resident proteins
Calsequestrin
[ "Biology" ]
669
[ "Protein families", "Protein classification" ]
7,138,096
https://en.wikipedia.org/wiki/Reactive%20user%20interface
A human-to-computer user interface is said to be "reactive" if it has the following characteristics: The user is immediately aware of the effect of each "gesture". Gestures can be keystrokes, mouse clicks, menu selections, or more esoteric inputs. The user is always aware of the state of their data. Did I just save those changes? Did I just overwrite my backup by mistake? No data is hidden. In a figure-drawing program, the user can tell whether a line segment is composed of smaller segments. The user always knows how to get help. Help may be context-sensitive or modal, but it is substantial. A program with a built-in help browser is not reactive if its content is just a collection of screen shots or menu item labels with no real explanation of what they do. Reactivity was a major goal in the early user interface research at MIT and Xerox PARC. A computer program which was not reactive would not be considered user friendly no matter how elaborate its presentation. Early word-processing programs whose on-screen representations look nothing like their printer output could be reactive. The common example was WordStar on CP/M. On-screen, it looked like a markup language in a character cell display, but it had deep built-in help which was always available from an on-screen menu bar, and the effect of each keystroke was obvious. References User interfaces
Reactive user interface
[ "Technology", "Engineering" ]
293
[ "User interfaces", "Interfaces", "Software engineering stubs", "Software engineering" ]
7,138,105
https://en.wikipedia.org/wiki/Nammo
Nammo, short for Nordic Ammunition Company, is a Norwegian-Finnish aerospace and defence group specialized in the production of ammunition, rocket engines and space applications. The company has subsidiaries in Finland, Germany, Norway, Sweden, Switzerland, Spain, the United Kingdom, the Republic of Ireland, and the United States. The company's ownership is evenly split between the Norwegian government (represented by the Norwegian Ministry of Trade, Industry and Fisheries) and the Finnish defence company Patria. The company has its headquarters in Raufoss, Norway. The company has four business units: Small and Medium Caliber Ammunition, Large Caliber Systems, Aerospace Propulsion, and Commercial Ammunition. History Nammo was founded in 1998 by Raufoss (Norway), Patria (Finland), and Celsius (Sweden). The Lapua cartridge factory in Lapua, Finland, is also part of the Nammo group, as Nammo Lapua Oy. In 2005, the present joint ownership between Patria and the Norwegian government was established. In 2007, Nammo acquired the US munitions company Talley, Inc. after purchasing 100% of its shares. Controversies Norwegian export control laws prohibit Norwegian companies from selling munitions to countries at war or in conflict. Nammo's then information director, Sissel Solum, said Nammo bears no responsibility for the use of its munitions after purchase, although some (including Norwegian Church Aid and PRIO) claimed that this is a breach of the intended spirit of national export regulations. In 2009, it was revealed that the Israeli Defense Forces purchased 28,000 M72 LAWs from Nammo Talley, along with weapons parts and training missiles valued at NOK 600 million. These munitions were later used in Operation Cast Lead. According to Nammo Raufoss AS managing director Lars Harald Lied, the company also produces 12.7mm "Multi-Purpose" ammunition that was used by both American and Norwegian soldiers in the War in Afghanistan. Products Aerospace propulsion Nammo produces the following missiles and missile propulsion systems: AIM-120 AMRAAM RIM-162 ESSM IRIS-T (under license) Exocet AIM-9 Sidewinder Penguin Naval Strike Missile (the rocket booster) Ariane 5 (separation and acceleration boosters) IDAS (interactive defence & attack for submarines) Orbital launch vehicle Nammo manufactures separation rocket motors for the Ariane 6, and in the past manufactured them for the Ariane 5. In January 2013, Nammo and the Andøya Rocket Range spaceport announced that they would be "developing an orbital Nanosatellite launch vehicle (NLV) rocket system called North Star that uses a standardized hybrid motor, clustered in different numbers and arrangements, to build two types of sounding rockets and an orbital launcher", able to deliver a nanosat into polar orbit. Small and medium caliber ammunition 5.56×45mm NATO 6.5×47mm Lapua 7.62×39mm 7.62×51mm NATO and .308 Winchester 7.62×54mmR/7.62×53mmR .30-06 Springfield (7.62×63mm) .338 Lapua Magnum (8.6×70mm) 9×19mm Parabellum Large caliber ammunition As of 2018, Nammo produced the following non-exhaustive list of medium and large caliber ammunition: 12.7×99mm (.50 BMG) 12.7×99 mm Raufoss Mk 211 multipurpose 20×102mm 20×139mm 25×137mm 27×145mm 30×113mm 30×173mm 30mm Swimmer (APFSDS-T MK 258 Mod 1) 35×228mm 40×51mm 40×53mm 57mm L/70 3P 120 mm tank ammunition Propellant charges for artillery and mortars Artillery shell bodies Hand grenades Warheads Shoulder-fired systems Nammo has manufactured shoulder-fired systems since the 1960s, with licence production of the M72 LAW beginning at Raufoss in Norway in 1966.
Today, Nammo has operations in ten places in the US (Nammo Defense Systems Inc.) and is the only licensed manufacturer of the M72 LAW, with production lines in Raufoss and Mesa, Arizona. In addition to the M72, Mesa also manufactures the M141 Bunker Defeat Munition for the United States Army, while Nammo's facilities in Columbus, Mississippi, manufacture ammunition for the SMAW system for the United States Marine Corps. Nammo Defense Systems Inc., Mesa, Arizona, was awarded a $498,092,926 firm-fixed-price contract for the full-rate production of M72 light assault weapon variants and components for shoulder-launched munitions training systems on 20 December 2021. Rocket engine consultancy and development In 2019, Nammo was awarded an ESA contract to initiate development of a reusable rocket engine for the ascent stage of the Heracles lunar lander. The engine may be fed by electrically driven pumps, from low-pressure propellant tanks, which may enable in-space refueling. References External links Manufacturing companies of Norway Defence companies of Norway Defence companies of Finland Defence companies of Sweden Spaceflight Companies established in 1998 Companies based in Innlandet Ammunition manufacturers 1998 establishments in Norway Patria (company)
Nammo
[ "Astronomy" ]
1,085
[ "Spaceflight", "Outer space" ]
7,138,173
https://en.wikipedia.org/wiki/Sanduleak%20-69%20202
Sanduleak -69 202 (Sk -69 202, also known as GSC 09162-00821) was a magnitude 12 blue supergiant star, located on the outskirts of the Tarantula Nebula in the Large Magellanic Cloud. It was the progenitor of supernova 1987A. The star was originally charted by the Romanian-American astronomer Nicholas Sanduleak in 1970, but was not well studied until it was identified as the star that exploded in the first naked-eye supernova since the invention of the telescope, which reached a maximum visual magnitude of +2.8. The discovery that a blue supergiant was a supernova progenitor contradicted the prevailing theories of stellar evolution and produced a flurry of new ideas about how such a thing might happen, but it is now accepted that blue supergiants are a normal progenitor for some supernovae. The candidate luminous blue variable HD 168625 possesses a bipolar nebula that is a close twin of that around Sk -69 202. It is speculated that Sk -69 202 may have been a luminous blue variable in the recent past, although it was apparently a normal luminous supergiant at the time it exploded. See also Neutrino astronomy List of supernovae History of supernova observation References Stars in the Large Magellanic Cloud Dorado B-type supergiants Luminous blue variables Large Magellanic Cloud Extragalactic stars Tarantula Nebula
Sanduleak -69 202
[ "Astronomy" ]
295
[ "Dorado", "Constellations" ]
7,138,223
https://en.wikipedia.org/wiki/Osteonectin
Osteonectin (ON) also known as secreted protein acidic and rich in cysteine (SPARC) or basement-membrane protein 40 (BM-40) is a protein that in humans is encoded by the SPARC gene. Osteonectin is a glycoprotein in the bone that binds calcium. It is secreted by osteoblasts during bone formation, initiating mineralization and promoting mineral crystal formation. Osteonectin also shows affinity for collagen in addition to bone mineral calcium. A correlation between osteonectin over-expression and ampullary cancers and chronic pancreatitis has been found. Gene The human SPARC gene is 26.5 kb long, and contains 10 exons and 9 introns and is located on chromosome 5q31-q33. Structure Osteonectin is a 40 kDa acidic and cysteine-rich glycoprotein consisting of a single polypeptide chain that can be broken into 4 domains: 1) a Ca2+ binding domain near the glutamic acid-rich region at the amino terminus (domain I), 2) a cysteine-rich domain (II), 3) a hydrophilic region (domain III), and 4) an EF hand motif at the carboxy terminus region (domain IV). Function Osteonectin is an acidic extracellular matrix glycoprotein that plays a vital role in bone mineralization, cell-matrix interactions, and collagen binding. Osteonectin also increases the production and activity of matrix metalloproteinases, a function important to invading cancer cells within bone. Additional functions of osteonectin beneficial to tumor cells include angiogenesis, proliferation and migration. Overexpression of osteonectin is reported in many human cancers such as breast, prostate, colon and pancreatic. This molecule has been implicated in several biological functions, including mineralization of bone and cartilage, inhibiting mineralization, modulation of cell proliferation, facilitation of acquisition of differentiated phenotype and promotion of cell attachment and spreading. A number of phosphoproteins and glycoproteins are found in bone. The phosphate is bound to the protein backbone through phosphorylated serine or threonine amino acid residues. The best characterized of these bone proteins is osteonectin. It binds collagen and hydroxyapatite in separate domains, is found in relatively large amounts in immature bone, and promotes mineralization of collagen. Tissue distribution Fibroblasts, including periodontal fibroblasts, synthesize osteonectin. This protein is synthesized by macrophages at sites of wound repair and platelet degranulation, so it may play an important role in wound healing. SPARC does not support cell attachment, and like tenascin, is anti-adhesive and an inhibitor of cell spreading. It disrupts focal adhesions in fibroblasts. It also regulates the proliferation of some cells, especially endothelial cells, mediated by its ability to bind to cytokines and growth factors. Osteonectin has also been found to decrease DNA synthesis in cultured bone. High levels of immunodetectable osteonectin are found in active osteoblasts and marrow progenitor cells, odontoblasts, periodontal ligament and gingival cells, and some chondrocytes and hypertrophic chondrocytes. Osteonectin is also detectable in osteoid, bone matrix proper, and dentin. Osteonectin has been localized in a variety of tissues, but is found in greatest abundance in osseous tissue, tissues characterized by high turnover (such as intestinal epithelium), basement membranes, and certain neoplasms. 
Osteonectin is expressed by a wide variety of cells, including chondrocytes, fibroblasts, platelets, endothelial cells, epithelial cells, Leydig cells, Sertoli cells, luteal cells, adrenal cortical cells, and numerous neoplastic cell lines (such as SaOS-2 cells from human osteosarcoma). References Further reading External links Glycoproteins Extracellular matrix proteins Matricellular proteins Genes mutated in mice
Osteonectin
[ "Chemistry" ]
933
[ "Glycoproteins", "Glycobiology" ]
7,139,215
https://en.wikipedia.org/wiki/Digital%20timing%20diagram
A digital timing diagram represents a set of signals in the time domain. A timing diagram can contain many rows, usually one of them being the clock. It is a tool commonly used in digital electronics, hardware debugging, and digital communications. Besides providing an overall description of the timing relationships, the digital timing diagram can help find and diagnose digital logic hazards. Diagram convention Most timing diagrams use the following conventions: Higher value is a logic one Lower value is a logic zero A slot showing a high and low is an either-or (such as on a data line) A Z indicates high impedance A greyed out slot is a don't-care or indeterminate. Example: SPI bus timing The timing diagram example on the right describes the Serial Peripheral Interface (SPI) Bus. Most SPI master nodes can set the clock polarity (CPOL) and clock phase (CPHA) with respect to the data. This timing diagram shows the clock for both values of CPOL and the values for the two data lines (MISO & MOSI) for each value of CPHA. Note that when CPHA=1, then the data is delayed by one-half clock cycle. SPI operates in the following way: The master determines an appropriate CPOL & CPHA value The master pulls down the slave select (SS) line for a specific slave chip The master clocks SCK at a specific frequency During each of the eight clock cycles, the transfer is full duplex: The master writes on the MOSI line and reads the MISO line The slave writes on the MISO line and reads the MOSI line When finished the master can continue with another byte transfer or pull SS high to end the transfer When a slave's SS line is high, both its MISO and MOSI line should be high impedance to avoid disrupting a transfer to a different slave. Before SS being pulled low, the MISO & MOSI lines are indicated with a "z" for high impedance. Also, before the SS is pulled low, the "cycle #" row is meaningless and is shown greyed out. Note that for CPHA=1, the MISO & MOSI lines are undefined until after the first clock edge and are also shown greyed out before that. A more typical timing diagram has just a single clock and numerous data lines. Software The following diagram software may be used to draw timing diagrams: PlantUML Timing Diagrammer References External links Wavedrom is an online timing diagram editor. Timing Diagrammer has a Windows binary. Diagrams Timing Diagram Logic Timing Diagram
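As a concrete illustration of the CPOL/CPHA behaviour described above, here is a minimal bit-banged SPI master sketch in Python; the gpio_write and gpio_read helpers are hypothetical stand-ins for a real GPIO library, and real code would also need delays to meet a slave's timing requirements.

# Hypothetical GPIO helpers; replace with a real GPIO library in practice.
_pins = {"SCK": 0, "SS": 1, "MOSI": 0, "MISO": 0}

def gpio_write(pin, level):
    _pins[pin] = level

def gpio_read(pin):
    return _pins[pin]

def spi_transfer_byte(tx_byte, cpol=0, cpha=0):
    """Exchange one byte with the slave, MSB first, full duplex."""
    rx_byte = 0
    gpio_write("SCK", cpol)      # idle clock level is set by CPOL
    gpio_write("SS", 0)          # pull slave select low to start the frame
    for bit in range(7, -1, -1):
        if cpha == 0:
            # CPHA=0: data is valid before the leading edge, sampled on it
            gpio_write("MOSI", (tx_byte >> bit) & 1)
            gpio_write("SCK", 1 - cpol)                   # leading edge: sample
            rx_byte = (rx_byte << 1) | gpio_read("MISO")
            gpio_write("SCK", cpol)                       # trailing edge: shift
        else:
            # CPHA=1: data changes on the leading edge, sampled on the trailing edge
            gpio_write("SCK", 1 - cpol)                   # leading edge: shift
            gpio_write("MOSI", (tx_byte >> bit) & 1)
            gpio_write("SCK", cpol)                       # trailing edge: sample
            rx_byte = (rx_byte << 1) | gpio_read("MISO")
    gpio_write("SS", 1)          # release the slave; its MISO returns to high impedance
    return rx_byte

Note how the CPHA=1 branch changes MOSI on the leading edge and samples MISO on the trailing edge, reproducing the half-cycle data delay visible in the timing diagram.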
Digital timing diagram
[ "Technology" ]
529
[ "Digital systems", "Information systems" ]
7,139,621
https://en.wikipedia.org/wiki/Shearing%20%28physics%29
In continuum mechanics, shearing refers to the occurrence of a shear strain, which is a deformation of a material substance in which parallel internal surfaces slide past one another. It is induced by a shear stress in the material. Shear strain is distinguished from volumetric strain, the change in a material's volume in response to stress; the change of angle is called the angle of shear. Overview Often, the verb shearing refers more specifically to a mechanical process that causes a plastic shear strain in a material, rather than causing a merely elastic one. A plastic shear strain is a continuous (non-fracturing) deformation that is irreversible, such that the material does not recover its original shape. It occurs when the material is yielding. The process of shearing a material may induce a volumetric strain along with the shear strain. In soil mechanics, the volumetric strain associated with shearing is known as Reynolds' dilation if it increases the volume, or compaction if it decreases the volume. The shear center (also known as the torsional axis) is an imaginary point on a section, where a shear force can be applied without inducing any torsion. In general, the shear center is not the centroid. For cross-sectional areas having one axis of symmetry, the shear center is located on the axis of symmetry. For those having two axes of symmetry, the shear center lies on the centroid of the cross-section. In some materials such as metals, plastics, or granular materials like sand or soils, the shearing motion rapidly localizes into a narrow band, known as a shear band. In that case, all the sliding occurs within the band while the blocks of material on either side of the band simply slide past one another without internal deformation. A special case of shear localization occurs in brittle materials when they fracture along a narrow band. Then, all subsequent shearing occurs within the fracture. Plate tectonics, where the plates of the Earth's crust slide along fracture zones, is an example of this. Shearing in soil mechanics is measured with a triaxial shear test or a direct shear test. See also Shear strength Further reading Terzaghi, K., 1943, Theoretical Soil Mechanics, John Wiley and Sons, New York Popov, E., 1968, Introduction to Mechanics of Solids, Prentice-Hall, Inc., New Jersey Continuum mechanics
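For small deformations, the quantities above have simple linear-elastic definitions; these are standard continuum mechanics relations, not specific to the references listed.

\gamma = \frac{\Delta x}{h} = \tan\theta \approx \theta, \qquad \tau = G\,\gamma

Here Δx is the relative sliding of two parallel surfaces separated by a distance h, θ is the angle of shear, τ is the shear stress, and G is the material's shear modulus.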
Shearing (physics)
[ "Physics" ]
492
[ "Classical mechanics", "Continuum mechanics" ]
7,141,148
https://en.wikipedia.org/wiki/VIT%2C%20C.A.
VIT, C.A. (Venezolana de Industria Tecnológica, Compañía Anónima) is a Venezuelan manufacturer of desktop computers and laptops, supported by the Venezuelan government and the Chinese information technology company Inspur (formerly Langchao). The first computer they produced was called the Computador Bolivariano (English: Bolivarian Computer), which came with the Kubuntu Linux operating system. Since April 28, 2009, VIT computers have come pre-installed with Canaima GNU/Linux. By 2015, the second production line was expanded, which increased the assembly capacity of servers and computers by 150,000 units. See also Canaima (operating system) GendBuntu Inspur LiMux Nova (operating system) Ubuntu Kylin Notes External links VIT homepage Ministry of Science and Technology Computer companies established in 2005 Computer hardware companies Government-owned companies of Venezuela Manufacturing companies of Venezuela Venezuelan brands Venezuelan companies established in 2005
VIT, C.A.
[ "Technology" ]
202
[ "Computer hardware companies", "Computers" ]
7,141,169
https://en.wikipedia.org/wiki/Pan%20Am%20Flight%20202
Pan American World Airways Flight 202 was a Boeing 377 Stratocruiser aircraft that crashed in the Amazon Basin about southwest of Carolina, Brazil, on April 29, 1952. The accident happened en route from Rio de Janeiro, Brazil, to Port of Spain, Trinidad and Tobago, during the third leg of a four-leg journey. All 50 people on board were killed in the deadliest-ever accident involving the Boeing 377. The investigation took place under exceptionally unfavorable conditions, and the exact cause of the crash was not established. However, it was theorized based on an examination of the wreckage that an engine had separated in flight after propeller blade failure. Aircraft The Boeing 377 Stratocruiser registration N1039V, christened Clipper Good Hope, made its first flight on September 28, 1949. At the time of the accident, it had accumulated a total of 6944 airframe hours in flight. It was equipped with four 28-cylinder Pratt & Whitney R-4360 Wasp Major radial piston engines, each with a Hamilton Standard Model 24260 four-blade propeller. The propeller blades were constructed with a rubber core filling a steel shell, which was later identified as a design prone to structural failure. Flight and disappearance Flight 202 was an international scheduled passenger flight from Buenos Aires, Argentina, to New York City, New York, with three en route stops scheduled at Montevideo, Uruguay; Rio de Janeiro, Brazil; and Port of Spain, Trinidad and Tobago. It began its route on the evening of April 28, 1952, in Buenos Aires, and after stopping off in Montevideo, it arrived in Rio de Janeiro at 1:05 a.m. local time (04:05 UTC) on April 29. It departed Rio less than two hours later, at 2:43 a.m. (05:43 UTC), heading for Port of Spain on the third leg of its journey. It was cleared to fly an off-airways route directly to Port of Spain, which took it over the dense forests of the Amazon jungle that were still unexplored at the time. The flight reported abeam the city of Barreiras in eastern Brazil at 6:16 a.m. local time (09:16 UTC), flying at under VFR conditions; the pilots estimated that the next position report would be at 7:45 a.m. (10:45 UTC), abeam the city of Carolina in the northeastern state of Maranhão, Brazil. This was the last known message from the flight. Witnesses in the villages of Formosa and São Francisco reported seeing the aircraft overhead at about the time it reported abeam Barreiras; they described the aircraft as operating normally. When the aircraft failed to report abeam Carolina and then abeam the city of Santarém in northern Brazil, local authorities initiated a missing aircraft alert. Search and discovery Brazilian Air Force, United States Air Force, and United States Navy aircraft searched the jungle, while Brazilian Navy ships searched the coastal areas off northern South America. The wreckage was not found until May 1, when a Pan American Curtiss Commando freighter reported finding it in Caraja Indian territory southwest of Carolina. "The burned, broken wreckage of the Pan American Stratocruiser that vanished Monday night was found in northern Brazil today," reported The New York Times in its May 2, 1952, issue. There was no evidence that any of the 50 people on board, including 19 Americans, lived through the crash. An air hunt over of jungle, river basins and plateau land finally located the ruins in the Indian country between the cities of Barreiras and Carolina. According to airline officials, Capt. 
Jim Kowing of Miami piloted a C-46 Pan American cargo plane that made the discovery. The scene is about southwest of Carolina, a Tocantins River town north-northwest of Rio de Janeiro. The double-decked Stratocruiser was reported to have broken in two; its charred wreckage was scattered on both sides of a hill. Pan American officials said a Panair do Brasil airliner circled the scene of the crash; its pilot reported extensive evidence of fire and said he saw two of the big plane's engines lying apart in the hilly, heavily wooded area. Maj. Richard Olney of the US Air Force base in San Juan, Puerto Rico, and Maj. Oliver Seaman, an Air Force flight surgeon, oversaw the conversion of a Pan American passenger plane to carry a seven-person rescue team. Pan American's office at Miami reported that, after circling the scene for four hours, the rescue plane returned to its base at Para without dropping the rescue team. It said they did not jump because there were no signs of survivors. Investigation Later, a 27-man investigation team flew via seaplane to Lago Grande, a tiny Indian village on the Araguaia River less than from the wreckage, with the intention of trekking to the accident site. Unfortunately, the extreme nature of the terrain forced all but seven team members to return to Lago Grande before reaching the site. The remaining seven investigators, running short of water, food and other supplies, were only able to confirm that all on board had died on impact and that a huge fire had consumed the fuselage. A properly equipped and provisioned second investigation team built a base camp northwest of Lago Grande and finally reached the wreckage on August 15. They determined that the wreckage had fallen to the ground in three main sections. Most of the wreckage, including the fuselage, the starboard or right wing, the root of the port or left wing (including the nacelle for the No. 2 engine but not the engine itself), and the Nos. 3 and 4 engines (normally attached to the starboard wing), had fallen in an area of dense forest about northwest of the base camp. The outer port wing and the No. 1 engine had fallen to the northwest of the main wreckage; the empennage and fractured parts of the No. 2 engine (normally attached to the port wing) had fallen roughly north of the main wreckage and northeast of the port wing. Although the No. 2 engine and its propeller were not found, evidence on the port wing root, the No. 2 engine nacelle, the leading edge of the vertical stabilizer, and the horizontal stabilizer led investigators to believe that the engine and/or propeller had failed in flight. There had been two prior engine separation incidents with the 377 on January 24 and 25, 1950. In this case, investigators hypothesized that the propeller failure caused the engine to experience highly unbalanced loads and it eventually separated from the aircraft, precipitating an in-flight breakup. Debris from the propeller and engine may have contributed to the breakup by damaging control surfaces after being flung from the port wing during the failure. 
See also 1954 Prestwick air disaster Northwest Orient Airlines Flight 2 Pan Am Flight 6 Pan Am Flight 7 Pan Am Flight 845/26 References Notes Sources External links Accident Report on Flight 202 - Civil Aeronautics Board - PDF Airliner accidents and incidents caused by in-flight structural failure Airliner accidents and incidents caused by mechanical failure Airliner accidents and incidents involving in-flight engine separations Aviation accidents and incidents in Brazil Aviation accidents and incidents in 1952 202 1952 in Brazil Accidents and incidents involving the Boeing 377 April 1952 events in South America Airliner accidents and incidents caused by engine failure
Pan Am Flight 202
[ "Materials_science" ]
1,503
[ "Airliner accidents and incidents caused by mechanical failure", "Mechanical failure" ]
7,141,364
https://en.wikipedia.org/wiki/Helios%20%28propulsion%20system%29
Helios is a design for a spacecraft propulsion system such that small (0.1 kiloton) nuclear bombs would be detonated in a chamber roughly in diameter. Water would be injected into the chamber, super-heated by the explosion and expelled for thrust. It was a precursor concept to the Orion project. Like Orion, it would have achieved constant acceleration through rapid "pulsed" operation. This design would have yielded a specific impulse of about 1150 seconds (compared to a modern chemical rocket's 450 seconds). However, a number of technical problems existed, most prominently how to keep the combustion chamber from exploding from the great pressures of the atomic detonations. The Helios propulsion system was originally conceived by Freeman Dyson. See also Operation Plumbbob (1957), nuclear fission explosion test with steel plate experiment for Pascal-B References Further reading Nuclear spacecraft propulsion Freeman Dyson
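The quoted specific impulse translates directly into an effective exhaust velocity via v_e = Isp·g0; the comparison below is a standard calculation from those two figures, not a result from the original Helios studies.

v_e = I_{sp}\, g_0 \approx 1150\ \mathrm{s} \times 9.81\ \mathrm{m/s^2} \approx 11.3\ \mathrm{km/s}

against about 4.4 km/s for a 450 s chemical engine; by the Tsiolkovsky rocket equation Δv = v_e ln(m_0/m_f), the same propellant mass ratio would therefore deliver roughly 2.5 times the Δv.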
Helios (propulsion system)
[ "Astronomy" ]
182
[ "Astronomy stubs", "Spacecraft stubs" ]
7,142,213
https://en.wikipedia.org/wiki/Geography%20of%20toll%20roads
Asia Azerbaijan Baku - Guba - Samur (Azerbaijan–Russia border) This toll road serves as an alternative to the existing Baku-Guba-Samur border road, which is longer. Bangladesh Bangladesh has five toll bridges and four toll roads. None of them uses an electronic toll collection system. In Bangladesh, roads and bridges are built by the government. After building a road or bridge, the government invites tenders for a five-year operation and management (O&M) contract against a fee. The O&M operator maintains the bridge and collects tolls on behalf of the government. The toll tariff of the Bangabandhu Multipurpose Bridge (formerly known as the Jamuna Bridge), length , the longest bridge of the country, is considered very high compared with other bridges. Mr. Md. Mobarak Hossain, the CEO of Marga Net One Limited (a joint venture of PT Jasa Marga (Persero) of Indonesia and Net One Solutions Ltd of Bangladesh), who was also the CEO of the second O&M operator of the Bangabandhu (Jamuna) Bridge, feels that () per private car is too high, while trucks and lorries pay a maximum of for a single trip. The Bangabandhu Bridge is a vital link connecting the eastern part of the country with its northern part. China Nearly all Chinese expressways and express routes charge tolls, although they are not often networked from one toll expressway to another. However, beginning with the Jingshen Expressway, tolls are gradually being networked. Given the size of the nation, however, the task is rather difficult. China National Highways, which are not expressways but "grade-A" routes, also charge tolls. Some provincial, autonomous-regional and municipal routes, as well as some major bridges, also charge passage fees. In November 2004, legislation in China provided for a minimum length of a stretch of road or expressway in order for tolls to be charged. Hong Kong In Hong Kong, most tunnels and some bridges that form part of the motorway networks are tolled to cover construction and maintenance costs. Some built recently are managed on a Build-Operate-Transfer (BOT) basis. The companies which build the tunnels or bridges are given a franchise of a certain length of time (usually 30 years) to operate them; ownership is transferred to the government when the franchise expires. An example is the Cross-Harbour Tunnel. India Access-controlled roads in India are tolled. In addition to cash tolls, toll plazas have dedicated electronic toll collection lanes for quicker operation. Most of the upgraded sections of the National Highway network are also tolled, at rates lower than those on expressways. Currently, a massive project is underway to expand the highway network, and the Government of India plans to add an additional of expressways to the network by the year 2022. Indonesia Indonesia opened its first toll road, the Jagorawi Toll Road, in 1978. This linked the capital city of Jakarta to the neighboring cities of Bogor and Ciawi south of the capital. Since then, Indonesia has seen a dramatic increase in the operational length and reach of its toll road system, spanning by the end of June 2024, with most of it built during the presidency of Joko Widodo. Since October 2017, all toll booths in Indonesia accept only electronic payments through payment cards. Israel Highway 6 in Israel, widely known as the Trans-Israel Highway or Cross-Israel Highway, is to date the only electronic toll highway in Israel. Currently, Highway 6 is 110 km long, all of which is a freeway.
This figure will grow in the next few years as additional segments, currently undergoing statutory approvals and permitting processes, are added to the main section of the road. Highway 6 uses a system of cameras and transponders to toll vehicles automatically. There are no toll booths, allowing Highway 6 to be designed as a normal freeway with interchanges. Japan The vast majority of Japan's extensive expressway network consists of toll roads. Payment can be made either in cash on exit or by using the electronic toll collection card system. As of 2001, the toll fee for an ordinary passenger car was per kilometre plus a terminal charge. Malaysia Malaysia has extensive toll roads that form the majority of the country's expressways, which span more than , ranging north to the Thai border, south to the Causeway and Second Link to Singapore, west to Klang and Pulau Indah, and east towards Kuantan. Most of the toll roads are in major cities and conurbations such as the Klang Valley, Johor Bahru and Penang. All Malaysian toll roads are managed on a Build-Operate-Transfer basis, as in Hong Kong and Japan (see above). Pakistan All motorways and a few expressways are toll roads. The first such motorway, the M2, was opened to the public in 1997. Since then the M3, M9, M10, and M1, all toll roads, have become operational. The M8 is under construction. Philippines Currently, the Philippines has toll roads, mostly on the island of Luzon and one in the Visayas. The toll roads, mostly named after their locations, comprise a total length of . The three major concessionaires of those toll roads are San Miguel Corporation, NLEX Corporation, and Prime Asset Ventures, Inc. The Toll Regulatory Board regulates all toll roads, while the Department of Public Works and Highways plays a crucial role in the planning, design, construction, and maintenance of infrastructure facilities, including expressways. Electronic toll collection on all Philippine expressways has been on a dry run since 2023, aiming for full implementation in 2024. Singapore In Singapore, toll stations are automated, thus reducing manpower. The automated toll stations, known to locals as ERP (Electronic Road Pricing), were introduced by the Land Transport Authority (LTA) to reduce city traffic jams. The number of toll stations is increasing rapidly, and some Singaporeans even call it "Every Road Pay". Sri Lanka Sri Lanka currently operates two toll roads: the Southern Expressway (E 01) and the Katunayake Expressway (E 03). The Kandy Colombo Expressway (E 02) is under planning at the moment (2013). The toll revenue is used to repair and maintain the expressways. Taiwan Freeways in Taiwan are not exactly toll roads in the sense that toll gates/stations are not located at the entrances and exits of the freeway. Toll stations with weigh stations are located every thirty to forty kilometres on the No. 1 and No. 3 National Freeways of the Republic of China. There are usually no freeway exits once a toll station notification sign appears, making it necessary for the driver to be familiar with the locations of the toll stations in advance. Other toll roads in Taiwan are usually newly built bridges and tunnels. Tolls are frequently collected to pay off the construction cost, and once it is paid off, the tolls may be repealed. Tajikistan Toll roads in Tajikistan are owned and operated by Innovative Road Solutions (IRS). The northern point is in the Sughd Viloyat and the southern point ends at Kurgan Tyube ( south of the capital, Dushanbe).
Going from end to end costs roughly for regular two-axle vehicles, and can reach for semi-trucks. IRS is setting up new toll plazas that will be able to read a digital device attached to the windshield of a vehicle passing through at a speed of no more than , similar to those in the United States. This is the only toll road in Central Asia, with about five cars going through each toll plaza every minute in each direction. More information can be found on their homepage at www.IRS.tj Thailand Most of the toll roads in Thailand are either within Greater Bangkok or originate from Bangkok. They are called expressways, tollways, and motorways. Two government agencies under the Ministry of Transport, namely the Expressway Authority of Thailand (EXAT) and the Department of Highways (DoH), own networks of toll roads. Some are operated by the agencies themselves; others are operated by private concessionaires. EXAT is in charge of the Chaloem Mahanakhon Expressway, Si Rat Expressway, Chalong Rat Expressway, Udon Ratthaya Expressway, Burapha Withi Expressway and the Kanchanaphisek Outer Ring Road (southern section). DoH is in charge of the Uttaraphimuk Tollway (formerly the Don Mueang Tollway), Motorway No. 7 (Bangkok-Chonburi), and Motorway No. 9 (Kanchanaphisek Outer Ring Road, eastern section). Both agencies have plans to build more toll roads in the future, expanding their networks to the provinces. Electronic toll collection using passive RFID tags is used on the Chaloem Mahanakhon, Si Rat, Chalong Rat, and Burapha Withi Expressways, while the Uttaraphimuk Tollway employs a passive IC-card-based touch-and-go system. There are plans to upgrade and expand ETC systems in the near future. United Arab Emirates The toll system Salik started in Dubai in July 2007. Africa Morocco Morocco has an extensive system of toll roads or autoroutes. These were for the most part recently built, and from Casablanca connect all of Morocco's major cities, such as Marrakech, Rabat, and Tangier. The operator Autoroutes du Maroc runs the network on a pay-per-use basis, with toll stations placed along its length. The goal is to complete a north–south and an east–west link crossing the country; both axes will be important sections of the Pan-African main links. South Africa In South Africa, some of the National routes have sections that are toll roads (with physical tollgates), namely the N1, N2, N3, N4 & N17. All toll roads are run by the South African National Roads Agency Limited except for the N4 and part of the N3, which are run by concessionaires. Zimbabwe In 2013, the Ntabazinduna Toll Plaza was opened outside Bulawayo on the A5 road to Harare; the toll system was introduced by a South African group named Group Five as part of a government project to provide safer highways and to benefit the local community and economy. Additionally, eight more toll plazas will be operating in Zimbabwe. Zambia In Zambia, every Inter-Territorial Road (designated with the letter T, except for the T6), together with many Territorial Roads (designated with the letter M) and a very few District Roads (designated with the letter D), is a toll road with tollgates. The tollgates are run by the National Road Fund Agency (NRFA) and the Road Development Agency (RDA). Europe Toll roads in Europe have a long history. The first turnpike road in England was authorised in the seventeenth century. The term turnpike refers to a gate on which sharp pikes would be fixed as a defence against cavalry.
Early references include the (mythical) Greek ferryman Charon charging a toll to ferry (dead) people across the river Acheron. Germanic tribes charged tolls to travellers across mountain passes. Tolls were used in the Holy Roman Empire in the 14th and 15th centuries. In some European countries, payment of road tolls is made using stickers affixed to the windscreen. Germany uses a system based on satellite technology for large vehicles. In other countries payment may be made in cash, by credit card, by pre-paid card or by an electronic toll collection system. Tolls may vary according to the distance travelled, the building and maintenance costs of the motorway and the type of vehicle. Some of these toll roads are privately owned and operated. Others are owned by the government. Some of the government-owned toll roads are privately operated. Belarus Major highways in Belarus are toll roads with open road tolling (ORT), or free-flow tolling. BelToll is an electronic toll collection (ETC) system, in effect since 1 July 2013 in the Republic of Belarus. Croatia Almost all Croatian highways are toll roads, with the exception of the Zagreb bypass and Rijeka bypass. There are five vehicle categories in Croatia that differ in weight, height, number of axles and trailer attachment. The toll for the use of a motorway on which a closed or open toll system has been introduced is calculated and charged according to the distance between the two toll points the vehicle passes, the group to which the vehicle is assigned, and the unit price per kilometer. The unit price per kilometer of motorway, i.e. of individual sections of motorway, is determined according to construction costs, maintenance costs, management costs and costs of development of motorways and toll road facilities. The unit price per kilometer can be determined differently for each section of the motorway. Toll payment is possible in six ways: Cash payment: Croatian kunas and euros Card payment: Visa, Mastercard, Maestro, Diners Card, American Express, INA card, MOL card Subscription cards (smart): AZM smart card Unique smart card for all motorways in the Republic of Croatia for people with disabilities entitled to free tolls ENC - Electronic toll collection: HAC, Bina Istra Toll surcharge via SMS and via web portal There are three Croatian companies that build and maintain highways and collect tolls: Hrvatske autoceste (Croatian Motorways Ltd) Autocesta Zagreb - Macelj (Zagreb - Macelj Motorway) BINA Istra Denmark The Great Belt Fixed Link and the Øresund Bridge are toll roads. In the Faroe Islands, the inter-island road tunnels Vágatunnilin, Norðoyatunnilin, Eysturoyartunnilin and Sandoyartunnilin have tolls (but no physical toll booths are present and the toll must be paid at nearby petrol stations). France In Europe, the most substantial use of toll roads is in France, where most of the autoroutes carry quite heavy tolls. Iceland The Hvalfjörður Tunnel was tolled from 11 July 1998 but became toll-free on 28 September 2018. Currently, the Vaðlaheiðargöng tunnel, opened in December 2018, is the only toll road in Iceland. Ireland The Republic of Ireland has three toll roads, three toll bridges, and two toll tunnels, which are operated by various independent operators. Most were built under a public-private partnership system, giving the company which arranged for the road to be built the right to collect tolls for a defined period. Tolls vary from to for cars.
Italy Most Italian motorways are toll roads, with some exceptions such as some motorways in Southern Italy and Sicily or the Grande Raccordo Anulare (Rome's ring road). On most motorways, the toll is proportional to the distance traveled and has to be paid on exit, where toll gates (caselli) are placed. On other motorways, however, toll gates (barriere) are placed directly along the route. In such cases, a fixed amount must be paid, regardless of the distance traveled. The A8, A9 and A52 are good examples of that system. Tolls can be paid in cash, by credit card, by pre-paid card, or by Telepass. 61% of Italian motorways are handled by the Autostrade per l'Italia S.p.A. company and its subsidiaries. All of these carriers are now privately owned and supervised by ANAS. The network of motorways covers most of Italy: northern and central Italy are well covered, the south and Sicily are scarcely covered, and Sardinia is not covered at all. The motorway operators are required to build, operate and maintain their networks at cost and to cover their expenses from the tolls they collect. The tolls vary according to the building and maintenance costs of the motorway and the type of vehicle. Besides the motorways, only some alpine tunnels (such as the Mont Blanc Tunnel) are tolled. Today, no toll is required on other roads, including motorway-like dual carriageways (superstrade). The first tolled superstrada is now under construction north of Venice. The Netherlands In the beginning of the 20th century, almost all communities collected toll on all passing traffic, usually including pedestrians and livestock. In 1953, the central government abolished all communal tolls. As of 2008, there are three effective toll roads in the Netherlands: the Western Scheldt Tunnel and the Kil Tunnel, both major arteries, and the "Tolbrug" (Toll Bridge) in Nieuwerbrug, a local hand-drawn bridge. Also, for the Wijkertunnel, a "shadow toll" is paid by Rijkswaterstaat for each passing vehicle. Norway Norway has made extensive use of tolls to finance road infrastructure in recent decades. There are also toll rings around some cities, where drivers have to pay to enter or leave the city, regardless of whether the road is new or old. The first such city was Bergen in 1986. The money goes to the construction of infrastructure in and around the city. Poland There are three toll highways in Poland, connecting the major cities and the nation's boundaries. Two routes travel east–west, one running between Łódź and the German border, the other currently connecting Katowice and Kraków, with current construction extending the roads to the German and Ukrainian boundaries. A north–south route connects Rybnik to Katowice, and Toruń to Gdańsk. Portugal In Portugal, a certain number of roads are designated toll roads. They charge a fixed value per kilometre, with several classes depending on vehicle type, regulated by the government. Several authorised franchises run them, the largest at present being BRISA. For cash-free payments there exists the Via Verde, an electronic toll collection system. On leaving the motorway, charges are automatically debited from a bank account. Russia There are a number of toll roads in Barnaul and Pskov Region (Nevil-Velezh ( ($8)) and Pechori-state border (RUR 140)), as well as the M4-Don (a section close to Lipetsk costs ($0.75) for cars and ($1.70) for trucks). The overall toll network is , or 0.05% of the total road network. The average price in Pskov Region, which has of toll roads, is to per km for cars and to for trucks.
This comes close to $0.50 per km for trucks. Ordinary speed limits apply so far. Russia adopted a Concession Law in 2005 and a Toll Road Law in 2007 to develop this sector. Slovenia For the use of the Slovenian freeways and expressways, toll stickers have been obligatory since 1 July 2008 for all vehicles with a permissible maximum weight of . The sticker costs €15 for 7 days, €30 for a month and €95 for a year. Motorcyclists pay €7.50 for 7 days, €25 for a half year and €47.50 for a year. Trucks use the existing toll road stops. Use of highways and expressways without a valid and properly displayed sticker is a violation of the law and is punished with a fine of €300 or more. Due to the high cost of toll stickers for transit drivers going on vacation to the Croatian and Montenegrin coasts, and for others only passing through Slovenia, the highways are avoided by some travellers. Brussels opened a case with the statement that the Slovenian vignette violated prevailing EU law and discriminated among road users. The European Commissioner for traffic and transport, Antonio Tajani, investigated the allegation of discrimination. On 28 January 2010, after short-term vignettes were introduced by Slovenia and some other changes were made to the Slovenian vignette system, the European Commission concluded that the vignette system is in accordance with European law. Spain Most Spanish toll roads are networked, so a driver takes a ticket on entering and pays on leaving the road. Technically, all roads belong to the Government; toll roads are built and maintained by private companies under a State concession, and when the concession expires the road reverts to State ownership, although most concessions are renewed. Toll roads are called autopistas in Spanish. Freeways, often comparable to autopistas in building and ride quality, are called autovías. There are some autovías which are actually built and maintained by private companies, such as the Pamplona-Logroño A-12 or the Madrid access road M45. The company assumes the building costs, and the Autonomous Community where they are located (in the given examples, Navarre and Madrid) pays the company a yearly per-vehicle fee based on usage statistics, called a "shadow toll" (in Spanish, peaje en la sombra). The system can be regarded as a way for the Government to finance the building of new roads at the expense of the building company. Also, since payment starts only after the road is finished, construction delays are usually shorter than those of regular state-owned freeways. However, these cannot be classified as toll roads, since drivers do not need to pay any fees. Sweden The border-crossing Øresund Bridge has a toll of around €50. Two other bridges, including the Sundsvall Bridge (and, until 2021, also the Svinesund Bridge), have tolls of smaller amounts. The Stockholm and Gothenburg city areas have congestion pricing on entry or exit. Before the year 2000, road tolls had not existed in Sweden for several decades. Switzerland For the use of Swiss motorways a toll sticker is obligatory. It costs CHF 40 per year per vehicle (a car towing a trailer needs two stickers). There are no stickers for shorter periods, and each sticker is valid for 14 months (the 2010 sticker is valid from 1 December 2009 until 31 January 2011). However, this also means that a sticker bought at any time during the year can only be used for what remains of the period ending on 31 January of the following year (the date arithmetic is sketched below).
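The Swiss 14-month rule above is easy to misread, so here is a minimal sketch of the date arithmetic: a sticker for year Y is valid from 1 December of year Y-1 through 31 January of year Y+1, and a sticker bought mid-year simply gets whatever remains of that fixed window. This encodes only the rule as stated in the text, not any official tariff logic.

from datetime import date

def vignette_window(year):
    """Validity window of the Swiss sticker for a given year:
    1 December of the previous year to 31 January of the next."""
    return date(year - 1, 12, 1), date(year + 1, 1, 31)

def usable_days(year, purchase):
    """Days the sticker can still be used if bought on `purchase`."""
    start, end = vignette_window(year)
    first = max(start, purchase)
    return (end - first).days + 1 if first <= end else 0

# The 2010 sticker bought on its first day of validity covers 14 months;
# bought on 1 July 2010 it covers only about 7 months for the same price.
print(usable_days(2010, date(2009, 12, 1)))   # 427
print(usable_days(2010, date(2010, 7, 1)))    # 215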
Republic of Turkey In the Republic of Turkey tolls are collected on certain highways, the so-called Otoyol or Karayolu, through three different systems; every toll road has lanes for all three payment methods. One method is KGS (Kartlı Geçiş Sistemi; English: card passage system), which requires a prepaid card to be presented at the toll gate; each passage is deducted from the card. Another method is HGS (Hızlı Geçiş Sistemi; English: fast passage system), which uses an RFID chip stuck to the windshield of the vehicle. This chip is scanned automatically when passing the toll collection point, and the money is automatically withdrawn from the linked bank account. The last system is OGS, called Otomatik Geçiş Sistemi in Turkish, which translates as automatic passage system. This form of payment requires a fixed amount of money to be paid for a monthly or annual subscription. When subscribed, the car is equipped with a barcoded sticker, which is checked automatically by CCTV cameras to verify that the car is actually subscribed. The Turkish toll system cannot be avoided (except by avoiding toll roads altogether), because one simply cannot pass a KGS lane without the required card, and, due to the cameras, all cars passing OGS and HGS lanes with missing, expired or counterfeit chips or stickers receive a fine delivered to their home address. United Kingdom Road rates were introduced in England in the seventeenth century. The first turnpike road, on which travellers paid tolls to be used for road upkeep, was authorised in 1663 for a section of the Great North Road in Hertfordshire. The first turnpike trust was established by Parliament through a Turnpike Act in 1706. From 1751 until 1772 there was a flurry of interest in turnpike trusts, and a further 390 were established. By 1825, over 1,000 trusts controlled of road in England and Wales. The rise of railway transport largely halted the improving schemes of the turnpike trusts. Unable to earn sufficient revenue from tolls alone, the trusts took to requiring taxes from the local parishes. The system was never properly reformed, but from the 1870s Parliament stopped renewing the acts and roads began to revert to local authorities, the last trust vanishing in 1895. The Local Government Act 1888 created county councils and gave them responsibility for maintaining the major roads. There are still a small number of toll bridges left, including Swinford toll bridge near Oxford. Most UK roads today are maintained from general taxation, some of which is raised from motoring taxes including fuel duty and vehicle excise duty. Today, there are few tolls on roads in the United Kingdom, mainly toll bridges and tunnels. Until recently there were only two toll roads over which there is a public right of way (Rye Road in Stanstead Abbotts and College Road in Dulwich), together with another five or so private toll roads. The M6 Toll motorway to the north of Birmingham levies a usage charge. North and South America Bolivia In November 2006, the Ministry of Public Works, Services and Housing of Bolivia created Vías Bolivia, a public entity with the purpose of pricing, managing and maintaining the toll roads across the country. As of 2021, there are 141 operating toll roads in Bolivia as well as 13 weigh stations for commercial vehicles and trucks. Brazil In Brazil, toll roads are a recent institution and were adopted mostly on non-federal highways.
The state of São Paulo has the highest length of toll roads, which are operated either by private companies which bought a concession from the state, or by a state-owned company (see Highway system of São Paulo). In São Paulo there is also a statewide electronic collection system using a plastic transponder (e-tag) attached to the windscreen, named SemParar. There is a growing trend towards tolling on all major highways of the country, but some resistance from the population is beginning to be felt, particularly over perceived abuses: because the Brazilian highway system has very few non-tolled vicinal roads running parallel to highways, tolling restricts the constitutional right of coming and going and makes some trips extremely expensive compared to average Brazilian earning power (in São Paulo, a round trip on some roads may cost upwards of two hundred Brazilian reais, more than the petrol expenses). Canada Most tolled roadways in Canada are bridges to the United States, although a few domestic bridges in some provinces have tolls. Toll highways disappeared, for the most part, in the 1970s and 1980s. In the 1990s, political pressure led to the removal of the new tolls on an upgraded section of the Trans-Canada Highway in New Brunswick. Highway 407 in the Greater Toronto Area is a modern toll route that has no collection booths, using overhead sensors instead. It is heavily criticized because the government leased it for 99 years, with the company having unlimited control over the highway and its tolls; it is therefore expensive, but still a necessity for gridlocked Toronto. Nova Scotia has a toll highway on the Trans-Canada Highway between Debert and Oxford. Colombia Many highways in Colombia charge tolls. Motorcycles are allowed to bypass the toll booths for free. Ecuador The Pan-American Highway in Ecuador charges tolls. Motorcycles pay a reduced fare. Mexico Mexico has an extensive system of toll roads, or autopistas. Autopistas are built and funded by federal taxes and are built to nearly identical standards as the US Interstate Highway System. Also, many states in Mexico have their own toll roads, such as Puebla, Veracruz and Nuevo León. All federal toll highways operate with three payment options: cash, credit card and the electronic tag IAVE. IAVE on all the highways is operated by Caminos y Puentes Federales (CAPUFE). Panama Most of the toll roads in Panama were built in the mid-1990s, with the exception of the Arraijan-Chorrerra Highway. The three modern toll roads were built after the transportation plan made by the Government of Japan in the mid-1980s, using the BOT formula. These highways are the Corredor Norte in the north of Panama City and the Corredor Sur in the south. Another highway that was built is the Panama-Colon Highway. Puerto Rico There are several toll roads in Puerto Rico, where toll roads are called "autopistas" (which loosely translates to "car track") and toll houses are called "peaje". United States A toll road in the United States, especially near the east coast, is often called a turnpike. The term turnpike originated from the turnstile or gate which blocked passage until the fare was paid at a toll house (or toll booth, in current terminology). Most tolled facilities in the US today use an electronic toll collection system as an alternative to paying cash. Examples of this are the E-ZPass system used on most toll bridges and toll roads in the eastern U.S.
from North Carolina to Maine and Illinois; Houston's EZ Tag, which also works in other parts of the state of Texas; Oklahoma's Pikepass (which also works in Texas and Kansas); California's FasTrak; Illinois' I-Pass; and Florida's SunPass. As of 2006, toll roads exist in only 26 states. The majority of states without any turnpikes are in the West and South. After a halt in toll road construction following the establishment of the Interstate Highway System in 1956, many states are going back to implementing tolls to fund capital improvements and manage congestion. This is because the cost of expanding and maintaining the highway network is increasing faster than the amount of revenue that can be generated by the federal gasoline tax for the Highway Trust Fund. Years after abolishing tolls, Kentucky and Connecticut have both re-examined the possibility of reinstating tolls on some highways, while several other states are advancing the construction of new toll roads to supplement their existing networks of toll-free expressways. Oceania Australia In Australia, a small number of motorways have been tolled to cover the expense of their construction. Such roads can be found in the Australian cities of Brisbane, Sydney and Melbourne. There are no toll roads in the Australian states of South Australia, Western Australia, Tasmania or any of the mainland territories. Toll collection is by electronic toll collection; there are no longer any cash booths in Australia. In Brisbane, there are three tollway operators (Brisbane City Council, Queensland Motorways, and RiverCity Motorway). Brisbane City Council owns and operates the Go Between Bridge over the Brisbane River in the city. Queensland Motorways operates the tolls on the Sir Leo Hielscher Bridges, and another two on the Logan Motorway on the south side. RiverCity Motorway operates the Clem Jones Tunnel, which runs underneath the city between the inner southern and northern suburbs. All toll collection points are electronically operated. Another company, BrisConnections, is currently constructing another toll tunnel (the longest tunnel in Australia) called the Airport Link, which will allow traffic to flow from the northern Clem Jones - Inner City Bypass interchange directly to Brisbane Airport. Construction is due to be complete in 2012. International travellers and people new to Brisbane should note that the penalty for non-payment of tolls is in excess of $140 (per trip). In Melbourne, there are two companies that operate tollways within the Melbourne metropolitan area. Transurban operates CityLink, covering sections of the Monash Freeway, Southern Link, Western Link and the upgraded sections of the Tullamarine Freeway. ConnectEast operates EastLink, which runs through the eastern suburbs of Melbourne. All Melbourne tollways are electronically tolled. The West Gate Bridge opened as a toll bridge upon its completion in 1978; the toll was abolished in 1985. In Sydney, many of the motorways contain at least one tolled section, with a mixture of government and private ownership. The State Government owns the Sydney Harbour Bridge and Sydney Harbour Tunnel, while the M2 Motorway, M4 Motorway, M5 Motorway, Eastern Distributor, Westlink M7 and Lane Cove Tunnel are privately operated by a variety of companies such as Macquarie Infrastructure, Transurban and, to a lesser extent, industry super funds such as Retail Employees Super, SunSuper and Industry Funds Management, which partly own the M5 Motorway in south-western Sydney.
As well as the tolled motorways, the Cross City Tunnel, an east–west route underneath the Sydney CBD, was opened to traffic in 2005. This road has become somewhat controversial due to the relatively high toll charge and the closure of surrounding roads, designed to funnel traffic through the tunnel. All Sydney tollways accept E-tags; the Westlink M7, Sydney Harbour Tunnel, Cross City Tunnel, Lane Cove Tunnel and, from 1 December 2007, the M2 Motorway have no cash booths, only E-Tag readers, as they charge their tolls solely through electronic tolling or through number plate reading, in which case the toll must be paid within a certain time frame (for example, 24 hours) to avoid a fine arriving in the mail. The M5 Motorway moved to electronic-only tolling in 2013. Tolls on the M4 Motorway were abolished in 2010. An E-Tag is an RFID device that allows a driver to pass through a toll point without physically stopping. When a vehicle fitted with an E-Tag passes through a toll collection point, the E-Tag identifies the electronic account of the vehicle passing through, and the toll-road operator recovers the toll via that account. There are four providers of E-tag accounts in New South Wales (RTA, RoamTag, Interlink Roads, and M2 Consortium). All tags provided by these four providers can be used on every E-Tag-enabled tollway in Australia. New Zealand Auckland Harbour Bridge was opened in 1959 and operated as a toll bridge until 1984. In the 1960s a group of university students attempted to disrupt the toll system by repeatedly crossing the bridge using motor-scooters (to which a very low toll applied) and paying their toll in £5 notes; the hope was that they would exhaust the supplies of change held at the toll booths. However, the toll authority got wind of their plans and laid in a very large supply of small change (copper coinage), so it was the students who were soon weighed down with large amounts of coins. The Lyttelton Road Tunnel, linking the City of Christchurch with the harbour at Lyttelton, was originally a toll tunnel, built in 1962. The government of the day promised that as soon as the tunnel was paid for, the toll would be removed. The promise was kept, and the toll was removed in the mid-1970s once the tunnel had been paid off. The Tunnel Authority building and toll booths are still there at the Heathcote end. The City of Tauranga operates a toll road running between the outlying settlement of Tauriko on State Highway 2 and the central business district of the city. This toll road also acts as a feeder route for the Tauranga Harbour Bridge. Tolls are collected by staff operating tollbooths at the western end of the road. The Northern Gateway Toll Road is a motorway extension to State Highway 1 just north of Auckland. Northbound, the toll road begins just before Orewa and ends via a pair of road tunnels through the Johnstone Hills near Puhoi. The toll road opened in January 2009 and gives motorists a choice between the more direct route and State Highway 17 via Orewa. Tolling is implemented through automatic vehicle license plate reading with cameras in an overhead gantry (a generic sketch of this free-flow decision flow is given below).
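Both the Sydney gantries and the Northern Gateway described above follow the same free-flow pattern: a passage event is matched first against tag accounts, then against plate-linked accounts, and only unmatched passages fall through to a pay-within-a-deadline notice or a fine. The sketch below is a generic illustration of that decision flow; the data structures, names and grace period are invented for the example and do not represent any operator's actual system.

# Generic free-flow tolling decision, as described above for E-Tag
# gantries and number-plate tolling. All data here is hypothetical.
GRACE_HOURS = 24                              # pay window before a fine

tag_accounts = {"TAG123": "acct-1"}           # e-tag id -> account
plate_accounts = {"ABC12D": "acct-2"}         # registered plate -> account

def process_passage(tag_id, plate, toll):
    """Return (action, detail) for one vehicle passing under a gantry."""
    if tag_id in tag_accounts:                # 1. tag read: debit directly
        return ("debit", tag_accounts[tag_id])
    if plate in plate_accounts:               # 2. camera read, plate on file
        return ("debit", plate_accounts[plate])
    # 3. no account found: owner must pay within the grace window
    return ("invoice", f"pay {toll} within {GRACE_HOURS} h or be fined")

print(process_passage("TAG123", "ABC12D", 5.50))   # ('debit', 'acct-1')
print(process_passage(None, "ZZZ99Z", 5.50))       # ('invoice', ...)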
See also Toll road List of toll roads High-occupancy toll Private highway Electronic toll collection TELEPASS (Italy) SunPass (Florida, USA) E-PASS (Florida, USA) E-ZPass (northeastern USA) I-PASS (Illinois, USA) FasTrak (California, USA) Pikepass (Oklahoma, USA) TxTag (Texas, USA) Highway 407 (Toronto, ON, Canada) CityLink (Australia) London congestion charge Turnpike trusts, the first organisations empowered to collect tolls on English roads Malaysian expressway system Tunnels and bridges in Hong Kong Expressways of Japan Toll roads in Europe Toll roads in the United States External links Turnpikes and Toll Roads in Nineteenth-Century America (EH.Net Economic History encyclopedia) National Alliance Against Tolls (British anti-toll group, but its "News" pages include the USA and other countries) References Toll roads Wireless locating Car costs Human geography
Geography of toll roads
[ "Technology", "Environmental_science" ]
7,483
[ "Environmental social science", "Wireless locating", "Human geography" ]
7,142,324
https://en.wikipedia.org/wiki/Nitrifying%20bacteria
Nitrifying bacteria are chemolithotrophic organisms that include species of genera such as Nitrosomonas, Nitrosococcus, Nitrobacter, Nitrospina, Nitrospira and Nitrococcus. These bacteria get their energy from the oxidation of inorganic nitrogen compounds. Types include ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria (NOB). Many species of nitrifying bacteria have complex internal membrane systems that are the location for key enzymes in nitrification: ammonia monooxygenase (which oxidizes ammonia to hydroxylamine), hydroxylamine oxidoreductase (which oxidizes hydroxylamine to nitric oxide, which is further oxidized to nitrite by a currently unidentified enzyme), and nitrite oxidoreductase (which oxidizes nitrite to nitrate). Ecology Nitrifying bacteria are present in distinct taxonomical groups and are found in highest numbers where considerable amounts of ammonia are present (such as areas with extensive protein decomposition, and sewage treatment plants). Nitrifying bacteria thrive in lakes, streams and rivers with high inputs and outputs of sewage, wastewater and freshwater because of the high ammonia content. Oxidation of ammonia to nitrate Nitrification in nature is a two-step oxidation process of ammonium (NH4+) or ammonia (NH3) to nitrite (NO2-) and then to nitrate (NO3-), catalyzed by two ubiquitous bacterial groups growing together. The first reaction is the oxidation of ammonium to nitrite by ammonia-oxidizing bacteria (AOB), represented by members of Betaproteobacteria and Gammaproteobacteria. Further organisms able to oxidize ammonia are archaea (ammonia-oxidizing archaea, AOA). The second reaction is the oxidation of nitrite (NO2-) to nitrate by nitrite-oxidizing bacteria (NOB), represented by members of Nitrospinota, Nitrospirota, Pseudomonadota and Chloroflexota. This two-step process was described as early as 1890 by the Ukrainian microbiologist Sergei Winogradsky. Ammonia can also be oxidized completely to nitrate by a single comammox bacterium. Ammonia-to-nitrite mechanism Ammonia oxidation in autotrophic nitrification is a complex process that requires several enzymes as well as oxygen as a reactant. The key enzymes necessary for releasing energy during the oxidation of ammonia to nitrite are ammonia monooxygenase (AMO) and hydroxylamine oxidoreductase (HAO). The first is a transmembrane copper protein which catalyzes the oxidation of ammonia to hydroxylamine (NH2OH), taking two electrons directly from the quinone pool. This reaction requires O2. The second step of this process has recently fallen into question. For the past few decades, the common view was that a trimeric multiheme c-type HAO converts hydroxylamine into nitrite in the periplasm with production of four electrons (NH2OH + H2O -> NO2- + 5 H+ + 4 e-). The stream of four electrons is channeled through cytochrome c554 to a membrane-bound cytochrome c552. Two of the electrons are routed back to AMO, where they are used for the oxidation of ammonia (quinol pool). The remaining two electrons are used to generate a proton motive force and reduce NAD(P) through reverse electron transport. Recent results, however, show that HAO does not produce nitrite as a direct product of catalysis. This enzyme instead produces nitric oxide and three electrons. Nitric oxide can then be oxidized by other enzymes (or oxygen) to nitrite. In this paradigm, the electron balance for overall metabolism needs to be reconsidered. Nitrite-to-nitrate mechanism Nitrite produced in the first step of autotrophic nitrification is oxidized to nitrate by nitrite oxidoreductase (NXR) (NO2- + H2O -> NO3- + 2 H+ + 2 e-).
It is a membrane-associated iron-sulfur molybdoprotein, and is part of an electron transfer chain which channels electrons from nitrite to molecular oxygen. The enzymatic mechanisms involved in nitrite-oxidizing bacteria are less well described than those of ammonium oxidation. Recent research (e.g. Woźnica A. et al., 2013) proposes a new hypothetical model of the NOB electron transport chain and NXR mechanisms. Here, in contrast to earlier models, NXR would act on the outside of the plasma membrane and directly contribute to a mechanism of proton gradient generation, as postulated by Spieck and coworkers. Nevertheless, the molecular mechanism of nitrite oxidation remains an open question. Comammox bacteria The two-step conversion of ammonia to nitrate observed in ammonia-oxidizing bacteria, ammonia-oxidizing archaea and nitrite-oxidizing bacteria (such as Nitrobacter) is puzzling to researchers. Complete nitrification, the conversion of ammonia to nitrate in a single step known as comammox, has an energy yield (∆G°′) of −349 kJ mol−1 NH3, while the energy yields for the ammonia-oxidation and nitrite-oxidation steps of the observed two-step reaction are −275 kJ mol−1 NH3 and −74 kJ mol−1 NO2−, respectively; the two partial yields sum to the single-step total, (−275) + (−74) = −349 kJ per mole of ammonia oxidized. These values indicate that it would be energetically favourable for an organism to carry out complete nitrification from ammonia to nitrate (comammox), rather than conduct only one of the two steps. The evolutionary motivation for a decoupled, two-step nitrification reaction is an area of ongoing research. In 2015, it was discovered that the species Nitrospira inopinata possesses all the enzymes required to carry out complete nitrification in one step, suggesting that this reaction does occur. Table of characteristics See also Root nodule Denitrification Denitrifying bacteria f-ratio Nitrification Nitrogen cycle Nitrogen deficiency Nitrogen fixation Electron transport chain Comammox References Bacteriology Nitrogen cycle Metabolism Soil biology
Nitrifying bacteria
[ "Chemistry", "Biology" ]
1,280
[ "Nitrogen cycle", "Soil biology", "Cellular processes", "Biochemistry", "Metabolism" ]
15,945,893
https://en.wikipedia.org/wiki/Stage%20machinery
Stage machinery, also known as stage mechanics, comprises the mechanical devices used to create special effects in theatrical productions, including scene changes, lowering actors through the stage floor (traps) and enabling actors to 'fly' over the stage. Alexandra Palace Theatre, London, and the Gaiety Theatre, Isle of Man, are two theatres which have retained stage machinery of all types under the stage. Scene Changing The wings of a theatre stage had to be at least half the width of the stage on each side of the proscenium arch, and the fly system for flying scenery had to be twice the height of the stage. Drum and Shaft This consisted of a shaft around which were built one or more circular drums of much larger diameter than the shaft. A rope wound round the drum was pulled in order to rotate the shaft; if there was more than one drum on the shaft, several pieces of scenery could be moved at the same time to raise the scenery wings and backdrops (the torque balance behind this device is worked out in the short example at the end of this article). Slote/Sloat This was a pair of vertical runners used to raise or lower a long profile of low scenery such as a groundrow (a piece of scenery made of canvas stretched over wood, used to represent items such as water or flowers) through a narrow slot in the stage floor. Column Wave The column wave, developed by the Italian architect Nicola Sabbatini, was a 16th-century stage machine used to provide the appearance of waves on the sea. Bridge This was a heavy wooden platform with counterweights, used to raise and lower either heavy pieces of scenery or a group of actors from below the stage to stage level. Scruto Scruto consisted of narrow strips of wood attached side by side on canvas, forming a continuous sheet which could be rolled. The scruto could be mounted vertically and rolled up or down to change the scenery, or horizontally in the stage floor to form a trap cover. Traps Anapiesma was the ancient Greek version of the stage trap we know today. It was a concealed opening under the stage floor, where actors and props would be hidden before they appeared on stage. The joists of the stage floor were cut, and the opening was concealed in different ways depending on the type of trap. In the 19th century many different kinds of traps were used. All except the Corsican trap were located downstage near the proscenium arch. The trap room is the large space below the stage where actors prepared to make their entrance and where the winches, drums and other machinery needed to operate traps and scenery were kept. It was referred to as "hell". Newspaper advertisements sought trap performers, and notices for shows might advertise how high a performer flew out of a trap. Grave Trap This trap was positioned centrally and was named after its use in Shakespeare's Hamlet. It measured about 6 by 3 feet and consisted of a platform below the stage which could be raised or lowered. Star Trap These were counterweighted traps which could be used to allow actors playing supernatural beings, such as ghosts in melodrama and demons and fairies in pantomime, to appear suddenly. The hole through which the actor appeared consisted of triangular flaps, hinged with leather, which opened upwards, resembling a star. The actor stood on a small platform below the trap, and counterweights of up to 200 kg, attached to the platform, were raised by stage hands using ropes, at which point the platform moved up rapidly and the actor 'flew' through the trap.
The trap closed immediately with no visible opening, giving the illusion that the actor had appeared through the solid stage floor. Star traps were hazardous: the first pantomime at Alexandra Palace Theatre, 'The Yellow Dwarf', had to be delayed when an actor twisted his spine and sprained muscles in his back while preparing for the role. Despite this, they were still used in the first half of the 20th century, until banned by the actors' union Equity. Bristle Trap To create a bristle trap, the wood in the stage floor was replaced by bristles which were painted to match the stage floor. Vampire Trap This trap was invented for James Planché's 1820 adaptation of Polidori's The Vampyre. It involved two hinged traps in the stage floor that an actor could step through in order to vanish from the stage. The trap then immediately closed, giving the impression that the actor was passing through solid matter. Leap Trap This trap consisted of two hinged traps in scenery that an actor could leap through in a single jump to either enter or leave the stage. It closed immediately, giving the impression that the actor was passing through solid matter. Corsican Trap These traps used a counterweighted platform and slatted shutters, sometimes made of scruto, which allowed an actor to rise through the stage floor while at the same time moving across it. It was developed for the play The Corsican Brothers by Dion Boucicault, in which the ghost of a murdered man rose slowly across the stage and through the stage floor to haunt his twin brother; the play was staged at the Princess's Theatre, London, in 1852. The trap consisted of a bristle trap set between two long sliders positioned across the stage, the first drawing the trap across the stage and the second closing behind it. The actor stood on a small truck which ran along an inclined track below the stage, starting 6 feet below the stage and rising to stage level. The only working Corsican trap in the world is now at the Gaiety Theatre in the Isle of Man, where there is also a model demonstrating how it works. Cauldron Trap This trap, named from the witches' scene in Macbeth, was usually just a square opening through which items could be passed into a bottomless cauldron. Corner Traps These had an area of about 2 feet square, covered by a piece of scruto, and would have been situated at each side of the stage near the proscenium arch. They could be used to raise or lower a person through the stage. This idea was further developed in Italy in the late 14th century using ropes and pulleys so that many actors could descend or ascend together. Flying machines Theatrical machinery was used by the Greeks in the 5th century BC to lower actors to the stage. In England, by the end of the 18th century, diagrams of complicated flights were being drawn, and by the mid-19th century the fly systems in use consisted of pulleys and counterweights. Towards the end of the 19th century, George Kirby founded a company specifically for equipment used for flying actors, and it produced the effects needed to fly actors in the early productions of Peter Pan. George's son Joseph continued the business and founded Kirby's Flying Ballet troupe, which performed in the first half of the 20th century. The lines to which the actors were attached were known as Kirby lines. References Scenic design Theatre
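The drum and shaft described at the start of this article is, mechanically, a windlass, and its behaviour follows from a torque balance about the common axis. With illustrative dimensions (not taken from any documented installation): a pull F_rope on a rope wound round a drum of radius R balances a load F_load on a line wound round the shaft of radius r when

F_{\text{rope}} \, R = F_{\text{load}} \, r

so for R = 0.5 m and r = 0.1 m, F_rope = F_load / 5: a stagehand pulling with a force of about 20 kg-weight could hold 100 kg of scenery, at the cost of the operating rope travelling five times as far as the scenery lines. Winding several lines onto the same shaft is what allowed a single pull to move wings and backdrops together.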
Stage machinery
[ "Engineering" ]
1,384
[ "Scenic design", "Design" ]
15,946,878
https://en.wikipedia.org/wiki/Soil%20steam%20sterilization
Soil steam sterilization (soil steaming) is a farming technique that sterilizes soil with steam in open fields or greenhouses. Pests of plant cultures such as weeds, bacteria, fungi and viruses are killed by the induced hot steam, which causes vital cellular proteins to unfold. Biologically, the method is considered a partial disinfection: important heat-resistant, spore-forming bacteria can survive and revitalize the soil after it cools down. Soil fatigue can be cured through the release of nutritive substances blocked within the soil. Steaming leads to a better starting position, quicker growth and strengthened resistance against plant disease and pests. Today, the application of hot steam is considered the best and most effective way to disinfect sick soil, potting soil and compost. It is used as an alternative to bromomethane, whose production and use was curtailed by the Montreal Protocol. "Steam effectively kills pathogens by heating the soil to levels that cause protein coagulation or enzyme inactivation." Benefits of soil steaming Soil sterilization provides secure and quick relief of soils from substances and organisms harmful to plants, such as bacteria, viruses, fungi, nematodes and other pests. Further positive effects are: all weeds and weed seeds are killed; crop yields increase significantly; soil fatigue is relieved through the activation of chemical-biological reactions; blocked nutritive substances in the soil are tapped and made available for plants; and steaming is an alternative to methyl bromide and other critical chemicals in agriculture. Steaming with superheated steam Through modern steaming methods with superheated steam at 180–200 °C, an optimal soil disinfection can be achieved. The soil absorbs only a small amount of humidity. Microorganisms become active once the soil has cooled down, which creates an optimal environment for immediate cultivation with seedlings and seeds. Additionally, the method of integrated steaming can promote a targeted resettlement of steamed soil with beneficial organisms. In this process, the soil is first freed from all organisms and then revitalized and microbiologically buffered through the injection of a compost-based soil activator which contains a natural mixture of beneficial microorganisms (e.g. Bacillus subtilis). Different types of steam application are available in practice, including substrate steaming, surface steaming and deep soil steaming. Surface steaming Several methods of surface steaming are in use, among them area sheet steaming, the steaming hood, the steaming harrow, the steaming plough and vacuum steaming with drainage pipes or mobile pipe systems. In order to pick the most suitable steaming method, factors such as soil structure, plant culture and area performance have to be considered. At present, more advanced methods are being developed, such as sandwich steaming or partially integrated sandwich steaming, in order to minimize energy consumption and associated costs as much as possible. Deep soil steaming Deep soil steaming is a concept adopted by the Norwegian company Soil Steam international AS. They have developed a technology that gets the steam down to a depth of 30 cm in the soil. This is done in a continuous process, and their latest prototype managed to treat 1 hectare in 20 hours. When steaming the soil this deep, they get deep enough to prevent fall plowing from bringing up new seeds, fungi or nematodes.
This means that the soil stays free from weeds, seeds, fungi and nematodes for many years after one deep soil steaming operation. Sheet steaming Surface steaming with special sheets (sheet steaming) is a method which has been established for decades for steaming large areas of 15 to 400 m2 in one step. If properly applied, sheet steaming is simple and highly economical. The usage of heat-resistant, non-decomposing insulation fleece saves up to 50% energy, reduces the steaming time significantly and improves penetration. In a single working step, areas up to 400 m2 can be steamed in 4–5 hours down to a depth of 25–30 cm at 90 °C. The usage of heat-resistant and non-decomposing synthetic insulation fleece, 5 mm thick, 500 g/m2, can reduce steaming time by about 30%. Through a steam injector or a perforated pipe, steam is injected underneath the sheet after it has been laid out and weighted with sand sacks. The area that can be steamed in one working step depends on the capacity of the steam generator (e.g. steam boiler). The steaming time depends on soil structure as well as outside temperature and amounts to 1–1.5 hours per 10 cm of steaming depth, by which the soil reaches a temperature of about 85 °C. Milling for soil loosening is not recommended, since the soil structure may become too fine, which reduces its penetrability for steam. The usage of spading machines is ideal for soil loosening. The best results can be achieved if the soil is cloddy at greater depth and granulated at lesser depth. In practice, working with at least two sheets simultaneously has proven highly effective: while one sheet is used for steaming, the other is prepared for steam injection, so unnecessary breaks in steaming are avoided. Depth steaming with vacuum Steaming with vacuum, which is induced through a mobile or fixed pipe system at depth in the area to be steamed, is the method that achieves the best penetration. Despite high capital costs, the fixed installation of drainage systems is reasonable for intensively used areas, since steaming depths of up to 80 cm can be achieved. In contrast to fixed drainage systems, the pipes in mobile suction systems lie on the surface. A central suction pipeline consisting of zinc-coated, fast-coupling pipes is connected at a regular spacing of 1.50 m, and the ends of the hoses are pushed into the soil to the desired depth with a special tool. The steaming area is covered with a special steaming sheet and weighted all around, as with sheet steaming. The steam is injected underneath the sheet through an injector and protection tunnel. While on short areas of up to 30 m length steam is injected frontally, on longer areas steam is introduced in the middle of the sheet using a T-connection branching out to both sides. As soon as the sheet is inflated to approximately 1 m by the steam pressure, the suction turbine is switched on. First, the air in the soil is removed via the suction hoses; a partial vacuum forms and the steam is pulled downward. During the final phase, when the required steaming depth has been reached, the ventilator runs non-stop and surplus steam is blown out. To ensure that this surplus steam is not lost, it is fed back under the sheet. As with all other steaming systems, a post-steaming period of approximately 20–30 minutes is required. Steaming time is approximately 1 hour per 10 cm of steaming depth, and the steam requirement is approximately 7–8 kg/m2; the sketch below turns these rules of thumb into a simple budget.
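The two rules of thumb just quoted (about 1 hour per 10 cm of depth, and 7–8 kg of steam per m2) are enough to budget a vacuum steaming job and to check whether a given boiler can keep up. The numbers in the example below (area, depth, boiler capacity) are invented for illustration.

def vacuum_steaming_budget(area_m2, depth_cm, boiler_kg_per_h):
    """Rough time and steam budget for depth steaming with vacuum,
    using the rules of thumb above. All inputs are illustrative."""
    hours = depth_cm / 10.0              # ~1 h per 10 cm of depth
    steam_kg = 7.5 * area_m2             # midpoint of 7-8 kg/m2
    required_rate = steam_kg / hours     # kg/h the boiler must deliver
    feasible = required_rate <= boiler_kg_per_h
    return hours, steam_kg, required_rate, feasible

# Example: 300 m2 steamed to 40 cm depth with a 700 kg/h boiler.
print(vacuum_steaming_budget(300, 40, 700))
# (4.0, 2250.0, 562.5, True): 4 hours, 2.25 t of steam, boiler sufficient.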
The most important requirement, as with all steaming systems, is that the soil is well loosened before steaming, to ensure optimal penetration. Negative pressure technique The negative pressure technique generates an adequate soil temperature at a depth of 60 cm, and complete control of nematodes, fungi and weeds is achieved. In this technique, the steam is introduced under the steaming sheet and forced to enter the soil profile by a negative pressure, created by a fan that sucks the air out of the soil through buried perforated polypropylene pipes. This system requires a permanent installation of perforated pipes in the soil, at a depth of at least 60 cm so that they are protected from the plough. Steaming with hoods A steaming hood is a mobile device made of corrosion-resistant materials such as aluminium, which is set down onto the area to be steamed. In contrast to sheet steaming, cost-intensive working steps such as laying out and weighting the sheets do not occur; however, the area steamed per working step is smaller, in accordance with the size of the hood. Outdoors, a hood is positioned either manually or by tractor with a special pre-stressed four-point suspension arm. Steaming time amounts to 30 minutes for penetration down to a depth of 25 cm, at which a temperature of 90 °C can be reached. In large stable glasshouses, the hoods are attached to tracks; they are lifted and moved by pneumatic cylinders. Small and medium-sized hoods up to 12 m2 are lifted manually using a tipping lever or moved electrically with special winches. Combined surface and depth injection of steam (Sandwich Steaming) Sandwich steaming, developed in a project between DEIAFA, University of Turin (Italy, www.deiafa.unito.it) and Ferrari Costruzioni Meccaniche, represents a combination of depth and surface steaming and offers an efficient method of inducing hot steam into the soil. The steam is pushed into the soil simultaneously from the surface and from the depth. For this purpose, the area, which must be equipped with a deep steam injection system, is covered with a steaming hood, and the steam enters the soil from the top and the bottom at the same time. Sheets are not suitable, since a high pressure of up to 30 mm water column arises underneath the cover. Sandwich steaming offers several advantages. On the one hand, the application of energy can be increased to up to 120 kg of steam per m2/h; in comparison to other steaming methods, up to 30% energy savings can be achieved, and fuel consumption (e.g. heating oil) decreases accordingly. The increased application of energy leads to quick heating of the soil, which reduces the loss of heat. On the other hand, only half of the regular steaming time is needed. Comparison of sandwich steaming with other steam injection methods relating to steam output and energy demand (*): (*) in soil with max 30% moisture. Clearly, sandwich steaming reaches the highest steam output at the lowest energy demand. Partially integrated sandwich steaming Partially integrated sandwich steaming is an advanced combined method for steaming only the areas which are to be planted, purposely leaving out those areas which are not used. In order to avoid the risk of re-infection of steamed areas with pests from unsteamed areas, beneficial organisms can be injected directly into the hygienized soil via a soil activator (e.g. special compost). Partial sandwich steaming unlocks further potential savings in the steaming process.
Container / Stack steaming Stack steaming is used for thermally treating compost and substrates such as turf. Depending on the amount, the material to be steamed is piled up to a height of 70 cm in steaming boxes or in small dump trailers. Steam is evenly injected via manifolds. For large amounts, steaming containers and soil boxes are used which are equipped with suction systems to improve steaming results. Very small amounts can be steamed in special small steaming devices. The amount of soil steamed should be chosen so that the steaming time is at most 1.5 h, in order to avoid large quantities of condensed water in the bottom layers of the soil. In light substrates, such as turf, the throughput per hour is significantly higher. History Modern soil steam sterilization was first discovered in 1888 (by Frank in Germany) and was first commercially used in the United States (by Rudd) in 1893 (Baker 1962). Since then, a wide variety of steam machines have been built to disinfest both commercial greenhouse and nursery field soils (Grossman and Liebman 1995). In the 1950s, for example, steam sterilization technologies expanded from the disinfestation of potting soil and greenhouse mixes to the commercial production of steam rakes and tractor-drawn steam blades for fumigating small acreages of cut flowers and other high-value field crops (Langedijk 1959). Today, even more effective steam technologies are being developed. Application of hot steam In horticulture as well as nurseries, for sterilization of substrates and top soil. In agriculture, for sterilization and treatment of food waste for pig fattening and heating of molasses. In mushroom cultivation, for pasteurization of growing rooms, sterilization of top soil and combined application as heating. In wineries, as a combination boiler for sterilization and cleaning of storage tanks, tempering of mash and warm water generation. References Further reading (1997): STEAM - The Hottest Alternative to Methyl Bromide. American Nurseryman, August 15, 1997, 37–43 (1995): Agricultural Production Without Methyl Bromide - Four Case Studies; CSIRO Division of Entomology and UNEP IE's OzonAction Programme under the Multilateral Fund, 1995 (1993): Steaming is still the most effective way of treating contaminated media. Greenhouse Manager 1993, 110(10), 88–89 (1992): Beneficial fungus increases yields, profits in commercial production. Greenhouse Manager 1992, 10, 105 (2008): Introduction to Soil Steaming (1962): Principles of heat treatment of soil and planting material. J. Austral. Inst. of Agric. Sci. 1962, 28(2), 118–126 (1954): Steam pressure in soil sterilisation I. In bins. The Journal of Horticultural Science 1954/29, 89–97 (1955): Steam pressure in soil sterilisation II. Glasshouse in situ sterilising. Journal of Horticultural Science 1955/1, 43–55 (2003): Modeling and control of steam soil disinfestation processes. Biosystems Engineering, 84, 3, 247–256 (1964): Temperature distribution and performance in balloon soil steaming. Hort. Res. 4 1964/1, 27–41 (1960): Investigations of the technique of soil steaming. Acta Agriculturae Scandinavica Supplementum 9, Stockholm (1946): Soil steaming for disease control. Soil Science 61, 83–92 (1947): Studies on steam sterilisation of soils I. Some effects on physical, chemical and biological properties. Canadian Journal of Research, C, 25, 189–208 (1957): The use of steam for soil sterilisation. I. Physical aspects of soil sterilisation - II. Practical aspects of soil sterilisation - III.
Selection of boiler, steam mains and distribution systems. Industrial Heating Engineer 19, 3–6, 24, 38–41, 67–71 (1957): Some experiments on the steam sterilising of soil. I. Passage of heat through fine soil - II. Heating of clods of soil. Journal of Agricultural Engineering Research 2, 4, 262 (1959): Some experiments on steam sterilising of soil. III. The passage of heat downwards below the point of injection. Journal of Agricultural Engineering Research 4, 153–160 (1959): Some experiments on steam sterilising of soil. IV. The effect of covering the soil surface. Journal of Agricultural Engineering Research 4, 153–160 (1955): Disinfection of soil by heat, flooding and fumigation. Botanical Review 21, 189–250 (1957): Heat treatment of soil, ed. The UC System for producing healthy container-grown plants. University of California, Division of Agricultural Sciences, Oakland, CA External links Soil Steaming and Steam Boiler Blog Boilers Organic food Soil science Steam power
Soil steam sterilization
[ "Physics", "Chemistry" ]
3,079
[ "Physical quantities", "Steam power", "Power (physics)", "Boilers", "Pressure vessels" ]
15,947,157
https://en.wikipedia.org/wiki/Non-smooth%20mechanics
Non-smooth mechanics is a modeling approach in mechanics which does not require the time evolutions of the positions and of the velocities to be smooth functions. Due to possible impacts, the velocities of the mechanical system are allowed to undergo jumps at certain time instants in order to fulfill the kinematical restrictions. Consider for example a rigid model of a ball which falls on the ground. Just before the impact between ball and ground, the ball has a non-vanishing pre-impact velocity. At the impact time instant, the velocity must jump to a post-impact velocity which is at least zero, or else penetration would occur (a minimal numerical sketch of this example is given after the references). Non-smooth mechanical models are often used in contact dynamics. See also Contact dynamics Unilateral contact Jean Jacques Moreau References Acary V., Brogliato B. Numerical Methods for Nonsmooth Dynamical Systems. Applications in Mechanics and Electronics. Springer Verlag, LNACM 35, Heidelberg, 2008. Brogliato B. Nonsmooth Mechanics. Models, Dynamics and Control. Communications and Control Engineering Series, Springer-Verlag, London, 2016 (3rd Ed.) Demyanov, V.F., Stavroulakis, G.E., Polyakova, L.N., Panagiotopoulos, P.D. "Quasidifferentiability and Nonsmooth Modelling in Mechanics, Engineering and Economics", Springer 1996. Yang Gao, David, Ogden, Ray W., Stavroulakis, Georgios E. (Eds.) "Nonsmooth/Nonconvex Mechanics Modeling, Analysis and Numerical Methods", Springer 2001 Glocker, Ch. Dynamik von Starrkoerpersystemen mit Reibung und Stoessen, volume 18/182 of VDI Fortschrittsberichte Mechanik/Bruchmechanik. VDI Verlag, Düsseldorf, 1995 Glocker Ch. and Studer C. Formulation and preparation for numerical evaluation of linear complementarity systems. Multibody System Dynamics 13(4):447-463, 2005 Jean M. The non-smooth contact dynamics method. Computer Methods in Applied Mechanics and Engineering 177(3-4):235-257, 1999 Mistakidis, E.S., Stavroulakis, Georgios E. "Nonconvex Optimization in Mechanics. Algorithms, Heuristics and Engineering Applications by the F.E.M.", Springer, 1998 Moreau J.J. Unilateral Contact and Dry Friction in Finite Freedom Dynamics, volume 302 of Non-smooth Mechanics and Applications, CISM Courses and Lectures. Springer, Wien, 1988 Pfeiffer F., Foerg M. and Ulbrich H. Numerical aspects of non-smooth multibody dynamics. Comput. Methods Appl. Mech. Engrg 195(50-51):6891-6908, 2006 Potra F.A., Anitescu M., Gavrea B. and Trinkle J. A linearly implicit trapezoidal method for integrating stiff multibody dynamics with contacts, joints and friction. Int. J. Numer. Meth. Engng 66(7):1079-1124, 2006 Stewart D.E. and Trinkle J.C. An Implicit Time-Stepping Scheme for Rigid Body Dynamics with Inelastic Collisions and Coulomb Friction. Int. J. Numer. Methods Engineering 39(15):2673-2691, 1996 Mechanics Dynamical systems
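To make the bouncing-ball example above concrete, here is a minimal time-stepping sketch in the spirit of the event-capturing schemes treated in the references: the velocity is integrated smoothly while the ball is in the air, and is allowed to jump via a Newton impact law whenever the unilateral constraint q >= 0 becomes active. It is a toy illustration under these stated assumptions, not an implementation of any particular method from the cited works.

# Bouncing ball with unilateral contact q >= 0 and Newton impact law
# v_plus = -e * v_minus: a minimal non-smooth time-stepping sketch.
g = 9.81      # gravity (m/s^2)
e = 0.8       # coefficient of restitution (0 = plastic, 1 = elastic)
dt = 1e-3     # time step (s)

q, v = 1.0, 0.0                    # initial height (m) and velocity (m/s)
for _ in range(5000):              # simulate 5 s
    v -= g * dt                    # smooth phase: integrate gravity
    q += v * dt
    if q <= 0.0 and v < 0.0:       # constraint active: non-smooth event
        q = 0.0                    # enforce the kinematic restriction
        v = -e * v                 # velocity jumps (no smooth transition)

print(q, v)  # near rest: rebound height decays by a factor e**2 per impact

A virtue of stepping schemes like this is that they march straight through the accumulation of ever-smaller impacts (Zeno behaviour), which is one motivation for the time-stepping methods developed in the references above.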
Non-smooth mechanics
[ "Physics", "Mathematics", "Engineering" ]
736
[ "Mechanics", "Mechanical engineering", "Dynamical systems" ]
15,947,535
https://en.wikipedia.org/wiki/6-APA
6-APA ((+)-6-aminopenicillanic acid) is a chemical compound used as an intermediate in the synthesis of β-lactam antibiotics. The major commercial source of 6-APA is still natural penicillin G. The semi-synthetic penicillins derived from 6-APA are also referred to as penicillins and are considered part of the penicillin family of antibiotics. In 1958, Beecham scientists at Brockham Park, Surrey, found a way to obtain 6-APA from penicillin. Other β-lactam antibiotics could then be synthesized by attaching various side-chains to this nucleus. The reason this was achieved so many years after the commercial development of penicillin by Howard Florey and Ernst Chain is that penicillin itself is very susceptible to hydrolysis, so direct replacement of the side-chain was not a practical route to other β-lactam antibiotics. References Beta-lactam antibiotics Sulfur heterocycles Amines
6-APA
[ "Chemistry" ]
212
[ "Amines", "Bases (chemistry)", "Functional groups" ]
15,948,517
https://en.wikipedia.org/wiki/Narada%20multicast%20protocol
The Narada multicast protocol is a set of specifications which can be used to implement overlay multicast functionality on computer networks. It constructs an overlay tree from a redundantly meshed graph of nodes; source-specific shortest-path trees are then constructed from reverse paths (this step is sketched below). Group management is equally distributed over all nodes, because each overlay node keeps track of all group members through periodic heartbeats from all members. The discovery and tree building are similar to DVMRP. External links "An Evaluation of Three Application-Layer Multicast Protocols" "Overlay Multicast & Content distribution" References Yang-hua Chu, et al. A case for end system multicast, IEEE Journal on Selected Areas in Communications, 2002. Computer networking Routing protocols
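As a rough illustration of the tree-building step mentioned above, the sketch below runs a shortest-path computation over a small redundant mesh and reads the source-specific tree off the resulting parent pointers (the reverse paths). The mesh, link costs and node names are invented; this illustrates the idea only, not Narada's actual message formats or its heartbeat-based mesh maintenance.

import heapq

# A small redundantly meshed overlay: node -> {neighbor: link cost}.
mesh = {
    "S": {"A": 1, "B": 4},
    "A": {"S": 1, "B": 2, "C": 5},
    "B": {"S": 4, "A": 2, "C": 1},
    "C": {"A": 5, "B": 1},
}

def source_tree(mesh, source):
    """Dijkstra over the mesh; the parent pointers (reverse paths)
    form the source-specific shortest-path tree."""
    dist = {source: 0}
    parent = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for w, cost in mesh[u].items():
            if d + cost < dist.get(w, float("inf")):
                dist[w] = d + cost
                parent[w] = u              # reverse path toward the source
                heapq.heappush(heap, (dist[w], w))
    return parent

print(source_tree(mesh, "S"))  # {'A': 'S', 'B': 'A', 'C': 'B'}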
Narada multicast protocol
[ "Technology", "Engineering" ]
153
[ "Computer networking", "Computer engineering", "Computer network stubs", "Computer science", "Computing stubs" ]
15,948,564
https://en.wikipedia.org/wiki/Radha%20Gobinda%20Chandra
Radhagobinda Chandra OARF FAAVSO, FBBA, AFOEV (; July 17, 1878 – April 3, 1975) was a Bengali astronomer. He was a pioneer of observational astronomy in the region of Bengal, comprising modern-day Bangladesh and West Bengal. He was born in Jessore, Undivided Bengal, British India. Radha Gobinda is especially famous for his observations of variable stars: he made more than 49,700 variable star observations and became one of the first international members of the American Association of Variable Star Observers. Biography Radha Gobinda was born in a small village named Bagchar in the district of Jessore in Undivided Bengal, British India, on July 17, 1878. His father, Gorachand, was the assistant of a local doctor, and his mother, Padmamukh, was a housewife. His primary education started at Bakchar School. After passing from there he was admitted to Jessore Zilla School at the age of 10. Around this time Chandra started observing the stars in the sky with the naked eye, but he was not much interested in formal education; instead his main interest was centered on the mysterious night sky. Due to this lack of attention to formal education he failed the Secondary School Certificate (SSC) examination three times in succession. In 1899, at the age of 21, Radhagobinda married Gobinda Mohini, a girl from Murshidabad who was just 9 years old at that time. Together they had a boy named Kal (English: time) and a girl named Barsha (English: rain). After his marriage Chandra attempted the SSC exam for the last time and failed again. Following these failures he left formal education and set out to find a job. After searching for two years he was finally able to get a job as an ordinary treasurer in the Jessore collectorate office; his monthly salary was then only 15 taka. Later he became treasury clerk, and after that chief treasurer, of this office. From a very early age he had an immense interest in astronomy, and later in life he took up amateur astronomy on his own. When he was in grade 6 at school, his class had a textbook called Charupath, in which he read an inspiring piece on astronomy and cosmology by the Bengali writer Akshay Kumar Datta; reading it made him want to become an astronomer. Later he wrote about this in his autobiography. He was first practically introduced to the sky when he obtained a scientific apprenticeship with a lawyer named Kalinath Mukherjee, who was then editing the 'Star Atlas'. Observation of Halley's Comet In 1910 (April–July), Chandra observed Halley's Comet from Jessore with his small binoculars, as he did not have powerful binoculars or any other instrument to help him. He published detailed information from his observations in the 'Hindu Magazine' of that time. Telescope collection In 1912, Chandra purchased a 3-inch lens telescope from England for 13 pounds. Contributions in astronomy From then on, he continued regular observation of variable stars with the help of the 'Star Atlas' by Kalinath Mukherjee. He communicated a total of 37,215 trained-eye observations up to 1954. He worked at a longitude far from that of most observers, improving the temporal completeness of the observational records for the stars he observed. Discovery of nova On June 7, 1918, during one of his regular nights of observation, Chandra suddenly noticed a bright star. He tried to match it with the star map but found no match. He observed it for the next few days and came to the conclusion that it was new: in the language of astronomy, a nova.
He published a detailed description of this nova in the 'Probashi' magazine of that time. This nova was later named 'Nova Aquila-3'. Membership of AAVSO Chandra sent his observation reports to Edward Charles Pickering, who was then the director of the Harvard College Observatory. Pickering gave him a great deal of encouragement and sent him some books, and invited him to become a member of the American Association of Variable Star Observers (AAVSO). He became a member of the AAVSO, which, in 1926, gave him a 6-inch aperture telescope sent from the USA. The first international member of the AAVSO was Giovanni B. Lacchini of Faenza, Italy; in his lifetime Lacchini contributed over 58,000 observations to the AAVSO. Among other early international observers was Radha G. Chandra, who made over 49,700 observations up to 1954. Retirement It was 1954 when Chandra finally retired from observing, at the age of 76. He was asked to pass on the AAVSO telescope to Manali Kallat Vainu Bappu (1927–82), then at Nainital. The Elmer-Chandra telescope, one of the very few American telescopes in British India, is now at Kavalur Observatory. This most dedicated observer, who worked outside the pale of the astronomical establishment, died in 1975. References External links The AAVSO and International Cooperation, about the works of Radha Gobinda. Radha Gobinda Chandra - A Pioneer in Astronomical Observations in India by Amalendu Bandyopadhyay, Birla Planetarium and Ranatosh Chakraborti, Surendranath College. 1878 births 1975 deaths Astronomers from British India Indian astronomers 19th-century Indian astronomers 20th-century Indian astronomers 19th-century Hindus Scientists from West Bengal Bengali astronomers Bangladeshi astronomers
Radha Gobinda Chandra
[ "Astronomy" ]
1,125
[ "Astronomers", "Bengali astronomers" ]
15,949,402
https://en.wikipedia.org/wiki/HD%20111031
HD 111031 (50 G. Corvi) is a double star in the southern constellation of Corvus. With an apparent visual magnitude of 6.87, it is considered too faint to be readily visible to the naked eye. The distance to this star is 102 light years, but it is drifting closer to the Sun with a radial velocity of −20 km/s. It has an absolute magnitude of 4.42. The star has a relatively large proper motion, traversing the celestial sphere at an angular rate of . This object is a solar analog with a stellar classification of G5 V; a G-type main-sequence star like the Sun that is generating energy through core hydrogen fusion. It is around five billion years old and is chromospherically inactive, with a projected rotational velocity of 1.7 km/s. The star has 1.13 times the mass and 1.27 times the radius of the Sun. It is radiating 1.5 times the Sun's luminosity from its photosphere at an effective temperature of 5,836 K. In 2020, a stellar companion was identified using high-contrast imaging. The study authors deem this most likely a K-type main-sequence star with a class of K5V, an angular separation of along a position angle (PA) of 300° corresponding to a projected separation of , and around 11–15% of the mass of the Sun. An independent study published in 2021 identified a companion through speckle imaging. They propose this is a faint red dwarf with a class of M6 or later and a visual magnitude difference of 7.9 or more compared to the primary. It is located at a separation of along a PA of 121°, as of 2021. A 2022 study claimed the presence of a brown dwarf companion to this star based on radial velocity and astrometry observations, but according to a 2023 follow-up study this was in fact a detection of the previously known stellar companion, poorly characterized due to the baseline of observations being much shorter than the companion's orbital period. References G-type main-sequence stars Double stars Corvus (constellation) BD-11 3361 3746 Corvi, 50 111031 062345
HD 111031
[ "Astronomy" ]
461
[ "Corvus (constellation)", "Constellations" ]
15,949,545
https://en.wikipedia.org/wiki/HD%2024040
HD 24040 is a star with two orbiting exoplanets in the equatorial constellation of Taurus. The star is too faint to be viewed with the naked eye, having an apparent visual magnitude of 7.50. Based on parallax measurements, it is located at a distance of 152 light years. However, it is drifting closer to the Sun with a radial velocity of −9.4 km/s. This is a G-type main-sequence star with a stellar classification of G2V. It is a metal-rich star with an age of around 4.8 billion years. The star has 14% more mass than the Sun and 128% of the Sun's radius. It is radiating 1.8 times the luminosity of the Sun from its photosphere at an effective temperature of 5,917 K. The star is spinning slowly with a projected rotational velocity of 2.4 km/s. Planetary system A long-period planet was discovered in 2006 based on observations made at the W. M. Keck Observatory in Hawaii. However, because the observations covered less than one complete orbit, there were only weak constraints on the period and mass. The first reliable orbit for HD 24040 b was obtained by astronomers at Haute-Provence Observatory in 2012, who combined the Keck measurements with ones from the SOPHIE and ELODIE spectrographs. The most recent orbit, published in 2015, added additional Keck measurements and refined the orbital parameters. A linear trend in the radial velocities, indicating a possible additional companion, was detected at Haute-Provence Observatory and was also detected at Keck, but at a much smaller amplitude. The linear trend was confirmed in 2021, together with the discovery of another planet, HD 24040 c. Planet c is in the habitable zone, with an eccentricity of less than 0.2, and may have a habitable moon. See also HD 154345 List of extrasolar planets References External links G-type main-sequence stars Planetary systems with two confirmed planets Taurus (constellation) Durchmusterung objects 024040 017960
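The detection history described above, a radial-velocity planet signal plus a linear trend from an unseen outer companion, can be made concrete with a toy model. The sketch below is illustrative only; the amplitude, period, and trend values are hypothetical and not taken from the published fits:

```python
import math

def radial_velocity(t_days, K, P_days, phase, gamma, trend):
    """Toy RV model: a circular-orbit planet signal of semi-amplitude K
    plus a linear trend from a distant companion, as described above."""
    return K * math.sin(2 * math.pi * t_days / P_days + phase) + gamma + trend * t_days

# Hypothetical numbers for a long-period giant planet with a small drift
for t in range(0, 4001, 1000):
    v = radial_velocity(t, K=50.0, P_days=3500.0, phase=0.0, gamma=0.0, trend=0.003)
    print(f"t = {t:4d} d, v = {v:7.2f} m/s")
```

When the observations span much less than one period, K and P in such a model are strongly degenerate, which is why the 2006 orbit was only weakly constrained.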
HD 24040
[ "Astronomy" ]
424
[ "Taurus (constellation)", "Constellations" ]
15,949,734
https://en.wikipedia.org/wiki/Live%20Communications%20Server%202003
Microsoft Office Live Communications Server 2003 provides a real-time communications platform for voice, video, and instant messaging. Microsoft Office Live Communications Server 2003 (LCS) provided many capabilities that were notably absent from the company's earlier Exchange IM solution, including encryption, logging, and standards-based protocols. Contemporary reviewers saw LCS as a compelling glimpse of the future of corporate IM, but expected it to come into its own only after companies rolled out Windows Server 2003 and the Office 2003 suite. Other versions Live Communications Server 2005 References Microsoft Office Live Communications Server 2003 Overview Instant messaging server software
Live Communications Server 2003
[ "Technology" ]
118
[ "Instant messaging", "Instant messaging server software" ]
15,949,839
https://en.wikipedia.org/wiki/S%20Coronae%20Borealis
S Coronae Borealis (S CrB) is a Mira variable star in the constellation Corona Borealis. Its apparent magnitude varies between 5.3 and 13.6, with a period of 360 days—just under a year. Within the constellation, it lies to the west of Theta Coronae Borealis, and around 1 degree southeast of the eclipsing binary star U Coronae Borealis. Variability S Coronae Borealis was discovered to vary in brightness by the German amateur astronomer Karl Ludwig Hencke in 1860. It was classified as a long-period variable star as other similar objects were discovered, and later as a Mira variable. The maximum range of variation is from magnitude 5.3 to 13.6, although individual maxima and minima can vary in brightness. The period of 360 days is fairly predictable. Properties S Coronae Borealis is a cool red giant on the asymptotic giant branch (AGB). It pulsates, which causes its radius and temperature to change. One calculation found a temperature range of 2,350 K to 2,600 K, although a more modern calculation gives a temperature of 2,864 K. Similarly, a calculation of the varying radius gives , although a modern calculation of the radius gives . The bolometric luminosity varies much less than the visual magnitude and is estimated to be . Its parallax has been measured by very-long-baseline interferometry (VLBI), yielding a result of 2.39 ± 0.17 milliarcseconds, which converts to a distance of 1300 ± 100 light-years. The masses of AGB stars are poorly known and cannot be calculated from their physical properties, but they can be estimated using asteroseismology. The pulsations of S Coronae Borealis lead to a mass estimate of 1.34 times that of the Sun. References Corona Borealis Mira variables Coronae Borealis, S M-type giants 136753 BD+31 2725 075143 Emission-line stars
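The parallax-to-distance conversion used in the text is simple arithmetic: the distance in parsecs is the reciprocal of the parallax in arcseconds. A minimal check of the quoted figures (a sketch, not from the source):

```python
LY_PER_PARSEC = 3.2616  # light years per parsec

def parallax_to_light_years(parallax_mas):
    """Distance in light years from a parallax in milliarcseconds."""
    return (1000.0 / parallax_mas) * LY_PER_PARSEC

# 2.39 +/- 0.17 mas, as measured by VLBI
for p_mas in (2.39 + 0.17, 2.39, 2.39 - 0.17):
    print(round(parallax_to_light_years(p_mas)))  # ~1274, ~1365, ~1469
```

The spread brackets the quoted 1300 ± 100 light-years.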
S Coronae Borealis
[ "Astronomy" ]
414
[ "Corona Borealis", "Constellations" ]
15,950,012
https://en.wikipedia.org/wiki/Timeline%20of%20binary%20prefixes
This timeline of binary prefixes lists events in the history of the evolution, development, and use of units of measure that are germane to the definition of the binary prefixes by the International Electrotechnical Commission (IEC) in 1998, used primarily with units of information such as the bit and the byte. Historically, computers have used many systems of internal data representation, methods of operating on data elements, and data addressing. Early decimal computers included the ENIAC, UNIVAC 1, IBM 702, IBM 705, IBM 650, IBM 1400 series, and IBM 1620. Early binary-addressed computers included the Zuse Z3, Colossus, Whirlwind, AN/FSQ-7, IBM 701, IBM 704, IBM 709, IBM 7030, IBM 7090, IBM 7040, IBM System/360 and the DEC PDP series. Decimal systems typically had memory configured in whole decimal multiples, e.g., blocks of 100 and later . The unit abbreviation 'K' or 'k', if it was used, represented multiplication by 1000. Binary memory had sizes of powers of two or small multiples thereof. In this context, 'K' or 'k' was sometimes used to denote multiples of 1,024 units or just the approximate size, e.g., either '64K' or '65K' for 65,536 (2^16). 1790s 1793 The French Commission temporaire de Poids & Mesures républicaines, Décrets de la Convention Nationale, proposes the binary prefixes double and demi, denoting a factor of 2 (2^1) and 1/2 (2^−1), respectively, in 1793. 1795 The prefixes double and demi are part of the original metric system adopted by France (with kilo for 1000) in 1795. These were not retained when the decadic SI prefixes were internationally adopted by the 11th CGPM conference in 1960. 1870s The metric prefix mega is established in 1873. 1930s The metric prefixes kilo (established 1795) and mega (established 1873) are widely used as the decimal multipliers 1,000 and 1,000,000 for units of frequency and impedance in the electronics industry. The Committee of the Verband Deutscher Elektrotechniker publishes suggested names and symbols for the metric prefixes with decimal meaning, i.e., giga (G = 10^9) and tera (T = 10^12). 1940s 1943–1944 J. W. Tukey coins the word "bit" as an abbreviation of "binary digit". 1947 "The Whirlwind I Computer is planned with a storage capacity of numbers of 16 binary digits each." 1948 Tukey's "bit" is referenced in the work of information theorist Claude Shannon. 1950s In the 1950s, "1 kilobit" meant 1,000 bits: "In the '50s, amazingly enough—and only total coincidence—I actually was given the job of writing the operational specifications [...] for what was called cross telling. They handed me this thing and said, 'You're going to define how the hand-over process works between direction centers', [...] and I had no idea what they were talking about. But we had [...] one-kilobit lines connecting the direction centers and I thought, 'Good God! bits a second. Well, we'll surely be able to figure out something to do with that. — Saverah Warenstein, former programmer at Lincoln Laboratory, IBM 1952 The first magnetic core memory, from the IBM 405 Alphabetical Accounting Machine, is tested successfully in April 1952. "Teaming up with a more experienced engineer, [Mike Haynes] built a core memory with just enough capacity to store all the information in an IBM punched card: 960 bits in an 80 × 12 array. In May 1952 it was successfully tested as a data buffer between a Type 405 alphabetical accounting machine and a Type 517 summary punch.
This first functional test of a ferrite core memory was made in the same month that a four-times smaller 16 × 16-bit ferrite core array was successfully tested at MIT." The IBM 701, a binary-addressed computer containing 72 Williams tubes of bits each, is released in April. Principles of Operation Type 701 does not use prefixes with lengths of words or size of storage. For example, it specifies that memory tubes hold words each. The IBM 737 optional magnetic core storage stores 36-bit words. Each plane stored 64 × 64 = bits. 1955 The IBM 704 (a binary machine) manual uses decimal arithmetic for powers of two, without prefixes "Magnetic core storage units are available with capacities of either or core storage registers; or two magnetic core storage units, each with a capacity of core storage registers, may be used. Thus, magnetic core storage units are available to give the calculator a capacity of , , or core storage registers." "Each drum has a storage capacity of words." 1956 The IBM 702 (a decimally addressed machine) Preliminary Manual of Information uses decimal arithmetic for powers of ten, without prefixes. "Electrostatic memory is the principal storage medium within the machine. It consists of cathode ray tubes which can store up to characters of information in the form of electrostatic charges ... Additional storage, as required, may be provided through the use of magnetic drum storage units, each having a capacity of characters." "A character may be a letter of the alphabet, a decimal number, or any of eleven different punctuation marks or symbols used in report printing." "Each one of the positions of memory is numbered from to and each stored character must occupy one of these positions." (page 8) The word byte, meaning eight bits, is coined by Dr. Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer. IBM 650 RAMAC (a decimal addressed machine) announcement "The 650 RAMAC combines the IBM 650 Magnetic Drum Data Processing Machine with a series of disk memory units which are capable of storing a total of 24-million digits. The 305 RAMAC is an entirely new machine which contains its own input and output devices and processing unit as well as a built-in 5-million-digit disk memory." 1957 The IBM 705 (a decimal addressed machine) Operating manual uses decimal arithmetic for powers of ten, without prefixes. "A total of characters can be stored within the main storage unit of the Type 705." "Each one of the positions in memory is numbered from 0000 to ." (page 17) "One or more magnetic drums are available as optional equipment with a capacity of characters each." Lewis, W. D., Coordinated broadband mobile telephone system Earliest instance of "kilobit" in both IEEE explore and Google Scholar: "Central controls the mobile link with a rate of 20 kilobits per second, or less". 1958 "64 million (226) bytes" is used in a memo by Dr. Werner Buchholz 1959 The term 32k is used in print to refer to a memory size of 32768 (215). The author is with the Westinghouse Electric Corporation. 
1960s 1960 The 11th Conférence Générale des Poids et Mesures (CGPM) announces the Système International d'Unités (SI) and adds the decimal metric prefixes giga, and tera, defined as 109 and 1012 Frequency Diversity Communications System is filed on May 13, 1960: "In actual construction, the delay line, which provides a total delay from one end to the other of one baud (10 microseconds for a 100 kilobit per second information rate), may be fabricated from lumped parameter elements, i.e., inductors and capacitors, in a well-known manner." "At a 100 kilobit per second information rate, both mark and space signals will generally be transmitted in any 0.0001 sec, interval, and therefore this requirement is easily met with conventional resistors and capacitors." The 8K core stores were getting fairly common in this country in 1954. The 32K store started mass production in 1956; it is the standard now for large machines and at least 200 machines of the size (or its equivalent in the character addressable machines) are in existence today (and at least 100 were in existence in mid-1959). 1955–1961 A search of the Computer History Museum's Stretch collection of 931 text documents dated from September 1955 through September 1961 shows no usage of 'k' or 'K' to describe main storage size. 1961 Quoted in OED as first instance of "bit", though "it is more usual" suggests it is already in common use (see timeline entry for 1957) Described device contains 512 words, 24 bits each (=  bits) "It is no longer reasonable to spend as much time to transmit an 80-bit address as 12 bits of message information – a 1500 to 1 ratio ... We have theoretically and experimentally proved that speech can be compressed from the straightforward requirement for 48 bit PCM channel capability to 2400 bits by the application of the Dudley syllabic vocoder." The IBM 7090 Data Processing System (a binary machine), Additional Core Storage (65 means 'approximately 65000') "The Additional Core Storage feature for the IBM 7090 Data Processing System provides a second IBM 7302 Core Storage, increasing the capacity of main storage by words. The block of storage represented by both 7302 units is referred to as "main storage unit". "Additional core storage provides two methods of using main storage: (1) The 65 mode – the computer program is enabled to address both of the main storage units, and (2) the 32 mode—the computer program is able to address only one storage unit, so that main storage capacity available to that program is effectively 32,768 words." The IBM 1410 Data Processing System, which used modified decimal addressing, uses decimal arithmetic for powers of ten, without prefixes "Core storage units are available in -, - or -character position capacities." "The matrix switch makes it possible to address any one of the 100 X-drive lines (in a 10 core array)." "The 40 core array requires valid five-position addresses from to ." "This operation check detects errors in programming that cause invalid addresses. Examples: -and-above on a 40 core array; -and-above on a 20 core array. On a 10 core array, invalid addresses are detected by the address-bus validity check." 1962 A reference to a "4 IBM 1401" meant characters of storage (memory). 1963 Ludwig uses bit in the decimal sense DEC Serial Drum Type 24 "Drums are equipped to store either 64, 128, or 256 data blocks, providing a memory capability of 16384, 32768, or 65536 computer words" (no abbreviations) Honeywell 200 Summary Description "The main memory is a magnetic core ... 
The memory unit supplied as part of the basic central processor has a capacity of characters, each of which is stored in a separate, addressable, memory location. This capacity may be expanded in modular increments by adding one -character module and additional 4,096-character modules." "Random access disc file and control (disc capacities of up to 100 million characters are available.)" "Up to eight drum storage units can be connected to the Model 270 Random Access Drum Control. Each drum provides storage for characters, allowing a total capacity of approximately 21 million characters." 1964 Gene Amdahl's seminal April 1964 article on IBM System/360 used 1K to mean 1024. Leng, Gordon Bell, et al., use K in the binary sense: "The computer has two blocks of 4K 18-bit words of memory, (1K = 1024 words), attached to its central processor" Data Processing Division press release distributed on April 7, 1964. "System/360 core storage memory capacity ranges from characters of information to more than ." IBM 7090/7094 Support Package for IBM System/360 – November "An IBM 1401 Data Processing System with the following minimum configuration is also required: 1. 4K positions of core storage" – ADDRESS SELECTION CONTROL APPARATUS – Filed April 6, 1964 "To facilitate understanding of the invention, the main storage area has been illustrated as being of 8K capacity; however, it is to be understood that the main storage area may be of larger capacity (e.g., 16K, 32K or 64K) by storing address selection control data in bit positions '2', '1' and '0' of M register 197, respectively." 1965 "Each IBM 2315 disk cartridge can hold the equivalent of more than one million characters of information." "One method of designing a slave memory for instructions is as follows. Suppose that the main memory has 64K words (where K = 1024) and, therefore, 16 address bits, and that the slave memory has 32 words and, therefore, 5 address bits." IBM 1620 CPU Model 1 (a decimal machine) System Reference Library, dated 19 July 1965, states: "A core storage module, which is addressable positions of magnetic core storage, is located in the 1620. Two additional modules are available ... Each core storage module ( positions) is made up of 12 core planes as shown in Figure 3. Each core plane contains all cores for a specific bit value." 1966 CONTIGUOUS BULK STORAGE ADDRESSING is filed on 3 January 1966 "Note that 'K' as used herein indicates 'thousands'. Each storage location in the present embodiment includes 64 data bits and 8 related parity bits, as described herein." "Thus, if only storage unit 1A were provided, it would contain addresses 0 through 32K; storage 1B would include addresses between 32K and 64K, storage 2A would contain addresses between 64K and 96K, ...". 1968 A Univac 9400 disc-based computer system "can have 2–8 8411 drives for 14.5–58 megabytes capacity. The 8411 has a transfer rate of 156K bytes per second." – using megabytes in a decimal sense. Donald Morrison proposes to use the Greek letter kappa ("κ") to denote 1024 bytes, "κ^2" to denote 1024 × 1024, and so on. (At the time, memory size was small, and only 'K' was in widespread use.) Wallace Givens responded with a proposal to use "" as an abbreviation for 1024 and "" or "" for 1024 × 1024, though he noted that neither the Greek letter nor lowercase letter "b" would be easy to reproduce on computer printers of the day.
Bruce Alan Martin of Brookhaven National Laboratory further proposed that the prefixes be abandoned altogether, and the letter B be used to indicate a base-2 exponent in binary scientific notation, similar to E in decimal scientific notation, to create shorthands like 3B20 for 3 × 2^20. 1969 IBM 1401 (a decimal machine) Simulator for IBM OS/360 "1401 features supported are advanced programming, sense switches, tapes, multiply, divide, 16K core, and all standard instructions except Select Stacker." "1401 core is simulated by bytes of S/360 core obtained dynamically." "Enough core must be available to allow at least 70K for a problem program area. If tape simulation is not required, this core requirement may be reduced to 50K with the removal of the tape Buffer area." HIGH DENSITY PERMANENT DATA STORAGE AND RETRIEVAL SYSTEM is filed on March 17, 1969, the earliest Google Patent search result containing "kilobyte". "The data word processor 606 handles the inflow and out-flow of byte-oriented input/output data and interleaved signals at a rate of, for example, 500 kilobytes per second. Instruction processing rates of four to eight per microsecond are required for such a data flow." Memory Control System is filed on October 29, 1969 "FIG. 2a shows a practical example of an operand address which consists of, for example 24 bits. It is assumed herein that each block includes 32 bytes, each sector includes 1 kilobyte, the buffer memory 116 includes 4 kilobytes, and read data is represented by one double word or 64 bits, as one word in this case consists of 32 bits." IBM System/360 Component Descriptions (IBM 2314 Direct Access Storage Facility) "Each module can store 29.17 million bytes or 58.35 million packed decimal digits ... total on-line storage capacity is 233.4 million bytes" "Each 11-disc pack (20 surfaces) has a storage capacity of 29 megabytes; maximum storage capacity (with the largest version using a ninth drive as a spare) is bytes." DEC PDP-11 (a binary-addressed machine) Handbook "PDP-11 addressing modes include ... and direct addressing to 32K words" (Page 2) This appears to be the only use of 'K' in this manual, though; elsewhere sizes are spelled out in full. Contrast the 1973 PDP-11/40 Manual, which defines 'K' as 1024 (below). "... each removable disc has a capacity of 2.3 million bytes or 3.07 million 6-bit characters. Up to four drives can be attached to a single controller, resulting in a total storage capacity of 9.2 million bytes." Usage of "million" and "" in a decimal sense to describe HDDs. 1970s 1970 "The following are excerpts from an IBM Data Processing Division press technical fact sheet distributed on 30 June 1970. Users of the Model 165 will have a choice of five main core storage sizes, ranging from to over 3-million bytes. Seven main memory sizes are available for the Model 155, ranging from to over 2-million bytes." "Each of the five system/360 model 75 computers (Fig. 2) has one megabyte of primary core storage plus four megabytes of large core storage (LCS, IBM 2361)." 1971 IBM System/360 Operating System: Storage Estimates uses K in a binary sense approximately 450 times, such as "System/360 Configuration: Model 40 with 64K bytes of storage and storage protection". Note the letter "K" is also sometimes used as a variable in this document (see page 23). 1972 Lin and Mattson introduce the term Mbyte. 1973 OCEANPORT, N.J., SEPT.
25, 1973 – A 16-bit minicomputer priced at under $2,000.00 in quantities and a 32-bit minicomputer priced at under $6,000.00 in quantities were introduced today by Interdata, Inc. The 16-bit mini, the Model 7/16, includes an 8KB memory unit in its basic configuration, and will be available for delivery in the first quarter of 1974. The single unit price of the 7/16 is $3,200.00. The 32-bit mini, the Model 7/32, includes a 32KB memory unit and will be available for delivery in the second quarter of 1974. The single unit price of the 7/32 is $9,950.00. DEC PDP-11/40 Manual "Direct addressing of 32 16-bit words or 64 8-bit bytes ( = )" (Page 1-1) Contrast the 1969 PDP-11 Handbook, which avoids this usage almost everywhere (above). 1974 The seminal 1974 Winchester HDD article makes extensive use of bytes with M being used in the conventional, 106 sense. Arguably all of today's HDD's derive from this technology. The October 1974 CDC Product Line Card unambiguously uses B to characterize HDD capacity in millions of bytes. 1975 The 15th CGPM defines the SI prefixes as 1015 and as 1018. Byte Magazine December 1975 article on IBM 5100 includes the following: "User memory starts at 16K bytes in the minimum configuration and can be expanded to 64K bytes (65,536)." Gordon Bell uses the term megabytes: 1976 DEC RK05/RK05J/RK05F disk drive maintenance manual "Bit Capacities (unformatted)" "25 million" | "50 million" ( bits/track × 406 | 812 tracks = | bits) The Memorex 1976 annual report has 10 instances of the use of megabyte to describe storage devices and media. Caleus Model 206-306 Maintenance Manual uses 3B to characterize a drive having bytes capacity. The first 5 inch floppy disk drive, the Shugart SA 400, is introduced in August 1976. The drive had 35 tracks and was single sided. The data sheet gives the unformatted capacity as 3125 bytes per track for a total of 109.4 bytes ( × 35 = ). When formatted with 256 byte sectors and 10 sectors per track the capacity is 89.6 bytes (256 × 10 × 35 = ). 1977 HP 7905A Disc Drive Operator's Manual "nearly 15 million bytes" with no other abbreviations 1977 Disk/Trend Report – Rigid Disk Drives, published June 1977 This first edition of the annual report on the hard disk drive industry makes extensive use of B as 106 bytes. The industry, in 1977, is segmented into nine segments ranging from "Disk Cartridge Drives, up to 12 B" to "Fixed Disk Drives, over 200 B." While the categories changed during the next 22 years of publication, Disk/Trend, the principal marketing study of the hard disk drive industry always and consistently categorized the industry in segments using prefixes and later in the decimal sense. VAX-11/780 Architecture Handbook 1977–78. Copyright 1977 Digital Equipment Corporation. Page 2-1 "physical address space of 1 gigabyte (30 bits of address)" The initial hardware was limited to 2 M bytes of memory utilizing the 4K MOS RAM chips. The VAX11/780 handbooks use M byte and Mbyte in the same paragraph. 1978 DEC RM02/03 Adapter Technical Description Manual "The RM02 or RM03 Disk Drive (Figure 1-1) is an 80 byte (unformatted; 67 byte formatted) ... storage device ... in the 16-bit format, the maximum storage capacity is data words per disk pack" ( × 16 / 8 = 8-bit bytes) 1979 Fujitsu M228X Manual "Storage capacity (unformatted)" "67.4 B", "84.2 B", etc. 
" Bytes" per track, 4 tracks per cylinder, 808+15 cylinders = bytes Sperry Univac Series V77 Microcomputer Systems Brochure, Circa 1978, Printed July 1979 Page 5: Table list memory options as 64KB, 128KB, and 256KB. Memory Expansion is up to 2048KB Page 9: "Memory for the V77-800 is available in 128K byte and 256K byte increments up to a maximum of 2 megabytes" Page 21: Moving Head Disks – units up to 232 million byte disk pack systems. Diskette – storage of 0.5 MB per drive. 1980s 1980 Shugart Associates Product Brochure, published June 1980 specifies the capacity of its two HDDs using megabytes and MB in a decimal sense, e.g. SA1000 formatted capacity is stated as "8.4 B" and is 256 × 32 × 1024 = bytes. Shugart Associates SA410/460 Data Sheet published October 1980 contains capacity specifications as follows: The same data sheet uses MByte in a decimal sense. 1981 8086 Object Module Formats "The 8086 MAS is 1 byte ()" Quantum Q2000 8" Media Fixed Disk Drive Service Manual "four models ... the Q2010 having an unformatted 10.66 Mb capacity on one disk platter and two heads, the ... 21.33 Mb ... 32.00 Mb ... 42.66 Mb" (1024 tracks × "10.40Kb" per track = 10649 "Kb", which they write as "10.66Mb", so 1 "Mb" = 1000 "Kb") (256 Bytes per sector, 32 Sectors/tk = 8192 bytes, which they write as "8.20Kb" per track) "Storage capacity of 10, 20, 30, or 40 megabytes" 4.34M bits/second transfer rate" Apple Disk III data sheet "Formatted Data Capacity: 140K bytes" Apple uses K in a binary sense since the actual formatted capacity is 35 tracks × 16 sectors/track × 256 bytes/sector = 140 KiB = 143.360 kB 1982 Brochure for the IBM Personal Computer (PC) "User memory: 16KB to more than 512KB", "single-sided 160KB or double-sided 320KB diskette drives" IBM Technical Reference: Personal Computer Hardware Reference Library "The drives are soft sectored, single or double sided, with 40 tracks per side. They are Modified Frequency Modulation (MFM) coded in 512 byte sectors, giving a formatted capacity of bytes per drive for single sided and bytes per drive for double sided." Seagate ST 506/412 OEM Manual "Total formatted capacity [...] is 5/10 megabytes (32 sectors per track, 256 bytes per sector, 612/1224 tracks)" 1983 IBM S/360 S/370 Principles Of Operation GA22-7000 includes as statement: "In this publication, the letters , and denote the multipliers 210, 220 and 230 respectively. Although the letters are borrowed from the decimal system and stand for 103, 106 and 109 they do not have decimal meaning but instead present the power of 2 closest to the corresponding power of 10." IBM 341 4-inch Diskette Drive unformatted capacity " bytes" "Total unformatted capacity (in kilobytes): 358.0" Maxtor XT-1000 brochure "Capacity, unformatted" 9.57 MB per surface = 10,416 bytes per track × 918 tracks per surface = 9,561,888 byte (decimal MB) Shugart Associates SA300/350 Data Sheet published c. November 1983 (one of the first MIC standard 3.5" FDDs) contains capacity specifications as follows: Shugart Associates, one of the leading FD companies used k in a decimal sense. 1984 The Macintosh Operating System is the earliest known operating system using the prefix K in a binary sense to report both memory size and HDD capacity. In the original 1984 Apple Macintosh ad, page 8, Apple characterized its 3 floppy disk as "400", that is, 800 × 512 byte sectors or bytes = 400 KiB. Similarly, the February 1984 Byte Magazine review describes the FD as "400 bytes". 1985 Exabyte Corp. founded September 1985. 
Apple introduced Macintosh Finder 5.0 with HFS (Hierarchical File System) along with the Mac's first hard drive, the Hard Disk 20. Finder 5.x displayed drive capacity in binary K units. The Hard Disk 20 Manual specified the HDD as having "Data capacity (formatted): bytes Bytes per block: 532 (512 user data, 20 system data) Total disk blocks: " and has the following definition in its glossary: "megabyte: Approximately one million bytes () of information. A 20 megabyte hard disk holds 20 million bytes of information, or 20,000 kilobytes (20,000K)" (Apple Hard Disk 20 Manual). The user data is × 512 = bytes here. 1986 Apple IIgs introduced September 1986 ProDos16 uses MB in a binary sense. Similar usage in "ProDOS Technical Reference Manual" (c) 1985, p. 5 & p. 163 Digital Large System Mass Storage Handbook (c) dated September 1986 "GByte: An abbreviation for one billion (one thousand million) bytes." p. 442 "M: An abbreviation for one million. Typically combined with a unit of measure, such as bytes (MBytes), or Hertz (MHz)." p. 444 1987 Seagate Universal Installation Handbook ST125 listed as 21 "Megabytes" formatted capacity; a later document seems to confirm that this is decimal. Disk/Trend Report – Rigid Disk Drives, October 1987 First use of GB in a decimal sense in this HDD marketing survey; Figure 1 states the "FIXED DISK DRIVES more than 1 GB" market size as $10,786.6 million. Webster's Ninth New Collegiate Dictionary (1987) has binary definitions for kilobyte and megabyte. kilobyte n [from the fact that 1024 (2^10) is the power of 2 closest to 1000] (1970): 1024 bytes megabyte n (1970): 1,048,576 bytes 1988 Imprimis Wren VII 5 Inch Rigid Disk Drive Data Sheet, printed 11/88 "Capacity of 1.2 gigabyte (GB)" 1989 IBM Enterprise Systems Architecture/370, Reference Summary (GX20-0406-0), p. 50 (the last page), has two tables, one recapping the decimal values of the powers of 2 and 16 up to 2^60, and one that is not reproduced here. Electronic News, 25 September 1989, "Market 1.5GB Drives" "Imprimis and Maxtor are the only two drive makers to offer the new generation of drives in the 1.5GB capacity range ..." "IBM, Hewlett-Packard, Fujitsu, Toshiba, Hitachi and Micropolis are expected to enter the market for 1.5GB capacity..." 1990s 1990 Matsuda et al. refer to 1,024 bits (32 × 32 optoelectronic switches) as "1-kb memory". GEOS ad "512K of memory" The enhanced DOS command line processor 4DOS 3.00 supports a number of additional conditions (DISKFREE, DOSMEM/DOSFREE, EMS, EXTENDED, FILESIZE and XMS) in IF commands, which allow testing for sizes in bytes, kilobytes (by appending a K) or megabytes (by appending an M), where 1K is defined as 1024 bytes and 1M is defined as 1,048,576 bytes. DEC RA90/RA92 Disk Drive Service Manual "RA90 Disk Drive ... Storage capacity, formatted 1.216 GBytes" 1991 The 19th CGPM defines the SI prefixes zetta and yotta as 10^21 and 10^24. May 13: Apple releases Macintosh System 7 containing Finder 7.0, which uses M in a binary sense to describe HDD capacity. The HP 95LX uses "1MB" in a binary sense to describe its RAM capacity. Micropolis 1528 Rigid Disk Drive Product Description "1.53 GBytes" ... "Up to 1.53 gigabytes (unformatted) per drive" "MBytes/Unit: 1531.1" Similar to a feature in 4DOS 3.00, the enhanced command line processor 4DOS 4.00 adds support for a number of variable functions (like %@FILESIZE[...]%), taking special arguments to control the format of the returned values: the lowercase letters k and m are used as decimal prefixes, whereas the uppercase letters K and M are used in their binary meaning. T. Smith, W. Moorman and T.
Dang refer to 2^20 microseconds as a "megamicrosecond (MUS)", mixing binary use of the prefix "mega-" with the conventional decimal prefix micro. 1993 While the HP 48G calculators are labelled 32K or 128K to describe their built-in SRAM capacity in a binary sense, the user manual variably uses the terms KB, KBytes and kilobytes in the same meaning. The enhanced command line processor 4DOS 5.00 introduces the concept of a general size range parameter /[smin,max] for file selection, recognizing lowercase letters k and m as decimal prefixes and uppercase letters K and M as binary prefixes. 1994 Feb: Microsoft Windows for Workgroups 3.11 File Manager uses MB in a binary sense to describe HDD capacity. Prior versions of Windows only used K in a binary sense to describe HDD capacity. Micropolis 4410 Disk Drive Information "1,052 MB Formatted Capacity" "Unformatted Per Drive 1,205 MB" (133.85 MB per surface, 9 read-write heads) The HP 200LX models use "1MB"/"2MB"/"4MB" in a binary sense to describe their RAM capacity. 1995 August: The International Union of Pure and Applied Chemistry's Interdivisional Committee on Nomenclature and Symbols proposed the new prefixes kibi (symbol Ki), mebi (Mi), gibi (Gi) and tebi (Ti), etc. for powers of 1024. 1996 FOLDOC defines the kilobyte (1 KB) as 1024 bytes (1024 B), with petabyte used in the binary sense of 1024^5 B. Markus Kuhn proposes a system with di prefixes, like the "dikilobyte" (K2B) and "dimegabyte" (M2B). It did not see significant adoption. 1997 January: Bruce Barrow endorses the International Union of Pure and Applied Chemistry's proposal for the prefixes kibi, mebi, gibi, etc. in "A Lesson in Megabytes" in IEEE Standards Bearer IEEE requires prefixes to take the standard SI meaning (e.g., mega always to mean 1000^2). Exceptions for binary meaning (mega to mean 1024^2) are permitted as an interim measure (where pointed out on a case-by-case basis) until a binary prefix could be standardised. FOLDOC defines the kilobyte (1 KB) as 1024 bytes and the megabyte (1 MB) as 1024 kilobytes (1024 KB). 1998 December: The IEC establishes unambiguous prefixes for binary multiples (KiB, MiB, GiB, TiB, PiB and EiB), reserving kB, MB, GB and so on for their decimal sense. Formally published in January 1999. 1999 Donald Knuth, who uses decimal notation like 1 KB = 1000 B, expresses "astonishment" that the proposal was adopted by the IEC, calling them "funny-sounding", and proposes that the powers of 1024 be designated as "large kilobytes" and "large megabytes" (abbreviated KKB and MMB, as "doubling the letter connotes both binary-ness and large-ness"). Double prefixes were formerly used in the metric system, however, with a multiplicative meaning ("KKB" would be equivalent to "MB"), and this proposed usage never gained any traction. In their November 1999 paper, Steven W. Schlosser, John Linwood Griffin, David F. Nagle and Gregory R. Ganger adopt the symbol B for byte and quote data throughput in bytes per second "... Although these numbers appear to yield a capacity of 2.98 B per sled, the capacity decreases ... This yields an effective capacity of about 2.098 B per sled. ..." "maximum throughput (B/s)" The IEEE 802.11-1999 standard introduces the time unit TU defined as 1024 μs. 2000s 2001 IBM, z/Architecture, Reference Summary Page 59, lists the powers of 2 and 16 and their decimal values. There is a column named 'Symbol', which lists K (kilo), M (mega), G (giga), T (tera), P (peta) and E (exa) for the powers of 2 of, respectively, 10, 20, 30, 40, 50, 60. Peuhkuri adopts IEC prefixes in his paper at the 2001 Internet Measurement Conference: "... 
allows maximum size of 2^24 that requires 1 GiB of RAM ... or acknowledgement numer [sic] is within 32 KiB range. ... on a PC with Celeron processor with 512 MiB of memory ..." The Linux kernel uses IEC prefixes. 2002 Markus Kuhn introduces the term kibihertz to mean 1024 Hz. "Most embedded clocks (state of the art is still a calibrated 32 kibihertz crystal) have a frequency error of at least 10^−5 (10 ppm), and therefore drift away from the TAI rate faster than 1 second per week." Mackenzie et al. 2002: use kibibyte (KiB), mebibyte (MiB), gibibyte (GiB); use the symbols KiB, MiB, accompanied by notes explaining that these are "a GNU extension to IEC 60027-2" JEDEC publishes the standard JESD100B.01, which lists the prefixes kilo (K), mega (M) and giga (G) qualified "as a prefix to units of semiconductor storage capacity", in "contrast with the SI prefix mega (M) equal to 10^6, as in a 1-Mb/s data transfer rate, which is equal to 1,000,000 bits per second." A table lists the binary prefixes (2^10)^1, (2^10)^2, (2^10)^3, (2^10)^4 and the decimal prefixes (10^3)^1, (10^3)^2, (10^3)^3, (10^3)^4. 2003 The World Wide Web Consortium publishes a Working Group Note describing how to incorporate IEC prefixes into mathematical markup. 2004 The 2004 revision of IEEE Standard Letter Symbols for Units of Measurement (SI Units, Customary Inch-Pound Units, and Certain Other Units), IEEE Std 260.1, incorporates IEC definitions for KiB, MiB etc., reserving the symbols kB, MB etc. for their decimal counterparts. Chris Hurley refers to 1.024 milliseconds as a "kilomicrosecond", mixing binary use of the prefix "kilo" with the conventional decimal prefix "micro". Thomas Maufer draws an equivalence between the "kilomicrosecond" and the "Time Unit" (TU) that was introduced by the IEEE 802.11-1999 standard. 2005 IEC extends the binary prefixes to include zebi (Zi) and yobi (Yi). IEC prefixes are adopted by the IEEE after a two-year trial period. On March 19, 2005, the IEEE standard IEEE 1541-2002 (Prefixes for Binary Multiples) was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period. 2006 The BIPM publishes the 8th SI Brochure including the note "These SI prefixes refer strictly to powers of 10. They should not be used to indicate powers of 2 [...]. The IEC has adopted prefixes for binary powers [...]. Although these prefixes are not part of the SI, they should be used in the field of information technology to avoid the incorrect usage of the SI prefixes." In addition to the k and m decimal as well as the K and M binary prefixes, 4DOS 7.50.141 (2006-12-24) adds support for g and G as decimal and binary prefixes, respectively, in variable functions and size range parameters. 2007 Windows Vista still uses the binary conventions (e.g., 1 KB = 1024 bytes, 1 MB = 1048576 bytes) for file and drive sizes, and for data rates. GParted uses IEC prefixes for partition sizes. Advanced Packaging Tool and Synaptic Package Manager use standard SI prefixes for file sizes. IBM uses "exabyte" to mean 1024^6 bytes. "Each address space, called a 64-bit address space, is 16 exabytes (EB) in size; an exabyte is slightly more than one billion gigabytes. The new address space has logically 2^64 addresses. It is 8 billion times the size of the former 2-gigabyte address space, or 18,446,744,073,709,600,000 bytes." Edward Michael McDonald III uses the terms zebibyte and yobibyte to mean 2^70 and 2^80 bytes, respectively, pointing out the need to distinguish between decimal and binary prefixes when dealing with the storage capacity of high performance computers.
2008 The US National Institute of Standards and Technology guidelines prohibit use of the SI prefixes k, M, ... in the binary sense, and suggest the IEC prefixes Ki, Mi ... for binary multiples. p. 29: "The names and symbols for the prefixes corresponding to 2^10, 2^20, 2^30, 2^40, 2^50, and 2^60 are, respectively: kibi, Ki; mebi, Mi; gibi, Gi; tebi, Ti; pebi, Pi; and exbi, Ei. Thus, for example, one kibibyte is also written as 1 KiB = 2^10 B = 1024 B, where B denotes the unit byte. Although these prefixes are not part of the SI, they should be used in the field of information technology to avoid the non-standard usage of the SI prefixes." The binary prefixes are defined in IEC Standard IEC 80000-13, formally incorporating them into the ISO/IEC series of standards of quantities and units. IBM WebSphere describes data transfer using unambiguous IEC prefixes: "Current file. The name of the file currently being transferred. The part of the individual file that has already been transferred is displayed in B, KiB, MiB, GiB, or TiB along with the total size of the file in parentheses. The unit of measurement displayed depends on the size of the file. B/s is bytes per second. KiB/s is kibibytes per second, where 1 kibibyte equals 1024 bytes. MiB/s is mebibytes per second, where 1 mebibyte equals 1,048,576 bytes. GiB/s is gibibytes per second where 1 gibibyte equals 1,073,741,824 bytes. TiB/s is tebibytes per second where 1 tebibyte equals 1,099,511,627,776 bytes." "The rate the file is being transferred in KiB/s (kibibytes per second, where 1 kibibyte equals 1024 bytes.)" 2009 Apple Inc. uses the SI decimal definitions for capacity (e.g., 1 kilobyte = 1000 bytes) in the Mac OS X v10.6 operating system to conform with standards body recommendations and avoid conflict with hard drive manufacturers' specifications. Frank Löffler and co-workers report disk size and computer memory in tebibytes. "For the largest simulations using 2048 cores this sums up to about 650 GiB per complete checkpoint and about 6.4 TiB in total (for 10 checkpoints)." The SourceForge web site: "For example, in 2009, the SourceForge web site reported file sizes using binary prefixes for several months before changing back to SI prefixes but switching the file sizes to powers of ten." The binary prefixes, as defined by IEC 80000-13, are incorporated into ISO 80000-1, including a note that "SI prefixes refer strictly to powers of 10, and should not be used for powers of 2." In ISO 80000-1, the application of the binary prefixes is not limited to computer technology. For example, 1 KiHz = 1024 Hz. 2010s 2010 The Ubuntu operating system uses the SI prefixes for base-10 numbers and IEC prefixes for base-2 numbers as of the 10.10 release. Baba Arimilli and co-workers use the pebibyte (PiB) for computer memory and disk storage and the exbibyte (EiB) for archival storage "Blue Waters will comprise more than 300.000 POWER7 cores, more than 1 PiB memory, more than 10 PiB disk storage, more than 0.5 EiB archival storage, and achieve around 10 PF/s peak performance." HP publishes a leaflet explaining use of SI and binary prefixes "To reduce confusion, vendors are pursuing one of two remedies: they are changing SI prefixes to the new binary prefixes, or they are recalculating the numbers as powers of ten." "For disk and file capacities, the latter remedy is more popular because it is much easier to recognize that 300 GB is the same as 300,000 MB than to recognize that 279.4 GiB is the same as 286,102 MiB." "For memory capacities, binary prefixes are more natural. For example, reporting a Smart Array controller cache size of 512 MiB is preferable to reporting it as 536.9 MB."
"HP is considering modifying its storage utilities to report disk capacity with correct decimal and binary values side-by-side (for example, '300 GB (279.4 GiB)'), and report cache sizes with binary prefixes ('1 GiB')." 2011 The GNU operating system uses the SI prefixes for base-10 numbers and IEC prefixes for base-2 numbers as of the parted-2.4 release (May 2011). "specifying partition start or end values using MiB, GiB, etc. suffixes now makes parted do what I want, i.e., use that precise value, and not some other that is up to 500KiB or 500MiB away from what I specified. Before, to get that behavior, you would have had to use carefully chosen values with units of bytes ('B') or sectors ('s') to obtain the same result, and with sectors, your usage would not be portable between devices with varying sector sizes. This change does not affect how parted handles suffixes like KB, MB, GB, etc." "Note that as of parted-2.4, when you specify start and/or end values using IEC binary units like 'MiB', 'GiB', 'TiB', etc., parted treats those values as exact, and equivalent to the same number specified in bytes (i.e., with the 'B' suffix), in that it provides no 'helpful' range of sloppiness. Contrast that with a partition start request of '4GB', which may actually resolve to some sector up to 500MB before or after that point. Thus, when creating a partition, you should prefer to specify units of bytes ('B'), sectors ('s'), or IEC binary units like 'MiB', but not 'MB', 'GB', etc." On its Archive Project Request Form, the University of Oxford uses IEC prefixes: "The initial amount of data to be archived (MiB GiB TiB )" The IBM Style Guide permits IEC prefixes or "SI prefixes" if used consistently and explained to the user "Whether you choose to use IEC prefixes for powers of 2 and SI prefixes for powers of 10, or use SI prefixes for a dual purpose ... be consistent in your usage and explain to the user your adopted system." 2012 June: Toshiba describes data transfer rates in units of MiB/s. In the same press release, SSD storage capacity is given in decimal gigabytes, accompanied by the footnote "One Gigabyte (GB) means 109 = 1,000,000,000 bytes using powers of 10. A computer operating system, however, reports storage capacity using powers of 2 for the definition of 1 GB = 1,073,741,824 bytes and therefore shows less storage capacity" July: Ola BRUSET and Tor Øyvind VEDAL are granted a patent citing the binary unit KiHz to mean 1024 hertz The Minnesota Supercomputing Institute of the University of Minnesota uses IEC prefixes to describe its supercomputing facilities "Itasca is an HP Linux cluster with 1,091 HP ProLiant BL280c G6 blade servers, each with two quad-core 2.8 GHz Intel Xeon X5560 'Nehalem EP' processors sharing 24 GiB of system memory, with a 40-gigabit QDR InfiniBand (IB) interconnect. In total, Itasca consists of 8,728 compute cores and 24 TiB of main memory." "Cascade consists of a Dell R710 head/login node, 48 GiB of memory; eight Dell compute nodes, each with dual X5675 six-core 3.06 GHz processors and 96 GiB of main memory; and 32 Nvidia M2070 GPGPUs. A compute node is connected to four GPGPUs, each of which has 448 3.13 GHz cores and 5 GiB of memory. Each GPU is capable of 1.2 single-precision TFLOPS and 0.5 double-precision TFLOPs." Phidgets Inc describes PhidgetSBC3 as a "Single board computer running Debian 7.0 with 128 MiB DDR2 SDRAM, 1 GiB Flash, integrated 1018 and 6 USB 2.0 High Speed 480Mbits/s ports". 
IBM's Customer Information Center uses IEC prefixes to disambiguate: "To reduce the possibility of confusion, this information center represents data storage using both decimal and binary units. Data storage values are displayed using the following format: #### decimal unit (binary unit). By this example, the value 512 terabytes is displayed as: 512 TB (465.6 TiB)" 2013 February: Toshiba distinguishes unambiguously between decimal and binary prefixes by means of footnotes. Hybrid drives MQ01ABD100H and MQ01ABD075H are described as having a buffer size of 32 B. "1 MB (megabytes) = 1,000,000 bytes, 1 GB (gigabytes) = 1,000,000,000 bytes, 1 TB (terabytes) = 1,000,000,000,000 bytes" "KiB (kibibytes) = 1,024 (2^10) bytes, MiB (mebibytes) = 1,048,576 (2^20) bytes, GiB (gibibytes) = 1,073,741,824 (2^30) bytes". March: Kevin Klughart uses the byte (B) and byte (B) as units for maximum volume size. The PRACE Best Practice Guide uses IEC prefixes for net capacity (300 B) and throughput (2 B/s). Nicla Andersson, of Sweden's National Supercomputer Centre, refers to the NSC's Triolith as having "42.75 B memory" and "75 B/s aggregate memory BW" and to a 2018 DARPA target of "32–64 B memory". August: Mitsuo Yokokawa, of Kobe University, describes the Japanese K Computer as having "1.27 (1.34) PiB" of memory. The official file server of the University of Stuttgart reports file sizes in bytes (B) and bytes (B). In their book IBM Virtualization Engine TS7700 with R3.0, Coyne et al. use IEC prefixes to distinguish them from decimal prefixes. Examples are "Larger, 1.1 GB (1 GiB) internal buffer on Model E06/EU6, 536.9 MB (512 MiB) for Model E05, 134.2 MB (128 MiB) for Model J1A" "Up to 160 bit/sec. native data rate for the Models E06 and EU6, four times faster than the model J1A at 40 bit/sec. (Up to 100 bit/sec. for the Model E05)" Maple 17 uses B and B as units of memory usage. November: The online computer dictionary FOLDOC defines the kilobyte as one thousand (1000) bytes, the megabyte as one million (1000^2) bytes, and the gigabyte as one billion (1000^3) bytes. 2014 February: Rahul Bali writes "the [Sequoia (IBM)] contains in total 1,572,864 processor cores with 1.5 PiB memory" "The total CPU plus coprocessor memory [of the Tianhe-2 (NUDT)] is 1,375 TiB." CDBurnerXP states disc sizes in mebibytes (MiB) and gibibytes (GiB), clarifying that "in Windows, if you see GB or MB it usually refers to GiB or MiB respectively". September: The HP 3PAR StoreServ Storage best practices guide uses binary prefixes for storage and decimal prefixes for speed. 2017 K Liao and co-authors approximate the year as 30 mebiseconds (30 Mis). 2019 The BIPM publishes the 9th SI brochure, confirming the position from its 8th brochure (published in 2006), with the note "The SI prefixes refer strictly to powers of 10. They should not be used to indicate powers of 2 [...]" 2020s 2020 A Californian court finds that, as the NIST specifies that prefixes such as "giga" are decimal rather than binary, and that California law specifies that the NIST definitions of measure "shall govern ... transactions in this state", and because the vendor of a 64 GB flash drive with 64 billion bytes indicated on the packaging of the drive that 1 GB = 1,000,000,000 bytes, they did not deceive consumers into believing that the drive had 64 × 1024 × 1024 × 1024 bytes. 2021 Ainslie, Halvorsen and Robinson point out the parallel with the confusion between a one-third octave and a one-tenth decade in acoustics. "The near coincidence between ten octaves and three decades (2^10 ≈ 10^3) is identical to the one that causes confusion in the computer industry by use of the term 'kilobyte' to mean 1024 B ... 
when the internationally accepted use of the prefix kilo requires it to mean 1000 B." 2022 February: IEEE 1541 is amended to include the prefixes and . November: The additional decimal prefixes ronna (R) for 1000^9 and quetta (Q) for 1000^10 are adopted by the International Bureau of Weights and Measures (BIPM). Binary counterparts to ronna and quetta were suggested in a consultation paper of the Consultative Committee for Units (CCU) for the International Committee for Weights and Measures as robi (Ri, 1024^9) and quebi (Qi, 1024^10), but so far they have not been adopted by the IEC or ISO. References B Units of information
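The ambiguity this timeline documents is easy to demonstrate in code. The sketch below (a generic illustration, not tied to any standard library routine) formats one byte count with SI decimal prefixes and with IEC binary prefixes:

```python
def format_bytes(n, binary=False):
    """Format a byte count with SI (kB, MB, ...) or IEC (KiB, MiB, ...) prefixes."""
    base = 1024 if binary else 1000
    units = ["KiB", "MiB", "GiB", "TiB"] if binary else ["kB", "MB", "GB", "TB"]
    value, unit = float(n), "B"
    for u in units:
        if value < base:
            break
        value /= base
        unit = u
    return f"{value:.1f} {unit}"

n = 64_000_000_000  # the "64 GB" flash drive from the 2020 court case
print(format_bytes(n))               # 64.0 GB  (decimal, as labelled by the vendor)
print(format_bytes(n, binary=True))  # 59.6 GiB (binary, as many operating systems report)
```

The same 64 billion bytes read as "64 GB" or "59.6 GiB" depending on the convention, which is precisely the gap the IEC prefixes were introduced to close.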
Timeline of binary prefixes
[ "Mathematics" ]
11,242
[ "Units of information", "Quantity", "Units of measurement" ]
15,950,691
https://en.wikipedia.org/wiki/MPEG-D
MPEG-D is a group of standards for audio coding formally known as ISO/IEC 23003 - MPEG audio technologies, published since 2007. MPEG-D consists of five parts: MPEG-D Part 1: MPEG Surround (a.k.a. Spatial Audio Coding) MPEG-D Part 2: Spatial Audio Object Coding (SAOC) MPEG-D Part 3: Unified speech and audio coding MPEG-D Part 4: Dynamic Range Control MPEG-D Part 5: Uncompressed audio in MPEG-4 File Format See also ISO/IEC JTC 1/SC 29 References ISO/IEC standards Audio codecs MPEG
MPEG-D
[ "Technology" ]
139
[ "Multimedia", "MPEG" ]
15,950,771
https://en.wikipedia.org/wiki/S%20Normae
S Normae (S Nor) is a yellow supergiant variable star in the constellation Norma. It is the brightest member of the open cluster NGC 6087. S Normae is a Classical Cepheid variable with a visual magnitude range of 6.12 to 6.77 and a period of 9.75411 days. The spectral type varies during the pulsation cycle from F8 to G0. Its mass has been measured at with reference to a close orbital companion, and it is over 3,000 times as luminous as the Sun. Companions S Normae is a spectroscopic binary, although the companion has now been resolved using the Hubble Space Telescope Wide Field Camera 3. The separation was 0.90" in April 2011, corresponding to 817 AU. This gives the rare opportunity for a direct determination of the mass of a Cepheid variable star and confirmation of other properties. It is a supergiant that is 6.3 times as massive as the Sun and 2,800 times as luminous. The companion is a blue-white main-sequence star of spectral type B9.5. There is a more distant 10th-magnitude companion at 30", unsurprising in the centre of an open cluster. It is TYC 8719-794-1, a chemically peculiar A or B class star. Three fainter companions have also been found: a 14th-magnitude star at 14" and two 16th-magnitude stars at 20". References External links Light curves in UBVRI colours Norma (constellation) Normae, S F-type supergiants 146323 6062 Classical Cepheid variables B-type main-sequence stars Durchmusterung objects 079932 Binary stars
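The projected separation quoted above follows from the small-angle relation: the separation in AU is the angular separation in arcseconds times the distance in parsecs. A sketch (the ~908 pc distance is derived by inverting the article's own numbers, not quoted by it):

```python
def projected_separation_au(sep_arcsec, distance_pc):
    """Small-angle relation: s[AU] = theta[arcsec] * d[pc]."""
    return sep_arcsec * distance_pc

print(round(817 / 0.90))                          # ~908 pc, the implied distance
print(round(projected_separation_au(0.90, 908)))  # ~817 AU, matching the article
```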
S Normae
[ "Astronomy" ]
358
[ "Norma (constellation)", "Constellations" ]
15,951,109
https://en.wikipedia.org/wiki/List%20of%20space%20telescopes
This list of space telescopes (astronomical space observatories) is grouped by major frequency ranges: gamma ray, x-ray, ultraviolet, visible, infrared, microwave, and radio. Telescopes that work in multiple frequency bands are included in all of the appropriate sections. Space telescopes that collect particles, such as cosmic ray nuclei and/or electrons, as well as instruments that aim to detect gravitational waves, are also listed. Missions with specific targets within the Solar System (e.g., the Sun and its planets) are excluded; see List of Solar System probes for these, and List of Earth observation satellites for missions targeting Earth. Two values are provided for the dimensions of the initial orbit. For telescopes in Earth orbit, the minimum and maximum altitudes are given in kilometers. For telescopes in solar orbit, the minimum distance (periapsis) and the maximum distance (apoapsis) between the telescope and the center of mass of the Sun are given in astronomical units (AU). Gamma ray Gamma-ray telescopes collect and measure individual, high-energy gamma rays from astrophysical sources. These are absorbed by the atmosphere, requiring that observations be done by high-altitude balloons or space missions. Gamma rays can be generated by supernovae, neutron stars, pulsars and black holes. Gamma-ray bursts, with extremely high energies, have also been detected, but their sources have yet to be identified. X-ray X-ray telescopes measure high-energy photons called X-rays. These cannot travel a long distance through the atmosphere, meaning that they can only be observed high in the atmosphere or in space. Several types of astrophysical objects emit X-rays, from galaxy clusters, through black holes in active galactic nuclei, to galactic objects such as supernova remnants, stars, and binary stars containing a white dwarf (cataclysmic variable stars), neutron star or black hole (X-ray binaries). Some Solar System bodies emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays. A combination of many unresolved X-ray sources is thought to produce the observed X-ray background. Ultraviolet Ultraviolet telescopes make observations at ultraviolet wavelengths, i.e. between approximately 10 and 320 nm. Light at these wavelengths is absorbed by the Earth's atmosphere, so observations at these wavelengths must be performed from the upper atmosphere or from space. Objects emitting ultraviolet radiation include the Sun, other stars and galaxies. UV ranges are listed at Ultraviolet astronomy#Ultraviolet space telescopes. Visible light The oldest form of astronomy, optical or visible-light astronomy, observes wavelengths of light from approximately 400 to 700 nm. Positioning an optical telescope in space eliminates the distortions and limitations that hamper ground-based optical telescopes (see Astronomical seeing), providing higher resolution images. Optical telescopes are used to look at planets, stars, galaxies, planetary nebulae and protoplanetary disks, amongst many other things. Infrared and submillimetre Infrared light is of lower energy than visible light, and hence is emitted by sources that are either cooler or moving away from the observer (in the present context, Earth) at high speed. As such, the following can be viewed in the infrared: cool stars (including brown dwarfs), nebulae, and redshifted galaxies.
Microwave Microwave space telescopes have primarily been used to measure cosmological parameters from the Cosmic Microwave Background. They also measure synchrotron radiation, free-free emission and spinning dust from the Milky Way Galaxy, as well as extragalactic compact sources and galaxy clusters through the Sunyaev-Zel'dovich effect. Radio As the atmosphere is transparent to radio waves, radio telescopes in space are most useful for Very Long Baseline Interferometry: performing simultaneous observations of a source with both a satellite and a ground-based telescope and correlating their signals to simulate a radio telescope the size of the separation between the two telescopes. Typical targets for observations include supernova remnants, masers, gravitational lenses, and starburst galaxies. Particle detection Spacecraft and space-based modules that perform particle detection, looking for cosmic rays and electrons. These can be emitted by the Sun (Solar Energetic Particles), the Milky Way galaxy (Galactic cosmic rays) and extragalactic sources (Extragalactic cosmic rays). There are also Ultra-high-energy cosmic rays from active galactic nuclei, which can be detected by ground-based detectors via their particle showers. Gravitational waves A type of telescope that detects gravitational waves: ripples in space-time generated by colliding neutron stars or black holes. To be launched See also List of proposed space observatories List of heliophysics missions List of solar telescopes Lists of telescopes Lists of spacecraft Great Observatories program References External links Space telescopes telescopes Telescopes
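The VLBI idea sketched above sets the attainable angular resolution at roughly the observing wavelength divided by the baseline. A sketch with purely illustrative numbers (neither value is taken from the list):

```python
import math

def resolution_mas(wavelength_m, baseline_m):
    """Diffraction-limited resolution, theta ~ lambda / B, in milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600.0 * 1000.0

# e.g. 1.3 cm observations over a 30,000 km space-to-ground baseline
print(round(resolution_mas(0.013, 3.0e7), 3))  # ~0.089 mas: sub-milliarcsecond
```

This is why pairing a satellite with ground stations, rather than building a larger single dish, is the practical route to extreme radio resolution.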
List of space telescopes
[ "Astronomy" ]
988
[ "Space telescopes", "Astronomy-related lists", "Lists of telescopes" ]
15,951,436
https://en.wikipedia.org/wiki/78xx
78xx (sometimes L78xx, LM78xx, MC78xx...) is a family of self-contained fixed linear voltage regulator integrated circuits. The 78xx family is commonly used in electronic circuits requiring a regulated power supply due to their ease of use and low cost. Nomenclature and packaging For ICs within the 78xx family, the xx is replaced with two digits, indicating the output voltage (for example, the 7805 has a 5-volt output, while the 7812 produces 12 volts). The 78xx line consists of positive voltage regulators: they produce a voltage that is positive relative to a common ground. There is a related line of 79xx devices which are complementary negative voltage regulators. 78xx and 79xx ICs can be used in combination to provide positive and negative supply voltages in the same circuit. 78xx ICs have three terminals and are commonly found in the TO-220 form factor, although they are also available in TO-92, TO-3 'through-hole' and SOT-23 surface-mount packages. These devices support an input voltage anywhere from around 2.5 volts over the intended output voltage up to a maximum of 35 to 40 volts, depending on the model, and typically provide 1 or 1.5 amperes of current (though smaller or larger packages may have a lower or higher current rating). Family members 78xx There are common configurations for 78xx ICs, including 7805 (5 V), 7806 (6 V), 7808 (8 V), 7809 (9 V), 7810 (10 V), 7812 (12 V), 7815 (15 V), 7818 (18 V), and 7824 (24 V) versions. The 7805 is the most common, as its regulated 5-volt supply provides a convenient power source for most TTL components. Less common are lower-power versions such as the LM78Mxx series (500 mA) and LM78Lxx series (100 mA) from National Semiconductor. Some devices provide slightly different voltages than usual, such as the LM78L62 (6.2 volts) and LM78L82 (8.2 volts) as well as the STMicroelectronics L78L33ACZ (3.3 volts). The 7805 has been used in some ATX power supply designs for the +5 VSB (+5 V standby) output. 79xx The 79xx devices have a similar "part number" to "voltage output" scheme, but their outputs are negative voltages; for example, the 7905 produces −5 V and the 7912 produces −12 V. The 7905 and/or 7912 were popular in many older ATX power supply designs, and some newer ATX power supplies may have a 7912. Unrelated devices The LM78S40 from Fairchild is not part of the 78xx family and does not use the same design. It is a component in switching regulator designs, not a linear regulator like the true 78xx devices. The 7803SR from Datel is a full switching power supply module (designed as a drop-in replacement for 78xx chips), and not a linear regulator like the 78xx ICs. Advantages While external capacitors are typically required, 78xx series ICs do not require additional components to set their output voltage. 78xx designs are simple in comparison to switch-mode power supply designs. 78xx series ICs have built-in protection against a circuit drawing too much current. They have protection against overheating and short-circuits, making them robust in most applications. Disadvantages The input voltage must always be higher than the output voltage by some minimum amount (typically 2.5 volts). This can make these devices unsuitable for powering some devices from certain types of power sources (for example, powering a circuit that requires 5 volts using 6-volt batteries will not work using a 7805). For input voltages closer to the output voltage, a pin-compatible low-dropout regulator (LDO) can be used instead.
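A minimal sketch of the dropout constraint just described, using the "typically 2.5 volts" of headroom quoted above; real dropout varies with the part, load current, and temperature, so a datasheet should be consulted for an actual design.

```python
# Typical minimum headroom quoted in the text above; illustrative only.
DROPOUT_V = 2.5

def can_regulate(v_in: float, v_out: float) -> bool:
    """Return True if the input voltage leaves enough headroom to regulate."""
    return v_in >= v_out + DROPOUT_V

print(can_regulate(6.0, 5.0))   # False -- the 6 V battery example above
print(can_regulate(7.5, 5.0))   # True
print(can_regulate(9.0, 5.0))   # True
```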
As they are based on a linear regulator design, the input current required is always the same as the output current. As the input voltage must always be higher than the output voltage, this means that the total power (voltage multiplied by current) going into the 78xx will be more than the output power provided. The difference is dissipated as heat. This means that for some applications an adequate heatsink must be provided, and that an (often substantial) portion of the input power is wasted in the process, rendering them less efficient than some other types of power supplies. When the input voltage is significantly higher than the regulated output voltage (for example, powering a 7805 using a 24 volt power source), this inefficiency can be a significant issue. Buck converters may be preferred over 78xx regulators because they are more efficient and do not require heat sinks, though they might be more expensive. See also DC to DC converter – A class of devices which convert one DC voltage level to another. Linear regulators (and thus 78xx devices) are a form of DC to DC converter. List of linear integrated circuits List of LM-series integrated circuits LM317 – A similar linear regulator chip with a configurable output voltage. References Further reading App Notes Power Supply Design Basics; AN253; 1995; SGS-Thomson Microelectronics (now ST) External links Reverse engineering a 7805 voltage regulator, detailed information about how a 7805 works and reference links 7800 series voltage regulators - description & circuits. Datasheets General Purpose Linear Devices Databook (Historical 1989), National Semiconductor (now TI) LM78xx / LM340 (positive), Texas Instruments L78xx (positive), STMicroelectronics LM79xx (negative), Texas Instruments L79xx (negative), STMicroelectronics Linear integrated circuits Voltage regulation de:Spannungsregler#Typenbezeichnungen 78xx
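A minimal sketch of the heat-dissipation arithmetic described above: because a linear regulator passes the full output current from input to output, everything above the output voltage is dissipated as heat. The 24-volt case is the example mentioned in the text; the other figures are illustrative.

```python
# Linear regulator power budget: input current equals output current,
# so the voltage drop across the regulator is burned off as heat.
def linear_regulator_power(v_in: float, v_out: float, i_out: float):
    p_in = v_in * i_out               # input power, W
    p_out = v_out * i_out             # delivered power, W
    p_heat = (v_in - v_out) * i_out   # dissipated as heat, W
    efficiency = p_out / p_in
    return p_heat, efficiency

# 7805 powered from 24 V at 1 A: 19 W wasted, ~21% efficient -- the
# "significant issue" case above, where a buck converter wins.
print(linear_regulator_power(24.0, 5.0, 1.0))  # (19.0, ~0.208)
print(linear_regulator_power(7.5, 5.0, 1.0))   # (2.5, ~0.667)
```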
78xx
[ "Physics" ]
1,285
[ "Voltage", "Physical quantities", "Voltage regulation" ]
15,951,454
https://en.wikipedia.org/wiki/G%20185-32
G 185-32, also known by the variable star designation PY Vulpeculae, is a white dwarf in the constellation Vulpecula. The stellar remnant is a ZZ Ceti variable, varying by 0.02 apparent magnitudes around a mean of 13.00. Observational history This star was first noticed during a survey for high proper motion stars by Henry L. Giclas at Lowell Observatory, who listed it as a suspected white dwarf. The white dwarf designation was confirmed spectroscopically in 1970 by astronomer Jesse L. Greenstein of the California Institute of Technology. References Vulpecula Pulsating white dwarfs Vulpeculae, PY 1241
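As a small aside on what a 0.02-magnitude variation means in physical terms, the sketch below converts a magnitude difference into a flux ratio using the standard definition of the magnitude scale (a difference of 5 magnitudes is a factor of 100 in flux); nothing here is specific to this star beyond the quoted amplitude.

```python
# Convert a magnitude difference into a flux (brightness) ratio.
def flux_ratio(delta_mag: float) -> float:
    return 10 ** (0.4 * delta_mag)

# PY Vul varies by about 0.02 mag around its mean of 13.00:
print(flux_ratio(0.02) - 1)   # ~0.0186, i.e. roughly a 2% brightness change
```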
G 185-32
[ "Astronomy" ]
148
[ "Vulpecula", "Constellations" ]
15,952,810
https://en.wikipedia.org/wiki/Inductive%20output%20tube
The inductive output tube (IOT) or klystrode is a variety of linear-beam vacuum tube, similar to a klystron, used as a power amplifier for high frequency radio waves. It evolved in the 1980s to meet increasing efficiency requirements for high-power RF amplifiers in radio transmitters. The primary commercial use of IOTs is in UHF television transmitters, where they have mostly replaced klystrons because of their higher efficiencies (35% to 40%) and smaller size. IOTs are also used in particle accelerators. They are capable of producing power output up to about 30 kW continuous and 7 MW pulsed, and power gains of 20–23 dB, at frequencies up to about a gigahertz. History The inductive output tube (IOT) was invented in 1938 by Andrew V. Haeff. A patent was later issued for the IOT to Andrew V. Haeff and assigned to the Radio Corporation of America (RCA). During the 1939 New York World's Fair the IOT was used in the transmission of the first television images from the Empire State Building to the fair grounds. RCA sold a small IOT commercially for a short time, under the type number 825. It was soon made obsolete by newer developments, and the technology lay more or less dormant for years. The inductive output tube has re-emerged within the last twenty years, having been found to possess particularly suitable characteristics (broadband linearity) for the transmission of digital television and high-definition digital television. In research undertaken prior to the transition from analog to digital television broadcasting, it was discovered that electromagnetic interference from lightning, high voltage AC power transmission, AC rectifiers, and ballasts used in fluorescent lighting greatly affected low-band VHF channels (in North America, channels 2 through 6), making them difficult or impossible to use for digital television. These low-numbered channels had often been assigned to the first television broadcasters in a given city, which were often large, vital operations that had no choice but to relocate to UHF. In so doing, modern digital television became predominantly a UHF medium, and IOTs have become the output tube of choice for the power output section of those transmitters. The power output of modern 21st-century IOTs is orders of magnitude higher than that of the first IOTs produced by RCA in 1940–1941, but the fundamental principle of operation remains the same. IOTs since the 1970s have been designed with electromagnetic modeling computer software that has greatly improved their electrodynamic performance. How it works The IOT is a linear beam vacuum tube. As in the cathode-ray tube found in old televisions, electrons are produced by a heated negative electrode or cathode and accelerated by a high positive voltage in a structure called an electron gun at one end, forming a beam traveling down the tube. At the other end of the tube the beam does not produce a glowing phosphor picture as in a CRT, but passes through a resonant cavity which extracts its energy, then strikes a positive electrode and is absorbed. IOTs have been described as a cross between a klystron and a tetrode, hence Eimac's trade name for them, Klystrode. They have an electron gun like a klystron, but with a control grid in front of it like a triode, with a very close spacing of around 0.1 mm. The high frequency RF voltage on the grid allows the electrons through in bunches. High voltage DC on a cylindrical anode accelerates the modulated electron beam through a small drift tube like a klystron.
This drift tube prevents backflow of electromagnetic radiation. The bunched electron beam passes through the hollow anode into a resonant cavity, similar to the output cavity of a klystron, and strikes a collector electrode. As in a klystron, each bunch passes into the cavity at a time when the electric field decelerates it, transforming the kinetic energy of the beam into potential energy of the RF field, amplifying the signal. The oscillating electromagnetic energy in the cavity is extracted by a coaxial transmission line. An axial magnetic field prevents space-charge spreading of the beam. The collector electrode is at a lower potential than the anode (depressed collector), which recovers some of the energy from the beam, increasing efficiency. Two differences from the klystron give it a lower cost and higher efficiency. First, the klystron uses velocity modulation to create bunching; its beam current is constant. It requires a drift tube several feet in length to allow the electrons to bunch. In contrast the IOT uses current modulation like an ordinary triode; most of the bunching is done by the grid, so the tube can be much shorter, making it less expensive to build and mount, and less bulky. Second, since the klystron has beam current throughout the RF cycle, it can only operate as an inefficient class-A amplifier, while the grid of the IOT allows more versatile operating modes. The grid can be biased so the beam current is cut off during part of the cycle, enabling the tube to operate in the more efficient class B or AB mode. The highest frequency achievable in an IOT is limited by the grid-to-cathode spacing: the electrons must be accelerated off the cathode and pass the grid before the RF electric field reverses direction, so the transit time across this gap sets the upper limit on the operating frequency. The gain of the IOT is 20–23 dB versus 35–40 dB for a klystron. The lower gain is usually not a problem, because at 20 dB the requirements for drive power (1% of output power) are within the capabilities of economical solid-state UHF amplifiers. Recent advances The latest versions of IOTs achieve even higher efficiencies (60%-70%) through the use of a Multistage Depressed Collector (MSDC). One manufacturer's version is called the Constant Efficiency Amplifier (CEA), while another manufacturer markets their version as the ESCIOT (Energy Saving Collector IOT). The initial design difficulties of MSDC IOTs were overcome through the use of recirculating high-dielectric transformer oil as a combined coolant and insulation medium, to prevent arcing and erosion between the closely spaced collector stages and to provide reliable low-maintenance collector cooling for the life of the tube. Earlier MSDC versions had to be air cooled (limiting power) or used de-ionized water that had to be filtered and regularly exchanged, and that provided no freezing or corrosion protection. Disadvantages Thermal radiation from the cathode heats the grid. As a result, low-work-function cathode material evaporates and condenses on the grid. This eventually leads to a short between cathode and grid, as the material accreting on the grid narrows the gap between it and the cathode. In addition, the emissive cathode material on the grid causes a negative grid current (reverse electron flow from the grid to the cathode). This can swamp the grid power supply if the reverse current gets too high, changing the grid (bias) voltage and, consequently, the operating point of the tube.
Today's IOTs are equipped with coated cathodes that work at relatively low operating temperatures, and hence have slower evaporation rates, minimizing this effect. Like most linear beam tubes having external tuning cavities, IOTs are vulnerable to arcing, and must be protected with arc detectors located in the output cavities that trigger a crowbar circuit based on a hydrogen thyratron or a triggered spark gap in the high-voltage supply. The purpose of the crowbar circuit is to instantly dump the massive electrical charge stored in the high voltage beam supply before this energy can damage the tube assembly during an uncontrolled cavity, collector or cathode arc. See also Free-electron laser References External links http://www.bext.com/iot-an-old-dream-now-come-true/ http://www.ebu.ch/departments/technical/trev/trev_273-heppinstall.pdf http://www.davidsarnoff.org/kil-chapter03.html http://www.allaboutcircuits.com/vol_3/chpt_13/11.html http://www.harris.com/view_pressrelease.asp?act=lookup&pr_id=2037 http://epaper.kek.jp/p95/ARTICLES/TAQ/TAQ02.PDF Microwave technology Television technology Vacuum tubes
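A minimal sketch of the drive-power claim in the gain discussion above: power gain in decibels converts to a linear ratio as 10**(dB/10), so a 20 dB amplifier needs drive equal to 1% of its output power. The 30 kW output figure is the article's quoted continuous maximum; the comparison values are illustrative.

```python
# Drive power required for a given output power and gain in dB.
def drive_power(p_out_watts: float, gain_db: float) -> float:
    return p_out_watts / (10 ** (gain_db / 10))

# A 30 kW IOT at the quoted 20-23 dB gain:
print(drive_power(30_000, 20))  # 300.0 W (1% of output)
print(drive_power(30_000, 23))  # ~150.4 W
# versus a klystron at 35-40 dB:
print(drive_power(30_000, 40))  # 3.0 W
```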
Inductive output tube
[ "Physics", "Technology" ]
1,827
[ "Information and communications technology", "Television technology", "Vacuum tubes", "Vacuum", "Matter" ]
15,954,084
https://en.wikipedia.org/wiki/Shear%20and%20moment%20diagram
Shear force and bending moment diagrams are analytical tools used in conjunction with structural analysis to help perform structural design by determining the value of shear forces and bending moments at a given point of a structural element such as a beam. These diagrams can be used to easily determine the type, size, and material of a member in a structure so that a given set of loads can be supported without structural failure. Another application of shear and moment diagrams is that the deflection of a beam can be easily determined using either the moment area method or the conjugate beam method. Convention Although these conventions are relative and any convention can be used if stated explicitly, practicing engineers have adopted a standard convention used in design practices. Normal convention The normal convention used in most engineering applications is to label as positive a shear force that spins an element clockwise (up on the left, and down on the right). Likewise, the normal convention for a positive bending moment is one that warps the element into a "u" shape (clockwise on the left, and counterclockwise on the right). Another way to remember this is that if the moment is bending the beam into a "smile" then the moment is positive, with compression at the top of the beam and tension on the bottom. This convention was selected to simplify the analysis of beams. Since a horizontal member is usually analyzed from left to right and positive in the vertical direction is normally taken to be up, the positive shear convention was chosen to be up from the left, and to make all drawings consistent, down from the right. The positive bending convention was chosen such that a positive shear force would tend to create a positive moment. Alternative drawing convention In structural engineering, and in particular concrete design, the positive moment is drawn on the tension side of the member. This convention puts the positive moment below the beam described above. A convention of placing the moment diagram on the tension side allows for frames to be dealt with more easily and clearly. Additionally, placing the moment on the tension side of the member shows the general shape of the deformation and indicates on which side of a concrete member rebar should be placed, as concrete is weak in tension. Relationships among load, shear, and moment diagrams Since constructing these diagrams point by point can become unnecessarily complicated even for relatively simple problems, it is quite helpful to understand the relationships between the loading, shear, and moment diagrams. The first of these is the relationship between a distributed load on the loading diagram and the shear diagram. Since a distributed load varies the shear load according to its magnitude, it can be derived that the slope of the shear diagram is equal to the magnitude of the distributed load. The relationship, described by Schwedler's theorem, between the distributed load $w(x)$ (taken positive downward) and the shear force $V(x)$ is $\frac{dV}{dx} = -w(x)$. Some direct results of this are that a shear diagram will have a point change in magnitude if a point load is applied to a member, and a linearly varying shear magnitude as a result of a constant distributed load. Similarly, it can be shown that the slope of the moment diagram at a given point is equal to the magnitude of the shear diagram at that distance. The relationship between the shear force and the bending moment $M(x)$ is $\frac{dM}{dx} = V(x)$. A direct result of this is that at every point the shear diagram crosses zero the moment diagram will have a local maximum or minimum.
Also, if the shear diagram is zero over a length of the member, the moment diagram will have a constant value over that length. By calculus it can be shown that a point load will lead to a linearly varying moment diagram, and a constant distributed load will lead to a quadratic moment diagram. Practical considerations In practical applications the entire stepwise function is rarely written out. The only parts of the stepwise function that would be written out are the moment equations in a nonlinear portion of the moment diagram; this occurs whenever a distributed load is applied to the member. For constant portions the value of the shear and/or moment diagram is written right on the diagram, and for linearly varying portions of a member the beginning value, end value, and slope of that portion of the member are all that are required. See also Bending Euler–Bernoulli beam theory Bending moment Singularity function#Example beam calculation References Further reading Cheng, Fa-Hwa. "Shear Forces and Bending Moments in Beams" Statics and Strength of Materials. New York: Glencoe, McGraw-Hill, 1997. Print. Spotts, Merhyle Franklin, Terry E. Shoup, and Lee Emrey Hornberger. "Shear and Bending Moment Diagrams." Design of Machine Elements. Upper Saddle River, NJ: Pearson/Prentice Hall, 2004. Print. External links Beam theory Continuum mechanics Diagrams Moment (physics) Structural analysis
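An illustrative sketch of the relationships described above, for a simply supported beam with a single central point load (the span and load values are assumed for demonstration, not taken from any source). It shows the point load producing a step change in the shear diagram, a linearly varying moment, and the moment maximum occurring where the shear crosses zero.

```python
# Simply supported beam of span L with a central point load P.
L, P = 4.0, 10.0   # m, kN -- illustrative values
R = P / 2          # symmetric support reactions

def shear(x: float) -> float:
    """Shear force V(x): constant, with a step change at the point load."""
    return R if x < L / 2 else R - P

def moment(x: float) -> float:
    """Bending moment M(x): linear on each side of the point load."""
    return R * x if x < L / 2 else R * x - P * (x - L / 2)

for x in [0.0, 1.0, 2.0, 3.0, 4.0]:
    print(f"x={x:.1f} m  V={shear(x):+.1f} kN  M={moment(x):.1f} kN*m")
# V jumps from +5 to -5 kN at midspan; M peaks there at P*L/4 = 10 kN*m,
# exactly where the shear diagram crosses zero.
```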
Shear and moment diagram
[ "Physics", "Mathematics", "Engineering" ]
948
[ "Structural engineering", "Physical quantities", "Continuum mechanics", "Quantity", "Structural analysis", "Classical mechanics", "Mechanical engineering", "Aerospace engineering", "Moment (physics)" ]
15,954,950
https://en.wikipedia.org/wiki/Here%20Ai%27a
Here Ai'a, also known as Te Pupu Here Ai'a Te Nunaa ia Ora, is a pro-independence political party in French Polynesia. It was founded by John Teariki and Henri Bouvier in 1965 following the banning of the pro-independence Democratic Rally of the Tahitian People (RDPT) by the colonial French government. Supported mainly by rural Polynesians, the party was a significant force in French Polynesian politics from its foundation until the early 1980s, before entering a decline following Teariki's death in 1983. The party is currently led by Gustave Taputu. The party was founded on 9 February 1965. In order to avoid being seen as an illegal re-establishment of the RDPT, the party avoided placing former RDPT leaders in leadership positions, and stated that its objective was "a democratic development of French Polynesia in close collaboration with the French people and according to the preamble of the Constitution of 1958". It held its first congress on 2 July 1966, the day of the first French nuclear test at Moruroa, and passed a motion stating that it would use all peaceful and legal means to end nuclear testing. The party won 7 seats out of 30 at the 1967 French Polynesian legislative election and formed a coalition government with the pro-autonomy E'a Api led by Francis Sanford. One of the first moves of the new government was to establish an Assembly investigation into the question of internal self-government. The proposals of the government were ignored by the French colonial authorities, leading to a deterioration in relations. At the 1971 municipal elections the party won the mayoralty of Uturoa, and was part of an autonomist coalition which won all 27 seats on the Papeete council, despite French interference. Pouvanaa a Oopa led the party into the 1972 election. The party won six seats, but lost power to an anti-autonomy coalition. The party contested the 1977 election with E'a Api and other minor parties as part of the United Front for Internal Autonomy. The coalition won 13 seats and was able to form a government with allies. In government the party suffered from several scandals, but managed to survive them and the subsequent breakup of the coalition. At the 1982 election, which it contested separately, it won 6 seats but was relegated to opposition. The party's leader John Teariki died in 1983, and he was succeeded by Jean Juventin. The party suffered from an internal power struggle, and when the colonial government finally provided a weak form of autonomy in 1985, advocacy for real independence was left to other parties such as Ia Mana te Nunaa. While the party won 5 seats at the 1986 election, it continued to decline in relevance. In 1991 it backed anti-independence President Gaston Flosse. In the 2004 election it joined Oscar Temaru's Union For Democracy (UPLD) coalition. It remained part of the UPLD in the 2008 election. At the 2013 election it formed an electoral alliance with Porinetia Ora, but gained no seats. In the 2018 election it supported Tahoera'a Huiraatira. After being inactive for 15 years, the party announced on 21 January 2023 that it would contest the 2023 election, and that its program would focus on independence. It held its first party congress in 15 years in February 2023, announcing it was seeking allies in Heiura-Les Verts or Tau Hotu rau.
References Political parties in French Polynesia Anti-nuclear organizations 1965 establishments in French Polynesia Political parties established in 1965 Separatist political parties in France
Here Ai'a
[ "Engineering" ]
747
[ "Nuclear organizations", "Anti-nuclear organizations" ]
15,955,691
https://en.wikipedia.org/wiki/List%20of%20anti%E2%80%93nuclear%20power%20groups
Anti-nuclear power groups have emerged in every country that has had a nuclear power programme. Protest movements against nuclear power first emerged in the US, at the local level, and spread quickly to Europe and the rest of the world. National nuclear campaigns emerged in the late 1970s. Fuelled by the Three Mile Island accident and the Chernobyl disaster, the anti-nuclear power movement mobilised political and economic forces which for some years "made nuclear energy untenable in many countries". Some of these anti-nuclear power organisations are reported to have developed considerable expertise on nuclear power and energy issues. In 1992, the chairman of the Nuclear Regulatory Commission said that "his agency had been pushed in the right direction on safety issues because of the pleas and protests of nuclear watchdog groups". International Friends of the Earth International, a network of environmental organizations in 77 countries. Greenpeace International, a non-governmental environmental organization with offices in over 41 countries and headquarters in Amsterdam, Netherlands. International Network of Engineers and Scientists for Global Responsibility Nuclear Information and Resource Service Pax Christi International, a Catholic group which took a "sharply anti-nuclear stand". Pugwash Conferences on Science and World Affairs Socialist International, the world body of social democratic parties. Sōka Gakkai, a peace-orientated Buddhist organisation, which held anti-nuclear exhibitions in Japanese cities during the late 1970s, and gathered 10 million signatures on petitions calling for the abolition of nuclear weapons. World Information Service on Energy, based in Amsterdam, the Netherlands World Nuclear Industry Status Report World Union for Protection of Life Australia Campaign Against Nuclear Energy Greenpeace Australia Pacific Canada Canadian Coalition for Nuclear Responsibility Pembina Institute Sortir du nucléaire (Canada) France Sortir du nucléaire (France) CRIIRAD Groupement des scientifiques pour l'information sur l'énergie nucléaire Japan Citizens' Nuclear Information Center Green Action Japan New Zealand Greenpeace Aotearoa New Zealand South Africa Koeberg Alert Spain ETA United Kingdom Friends of the Earth (EWNI) Friends of the Earth Scotland Sustainable Development Commission United States Arms Control Association Abalone Alliance Clamshell Alliance Institute for Energy and Environmental Research Musicians United for Safe Energy Natural Resources Defense Council New England Coalition Shad Alliance Sierra Club See also List of nuclear power groups Non-nuclear future List of anti-nuclear groups in the United States References Lists of environmental organizations Nuclear technology-related lists
List of anti–nuclear power groups
[ "Engineering" ]
494
[ "Nuclear organizations", "Anti-nuclear organizations" ]
15,956,296
https://en.wikipedia.org/wiki/R%20Corvi
R Corvi (R Crv) is a Mira variable star in the constellation Corvus, which ranges in brightness from magnitude 6.7 to 14.4 over a period of approximately 317 days. In the sky it appears close to Gamma Corvi and can be seen in the same binocular field. Extrapolating its luminosity from its 317-day period yields a distance estimate of 810 parsecs. References Mira variables Corvus (constellation) 107199 Corvi, R M-type giants 060106 Durchmusterung objects Emission-line stars
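A minimal sketch of how a distance such as the 810-parsec figure can follow from a luminosity estimate, via the distance modulus d = 10**((m - M + 5)/5) parsecs. The mean apparent magnitude of 10 and the absolute magnitude here are illustrative (the latter is back-solved for demonstration rather than taken from any period-luminosity calibration), and interstellar extinction is ignored.

```python
import math

# Distance from apparent magnitude m and absolute magnitude M, in parsecs.
def distance_pc(m_apparent: float, M_absolute: float) -> float:
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

# Back-solving for illustration: a distance of 810 pc with a mean apparent
# magnitude near 10 would imply M ~ 10 - 5*log10(810) + 5 (extinction ignored).
M_assumed = 10 - 5 * math.log10(810) + 5
print(M_assumed)                     # ~0.46 (illustrative, not a catalog value)
print(distance_pc(10, M_assumed))    # ~810 pc, recovering the quoted distance
```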
R Corvi
[ "Astronomy" ]
125
[ "Corvus (constellation)", "Constellations" ]