Dataset columns: id (int64, 39 to 79M), url (string, 31 to 227 characters), text (string, 6 to 334k characters), source (string, 1 to 150 characters), categories (list, 1 to 6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0 to 30 items).
581,885
https://en.wikipedia.org/wiki/Concurrent%20user
In computer science, the number of concurrent users (sometimes abbreviated CCU) for a resource in a location, with the location being a computing network or a single computer, refers to the total number of people simultaneously accessing or using the resource. The resource can, for example, be a computer program, a file, or the computer as a whole. Keeping track of concurrent users is important in several cases. First, some operating system models such as time-sharing operating systems allow several users to access a resource on the computer at the same time. As system performance may degrade due to the complexity of processing multiple jobs from multiple users at the same time, the capacity of such a system may be measured in terms of maximum concurrent users. Second, commercial software vendors often license a software product by means of a concurrent users restriction. This allows a fixed number of users access to the product at a given time and contrasts with an unlimited user license. For example: Company X buys software and pays for 20 concurrent users. However, there are 100 logins created at implementation. Only 20 of those 100 can be in the system at the same time, this is known as floating licensing. Concurrent user licensing allows firms to purchase computer systems and software at a lower cost because the maximum number of concurrent users expected to use the system or software at any given time (those users all logged in together) is only a portion of the total system users employed at a company. The concurrent licenses are global and shared by anyone who needs to use the system. This contrasts with "named-seats" licensing, in which one license must be purchased for each and every individual user, whether they are using the system or not. If a company employs 400 system users in which 275 work during the day and 125 work at night, then they can opt to purchase only 275 concurrent user licenses since there will never be more than 275 users on the system during a normal work day. The night workers share 125 of the day users' licenses to use the system. For named-seat licenses, this same company would have to purchase 400 individual licenses, one for each user, and licenses would not be globally shared. The available options for licensing are entirely at the discretion of the vendor selling the product. See also Floating licensing References Computing terminology
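The floating-licensing arrangement described above (many named logins, but a hard cap on how many can be in the system at once) can be made concrete with a short sketch. The Python below is a hypothetical illustration only, not any vendor's actual licensing API; the class and method names and the 20-seat, 100-login figures simply mirror the example in the text.

# Minimal sketch of floating (concurrent-user) licensing, assuming a simple
# in-memory model; real license servers manage sessions very differently.
class FloatingLicensePool:
    def __init__(self, seats):
        self.seats = seats          # maximum concurrent users, e.g. 20
        self.active = set()         # logins currently holding a seat

    def check_out(self, login):
        if login in self.active:
            return True             # this login already holds a seat
        if len(self.active) >= self.seats:
            return False            # all concurrent seats are in use
        self.active.add(login)
        return True

    def check_in(self, login):
        self.active.discard(login)  # the seat returns to the shared pool

# 100 named logins exist, but only 20 can be in the system at the same time.
pool = FloatingLicensePool(seats=20)
granted = [login for login in range(100) if pool.check_out(login)]
assert len(granted) == 20           # every request past the 20th is refused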
Concurrent user
[ "Technology" ]
461
[ "Computing terminology" ]
581,888
https://en.wikipedia.org/wiki/Luminous%20flux
In photometry, luminous flux or luminous power is the measure of the perceived power of light. It differs from radiant flux, the measure of the total power of electromagnetic radiation (including infrared, ultraviolet, and visible light), in that luminous flux is adjusted to reflect the varying sensitivity of the human eye to different wavelengths of light. Units The SI unit of luminous flux is the lumen (lm). One lumen is defined as the luminous flux of light produced by a light source that emits one candela of luminous intensity over a solid angle of one steradian. In other systems of units, luminous flux may have units of power. Weighting The luminous flux accounts for the sensitivity of the eye by weighting the power at each wavelength with the luminosity function, which represents the eye's response to different wavelengths. The luminous flux is a weighted sum of the power at all wavelengths in the visible band. Light outside the visible band does not contribute. The ratio of the total luminous flux to the radiant flux is called the luminous efficacy. This model of the human visual brightness perception, is standardized by the CIE and ISO. Context Luminous flux is often used as an objective measure of the useful light emitted by a light source, and is typically reported on the packaging for light bulbs, although it is not always prominent. Consumers commonly compare the luminous flux of different light bulbs since it provides an estimate of the apparent amount of light the bulb will produce, and a lightbulb with a higher ratio of luminous flux to consumed power is more efficient. Luminous flux is not used to compare brightness, as this is a subjective perception which varies according to the distance from the light source and the angular spread of the light from the source. Measurement Luminous flux of artificial light sources is typically measured using an integrating sphere, or a goniophotometer outfitted with a photometer or a spectroradiometer. Relationship to luminous intensity Luminous flux (in lumens) is a measure of the total amount of light a lamp puts out. The luminous intensity (in candelas) is a measure of how bright the beam in a particular direction is. If a lamp has a 1 lumen bulb and the optics of the lamp are set up to focus the light evenly into a 1 steradian beam, then the beam would have a luminous intensity of 1 candela. If the optics were changed to concentrate the beam into 1/2 steradian then the source would have a luminous intensity of 2 candela. The resulting beam is narrower and brighter, however the luminous flux remains the same. Examples References Physical quantities Photometry Temporal rates
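The weighting just described (radiant power at each wavelength multiplied by the luminosity function and summed over the visible band) can be shown as a small numerical sketch. In the Python below, the Gaussian stand-in for the CIE luminosity function V(λ) and the example spectra are assumptions made only for demonstration; the 683 lm/W scale factor and the overall form Φv = 683 · Σ V(λ) Φe(λ) Δλ follow the standard definition of the luminous weighting.

import math

# Rough stand-in for the photopic luminosity function V(lambda): a Gaussian
# peaked at 555 nm. The real CIE curve is tabulated, not Gaussian.
def V(wavelength_nm):
    return math.exp(-0.5 * ((wavelength_nm - 555.0) / 45.0) ** 2)

def luminous_flux(spectrum, step_nm):
    """spectrum maps wavelength (nm) to spectral radiant flux (W/nm).
    Returns luminous flux in lumens: phi_v = 683 * sum(V * phi_e) * step."""
    return 683.0 * sum(V(wl) * p for wl, p in spectrum.items()) * step_nm

# Example: a flat 0.1 mW/nm spectrum across the visible band, 400-700 nm.
visible = {wl: 1e-4 for wl in range(400, 701, 10)}
print(round(luminous_flux(visible, 10.0), 1), "lm")

# Infrared power contributes essentially nothing, since V is ~0 outside the visible band.
infrared = {wl: 1e-4 for wl in range(800, 1101, 10)}
print(round(luminous_flux(infrared, 10.0), 6), "lm")

# Relationship to intensity: 1 lm focused evenly into 0.5 sr gives 1 / 0.5 = 2 cd.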
Luminous flux
[ "Physics", "Mathematics" ]
537
[ "Temporal quantities", "Physical phenomena", "Physical quantities", "Quantity", "Temporal rates", "Physical properties" ]
581,913
https://en.wikipedia.org/wiki/Directional%20selection
In population genetics, directional selection is a type of natural selection in which one extreme phenotype is favored over both the other extreme and moderate phenotypes. This genetic selection causes the allele frequency to shift toward the chosen extreme over time as allele ratios change from generation to generation. The advantageous extreme allele will increase in frequency among the population as a consequence of survival and reproduction differences among the different present phenotypes in the population. The allele fluctuations as a result of directional selection can be independent of the dominance of the allele, and in some cases if the allele is recessive, it can eventually become fixed in the population. Directional selection was first identified and described by naturalist Charles Darwin in his book On the Origin of Species published in 1859. He identified it as a type of natural selection along with stabilizing selection and disruptive selection. These types of selection also operate by favoring a specific allele and influencing the population's future phenotypic ratio. Disruptive selection favors both extreme phenotypes while the moderate phenotype will be selected against. The frequency of both extreme alleles will increase while the frequency of the moderate allele will decrease, differing from the trend in directional selection in which only one extreme allele is favored. Stabilizing selection favors the moderate phenotype and will select against both extreme phenotypes. Directional selection can be observed in finch beak size, peppered moth color, African cichlid mouth types, and sockeye salmon migration periods. If there is continuous allele frequency change as a result of directional selection generation from generation, there will be observable changes in the phenotypes of the entire population over time. Directional selection can change the genotypic and phenotypic variation of a population and cause a trend toward one specific phenotype. This selection is an important mechanism in the selection of complex and diversifying traits, and is also a primary force of speciation. Changes in a genotype and consequently a phenotype can either be advantageous, harmful, or neutral and depend on the environment in which the phenotypic shift is happening. Evidence Detection methods Directional selection most often occurs during environmental changes or population migrations to new areas with different environmental pressures. Directional selection allows for swift changes in allele frequency that can accompany rapidly changing environmental factors and plays a major role in speciation. Analysis on quantitative trait locus (QTL) effects has been used to examine the impact of directional selection in phenotypic diversification. QTL is a region of a gene that corresponds to a specific phenotypic trait, and the measuring the statistical frequencies of the traits can be helpful in analyzing phenotypic trends. In one study, the analysis showed that directional changes in QTLs affecting various traits were more common than expected by chance among diverse species. This was an indication that directional selection is a primary cause of the phenotypic diversification that can eventually result in speciation. There are different statistical tests that can be run to test for the presence of directional selection in a population. A highly indicative test of changes in allele frequencies is the QTL sign test, and other tests include the Ka/Ks ratio test and the relative rate test. 
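As a numerical illustration of the allele-frequency shift described above, the sketch below iterates the textbook single-locus selection recurrence for an allele favored by directional selection. The fitness values, selection coefficient, dominance term, and starting frequency are arbitrary example choices, and this deterministic model is not drawn from any of the studies cited here; it only shows the one-directional trend toward fixation.

def next_freq(p, s=0.05, h=0.5):
    """One generation of selection favoring allele A.
    Genotype fitnesses (example values): AA = 1, Aa = 1 - h*s, aa = 1 - s."""
    q = 1.0 - p
    w_AA, w_Aa, w_aa = 1.0, 1.0 - h * s, 1.0 - s
    w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa   # mean fitness
    return (p * p * w_AA + p * q * w_Aa) / w_bar             # frequency of A after selection

p = 0.05                        # the favored allele starts rare
for generation in range(501):
    if generation % 100 == 0:
        print(generation, round(p, 4))
    p = next_freq(p)
# The frequency climbs steadily toward 1.0, the one-way shift that distinguishes
# directional selection from stabilizing or disruptive selection.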
The QTL sign test compares the number of antagonistic QTL to a neutral model, and allows for testing of directional selection against genetic drift. The Ka/Ks ratio test compares the number of non-synonymous to synonymous substitutions, and a ratio that is greater than 1 indicates directional selection. The relative rate test looks at the accumulation of advantageous traits against a neutral model, but needs a phylogenetic tree for comparison. This can prove difficult if the full phylogenetic history is not known or is not specific enough for the test comparison. Examples Finch beak size A well-known example of directional selection is the beak size in a specific population of finches. Darwin first described this in his book On the Origin of Species, detailing how the size of the finches' beaks differs based on environmental factors. On the Galápagos Islands west of the coast of Ecuador, there were groups of finches displaying different beak phenotypes. In one group, the beaks ranged from large and tough to small and smooth. Throughout the wet years, small seeds were more common than large seeds, and because of the large supply of small seeds the finches rarely ate large seeds. During the dry years, neither the small nor the large seeds were in great abundance, and the birds trended towards eating larger seeds. The changes in diet of the finches based on the environmental wet and dry seasons affected the depth of the birds' beaks in future generations. The beaks most beneficial to the more plentiful type of seed would be selected for because the birds were able to feed themselves and reproduce. Peppered moths A significant example of directional selection in populations is the fluctuation of light and dark phenotypes in peppered moths in the 1800s. During the industrial revolution, environmental conditions were rapidly changing with the newfound emission of dark, black smoke from factories that would change the color of trees, rocks, and other niches of moths. Before the industrial revolution, the most prominent phenotype in the peppered moth population was the lighter, speckled form. These moths thrived on the light birch trees, where their phenotype provided better camouflage from predators. After the Industrial Revolution, as the trees became darker with soot, the moths with the darker phenotype were able to blend in and avoid predators better than their white counterparts. As time went on, the darker moths were positively directionally selected for and the allele frequency began to shift due to the increase in the number of darker moths. African cichlids African cichlids are known to be a highly diverse group of fishes, with evidence indicating that they evolved extremely quickly. These fish evolved within the same habitat, but have a variety of morphologies, especially pertaining to the mouth and jaw. Experiments on cichlid jaw phenotypes were done by Albertson and others in 2003 by crossing two species of African cichlids with very different mouth morphologies. The cross between Labeotropheus fuelleborni (subterminal mouth for biting algae off rocks) and Metriaclima zebra (terminal mouth for suction feeding) allowed for mapping of QTLs affecting feeding morphology. The QTL sign test provided definitive evidence of directional selection on the oral jaw apparatus in African cichlids. However, this was not the case for the suspensorium or skull QTLs, suggesting genetic drift or stabilizing selection as mechanisms for the speciation. 
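The QTL sign test used in the cichlid study just described compares the direction of QTL effects with what drift alone would be expected to produce. The sketch below is a simplified illustration, not the published test statistic: it treats the neutral model as a fair coin (each QTL equally likely to push the trait either way) and asks how surprising an observed excess of same-direction effects would be. The counts are made-up example numbers.

from math import comb

def sign_test_p_value(n_qtl, n_same_direction):
    """Probability, under a neutral 50/50 model, of at least this many QTL
    effects pointing in the same direction as the overall species difference."""
    return sum(comb(n_qtl, k) for k in range(n_same_direction, n_qtl + 1)) / 2 ** n_qtl

# Made-up example: 15 mapped QTL, 13 of which push the trait the same way.
print(round(sign_test_p_value(15, 13), 4))   # ~0.0037: unlikely under drift alone,
                                             # consistent with directional selection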
Sockeye salmon Sockeye salmon are one of the many species of fish that are anadromous, in which individuals migrate to the same rivers in which they were born to reproduce. These migrations happen around the same time every year, but a 2007 study shows that sockeye salmon found in the waters of the Bristol Bay in Alaska have recently undergone directional selection on the timing of migration. In this study, two populations of sockeye salmon, Egegik and Ugashik, were observed. Data from 1969–2003 provided by the Alaska Department of Fish and Game were divided into five sets of seven years and plotted for average arrival to the fishery. After analyzing the data, it was determined that in both populations the average migration date was earlier and the populations were undergoing directional selection as a result of changing ecological conditions. The Egegik population experienced stronger selection and the migration date shifted four days. The paper suggests that fisheries can be a factor driving this selection because fishing occurs more often in the later periods of migration (especially in the Egegik district), preventing those fish from reproducing. This discovery also goes to show that in addition to environmental changes, human behaviors can also have massive effects on the selection of species around them. Bears hunting sockeye salmon Studies carried out in Little Togiak Lake in Alaska, indicate that bear predation has a significant impact on sockeye salmon populations, especially in shallow streams. Bears often focus on larger male salmon and tend to prefer those that have just arrived at the spawning grounds, particularly in smaller streams where they can catch them more easily. This predation may accelerate the aging of salmon by favoring later arrivals. Additionally, the impact of predation varies among different salmon populations based on their habitat and density; it tends to be more selective in areas where fish are readily accessible. While high levels of bear predation can occur, healthy salmon populations usually maintain strong reproductive potential, although the effects are more pronounced when populations are low. Overall, these dynamics illustrate how bear predation affects salmon behavior and life cycles, influencing their evolutionary processes. Large Felids This study examines the role of lineage-specific directional selection on body size evolution in felids, revealing that several species, including those in the Panthera genus (lions, tigers, leopards, jaguars, snow leopards), the cheetah, and the puma, exhibit evidence of directional selection favoring larger body mass. These larger body sizes are likely linked to hunting large prey and solitary hunting strategies, which favor physical strength and size. Conversely, the clouded leopard did not show evidence of directional selection for body size, suggesting different ecological pressures, and the jaguarundi showed no clear selection for smaller size despite being smaller than its relatives. These findings highlight that body size evolution in felids is not uniform and is strongly influenced by ecological factors such as prey size and hunting behavior. The study concludes that directional selection for increased body size is likely associated with the need for larger predators to capture large prey, and solitary hunting may accelerate this selection, although the evolutionary paths for different felid lineages can vary considerably. 
Soapberry bugs Soapberry bugs (Jadera haematoloma) primarily feed on seeds produced by plants of the Sapindaceae family. These soapberry bugs use their beaks to feed on the seeds within the fruits of these plants, so it is crucial that their beaks are long enough to reach the seeds from the exterior of the fruits. However, the distance from the exterior of the fruit to the seed can vary. Scott Carroll and Christin Boyd (1992) conducted an experiment observing how three plant species newly introduced to North America and colonized by these soapberry bugs affected natural selection on the insects' beak length. Each new plant species hosted fruits of different sizes compared to the native hosts. They found that there was indeed a close correlation between the radius of the fruit and the length of the beak. There was positive directional selection for larger beaks when the radius of the fruit was larger, and positive directional selection for smaller beaks when the radius of the fruit was smaller. To confirm that these differences were caused by genetic differences and not by phenotypic plasticity, Carroll reared young soapberry bugs from the populations associated with the introduced plant species and found that their beak length was retained when they developed on the alternative host. Ecological impact Directional selection can quickly lead to vast changes in allele frequencies in a population because of the cumulative nature of reproduction of the fittest. Because the main cause for directional selection is different and changing environmental pressures, rapidly changing environments, such as those affected by climate change, can cause drastic changes within populations. Diversity Limiting the number of genotypes in a certain population can be deleterious to the ecosystem as a whole by shrinking the potential gene pool. A low amount of genetic variation can lead to mass extinctions and endangered species because of the large impact one mutation can have on the entire population if there are only a few specific genes present throughout. Urban influence It is important to note the impact that humans have on genetic diversity as well, and to be aware of the ways to reduce harmful impacts on natural environments. Major roads, waterway pollution, and urbanization all cause environmental selection and could potentially result in changes in allele frequencies. Hunting may also play a role in directional selection, albeit more so in smaller populations. Timescale Typically directional selection acts strongly for short bursts and is not sustained over long periods of time. If it were sustained, a population might hit biological constraints such that it no longer responds to selection. However, it is possible for directional selection to take a very long time to find a local optimum on a fitness landscape. A possible example of long-term directional selection is the tendency of proteins to become more hydrophobic over time, and to have their hydrophobic amino acids more interspersed along the sequence. See also Adaptive evolution in the human genome Balancing selection Disruptive selection Frequency-dependent foraging by pollinators Negative selection (natural selection) Stabilizing selection Peppered moth evolution Fluctuating selection References Further reading Types of Selection Natural Selection Modern Theories of Evolution Selection
Directional selection
[ "Biology" ]
2,622
[ "Evolutionary processes", "Selection" ]
581,974
https://en.wikipedia.org/wiki/Coq%20%28software%29
Coq is an interactive theorem prover first released in 1989. It allows for expressing mathematical assertions, mechanically checks proofs of these assertions, helps find formal proofs, and extracts a certified program from the constructive proof of its formal specification. Coq works within the theory of the calculus of inductive constructions, a derivative of the calculus of constructions. Coq is not an automated theorem prover but includes automatic theorem proving tactics (procedures) and various decision procedures. The Association for Computing Machinery awarded Thierry Coquand, Gérard Huet, Christine Paulin-Mohring, Bruno Barras, Jean-Christophe Filliâtre, Hugo Herbelin, Chetan Murthy, Yves Bertot, and Pierre Castéran the 2013 ACM Software System Award for Coq. The name Coq is a wordplay on the name of Thierry Coquand and on the calculus of constructions (CoC), and follows the French computer science tradition of naming software after animals (coq in French meaning rooster). On October 11, 2023, the development team announced that Coq will be renamed The Rocq Prover in the coming months, and began updating the code base, website, and associated tools. Overview When viewed as a programming language, Coq implements a dependently typed functional programming model; when viewed as a logical system, it implements a higher-order type theory. The development of Coq has been supported since 1984 by the French Institute for Research in Computer Science and Automation (INRIA), now in collaboration with École Polytechnique, University of Paris-Sud, Paris Diderot University, and French National Centre for Scientific Research (CNRS). In the 1990s, École normale supérieure de Lyon (ENS Lyon) was also part of the project. The development of Coq was initiated by Gérard Huet and Thierry Coquand, and more than 40 people, mainly researchers, have contributed features to the core system since its inception. The implementation team has successively been coordinated by Gérard Huet, Christine Paulin-Mohring, Hugo Herbelin, and Matthieu Sozeau. Coq is mainly implemented in OCaml with a bit of C. The core system can be extended by way of a plug-in mechanism. The name means 'rooster' in French and stems from a French tradition of naming research development tools after animals. Up until 1991, Coquand was implementing a language called the calculus of constructions and it was simply called CoC then. In 1991, a new implementation based on the extended calculus of inductive constructions was begun and the name changed from CoC to Coq in an indirect reference to Coquand, who developed the calculus of constructions along with Gérard Huet and contributed to the calculus of inductive constructions with Christine Paulin-Mohring. Coq provides a specification language called Gallina ("hen" in Latin, Spanish, Italian and Catalan). Programs written in Gallina have the weak normalization property, implying that they always terminate. This is a distinctive property of the language, since infinite loops (non-terminating programs) are common in other programming languages, and is one way to avoid the halting problem. As an example, consider a proof of a lemma that taking the successor of a natural number flips its parity. The fold-unfold tactic introduced by Danvy is used to help keep the proof simple.

Ltac fold_unfold_tactic name := intros; unfold name; fold name; reflexivity.

Require Import Arith Nat Bool.

Fixpoint is_even (n : nat) : bool :=
  match n with
  | 0 => true
  | S n' => eqb (is_even n') false
  end.
Lemma fold_unfold_is_even_0:
  is_even 0 = true.
Proof.
  fold_unfold_tactic is_even.
Qed.

Lemma fold_unfold_is_even_S:
  forall n' : nat,
    is_even (S n') = eqb (is_even n') false.
Proof.
  fold_unfold_tactic is_even.
Qed.

Lemma successor_flips_evenness:
  forall n : nat,
    is_even n = negb (is_even (S n)).
Proof.
  intro n.
  rewrite -> (fold_unfold_is_even_S n).
  destruct (is_even n).
  * simpl. reflexivity.
  * simpl. reflexivity.
Qed.

Notable uses Four color theorem and SSReflect extension Georges Gonthier of Microsoft Research in Cambridge, England, and Benjamin Werner of INRIA used Coq to create a surveyable proof of the four color theorem, which was completed in 2002. Their work led to the development of the SSReflect ("Small Scale Reflection") package, which was a significant extension to Coq. Despite its name, most of the features added to Coq by SSReflect are general-purpose features and are not limited to the computational reflective programming style of proof. These features include:
Added convenient notations for irrefutable and refutable pattern matching, on inductive types with one or two constructors
Implicit arguments for functions applied to zero arguments, which is useful when programming with higher-order functions
Concise anonymous arguments
An improved set tactic with more powerful matching
Support for reflection
SSReflect 1.11 is freely available, dual-licensed under the open source CeCILL-B or CeCILL-2.0 license, and compatible with Coq 8.11. Other applications
CompCert: an optimizing compiler for almost all of the C programming language which is largely programmed and proven correct in Coq.
Disjoint-set data structure: correctness proof in Coq was published in 2007.
Feit–Thompson theorem: formal proof using Coq was completed in September 2012.
Busy beaver: The value of the 5-state winning busy beaver was discovered by Heiner Marxen and Jürgen Buntrock in 1989, but only proved to be the winning fifth busy beaver, stylized as BB(5), in 2024 using a proof in Coq.
Tactic language In addition to constructing Gallina terms explicitly, Coq supports the use of tactics written in the built-in language Ltac or in OCaml. These tactics automate the construction of proofs, carrying out trivial or obvious steps in proofs. Several tactics implement decision procedures for various theories. For example, the "ring" tactic decides the theory of equality modulo ring or semiring axioms via associative-commutative rewriting. For example, the following proof establishes a complex equality in the ring of integers in just one line of proof:

Require Import ZArith.
Open Scope Z_scope.
Goal forall a b c : Z,
  (a + b + c) ^ 2 =
    a * a + b ^ 2 + c * c + 2 * a * b + 2 * a * c + 2 * b * c.
  intros; ring.
Qed.

Built-in decision procedures are also available for the empty theory ("congruence"), propositional logic ("tauto"), quantifier-free linear integer arithmetic ("lia"), and linear rational/real arithmetic ("lra"). Further decision procedures have been developed as libraries, including one for Kleene algebras and another for certain geometric goals. 
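As an informal counterpart to the one-line ring proof above, the same identity can be spot-checked numerically in Python. Unlike Coq's ring tactic, which decides the equality once and for all from the ring axioms, the check below only samples finitely many integer triples; the sampling range is an arbitrary choice.

from itertools import product

# Numeric spot check of (a + b + c)^2 = a*a + b^2 + c*c + 2ab + 2ac + 2bc.
# This is testing, not proof: the Coq tactic establishes it for all integers.
for a, b, c in product(range(-5, 6), repeat=3):
    assert (a + b + c) ** 2 == a * a + b ** 2 + c * c + 2 * a * b + 2 * a * c + 2 * b * c
print("identity holds on all sampled triples")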
See also Calculus of constructions Curry–Howard correspondence Intuitionistic type theory List of proof assistants References External links , in English , source code repository JsCoq Interactive Online System – allows Coq to run in a web browser, with no need to install extra software Alectryon – a library to process Coq snippets embedded in documents, showing goals and messages for each Coq sentence Coq Wiki Mathematical Components library – widely used library of mathematical structures, part of which is the SSReflect proof language Constructive Coq Repository at Nijmegen Math Classes Textbooks The Coq'Art – a book on Coq by Yves Bertot and Pierre Castéran Certified Programming with Dependent Types – online and printed textbook by Adam Chlipala Software Foundations – online textbook by Benjamin C. Pierce et al. An introduction to small scale reflection in Coq – a tutorial on SSReflect by Georges Gonthier and Assia Mahboubi Modeling and Proving in Computational Type Theory Using the Coq Proof Assistant – a textbook by Gert Smolka used for a course in computational logic – see also course resources at Saarland University Tutorials Introduction to the Coq Proof Assistant – video lecture by Andrew Appel at Institute for Advanced Study Coq Video tutorials by Andrej Bauer Proof assistants Free theorem provers Dependently typed languages Educational math software OCaml software Free software programmed in OCaml Functional languages Programming languages created in 1984 1989 software Extensible syntax programming languages Articles with example OCaml code
Coq (software)
[ "Mathematics" ]
1,864
[ "Educational math software", "Mathematical software" ]
582,024
https://en.wikipedia.org/wiki/Pushout%20%28category%20theory%29
In category theory, a branch of mathematics, a pushout (also called a fibered coproduct or fibered sum or cocartesian square or amalgamated sum) is the colimit of a diagram consisting of two morphisms f : Z → X and g : Z → Y with a common domain. The pushout consists of an object P along with two morphisms X → P and Y → P that complete a commutative square with the two given morphisms f and g. In fact, the defining universal property of the pushout (given below) essentially says that the pushout is the "most general" way to complete this commutative square. Common notations for the pushout are and . The pushout is the categorical dual of the pullback. Universal property Explicitly, the pushout of the morphisms f and g consists of an object P and two morphisms i1 : X → P and i2 : Y → P such that the diagram commutes and such that (P, i1, i2) is universal with respect to this diagram. That is, for any other such triple (Q, j1, j2) for which the following diagram commutes, there must exist a unique u : P → Q also making the diagram commute: As with all universal constructions, the pushout, if it exists, is unique up to a unique isomorphism. Examples of pushouts Here are some examples of pushouts in familiar categories. Note that in each case, we are only providing a construction of an object in the isomorphism class of pushouts; as mentioned above, though there may be other ways to construct it, they are all equivalent. Suppose that X, Y, and Z as above are sets, and that f : Z → X and g : Z → Y are set functions. The pushout of f and g is the disjoint union of X and Y, where elements sharing a common preimage (in Z) are identified, together with the morphisms i1, i2 from X and Y, i.e. where ~ is the finest equivalence relation (cf. also this) such that f(z) ~ g(z) for all z in Z. In particular, if X and Y are subsets of some larger set W and Z is their intersection, with f and g the inclusion maps of Z into X and Y, then the pushout can be canonically identified with the union . A specific case of this is the cograph of a function. If is a function, then the cograph of a function is the pushout of along the identity function of . In elementary terms, the cograph is the quotient of by the equivalence relation generated by identifying with . A function may be recovered by its cograph because each equivalence class in contains precisely one element of . Cographs are dual to graphs of functions since the graph may be defined as the pullback of along the identity of . The construction of adjunction spaces is an example of pushouts in the category of topological spaces. More precisely, if Z is a subspace of Y and g : Z → Y is the inclusion map we can "glue" Y to another space X along Z using an "attaching map" f : Z → X. The result is the adjunction space , which is just the pushout of f and g. More generally, all identification spaces may be regarded as pushouts in this way. A special case of the above is the wedge sum or one-point union; here we take X and Y to be pointed spaces and Z the one-point space. Then the pushout is , the space obtained by gluing the basepoint of X to the basepoint of Y. In the category of abelian groups, pushouts can be thought of as "direct sum with gluing" in the same way we think of adjunction spaces as "disjoint union with gluing". The zero group is a subgroup of every group, so for any abelian groups A and B, we have homomorphisms and . The pushout of these maps is the direct sum of A and B. 
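The set-theoretic description above (the disjoint union of X and Y glued along the images of Z) can be computed directly for finite sets. The Python sketch below uses a small union-find structure to generate the equivalence relation f(z) ~ g(z); the function and variable names are illustrative choices and the example data is made up, but the construction follows the definition given in the text.

def pushout(X, Y, Z, f, g):
    """Pushout of f: Z -> X and g: Z -> Y in the category of finite sets.
    Elements are tagged ('X', x) / ('Y', y) to form the disjoint union,
    then glued along f(z) ~ g(z) for every z in Z."""
    parent = {('X', x): ('X', x) for x in X}
    parent.update({('Y', y): ('Y', y) for y in Y})

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    for z in Z:                             # impose the identification f(z) ~ g(z)
        parent[find(('X', f(z)))] = find(('Y', g(z)))

    classes = {}
    for a in parent:
        classes.setdefault(find(a), set()).add(a)
    return list(classes.values())           # the object P, as equivalence classes

# Example: Z is the intersection of two subsets X and Y of a larger set, with
# f and g the inclusions; the pushout then recovers their union, as stated above.
X, Y, Z = {1, 2, 3}, {3, 4}, {3}
for cls in pushout(X, Y, Z, f=lambda z: z, g=lambda z: z):
    print(sorted(cls))
# Four classes are printed; ('X', 3) and ('Y', 3) land in the same class.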
Generalizing to the case where f and g are arbitrary homomorphisms from a common domain Z, one obtains for the pushout a quotient group of the direct sum; namely, we mod out by the subgroup consisting of pairs (f(z), −g(z)). Thus we have "glued" along the images of Z under f and g. A similar approach yields the pushout in the category of R-modules for any ring R. In the category of groups, the pushout is called the free product with amalgamation. It shows up in the Seifert–van Kampen theorem of algebraic topology (see below). In CRing, the category of commutative rings (a full subcategory of the category of rings), the pushout is given by the tensor product of rings with the morphisms and that satisfy . In fact, since the pushout is the colimit of a span and the pullback is the limit of a cospan, we can think of the tensor product of rings and the fibered product of rings (see the examples section) as dual notions to each other. In particular, let A, B, and C be objects (commutative rings with identity) in CRing and let f : C → A and g : C → B be morphisms (ring homomorphisms) in CRing. Then the tensor product is: See Free product of associative algebras for the case of non-commutative rings. In the multiplicative monoid of positive integers , considered as a category with one object, the pushout of two positive integers m and n is just the pair , where the numerators are both the least common multiple of m and n. Note that the same pair is also the pullback. Properties Whenever the pushout A ⊔C B exists, then B ⊔C A exists as well and there is a natural isomorphism A ⊔C B ≅ B ⊔C A. In an abelian category all pushouts exist, and they preserve cokernels in the following sense: if (P, i1, i2) is the pushout of f : Z → X and g : Z → Y, then the natural map coker(f) → coker(i2) is an isomorphism, and so is the natural map coker(g) → coker(i1). There is a natural isomorphism (A ⊔C B) ⊔B D ≅ A ⊔C D. Explicitly, this means: if maps f : C → A, g : C → B and h : B → D are given and the pushout of f and g is given by i : A → P and j : B → P, and the pushout of j and h is given by k : P → Q and l : D → Q, then the pushout of f and hg is given by ki : A → Q and l : D → Q. Graphically this means that two pushout squares, placed side by side and sharing one morphism, form a larger pushout square when ignoring the inner shared morphism. Construction via coproducts and coequalizers Pushouts are equivalent to coproducts and coequalizers (if there is an initial object) in the sense that: Coproducts are a pushout from the initial object, and the coequalizer of f, g : X → Y is the pushout of [f, g] and [1X, 1X], so if there are pushouts (and an initial object), then there are coequalizers and coproducts; Pushouts can be constructed from coproducts and coequalizers, as described below (the pushout is the coequalizer of the maps to the coproduct). All of the above examples may be regarded as special cases of the following very general construction, which works in any category C satisfying: For any objects A and B of C, their coproduct exists in C; For any morphisms j and k of C with the same domain and the same target, the coequalizer of j and k exists in C. In this setup, we obtain the pushout of morphisms f : Z → X and g : Z → Y by first forming the coproduct of the targets X and Y. We then have two morphisms from Z to this coproduct. We can either go from Z to X via f, then include into the coproduct, or we can go from Z to Y via g, then include into the coproduct. 
The pushout of f and g is the coequalizer of these new maps. Application: the Seifert–van Kampen theorem The Seifert–van Kampen theorem answers the following question. Suppose we have a path-connected space , covered by path-connected open subspaces and whose intersection is also path-connected. (Assume also that the basepoint lies in the intersection of A and B.) If we know the fundamental groups of , and can we recover the fundamental group of ? The answer is yes, provided we also know the induced homomorphisms and The theorem then says that the fundamental group of is the pushout of these two induced maps. Of course, is the pushout of the two inclusion maps of into and . Thus we may interpret the theorem as confirming that the fundamental group functor preserves pushouts of inclusions. We might expect this to be simplest when is simply connected, since then both homomorphisms above have trivial domain. Indeed, this is the case, since then the pushout (of groups) reduces to the free product, which is the coproduct in the category of groups. In a most general case we will be speaking of a free product with amalgamation. There is a detailed exposition of this, in a slightly more general setting (covering groupoids) in the book by J. P. May listed in the references. References May, J. P. A concise course in algebraic topology. University of Chicago Press, 1999. An introduction to categorical approaches to algebraic topology: the focus is on the algebra, and assumes a topological background. Ronald Brown "Topology and Groupoids" pdf available Gives an account of some categorical methods in topology, use the fundamental groupoid on a set of base points to give a generalisation of the Seifert-van Kampen Theorem. Philip J. Higgins, "Categories and Groupoids" free download Explains some uses of groupoids in group theory and topology. References External links pushout in nLab Limits (category theory)
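In symbols, the pushout statement of the theorem described above is commonly written (with the basepoint x0, suppressed in the prose, shown explicitly) as

\[
  \pi_1(X, x_0) \;\cong\; \pi_1(A, x_0) \ast_{\pi_1(A \cap B,\, x_0)} \pi_1(B, x_0),
\]

that is, the free product with amalgamation, which is exactly the pushout in the category of groups of the two homomorphisms induced by the inclusions of A ∩ B into A and into B.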
Pushout (category theory)
[ "Mathematics" ]
2,237
[ "Mathematical structures", "Category theory", "Limits (category theory)" ]
582,075
https://en.wikipedia.org/wiki/Lev%20Pontryagin
Lev Semyonovich Pontryagin (, also written Pontriagin or Pontrjagin, first name sometimes anglicized as Leon) (3 September 1908 – 3 May 1988) was a Soviet mathematician. Completely blind from the age of 14, he made major discoveries in a number of fields of mathematics, including algebraic topology, differential topology and optimal control. Early life and career He was born in Moscow and lost his eyesight completely due to an unsuccessful eye surgery after a primus stove explosion when he was 14. His mother Tatyana Andreyevna, who did not know mathematical symbols, read mathematical books and papers (notably those of Heinz Hopf, J. H. C. Whitehead, and Hassler Whitney) to him, and later worked as his secretary. His mother used alternative names for math symbols, such as "tails up" for the set-union symbol . In 1925 he entered Moscow State University, where he was strongly influenced by the lectures of Pavel Alexandrov who would become his doctoral thesis advisor. After graduating in 1929, he obtained a position at Moscow State University. In 1934 he joined the Steklov Institute in Moscow. In 1970 he became vice president of the International Mathematical Union. Work Pontryagin worked on duality theory for homology while still a student. He went on to lay foundations for the abstract theory of the Fourier transform, now called Pontryagin duality. Using these tools, he was able to solve the case of Hilbert's fifth problem for abelian groups in 1934. In 1935, he was able to compute the homology groups of the classical compact Lie groups, which he would later call his greatest achievement. With René Thom, he is regarded as one of the co-founders of cobordism theory, and co-discoverers of the central idea of this theory, that framed cobordism and stable homotopy are equivalent. This led to the introduction around 1940 of a theory of certain characteristic classes, now called Pontryagin classes, designed to vanish on a manifold that is a boundary. In 1942 he introduced the cohomology operations now called Pontryagin squares. Moreover, in operator theory there are specific instances of Krein spaces called Pontryagin spaces. Starting in 1952, he worked in optimal control theory. His maximum principle is fundamental to the modern theory of optimization. He also introduced the idea of a bang–bang principle, to describe situations where the applied control at each moment is either the maximum positive 'steer', or the maximum negative 'steer'. Pontryagin authored several influential monographs as well as popular textbooks in mathematics. Pontryagin's students include Dmitri Anosov, Vladimir Boltyansky, Revaz Gamkrelidze, Yevgeny Mishchenko, Mikhail Postnikov, Vladimir Rokhlin, and Mikhail Zelikin. Controversy and antisemitism allegations Pontryagin participated in a few notorious political campaigns in the Soviet Union. In 1930, he and several other young members of the Moscow Mathematical Society publicly denounced as counter-revolutionary the Society's head Dmitri Egorov, who openly supported the Russian Orthodox Church and had recently been arrested. They then proceeded to follow their plan of reorganizing the Society. Pontryagin was accused of anti-Semitism on several occasions. For example, he attacked Nathan Jacobson for being a "mediocre scientist" representing the "Zionism movement", while both men were vice-presidents of the International Mathematical Union. 
When a prominent Soviet Jewish mathematician, Grigory Margulis, was selected by the IMU to receive the Fields Medal at the upcoming 1978 ICM, Pontryagin, who was a member of the executive committee of the IMU at the time, vigorously objected. Although the IMU stood by its decision to award Margulis the Fields Medal, Margulis was denied a Soviet exit visa by the Soviet authorities and was unable to attend the 1978 ICM in person. Pontryagin rejected charges of antisemitism in an article published in Science in 1979. In his memoirs Pontryagin claims that he struggled with Zionism, which he considered a form of racism. Publications (translated by Emma Lehmer) 1952 - Foundations of Combinatorial Topology (translated from 1947 original Russian edition) 2015 Dover reprint 1962 - Ordinary Differential Equations (translated from Russian by Leonas Kacinskas and Walter B. Counts) 1962 - with Vladimir Boltyansky, Revaz Gamkrelidze, and : The Mathematical Theory of Optimal Processes See also Andronov–Pontryagin criterion for planar dynamical systems Kuratowski's theorem, also called the Pontryagin–Kuratowski theorem, on planar graphs Pontryagin class Pontryagin duality Pontryagin's maximum principle Notes External links Autobiography of Pontryagin (in Russian) Kutateladze S. S., Sic Transit... or Heroes, Villains, and Rights of Memory. Kutateladze S. S., The Tragedy of Mathematics in Russia 1908 births 1988 deaths 20th-century Russian mathematicians Mathematicians from Moscow Academic staff of Moscow State University Full Members of the USSR Academy of Sciences Heroes of Socialist Labour Recipients of the Lenin Prize Recipients of the Order of the Badge of Honour Recipients of the Order of Lenin Recipients of the Order of the October Revolution Recipients of the Order of the Red Banner of Labour Recipients of the Stalin Prize Recipients of the USSR State Prize Blind scholars and academics Control theorists Scientists with disabilities Topologists Russian blind people Soviet blind people Soviet mathematicians Burials at Novodevichy Cemetery
Lev Pontryagin
[ "Mathematics", "Engineering" ]
1,139
[ "Topologists", "Topology", "Control engineering", "Control theorists" ]
582,127
https://en.wikipedia.org/wiki/Antenna%20tuner
An antenna tuner, a matchbox, transmatch, antenna tuning unit (ATU), antenna coupler, or feedline coupler is a device connected between a radio transmitter or receiver and its antenna to improve power transfer between them by matching the impedance of the radio to the antenna's feedline. Antenna tuners are particularly important for use with transmitters. Transmitters feed power into a resistive load, very often 50 ohms, for which the transmitter is optimally designed for power output, efficiency, and low distortion. If the load seen by the transmitter departs from this design value due to improper tuning of the antenna/feedline combination the power output will change, distortion may occur and the transmitter may overheat. ATUs are a standard part of almost all radio transmitters; they may be a circuit included inside the transmitter itself or a separate piece of equipment connected between the transmitter and the antenna. In transmitters in which the antenna is mounted separate from the transmitter and connected to it by a transmission line (feedline), there may be a second ATU (or matching network) at the antenna to match the impedance of the antenna to the transmission line. In low power transmitters with attached antennas, such as cell phones and walkie-talkies, the ATU is fixed to work with the antenna. In high power transmitters like radio stations, the ATU is adjustable to accommodate changes in the antenna or transmitter, and adjusting the ATU to match the transmitter to the antenna is an important procedure done after any changes to these components have been made. This adjustment is done with an instrument called a SWR meter. In radio receivers ATUs are not so important, because in the low frequency part of the radio spectrum the signal to noise ratio (SNR) is dominated by atmospheric noise. It does not matter if the impedance of the antenna and receiver are mismatched so some of the incoming power from the antenna is reflected and does not reach the receiver, because the signal can be amplified to make up for it. However in high frequency receivers the receiver's SNR is dominated by noise in the receiver's front end, so it is important that the receiving antenna is impedance-matched to the receiver to give maximum signal amplitude in the front end stages, to overcome noise. Overview An antenna's impedance is different at different frequencies. An antenna tuner matches a radio with a fixed impedance (typically 50 Ohms for modern transceivers) to the combination of the feedline and the antenna; useful when the impedance seen at the input end of the feedline is unknown, complex, or otherwise different from the transceiver. Coupling through an ATU allows the use of one antenna on a broad range of frequencies. However, despite its name, an antenna tuner ' actually matches the transmitter only to the complex impedance reflected back to the input end of the feedline. If both tuner and transmission line were lossless, tuning at the transmitter end would indeed produce a match at every point in the transmitter-feedline-antenna system. However, in practical systems feedline losses limit the ability of the antenna 'tuner' to match the antenna or change its resonant frequency. If the loss of power is very low in the line carrying the transmitter's signal into the antenna, a tuner at the transmitter end can produce a worthwhile degree of matching and tuning for the antenna and feedline network as a whole. 
With lossy feedlines (such as commonly used 50 Ohm coaxial cable) maximum power transfer only occurs if matching is done at both ends of the line. If there is still a high SWR (multiple reflections) in the feedline beyond the ATU, any loss in the feedline is multiplied several times by the transmitted waves reflecting back and forth between the tuner and the antenna, heating the wire instead of sending out a signal. Even with a matching unit at both ends of the feedline – the near ATU matching the transmitter to the feedline and the remote ATU matching the feedline to the antenna – losses in the circuitry of the two ATUs will reduce power delivered to the antenna. Therefore, operating an antenna far from its design frequency and compensating with a transmatch between the transmitter and the feedline is not as efficient as using a resonant antenna with a matched-impedance feedline, nor as efficient as a matched feedline from the transmitter to a remote antenna tuner attached directly to the antenna. Broad band matching methods Transformers, autotransformers, and baluns are sometimes incorporated into the design of narrow band antenna tuners and antenna cabling connections. They will all usually have little effect on the resonant frequency of either the antenna or the narrow band transmitter circuits, but can widen the range of impedances that the antenna tuner can match, and/or convert between balanced and unbalanced cabling where needed. Ferrite transformers Solid-state power amplifiers operating from 1–30 MHz typically use one or more wideband transformers wound on ferrite cores. MOSFETs and bipolar junction transistors are designed to operate into a low impedance, so the transformer primary typically has a single turn, while the 50 Ohm secondary will have 2 to 4 turns. This feedline system design has the advantage of reducing the retuning required when the operating frequency is changed. A similar design can match an antenna to a transmission line; For example, many TV antennas have a 300 Ohm impedance and feed the signal to the TV via a 75 Ohm coaxial line. A small ferrite core transformer makes the broad band impedance transformation. This transformer does not need, nor is it capable of adjustment. For receive-only use in a TV the small SWR variation with frequency is not a major problem. It should be added that many ferrite based transformers perform a balanced to unbalanced transformation along with the impedance change. When the balanced to unbalanced function is present these transformers are called a balun (otherwise an unun). The most common baluns have either a 1:1 or a 1:4 impedance transformation. Autotransformers There are several designs for impedance matching using an autotransformer, which is a single-wire transformer with different connection points or taps spaced along the windings. They are distinguished mainly by their impedance transform ratio (1:1, 1:4, 1:9, etc., the square of the winding ratio), and whether the input and output sides share a common ground, or are matched from a cable that is grounded on one side (unbalanced) to an ungrounded (usually balanced) cable. When autotransformers connect balanced and unbalanced lines they are called baluns, just as two-winding transformers. When two differently-grounded cables or circuits must be connected but the grounds kept independent, a full, two-winding transformer with the desired ratio is used instead. 
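The square-law relationship described above (impedance ratio equal to the square of the turns or winding ratio) is easy to check numerically. The short Python helper below is an idealized sketch that ignores core loss, leakage inductance, and frequency limits; the 300 ohm to 75 ohm TV example is the one from the text, while the function names are arbitrary.

import math

def impedance_ratio(turns_ratio):
    """Ideal transformer: impedance transforms as the square of the turns ratio."""
    return turns_ratio ** 2

def turns_ratio_for(z_in, z_out):
    """Winding ratio needed to present z_in at the input when z_out loads the output."""
    return math.sqrt(z_in / z_out)

# TV antenna example from the text: a 300 ohm antenna feeding 75 ohm coax needs a
# 2:1 winding ratio, i.e. a 4:1 impedance transformation (a "4:1 balun" when it
# also converts between balanced and unbalanced lines).
print(turns_ratio_for(300, 75))                  # 2.0
print(impedance_ratio(2))                        # 4

# Autotransformer taps at 1x, 2x, and 3x a base winding give the 1:1, 1:4, and 1:9
# impedance steps quoted above.
print([impedance_ratio(n) for n in (1, 2, 3)])   # [1, 4, 9]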
The circuit pictured at the right has three identical windings wrapped in the same direction around either an "air" core (for very high frequencies) or ferrite core (for middle, or low frequencies). The three equal windings shown are wired for a common ground shared by two unbalanced lines (so this design is called an unun), and can be used as 1:1, 1:4, or 1:9 impedance match, depending on the tap chosen. (The same windings could be connected differently to make a balun instead.) For example, if the right-hand side is connected to a resistive load of 10 Ohms, the user can attach a source at any of the three ungrounded terminals on the left side of the autotransformer to get a different impedance. Notice that on the left side, the line with more windings measures greater impedance for the same 10 Ohm load on the right. Narrow band design The "narrow-band" methods described below cover a very much smaller span of frequencies, by comparison with the broadband methods described above. Antenna matching methods that use transformers tend to cover a wide range of frequencies. A single, typical, commercially available balun can cover frequencies from 3.5–30.0 MHz, or nearly the entire shortwave radio band. Matching to an antenna using a cut segment of transmission line (described below) is perhaps the most efficient of all matching schemes in terms of electrical power, but typically can only cover a range about 3.5–3.7 MHz wide – a very small range indeed, compared to a broadband balun. Antenna coupling or feedline matching circuits are also narrowband for any single setting, but can be re-tuned more conveniently. However they are perhaps the least efficient in terms of power-loss (aside from having no impedance matching at all!). Transmission line antenna tuning methods The insertion of a special section of transmission line, whose characteristic impedance differs from that of the main line, can be used to match the main line to the antenna. An inserted line with the proper impedance and connected at the proper location can perform complicated matching effects with very high efficiency, but spans a very limited frequency range. The simplest example this method is the quarter-wave impedance transformer formed by a section of mismatched transmission line. If a quarter-wavelength of 75 Ohm coaxial cable is linked to a 50 Ohm load, the SWR in the 75 Ohm quarter wavelength of line can be calculated as 75Ω / 50Ω = 1.5; the quarter-wavelength of line transforms the mismatched impedance to 112.5 Ohms (75 Ohms × 1.5 = 112.5 Ohms). Thus this inserted section matches a 112 Ohm antenna to a 50 Ohm main line. The  wavelength coaxial transformer is a useful way to match 50 to 75 Ohms using the same general method. The theoretical basis is discussion by the inventor, and wider application of the method is found here: Branham, P. (1959). A Convenient Transformer for matching Co-axial lines. Geneva: CERN. A second common method is the use of a stub: A shorted, or open section of line is connected in parallel with the main line. With coax this is done using a 'T'-connector. The length of the stub and its location can be chosen so as to produce a matched line below the stub, regardless of the complex impedance or SWR of the antenna itself. The J-pole antenna is an example of an antenna with a built-in stub match. Basic lumped circuit matching using the L network The basic circuit required when lumped capacitances and inductors are used is shown below. 
This circuit is important in that many automatic antenna tuners use it, and also because more complex circuits can be analyzed as groups of L-networks. This is called an L-network not because it contains an inductor (in fact some L-networks consist of two capacitors), but because the two components are at right angles to each other, having the shape of a rotated and sometimes reversed English letter 'L'. The 'T' ("Tee") network and the π ("Pi") network also have a shape similar to the English and Greek letters they are named after. This basic network is able to act as an impedance transformer. If the output has an impedance consisting of resistance Rload and reactance j Xload, while the input is to be attached to a source which has an impedance of Rsource resistance and j Xsource reactance, then, in the simplest case of purely resistive terminations (Xload = Xsource = 0) with Rsource < Rload, the required reactances are XL = √(Rsource × (Rload − Rsource)) for the series element and XC = Rload × √(Rsource / (Rload − Rsource)) for the parallel element; any source or load reactance is first absorbed into the adjacent network element. In this example circuit, XL and XC can be swapped. All the ATU circuits below create this network, which exists between systems with different impedances. For instance, if the source has a resistive impedance of 50 Ω and the load has a resistive impedance of 1000 Ω:
XL = √(Rsource × (Rload − Rsource)) = √(50 × 950) ≈ 217.94 Ω
XC = Rload × √(Rsource / (Rload − Rsource)) = 1000 × √(50 / 950) ≈ 229.42 Ω
If the frequency is 28 MHz, these reactances correspond to
L = XL / (2πf) = 217.94 Ω / (2π × 28 MHz) ≈ 1.24 µH
C = 1 / (2πf × XC) = 1 / (2π × 28 MHz × 229.42 Ω) ≈ 24.8 pF
Theory and practice A parallel network, consisting of a resistive element (1000 Ω) and a reactive element (−j 229.415 Ω), will have the same impedance and power factor as a series network consisting of resistive (50 Ω) and reactive elements (−j 217.94 Ω). By adding another element in series (which has a reactive impedance of +j 217.94 Ω), the impedance is 50 Ω (resistive). Types of L networks and their use The L-network can have eight different configurations, six of which are shown here. The two missing configurations are the same as the bottom row, but with the parallel element (wires vertical) on the right side of the series element (wires horizontal), instead of on the left, as shown. In the discussion of the diagrams that follows, the in connector comes from the transmitter or "source"; the out connector goes to the antenna or "load". The general rule (with some exceptions, described below) is that the series element of an L-network goes on the side with the lowest impedance. So, for example, the three circuits in the left column and the two in the bottom row, which have the series (horizontal) element on the out side, are generally used for stepping up from a low-impedance input (transmitter) to a high-impedance output (antenna), similar to the example analyzed in the section above. The top two circuits in the right column, with the series (horizontal) element on the in side, are generally useful for stepping down from a higher input to a lower output impedance. The general rule only applies to loads that are mainly resistive, with very little reactance. In cases where the load is highly reactive – such as an antenna fed with a signal whose frequency is far away from any resonance – the opposite configuration may be required. If far from resonance, the bottom two step-down (high-in to low-out) circuits would instead be used to connect for a step up (low-in to a high-out impedance that is mostly reactance). The low- and high-pass versions of the four circuits shown in the top two rows use only one inductor and one capacitor. 
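To tie the worked 50 Ω to 1000 Ω example together, the sketch below computes the same L-network reactances and the corresponding inductor and capacitor values at a chosen frequency. It assumes the purely resistive, series-inductor/shunt-capacitor case discussed above; the function name and the way results are reported are arbitrary choices, not part of any standard tool.

import math

def l_network(r_source, r_load, freq_hz):
    """Series-L / shunt-C L-network matching a smaller resistive source to a
    larger resistive load (r_source < r_load); returns reactances and values."""
    q = math.sqrt(r_load / r_source - 1.0)        # loaded Q is fixed by the ratio
    x_series = q * r_source                       # series (inductive) reactance, ohms
    x_parallel = r_load / q                       # parallel (capacitive) reactance, ohms
    w = 2 * math.pi * freq_hz
    return {
        "X_series_ohm": x_series,
        "X_parallel_ohm": x_parallel,
        "L_microhenry": x_series / w * 1e6,
        "C_picofarad": 1.0 / (w * x_parallel) * 1e12,
    }

# The example from the text: 50 ohm transmitter, 1000 ohm load, 28 MHz.
for name, value in l_network(50, 1000, 28e6).items():
    print(name, round(value, 2))
# X_series ~ 217.94 ohm, X_parallel ~ 229.42 ohm, L ~ 1.24 uH, C ~ 24.78 pF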
Normally, the low-pass would be preferred with a transmitter, in order to attenuate harmonics, but the high-pass configuration may be chosen if the components are more conveniently obtained, or if the radio already contains an internal low-pass filter, or if attenuation of low frequencies is desirable – for example when a local AM station broadcasting on a medium frequency may be overloading a high frequency receiver. The Low R, high C circuit is shown feeding a short vertical antenna, such as would be the case for a compact, mobile antenna or otherwise on frequencies below an antenna's lowest natural resonant frequency. Here the inherent capacitance of a short, random wire antenna is so high that the L-network is best realized with two inductors, instead of aggravating the problem by using a capacitor. The Low R, high L circuit is shown feeding a small loop antenna. Below resonance this type of antenna has so much inductance, that more inductance from adding a coil would make the reactance even worse. Therefore, the L-network is composed of two capacitors. An L-network is the simplest circuit that will achieve the desired transformation; for any one given antenna and frequency, once a circuit is selected from the eight possible configurations (of which six are shown above) only one set of component values will match the in impedance to the out impedance. In contrast, the circuits described below all have three or more components, and hence have many more choices for inductance and capacitance that will produce an impedance match. The radio operator must experiment, test, and use judgement to choose among the many adjustments that produce the same impedance match. Antenna system losses Loss in Antenna tuners Every means of impedance match will introduce some power loss. This will vary from a few percent for a transformer with a ferrite core, to 50% or more for a complex ATU that is improperly tuned or working at the limits of its tuning range. With the narrow band tuners, the L-network has the lowest loss, partly because it has the fewest components, but mainly because it necessarily operates at the lowest possible for a given impedance transformation. With the L-network, the loaded is not adjustable, but is fixed midway between the source and load impedances. Since most of the loss in practical tuners will be in the coil, choosing either the low-pass or high-pass network may reduce the loss somewhat. The L-network using only capacitors will have the lowest loss, but this network only works where the load impedance is very inductive, making it a good choice for a small loop antenna. Inductive impedance also occurs with straight-wire antennas used at frequencies slightly above a resonant frequency, where the antenna is too long – for example, between a quarter and a half wave long at the operating frequency. However, problematic straight-wire antennas are typically too short for the frequency in use. With the high-pass T-network, the loss in the tuner can vary from a few percent – if tuned for lowest loss – to over 50% if the tuner is not properly adjusted. Using the maximum available capacitance will give less loss, than if one simply tunes for a match without regard for the settings. This is because using more capacitance means using fewer inductor turns, and the loss is mainly in the inductor. 
With the SPC tuner the losses will be somewhat higher than with the T-network, since the added capacitance across the inductor will shunt some reactive current to ground which must be cancelled by additional current in the inductor. The trade-off is that the effective inductance of the coil is increased, thus allowing operation at lower frequencies than would otherwise be possible. If additional filtering is desired, the inductor can be deliberately set to larger values, thus providing a partial band pass effect. Either the high-pass T, low-pass π, or the SPC tuner can be adjusted in this manner. The additional attenuation at harmonic frequencies can be increased significantly with only a small percentage of additional loss at the tuned frequency. When adjusted for minimum loss, the SPC tuner will have better harmonic rejection than the high-pass T due to its internal tank circuit. Either type is capable of good harmonic rejection if a small additional loss is acceptable. The low-pass π has exceptional harmonic attenuation at any setting, including the lowest-loss. ATU location An ATU will be inserted somewhere along the line connecting the radio transmitter or receiver to the antenna. The antenna feedpoint is usually high in the air (for example, a dipole antenna) or far away (for example, an end-fed random wire antenna). A transmission line, or feedline, must carry the signal between the transmitter and the antenna. The ATU can be placed anywhere along the feedline: at the transmitter, at the antenna, or somewhere in between. Antenna tuning is best done as close to the antenna as possible to minimize loss, increase bandwidth, and reduce voltage and current on the transmission line. Also, when the information being transmitted has frequency components whose wavelength is a significant fraction of the electrical length of the feed line, distortion of the transmitted information will occur if there are standing waves on the line. Analog TV and FM stereo broadcasts are affected in this way. For those modes, matching at the antenna is required. When possible, an automatic or remotely-controlled tuner in a weather-proof case at or near the antenna is convenient and makes for an efficient system. With such a tuner, it is possible to match a wide range of antennas (including stealth antennas).SGC World: Smart Tuners for Stealth Antennas. When the ATU must be located near the radio for convenient adjustment, any significant SWR will increase the loss in the feedline. For that reason, when using an ATU at the transmitter, low-loss, high-impedance feedline is a great advantage (open-wire line, for example). A short length of low-loss coaxial line is acceptable, but with longer lossy lines the additional loss due to SWR becomes very high. It is very important to remember that when matching the transmitter to the line, as is done when the ATU is near the transmitter, there is no change in the SWR in the feedline. The backlash currents reflected from the antenna are retro-reflected by the ATU – usually several times between the two – and so are invisible on the transmitter-side of the ATU. The result of the multiple reflections is compounded loss, higher voltage or higher currents, and narrowed bandwidth, none of which can be corrected by the ATU. Standing wave ratio It is a common misconception that a high standing wave ratio (SWR) per se causes loss. 
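As a quick numerical reference for the discussion that follows (an ideal, lossless line and a purely passive mismatch are assumed), the fraction of forward power reflected by a given SWR is easy to compute:

def reflected_fraction(swr):
    gamma = (swr - 1) / (swr + 1)    # magnitude of the reflection coefficient
    return gamma ** 2

for s in (1.5, 2.0, 3.0):
    r = reflected_fraction(s)
    print(f"SWR {s}:1 -> {r * 100:.1f} % reflected, {(1 - r) * 100:.1f} % delivered")
# an SWR of 2:1 reflects about 11 % and delivers about 89 %, the figures quoted below
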
A well-adjusted ATU feeding an antenna through a low-loss line may have only a small percentage of additional loss compared with an intrinsically matched antenna, even with a high SWR (4:1, for example). An ATU sitting beside the transmitter just re-reflects energy reflected from the antenna ("backlash current") back yet again along the feedline to the antenna ("retro-reflection"). High losses arise from RF resistance in the feedline and antenna, and those multiple reflections due to high SWR cause feedline losses to be compounded. Using low-loss, high-impedance feedline with an ATU results in very little loss, even with multiple reflections. However, if the feedline-antenna combination is 'lossy' then an identical high SWR may lose a considerable fraction of the transmitter's power output. High impedance lines – such as most parallel-wire lines – carry power mostly as high voltage rather than high current, and current alone determines the power lost to line resistance. So despite high SWR, very little power is lost in high-impedance line compared low-impedance line – typical coaxial cable, for example. For that reason, radio operators can be more casual about using tuners with high-impedance feedline. Without an ATU, the SWR from a mismatched antenna and feedline can present an improper load to the transmitter, causing distortion and loss of power or efficiency with heating and/or burning of the output stage components. Modern solid state transmitters will automatically reduce power when high SWR is detected, so some solid-state power stages only produce weak signals if the SWR rises above 1.5 to 1. Were it not for that problem, even the losses from an SWR of 2:1 could be tolerated, since only 11 percent of transmitted power would be reflected and 89 percent sent out through to the antenna. So the main loss of output power with high SWR is due to the transmitter "backing off" its output when challenged with backlash current. Tube transmitters and amplifiers usually have an adjustable output network that can feed mismatched loads up to perhaps 3:1 SWR without trouble. In effect the built-in π-network of the transmitter output stage acts as an ATU. Further, since tubes are electrically robust (even though mechanically fragile), tube-based circuits can tolerate very high backlash current without damage. Broadcast Applications AM broadcast transmitters One of the oldest applications for antenna tuners is in AM and shortwave broadcasting transmitters. AM transmitters usually use a vertical antenna (tower) which can be from 0.20 to 0.68 wavelengths long. At the base of the tower an ATU is used to match the antenna to the 50 Ohm transmission line from the transmitter. The most commonly used circuit is a T-network, using two series inductors with a shunt capacitor between them. When multiple towers are used the ATU network may also provide for a phase adjustment so that the currents in each tower can be phased relative to the others to produce a desired pattern. These patterns are often required by law to include nulls in directions that could produce interference as well as to increase the signal in the target area. Adjustment of the ATUs in a multitower array is a complex and time consuming process requiring considerable expertise. High-power shortwave transmitters For International Shortwave (50 kW and above), frequent antenna tuning is done as part of frequency changes which may be required on a seasonal or even a daily basis. 
Modern shortwave transmitters typically include built-in impedance-matching circuitry for SWR up to 2:1, and can adjust their output impedance within 15 seconds. The matching networks in transmitters sometimes incorporate a balun, or an external one can be installed at the transmitter in order to feed a balanced line. Balanced transmission lines of 300 Ohms or more were more-or-less standard for all shortwave transmitters and antennas in the past, even by amateurs. Most shortwave broadcasters have continued to use high-impedance feeds even before the advent of automatic impedance matching. The most commonly used shortwave antennas for international broadcasting are the HRS antenna (curtain array), which covers a 2 to 1 frequency range, and the log-periodic antenna, which covers up to an 8 to 1 frequency range. Within that range, the SWR will vary, but is usually kept below 1.7 to 1 – within the range of SWR that can be tuned by the antenna matching built into many modern transmitters. Hence, when feeding these antennas, a modern transmitter will be able to tune itself as needed to match at any frequency. Automatic antenna tuning Automatic antenna tuning is used in flagship mobile phones, transceivers for amateur radio, and in land mobile, marine, and tactical HF radio transceivers. Each antenna tuning system (AT) shown in the figure has an "antenna port", which is directly or indirectly coupled to an antenna, and another port, referred to as "radio port" (or as "user port"), for transmitting and / or receiving radio signals through the AT and the antenna. Each AT shown in the figure is a single-antenna-port (SAP) AT, but a multiple-antenna-port (MAP) AT may be needed for MIMO radio transmission. Several control schemes can be used in a radio transceiver or transmitter to automatically adjust an antenna tuner (AT). The control schemes are based on one of the two configurations, (a) and (b), shown in the diagram. For both configurations, the transmitter comprises: an antenna; an antenna tuner / matching network (AT); a sensing unit (SU); a control unit (CU); and a transmitter and signal processing unit (TSPU). The TSPU incorporates all the parts of the transmitter not otherwise shown in the diagram. The TX port of the TSPU delivers a test signal. The SU delivers to the TSPU one or more output signals indicating the response of one or more electrical variables (such as voltage, current, or incident/forward voltage) to the test signal. The response is sensed at the radio port in the case of configuration (a), or at the antenna port in the case of configuration (b). Note that neither configuration (a) nor (b) is ideal, since the line between the antenna and the AT attenuates SWR; the response to a test signal is most accurately tested at or near the antenna feedpoint. 
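As a concrete, purely illustrative picture of what a closed-loop, extremum-seeking adjustment can look like, the sketch below steps the two elements of an L-network so as to minimise the SWR computed from the sensed input impedance. The load impedance, operating frequency, component ranges, step sizes and the simple greedy search are all assumptions made for this sketch; they are not taken from the control-scheme classification summarised in the table that follows.

import numpy as np

Z0 = 50.0              # system impedance, ohms
F = 14.0e6             # operating frequency, Hz (assumed)
ZLOAD = 1000 - 150j    # hypothetical antenna impedance at F

def swr(zin, z0=Z0):
    gamma = abs((zin - z0) / (zin + z0))
    return (1 + gamma) / (1 - gamma) if gamma < 1 else float("inf")

def input_impedance(L, C, zload=ZLOAD, f=F):
    # shunt capacitor across the load, series inductor toward the source
    w = 2 * np.pi * f
    zc = 1 / (1j * w * C)
    zpar = zc * zload / (zc + zload)
    return 1j * w * L + zpar

def tune(L=1e-6, C=50e-12, iters=200):
    # greedy extremum seeking: keep any neighbouring setting that lowers the SWR
    best = swr(input_impedance(L, C))
    dL, dC = 0.1e-6, 5e-12
    for _ in range(iters):
        improved = False
        for cand_L, cand_C in ((L + dL, C), (L - dL, C), (L, C + dC), (L, C - dC)):
            if cand_L <= 0 or cand_C <= 0:
                continue
            s = swr(input_impedance(cand_L, cand_C))
            if s < best:
                L, C, best, improved = cand_L, cand_C, s, True
        if not improved:       # no neighbour is better: shrink the step size
            dL, dC = dL / 2, dC / 2
    return L, C, best

L, C, s = tune()
print(f"L = {L * 1e6:.2f} uH, C = {C * 1e12:.1f} pF, SWR = {s:.3f}")
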
Control scheme | Configuration | Extremum-seeking?
Type 0 | n/a | n/a
Type 1 | (a) | No
Type 2 | (a) | Yes
Type 3 | (b) | No
Type 4 | (b) | Yes
Broydé & Clavelier (2020) distinguish five types of antenna tuner control schemes, as follows: Type 0 designates the open-loop AT control schemes that do not use any SU, the adjustment being typically based only on previous knowledge programmed for each operating frequency; type 1 and type 2 control schemes use configuration (a), type 2 using extremum-seeking control and type 1 not seeking an extremum; type 3 and type 4 control schemes use configuration (b), type 4 using extremum-seeking control and type 3 not seeking an extremum. The control schemes may be compared as regards: use of closed-loop or open-loop control (or both); the measurements used; the ability to mitigate the effects of the electromagnetic characteristics of the surroundings; the aim / goal; accuracy and speed; and dependence on use of a particular model of AT or CU. See also American Radio Relay League Electrical lengthening Impedance bridging Loading coil Preselector Smith chart References Further reading External links American Radio Relay League website. What tuners do and a look inside. Tuner Wireless tuning and filtering
Antenna tuner
[ "Engineering" ]
6,039
[ "Radio electronics", "Wireless tuning and filtering" ]
582,228
https://en.wikipedia.org/wiki/Rabi%20cycle
In physics, the Rabi cycle (or Rabi flop) is the cyclic behaviour of a two-level quantum system in the presence of an oscillatory driving field. A great variety of physical processes belonging to the areas of quantum computing, condensed matter, atomic and molecular physics, and nuclear and particle physics can be conveniently studied in terms of two-level quantum mechanical systems, and exhibit Rabi flopping when coupled to an optical driving field. The effect is important in quantum optics, magnetic resonance, and quantum computing, and is named after Isidor Isaac Rabi. A two-level system is one that has two possible energy levels. One level is a ground state with lower energy, and the other is an excited state with higher energy. If the energy levels are not degenerate (i.e. don't have equal energies), the system can absorb or emit a quantum of energy and transition from the ground state to the excited state or vice versa. When an atom (or some other two-level system) is illuminated by a coherent beam of photons, it will cyclically absorb photons and emit them by stimulated emission. One such cycle is called a Rabi cycle, and the inverse of its duration is the Rabi frequency of the system. The effect can be modeled using the Jaynes–Cummings model and the Bloch vector formalism. Mathematical description of spin flopping One example of Rabi flopping is the spin flipping within a quantum system containing a spin-1/2 particle and an oscillating magnetic field. We split the magnetic field into a constant 'environment' field, and the oscillating part, so that our field looks likewhere and are the strengths of the environment and the oscillating fields respectively, and is the frequency at which the oscillating field oscillates. We can then write a Hamiltonian describing this field, yieldingwhere , , and are the spin operators. The frequency is known as the Rabi frequency. We can substitute in their matrix forms to find the matrix representing the Hamiltonian:where we have used . This Hamiltonian is a function of time, meaning we cannot use the standard prescription of Schrödinger time evolution in quantum mechanics, where the time evolution operator is , because this formula assume that the Hamiltonian is constant with respect to time. The main strategy in solving this problem is to transform the Hamiltonian so that the time independence is gone, solve the problem in this transformed frame, and then transform the results back to normal. This can be done by shifting the reference frame that we work in to match the rotating magnetic field. If we rotate along with the magnetic field, then from our point of view, the magnetic field is not rotating and appears constant. Therefore, in the rotating reference frame, both the magnetic field and the Hamiltonian are constant with respect to time. We denote our spin-1/2 particle state to be in the stationary reference frame, where and are spin up and spin down states respectively, and . We can transform this state to the rotating reference frame by using a rotation operatorwhich rotates the state counterclockwise around the positive z-axis in state space, which may be visualized as a Bloch sphere. At a time and a frequency , the magnetic field will have precessed around by an angle . To transform into the rotating reference frame, note that the stationary x and y-axes rotate clockwise from the point of view of the rotating reference frame. 
Because the operator rotates counterclockwise, we must negate the angle to produce the correct state in the rotating reference frame. Thus, the state becomesWe may rewrite the amplitudes so thatThe time dependent Schrödinger equation in the stationary reference frame isExpanding this using the matrix forms of the Hamiltonian and the state yieldsApplying the matrix and separating the components of the vector allows us to write two coupled differential equations as followsTo transform this into the rotating reference frame, we may use the fact that and to write the following:where . Now defineWe now write these two new coupled differential equations back into the form of the Schrödinger equation:In some sense, this is a transformed Schrödinger equation in the rotating reference frame. Crucially, the Hamiltonian does not vary with respect to time, meaning in this reference frame, we can use the familiar solution to Schrödinger time evolution:This transformed problem is equivalent to that of Larmor precession of a spin state, so we have solved the essence of Rabi flopping. The probability that a particle starting in the spin up state flips to the spin down state can be stated aswhere is the generalized Rabi Frequency. Something important to notice is that will not reach 1 unless . In other words, the frequency of the rotating magnetic field must match the environmental field's Larmor frequency in order for the spin to fully flip; they must achieve resonance. When resonance (i.e. ) is achieved, . Within the rotating reference frame, when resonance is achieved, it is as if there is no environmental magnetic field, and the oscillating magnetic field looks constant. Thus both mathematically (as we have derived) and physically, the problem reduces to the precession of a spin state under a constant magnetic field (Larmor precession). To transform the solved state back to the stationary reference frame, we reuse the rotation operator with the opposite angle, thus yielding a full solution to the problem. Applications The Rabi effect is important in quantum optics, magnetic resonance and quantum computing. Quantum optics Rabi flopping may be used to describe a two-level atom with an excited state and a ground state in an electromagnetic field with frequency tuned to the excitation energy. Using the spin-flipping formula but applying it to this system yields where is the Rabi frequency. Quantum computing Any two-state quantum system can be used to model a qubit. Rabi flopping provides a physical way to allow for spin flips in a qubit system. At resonance, the transition probability is given by To go from state to state it is sufficient to adjust the time during which the rotating field acts such that or . This is called a pulse. If a time intermediate between 0 and is chosen, we obtain a superposition of and . In particular for , we have a pulse, which acts as: The equations are essentially identical in the case of a two level atom in the field of a laser when the generally well satisfied rotating wave approximation is made, where is the energy difference between the two atomic levels, is the frequency of laser wave and Rabi frequency is proportional to the product of the transition electric dipole moment of atom and electric field of the laser wave that is . On a quantum computer, these oscillations are obtained by exposing qubits to periodic electric or magnetic fields during suitably adjusted time intervals. 
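As an illustrative numerical check (not part of the article's derivation; the frequencies, the unit choice and the sign convention of the circularly rotating drive are assumptions made for this sketch), the two-level Schrödinger equation can be integrated directly and compared with the standard Rabi flopping result, P(spin flip) = (w1²/Ω²) sin²(Ωt/2), with generalized Rabi frequency Ω = √((w − w0)² + w1²):

import numpy as np
from scipy.linalg import expm

w0 = 2 * np.pi * 1.0    # Larmor frequency from the static field (assumed units)
w1 = 2 * np.pi * 0.1    # Rabi frequency set by the rotating-field strength
w = 2 * np.pi * 1.03    # drive frequency, slightly detuned from resonance

def hamiltonian(t):
    # hbar = 1; one common convention for a circularly rotating transverse drive
    return 0.5 * np.array([[w0, w1 * np.exp(-1j * w * t)],
                           [w1 * np.exp(1j * w * t), -w0]])

dt, t_max = 1e-3, 40.0
times = np.arange(0.0, t_max, dt)
psi = np.array([1.0, 0.0], dtype=complex)    # start in the spin-up state
p_flip = np.empty(len(times))
for i, t in enumerate(times):
    p_flip[i] = abs(psi[1]) ** 2
    psi = expm(-1j * hamiltonian(t + dt / 2) * dt) @ psi   # midpoint propagator step

Omega = np.sqrt((w - w0) ** 2 + w1 ** 2)     # generalized Rabi frequency
p_analytic = (w1 ** 2 / Omega ** 2) * np.sin(Omega * times / 2) ** 2
print("max deviation from the Rabi formula:", float(np.abs(p_flip - p_analytic).max()))

At exact resonance (w = w0) the same loop shows complete population transfer, and stopping the drive after a time t = π/w1 realises the π pulse discussed above.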
See also Atomic coherence Bloch sphere Laser pumping Optical pumping Rabi problem Vacuum Rabi oscillation Neutral particle oscillation References Quantum Mechanics Volume 1 by C. Cohen-Tannoudji, Bernard Diu, Frank Laloe, A Short Introduction to Quantum Information and Quantum Computation by Michel Le Bellac, The Feynman Lectures on Physics, Volume III Modern Approach To Quantum Mechanics by John S Townsend, Quantum optics Atomic physics
Rabi cycle
[ "Physics", "Chemistry" ]
1,517
[ "Quantum optics", "Quantum mechanics", "Atomic physics", "Atomic, molecular, and optical physics" ]
582,263
https://en.wikipedia.org/wiki/Michelson%20interferometer
The Michelson interferometer is a common configuration for optical interferometry and was invented by the 19/20th-century American physicist Albert Abraham Michelson. Using a beam splitter, a light source is split into two arms. Each of those light beams is reflected back toward the beamsplitter which then combines their amplitudes using the superposition principle. The resulting interference pattern that is not directed back toward the source is typically directed to some type of photoelectric detector or camera. For different applications of the interferometer, the two light paths can be with different lengths or incorporate optical elements or even materials under test. The Michelson interferometer (among other interferometer configurations) is employed in many scientific experiments and became well known for its use by Michelson and Edward Morley in the famous Michelson–Morley experiment (1887) in a configuration which would have detected the Earth's motion through the supposed luminiferous aether that most physicists at the time believed was the medium in which light waves propagated. The null result of that experiment essentially disproved the existence of such an aether, leading eventually to the special theory of relativity and the revolution in physics at the beginning of the twentieth century. In 2015, another application of the Michelson interferometer, LIGO, made the first direct observation of gravitational waves. That observation confirmed an important prediction of general relativity, validating the theory's prediction of space-time distortion in the context of large scale cosmic events (known as strong field tests). Configuration A Michelson interferometer consists minimally of mirrors M1 & M2 and a beam splitter M (although a diffraction grating is also used). In Fig 2, a source S emits light that hits the beam splitter (in this case, a plate beamsplitter) surface M at point C. M is partially reflective, so part of the light is transmitted through to point B while some is reflected in the direction of A. Both beams recombine at point C' to produce an interference pattern incident on the detector at point E (or on the retina of a person's eye). If there is a slight angle between the two returning beams, for instance, then an imaging detector will record a sinusoidal fringe pattern as shown in Fig. 3b. If there is perfect spatial alignment between the returning beams, then there will not be any such pattern but rather a constant intensity over the beam dependent on the differential pathlength; this is difficult, requiring very precise control of the beam paths. Fig. 2 shows use of a coherent (laser) source. Narrowband spectral light from a discharge or even white light can also be used, however to obtain significant interference contrast it is required that the differential pathlength is reduced below the coherence length of the light source. That can be only micrometers for white light, as discussed below. If a lossless beamsplitter is employed, then one can show that optical energy is conserved. At every point on the interference pattern, the power that is not directed to the detector at E is rather present in a beam (not shown) returning in the direction of the source. As shown in Fig. 3a and 3b, the observer has a direct view of mirror M1 seen through the beam splitter, and sees a reflected image M'2 of mirror M2. The fringes can be interpreted as the result of interference between light coming from the two virtual images S'1 and S'2 of the original source S. 
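As a minimal quantitative sketch of how the recombined beams form fringes (illustrative only: an ideal lossless 50:50 split, perfect alignment, and an equal-intensity doublet roughly at the sodium D wavelengths are assumed), the detected intensity can be written directly as a function of the differential path length; the doublet case anticipates the fringe wash-out discussed under source bandwidth below.

import numpy as np

lam = 633e-9                         # single-line source wavelength, metres (assumed)
delta = np.linspace(0, 5e-6, 2000)   # differential path length between the arms

# ideal monochromatic Michelson output toward the detector
i_mono = 0.5 * (1 + np.cos(2 * np.pi * delta / lam))

# two closely spaced lines: the two fringe systems periodically fall out of step
lam1, lam2 = 589.0e-9, 589.6e-9
delta2 = np.linspace(0, 1.2e-3, 400_000)
i_doublet = (0.5 * (1 + np.cos(2 * np.pi * delta2 / lam1))
             + 0.5 * (1 + np.cos(2 * np.pi * delta2 / lam2)))
# (i_mono and i_doublet would normally be plotted against the path difference)

beat = lam1 * lam2 / abs(lam2 - lam1)    # path difference for one full contrast cycle
print(f"doublet fringe contrast first vanishes near {beat / 2 * 1e3:.2f} mm "
      f"of path difference, about {beat / 2 / lam1:.0f} fringes from equal paths")
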
The characteristics of the interference pattern depend on the nature of the light source and the precise orientation of the mirrors and beam splitter. In Fig. 3a, the optical elements are oriented so that S'1 and S'2 are in line with the observer, and the resulting interference pattern consists of circles centered on the normal to M1 and M'2 (fringes of equal inclination). If, as in Fig. 3b, M1 and M'2 are tilted with respect to each other, the interference fringes will generally take the shape of conic sections (hyperbolas), but if M1 and M'2 overlap, the fringes near the axis will be straight, parallel, and equally spaced (fringes of equal thickness). If S is an extended source rather than a point source as illustrated, the fringes of Fig. 3a must be observed with a telescope set at infinity, while the fringes of Fig. 3b will be localized on the mirrors. Source bandwidth White light has a tiny coherence length and is difficult to use in a Michelson (or Mach–Zehnder) interferometer. Even a narrowband (or "quasi-monochromatic") spectral source requires careful attention to issues of chromatic dispersion when used to illuminate an interferometer. The two optical paths must be practically equal for all wavelengths present in the source. This requirement can be met if both light paths cross an equal thickness of glass of the same dispersion. In Fig. 4a, the horizontal beam crosses the beam splitter three times, while the vertical beam crosses the beam splitter once. To equalize the dispersion, a so-called compensating plate identical to the substrate of the beam splitter may be inserted into the path of the vertical beam. In Fig. 4b, we see using a cube beam splitter already equalizes the pathlengths in glass. The requirement for dispersion equalization is eliminated by using extremely narrowband light from a laser. The extent of the fringes depends on the coherence length of the source. In Fig. 3b, the yellow sodium light used for the fringe illustration consists of a pair of closely spaced lines, D1 and D2, implying that the interference pattern will blur after several hundred fringes. Single longitudinal mode lasers are highly coherent and can produce high contrast interference with differential pathlengths of millions or even billions of wavelengths. On the other hand, using white (broadband) light, the central fringe is sharp, but away from the central fringe the fringes are colored and rapidly become indistinct to the eye. Early experimentalists attempting to detect the Earth's velocity relative to the supposed luminiferous aether, such as Michelson and Morley (1887) and Miller (1933), used quasi-monochromatic light only for initial alignment and coarse path equalization of the interferometer. Thereafter they switched to white (broadband) light, since using white light interferometry they could measure the point of absolute phase equalization (rather than phase modulo 2π), thus setting the two arms' pathlengths equal. More importantly, in a white light interferometer, any subsequent "fringe jump" (differential pathlength shift of one wavelength) would always be detected. Applications The Michelson interferometer configuration is used in a number of different applications. Fourier transform spectrometer Fig. 5 illustrates the operation of a Fourier transform spectrometer, which is essentially a Michelson interferometer with one mirror movable. 
(A practical Fourier transform spectrometer would substitute corner cube reflectors for the flat mirrors of the conventional Michelson interferometer, but for simplicity, the illustration does not show this.) An interferogram is generated by making measurements of the signal at many discrete positions of the moving mirror. A Fourier transform converts the interferogram into an actual spectrum. Fourier transform spectrometers can offer significant advantages over dispersive (i.e., grating and prism) spectrometers under certain conditions. (1) The Michelson interferometer's detector in effect monitors all wavelengths simultaneously throughout the entire measurement. When using a noisy detector, such as at infrared wavelengths, this offers an increase in signal-to-noise ratio while using only a single detector element; (2) the interferometer does not require a limited aperture as do grating or prism spectrometers, which require the incoming light to pass through a narrow slit in order to achieve high spectral resolution. This is an advantage when the incoming light is not of a single spatial mode. For more information, see Fellgett's advantage. Twyman–Green interferometer The Twyman–Green interferometer is a variation of the Michelson interferometer used to test small optical components, invented and patented by Twyman and Green in 1916. The basic characteristics distinguishing it from the Michelson configuration are the use of a monochromatic point light source and a collimator. Michelson (1918) criticized the Twyman–Green configuration as being unsuitable for the testing of large optical components, since the available light sources had limited coherence length. Michelson pointed out that constraints on geometry forced by the limited coherence length required the use of a reference mirror of equal size to the test mirror, making the Twyman–Green impractical for many purposes. Decades later, the advent of laser light sources answered Michelson's objections. The use of a figured reference mirror in one arm allows the Twyman–Green interferometer to be used for testing various forms of optical component, such as lenses or telescope mirrors. Fig. 6 illustrates a Twyman–Green interferometer set up to test a lens. A point source of monochromatic light is expanded by a diverging lens (not shown), then is collimated into a parallel beam. A convex spherical mirror is positioned so that its center of curvature coincides with the focus of the lens being tested. The emergent beam is recorded by an imaging system for analysis. Laser unequal path interferometer The "LUPI" is a Twyman–Green interferometer that uses a coherent laser light source. The high coherence length of a laser allows unequal path lengths in the test and reference arms and permits economical use of the Twyman–Green configuration in testing large optical components. A similar scheme has been used by Tajammal M in his PhD thesis (Manchester University UK, 1995) to balance two arms of an LDA system. This system used fibre optic direction coupler. Gravitational wave detection Michelson interferometry is the leading method for the direct detection of gravitational waves. This involves detecting tiny strains in space itself, affecting two long arms of the interferometer unequally, due to a strong passing gravitational wave. In 2015 the first detection of gravitational waves was accomplished using the two Michelson interferometers, each with 4 km arms, which comprise the Laser Interferometer Gravitational-Wave Observatory. 
This was the first experimental validation of gravitational waves, predicted by Albert Einstein's General Theory of Relativity. With the addition of the Virgo interferometer in Europe, it became possible to calculate the direction from which the gravitational waves originate, using the tiny arrival-time differences between the three detectors. In 2020, India was constructing a fourth Michelson interferometer for gravitational wave detection. Miscellaneous applications Fig. 7 illustrates use of a Michelson interferometer as a tunable narrow band filter to create dopplergrams of the Sun's surface. When used as a tunable narrow band filter, Michelson interferometers exhibit a number of advantages and disadvantages when compared with competing technologies such as Fabry–Pérot interferometers or Lyot filters. Michelson interferometers have the largest field of view for a specified wavelength, and are relatively simple in operation, since tuning is via mechanical rotation of waveplates rather than via high voltage control of piezoelectric crystals or lithium niobate optical modulators as used in a Fabry–Pérot system. Compared with Lyot filters, which use birefringent elements, Michelson interferometers have a relatively low temperature sensitivity. On the negative side, Michelson interferometers have a relatively restricted wavelength range, and require use of prefilters which restrict transmittance. The reliability of Michelson interferometers has tended to favor their use in space applications, while the broad wavelength range and overall simplicity of Fabry–Pérot interferometers has favored their use in ground-based systems. Another application of the Michelson interferometer is in optical coherence tomography (OCT), a medical imaging technique using low-coherence interferometry to provide tomographic visualization of internal tissue microstructures. As seen in Fig. 8, the core of a typical OCT system is a Michelson interferometer. One interferometer arm is focused onto the tissue sample and scans the sample in an X-Y longitudinal raster pattern. The other interferometer arm is bounced off a reference mirror. Reflected light from the tissue sample is combined with reflected light from the reference. Because of the low coherence of the light source, interferometric signal is observed only over a limited depth of sample. X-Y scanning therefore records one thin optical slice of the sample at a time. By performing multiple scans, moving the reference mirror between each scan, an entire three-dimensional image of the tissue can be reconstructed. Recent advances have striven to combine the nanometer phase retrieval of coherent interferometry with the ranging capability of low-coherence interferometry. Others applications include delay line interferometer which convert phase modulation into amplitude modulation in DWDM networks, the characterization of high-frequency circuits, and low-cost THz power generation. Atmospheric and space applications The Michelson Interferometer has played an important role in studies of the upper atmosphere, revealing temperatures and winds, employing both space-borne, and ground-based instruments, by measuring the Doppler widths and shifts in the spectra of airglow and aurora. 
For example, the Wind Imaging Interferometer, WINDII, on the Upper Atmosphere Research Satellite, UARS, (launched on September 12, 1991) measured the global wind and temperature patterns from 80 to 300 km by using the visible airglow emission from these altitudes as a target and employing optical Doppler interferometry to measure the small wavelength shifts of the narrow atomic and molecular airglow emission lines induced by the bulk velocity of the atmosphere carrying the emitting species. The instrument was an all-glass field-widened achromatically and thermally compensated phase-stepping Michelson interferometer, along with a bare CCD detector that imaged the airglow limb through the interferometer. A sequence of phase-stepped images was processed to derive the wind velocity for two orthogonal view directions, yielding the horizontal wind vector. The principle of using a polarizing Michelson Interferometer as a narrow band filter was first described by Evans who developed a birefringent photometer where the incoming light is split into two orthogonally polarized components by a polarizing beam splitter, sandwiched between two halves of a Michelson cube. This led to the first polarizing wide-field Michelson interferometer described by Title and Ramsey which was used for solar observations; and led to the development of a refined instrument applied to measurements of oscillations in the Sun's atmosphere, employing a network of observatories around the Earth known as the Global Oscillations Network Group (GONG). The Polarizing Atmospheric Michelson Interferometer, PAMI, developed by Bird et al., and discussed in Spectral Imaging of the Atmosphere, combines the polarization tuning technique of Title and Ramsey with the Shepherd et al. technique of deriving winds and temperatures from emission rate measurements at sequential path differences, but the scanning system used by PAMI is much simpler than the moving mirror systems in that it has no internal moving parts, instead scanning with a polarizer external to the interferometer. The PAMI was demonstrated in an observation campaign where its performance was compared to a Fabry–Pérot spectrometer, and employed to measure E-region winds. More recently, the Helioseismic and Magnetic Imager (HMI), on the Solar Dynamics Observatory, employs two Michelson Interferometers with a polarizer and other tunable elements, to study solar variability and to characterize the Sun's interior along with the various components of magnetic activity. HMI takes high-resolution measurements of the longitudinal and vector magnetic field over the entire visible disk thus extending the capabilities of its predecessor, the SOHO's MDI instrument (See Fig. 9). HMI produces data to determine the interior sources and mechanisms of solar variability and how the physical processes inside the Sun are related to surface magnetic field and activity. It also produces data to enable estimates of the coronal magnetic field for studies of variability in the extended solar atmosphere. HMI observations will help establish the relationships between the internal dynamics and magnetic activity in order to understand solar variability and its effects. In one example of the use of the MDI, Stanford scientists reported the detection of several sunspot regions in the deep interior of the Sun, 1–2 days before they appeared on the solar disc. 
The detection of sunspots in the solar interior may thus provide valuable warnings about upcoming surface magnetic activity which could be used to improve and extend the predictions of space weather forecasts. Technical topics Step-phase interferometer This is a Michelson interferometer in which the mirror in one arm is replaced with a Gires–Tournois etalon. The highly dispersed wave reflected by the Gires–Tournois etalon interferes with the original wave as reflected by the other mirror. Because the phase change from the Gires–Tournois etalon is an almost step-like function of wavelength, the resulting interferometer has special characteristics. It has an application in fiber-optic communications as an optical interleaver. Both mirrors in a Michelson interferometer can be replaced with Gires–Tournois etalons. The step-like relation of phase to wavelength is thereby more pronounced, and this can be used to construct an asymmetric optical interleaver. Phase-conjugating interferometry The reflection from phase-conjugating mirror of two light beams inverses their phase difference to the opposite one . For this reason the interference pattern in twin-beam interferometer changes drastically. Compared to conventional Michelson interference curve with period of half-wavelength : where is second-order correlation function, the interference curve in phase-conjugating interferometer has much longer period defined by frequency shift of reflected beams: where visibility curve is nonzero when optical path difference exceeds coherence length of light beams. The nontrivial features of phase fluctuations in optical phase-conjugating mirror had been studied via Michelson interferometer with two independent PC-mirrors . The phase-conjugating Michelson interferometry is a promising technology for coherent summation of laser amplifiers. Constructive interference in an array containing beamsplitters of laser beams synchronized by phase conjugation may increase the brightness of amplified beams as . See also List of types of interferometers LIGO Laser Interferometer Gravitational-Wave Observatory NPOI GEO600 VIRGO KAGRA Michelson stellar interferometer Notes References External links Diagrams of Michelson interferometers Application of a step-phase interferometer in optical communication A satellite view of the VIRGO interferometer A free software, to simulate and understand the Michelson interferometer principles, made by students of Faculty of Engineering of the University of Porto Interferometers
Michelson interferometer
[ "Technology", "Engineering" ]
4,018
[ "Interferometers", "Measuring instruments" ]
582,273
https://en.wikipedia.org/wiki/Hydraulic%20diameter
The hydraulic diameter, D_H, is a commonly used term when handling flow in non-circular tubes and channels. Using this term, one can calculate many things in the same way as for a round tube. When the cross-section is uniform along the tube or channel length, it is defined as D_H = 4A/P, where A is the cross-sectional area of the flow and P is the wetted perimeter of the cross-section. More intuitively, the hydraulic diameter can be understood as a function of the hydraulic radius R_H, which is defined as the cross-sectional area of the channel divided by the wetted perimeter, R_H = A/P, so that D_H = 4R_H. Here, the wetted perimeter includes all surfaces acted upon by shear stress from the fluid. Note that for the case of a circular pipe of diameter D, D_H = 4(πD²/4)/(πD) = D, so the hydraulic diameter equals the ordinary diameter. The need for the hydraulic diameter arises due to the use of a single dimension in the case of a dimensionless quantity such as the Reynolds number, which prefers a single variable for flow analysis rather than a set of variables. The Manning formula contains a quantity called the hydraulic radius. Despite what the name may suggest, the hydraulic diameter is not twice the hydraulic radius, but four times larger. Hydraulic diameter is mainly used for calculations involving turbulent flow. Secondary flows can be observed in non-circular ducts as a result of turbulent shear stress in the turbulent flow. Hydraulic diameter is also used in calculation of heat transfer in internal-flow problems. Non-uniform and non-circular cross-section channels In the more general case of channels with a non-uniform, non-circular cross-sectional area, such as the Tesla valve, the hydraulic diameter is defined as D_H = 4V/S, where V is the total wetted volume of the channel and S is the total wetted surface area. This definition reduces to D_H = 4A/P for uniform non-circular cross-section channels, and to D_H = D for circular pipes. List of hydraulic diameters For a fully filled duct or pipe whose cross-section is a convex regular polygon, the hydraulic diameter is equivalent to the diameter D of a circle inscribed within the wetted perimeter. This can be seen as follows: the N-sided regular polygon is a union of N triangles, each of height D/2 and base s equal to the polygon's side length. Each such triangle contributes sD/4 to the total area and s to the total perimeter, giving D_H = 4(N sD/4)/(N s) = D for the hydraulic diameter. References See also Equivalent spherical diameter Hydraulic radius Darcy friction factor Fluid dynamics Heat transfer Hydrology Hydraulics Radii
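A quick numerical sketch of the definitions above (the duct dimensions are arbitrary and purely illustrative):

import math

def hydraulic_diameter(area, wetted_perimeter):
    return 4.0 * area / wetted_perimeter

def circular(d):
    return hydraulic_diameter(math.pi * d ** 2 / 4, math.pi * d)

def rectangular(a, b):
    return hydraulic_diameter(a * b, 2 * (a + b))

def annulus(d_outer, d_inner):
    area = math.pi * (d_outer ** 2 - d_inner ** 2) / 4
    perimeter = math.pi * (d_outer + d_inner)   # both walls are wetted
    return hydraulic_diameter(area, perimeter)

print(circular(0.10))           # 0.10   -> equals the pipe diameter, as stated above
print(rectangular(0.20, 0.10))  # 0.1333 for a 0.20 m x 0.10 m duct
print(annulus(0.10, 0.05))      # 0.05   -> reduces to the gap, d_outer - d_inner
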
Hydraulic diameter
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
470
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Hydrology", "Chemical engineering", "Physical systems", "Hydraulics", "Thermodynamics", "Environmental engineering", "Piping", "Fluid dynamics" ]
582,393
https://en.wikipedia.org/wiki/Polygynandry
Polygynandry is a mating system in which both males and females have multiple mating partners during a breeding season. In sexually reproducing diploid animals, different mating strategies are employed by males and females, because the cost of gamete production is lower for males than it is for females. The different mating tactics employed by males and females are thought to be the outcome of stochastic reproductive conflicts both ecologically and socially. Reproductive conflicts in animal societies may arise because individuals are not genetically identical and have different optimal strategies for maximizing their fitness; and often it is found that reproductive conflicts generally arise due to dominance hierarchy in which all or a major part of reproduction is monopolized by only one individual. In the wasp Polistes carolina, the dominant queen amongst female wasps is determined by whoever arrives at the nest first rather than the largest foundress, who is expected to be the best at fighting (wasp). In a study of the bird Prunella collaris, the close proximity and sharing of ranges on the mountain tops of the French Pyrenees led to a polygynandrous mating system, where two to four males would mate with a range of two to four females within the same vicinity. Polygynandry is another way to describe a multi-male and multi-female polygamous mating system. When females have multiple mating partners, it is known as polyandry, and when males have multiple mating partners, it is known as polygyny. Each sex has potential benefits in being promiscuous; females, especially those with genetically 'inferior' social partners, have the chance to increase the genetic quality of their offspring, while males are able to fertilize the eggs of many other mates. Essentially, the ideal mating behavior for males is to be promiscuous rather than monogamous (when they only have one mating partner), because this leads to multiple offspring, and these males monopolize their female partners by physically preventing them from copulating with other males. On the other hand, females benefit through polyandry, as they have more sired offspring. Benefits of multiple mating in females Oftentimes, females mate voluntarily with more than one male. Mating with several males reduces the risk of females having unfertilized eggs because one male may not have enough sperm to fertilize all her eggs. In dark-eyed juncos, a female mates with more than one male because oftentimes, her social partner is of lower genetic quality than other potential sperm donors. The females voluntarily mate with other males besides their mate because she sees the potential to improve her offspring viability and sexual attractiveness. Females may also mate with several males for genetic benefits such as genetic diversity among her offspring due to the variety of sperm available to her. In song birds, extra-pair matings occur because females are able to sneak away from their home territories to solicit to other males. When female song birds seek extra-male partners, they sexually select males with colorful plumage more elaborate than those of their social partner. Studies show that female song birds that have less plumage partners most actively seek extra-pair matings, furthermore males with the most developed traits—such as longer tails or brighter plumage survive better. Thus, when female song birds have multiple mating partners, they are increasing the genetic quality of their offspring. 
To a female, multiple mating means an increased number of young that she can produce, and oftentimes this also means an increased number of young she has to take care of. In order to ensure the safety and wellbeing of her offspring, females may have multiple mating partners in order to gain more resources from males for herself and her offspring. In dark-eyed juncos, dunnocks, and Galapagos hawks, mating with multiple males increases the amount of care a female can gain for her offspring. Oftentimes multiple mates allow females to have more sired offspring, and the paternity of the offspring typically falls outside of the biological parents—meaning a different male may look after another male's offspring. Benefits of multiple mating in males Males can potentially fertilize eggs at a much faster rate than females can produce them, meaning a male can best increase his reproductive success by finding and fertilizing as many different females as possible. In Drosophila melanogaster, the reproductive success of males increased with the number of matings, but for females there was no direct relationship between the number of mates and the number of offspring produced. When males have multiple mating partners, they sometimes have to share parentage of the offspring, reducing the genetic value of the offspring to each male and thus reducing the relative benefit of staying to help. When paternity is shared between multiple males, males are expected to be less likely to stay in order to help the female care for the offspring, because there is little benefit in staying to help raise the other offspring when there are other males present. Although males are able to increase their reproductive success faster than females by being able to fertilize eggs faster than females can produce them, males are also at a disadvantage when it comes to mating because of sexual selection. Females usually choose males that are 'charming' and those who display sexual ornaments. In a study of long-tailed widowbirds, males with longer tails were sexually selected over those with shorter and less impressive tails. In birds such as the red-collared widowbird, males who display their sexual ornament during courtship are generally paired up faster and attract more females than males who display shorter tails during courtship. Males are often sexually selected based on their physical characteristics and what they have to 'offer'; for example, male peacocks with flamboyantly colored tails are sexually selected over those with dull and less elaborate tails. Sexual selection of males by females also leads to male-male competition. Unlike females, who invest a lot prior to mating, males do not invest as much when generating their sperm; however, this increases competition amongst males for female investment. High mating competition also means a greater variance in male success—the best competitors will have better success in mating than those who fail to mate. The best competitors are also less inclined to care for their offspring after mating because they have the ability to produce offspring elsewhere. Males with the greatest size, strength, or best developed weapons achieve the greatest mating success. In other cases, males may have a higher reproductive success if they have better access to resources than other competitors. For instance, female hanging flies mate with a male only if he provides a large insect for her to eat during copulation, and North American bullfrogs protect ponds and small lakes where females come to lay their eggs. 
Taxonomic references Amphibia The various mating tactics are found in a broad number of taxa. In amphibians such as Salamandrina perspicillata, multiple paternity is a consequence of females mating with multiple males. As of now, all species in the suborder Salamandroidea have been shown to employ polyandrous mating strategies by females. In a study of a population of Salamandrina perspicillata, multiple paternity occurred as a pervasive reproductive strategy under natural conditions, and it was seen that in this species, when males mated with two females, the number of offspring they sired was inversely related to their genetic similarity to the female. Females in this species practiced polygynandry in order to increase genetic variability among their offspring by choosing mates that were genetically different from themselves. Unlike other studies of polygynandry where the females had multiple mating partners in order to gain resources from the male, in the study of Salamandrina perspicillata, multiple paternity did not provide an indirect genetic benefit to the offspring. This resulted in a cost/benefit mechanism in which the gained benefit of multiple mating counterbalanced the negative effect of the number of mates on offspring heterozygosity. Females choosing mates that are genetically different from themselves were also seen in Ichthyosaura alpestris and Lissotriton vulgaris, where, in a two-male mating system, the less-related males were preferred by the females. And as in the case of Salamandrina perspicillata, there were no indirect genetic benefits gained from having multiple mating partners. Pycnogonids (sea spiders) In Ammothea hilgendorfi, a sea spider species, fertilization occurs as a female transfers her eggs to a male, who holds them with ovigers (a specialized pair of legs) and fertilizes the eggs externally. The males glue the eggs into clusters and carry the eggs on their ovigers until they hatch. The personal cost to males of providing prolonged care for the young is a significant parental investment, and paternity assurance is thought to be important for such substantial post-zygotic investment. A high level of paternity assurance in Ammothea hilgendorfi suggests that males accept costs such as reduced foraging ability, increased predation risk, and lower mobility. An experimental study of Ammothea hilgendorfi showed that although males mate with multiple females, males do not mix egg batches from different dams. The eggs held in clusters by a male hatched in a close time frame, indicating that males mated with different females within a short time span. Multiple mating by female pycnogonids is possible since a recently mated female often retains unused mature eggs in one or more femora, which allows her to mate with additional partners. In species with external fertilization and male parental care, females are able to distribute their clutches amongst different males and, by doing so, increase the likelihood that at least some of their offspring will receive indirect genetic benefits and/or extensive parental care from a quality provider. Hymenoptera The reproductive females of social Hymenoptera—wasps, bees, and ants—mate with multiple partners. These females are called queens, to distinguish them from the non-reproductive females that tend the colony and do not mate. A honey bee queen ideally mates with about a dozen drones (males) in her nuptial flight. The sperm from these matings is stored in a special reservoir, called the spermatheca, for the life of the queen—which can be several years. 
Maintenance Although promiscuity is said to benefit both males and females, there has not yet been sufficient data to support the fact that promiscuity benefits females. In a study of dark-eyed juncos, the offspring produced by extra-pair males were neither better nor worse than the offspring of their male social partners. However, the study of dark-eyed juncos did reveal more sired offspring in promiscuous females than monogamous females. In a study of female water striders, the results showed that multiple matings can become costly to the female—especially since a lot of time and energy is invested in producing an egg. Not only were extra matings costly, but there was no support for any genetic benefits from having multiple mating partners. Instead, the results from the experiment showed that egg production and egg hatching success were the highest when the number of partners were kept at a minimum. On the other hand, studies have shown that males have had a higher reproductive success than females when they were polygynandrous. When compared to female chimpanzees, male chimpanzees had a better ratio of number of matings and number of offspring produced. Not only did studies show a higher reproductive success, but Columbian ground squirrels exhibited a significant male-biased sexual size and body mass, suggesting male-male competition. Male-male competition means sexual dimorphism amongst the males and this means females are able to sexually select males based on the sexual ornaments they display. Overall, studies have shown that polygynandry benefits males more than it benefits females. When polygynandry is observed in different species, males most often have the upper hand—meaning males benefit more from polygynandry than do females. Females generally seek multiple mating partners in order to increase benefits for their offspring, whether it be by gaining physical resources for their offspring or by providing their offspring with healthier genes that are fit for survival. On the other hand, in most cases males generally have multiple mating partners in order to obtain as much offspring as they can during their lifespan and they are able to achieve this easier than females because in most cases, males are not parentally involved in caring and raising their offspring. References External links Mating systems
Polygynandry
[ "Biology" ]
2,551
[ "Behavior", "Mating systems", "Mating" ]
582,410
https://en.wikipedia.org/wiki/Beam%20splitter
A beam splitter or beamsplitter is an optical device that splits a beam of light into a transmitted and a reflected beam. It is a crucial part of many optical experimental and measurement systems, such as interferometers, also finding widespread application in fibre optic telecommunications. Designs In its most common form, a cube, a beam splitter is made from two triangular glass prisms which are glued together at their base using polyester, epoxy, or urethane-based adhesives. (Before these synthetic resins, natural ones were used, e.g. Canada balsam.) The thickness of the resin layer is adjusted such that (for a certain wavelength) half of the light incident through one "port" (i.e., face of the cube) is reflected and the other half is transmitted due to FTIR (frustrated total internal reflection). Polarizing beam splitters, such as the Wollaston prism, use birefringent materials to split light into two beams of orthogonal polarization states. Another design is the use of a half-silvered mirror. This is composed of an optical substrate, which is often a sheet of glass or plastic, with a partially transparent thin coating of metal. The thin coating can be aluminium deposited from aluminium vapor using a physical vapor deposition method. The thickness of the deposit is controlled so that part (typically half) of the light, which is incident at a 45-degree angle and not absorbed by the coating or substrate material, is transmitted and the remainder is reflected. A very thin half-silvered mirror used in photography is often called a pellicle mirror. To reduce loss of light due to absorption by the reflective coating, so-called "Swiss-cheese" beam-splitter mirrors have been used. Originally, these were sheets of highly polished metal perforated with holes to obtain the desired ratio of reflection to transmission. Later, metal was sputtered onto glass so as to form a discontinuous coating, or small areas of a continuous coating were removed by chemical or mechanical action to produce a very literally "half-silvered" surface. Instead of a metallic coating, a dichroic optical coating may be used. Depending on its characteristics (thin-film interference), the ratio of reflection to transmission will vary as a function of the wavelength of the incident light. Dichroic mirrors are used in some ellipsoidal reflector spotlights to split off unwanted infrared (heat) radiation, and as output couplers in laser construction. A third version of the beam splitter is a dichroic mirrored prism assembly which uses dichroic optical coatings to divide an incoming light beam into a number of spectrally distinct output beams. Such a device was used in three-pickup-tube color television cameras and the three-strip Technicolor movie camera. It is currently used in modern three-CCD cameras. An optically similar system is used in reverse as a beam-combiner in three-LCD projectors, in which light from three separate monochrome LCD displays is combined into a single full-color image for projection. Beam splitters with single-mode fiber for PON networks use the single-mode behavior to split the beam. The splitter is done by physically splicing two fibers "together" as an X. Arrangements of mirrors or prisms used as camera attachments to photograph stereoscopic image pairs with one lens and one exposure are sometimes called "beam splitters", but that is a misnomer, as they are effectively a pair of periscopes redirecting rays of light which are already non-coincident. 
In some very uncommon attachments for stereoscopic photography, mirrors or prism blocks similar to beam splitters perform the opposite function, superimposing views of the subject from two different perspectives through color filters to allow the direct production of an anaglyph 3D image, or through rapidly alternating shutters to record sequential field 3D video. Phase shift Beam splitters are sometimes used to recombine beams of light, as in a Mach–Zehnder interferometer. In this case there are two incoming beams, and potentially two outgoing beams. But the amplitudes of the two outgoing beams are the sums of the (complex) amplitudes calculated from each of the incoming beams, and it may result that one of the two outgoing beams has amplitude zero. In order for energy to be conserved (see next section), there must be a phase shift in at least one of the outgoing beams. For example (see red arrows in picture on the right), if a polarized light wave in air hits a dielectric surface such as glass, and the electric field of the light wave is in the plane of the surface, then the reflected wave will have a phase shift of π, while the transmitted wave will not have a phase shift; the blue arrow does not pick up a phase-shift, because it is reflected from a medium with a lower refractive index. The behavior is dictated by the Fresnel equations. This does not apply to partial reflection by conductive (metallic) coatings, where other phase shifts occur in all paths (reflected and transmitted). In any case, the details of the phase shifts depend on the type and geometry of the beam splitter. Classical lossless beam splitter For beam splitters with two incoming beams, using a classical, lossless beam splitter with electric fields Ea and Eb each incident at one of the inputs, the two output fields Ec and Ed are linearly related to the inputs through where the 2×2 element is the beam-splitter transfer matrix and r and t are the reflectance and transmittance along a particular path through the beam splitter, that path being indicated by the subscripts. (The values depend on the polarization of the light.) If the beam splitter removes no energy from the light beams, the total output energy can be equated with the total input energy, reading Inserting the results from the transfer equation above with produces and similarly for then When both and are non-zero, and using these two results we obtain where "" indicates the complex conjugate. It is now easy to show that where is the identity, i.e. the beam-splitter transfer matrix is a unitary matrix. Each r and t can be written as a complex number having an amplitude and phase factor; for instance, . The phase factor accounts for possible shifts in phase of a beam as it reflects or transmits at that surface. Then we obtain Further simplifying, the relationship becomes which is true when and the exponential term reduces to -1. Applying this new condition and squaring both sides, it becomes where substitutions of the form were made. This leads to the result and similarly, It follows that . Having determined the constraints describing a lossless beam splitter, the initial expression can be rewritten as Applying different values for the amplitudes and phases can account for many different forms of the beam splitter that can be seen widely used. The transfer matrix appears to have 6 amplitude and phase parameters, but it also has 2 constraints: and . 
To include the constraints and simplify to 4 independent parameters, we may write (and from the constraint ), so that where is the phase difference between the transmitted beams and similarly for , and is a global phase. Lastly using the other constraint that we define so that , hence A 50:50 beam splitter is produced when . The dielectric beam splitter above, for example, has i.e. , while the "symmetric" beam splitter of Loudon has i.e. . Use in experiments Beam splitters have been used in both thought experiments and real-world experiments in the area of quantum theory and relativity theory and other fields of physics. These include: The Fizeau experiment of 1851 to measure the speeds of light in water The Michelson–Morley experiment of 1887 to measure the effect of the (hypothetical) luminiferous aether on the speed of light The Hammar experiment of 1935 to refute Dayton Miller's claim of a positive result from repetitions of the Michelson-Morley experiment The Kennedy–Thorndike experiment of 1932 to test the independence of the speed of light and the velocity of the measuring apparatus Bell test experiments (from ca. 1972) to demonstrate consequences of quantum entanglement and exclude local hidden-variable theories Wheeler's delayed choice experiment of 1978, 1984 etc., to test what makes a photon behave as a wave or a particle and when it happens The FELIX experiment (proposed in 2000) to test the Penrose interpretation that quantum superposition depends on spacetime curvature The Mach–Zehnder interferometer, used in various experiments, including the Elitzur–Vaidman bomb tester involving interaction-free measurement; and in others in the area of quantum computation Quantum mechanical description In quantum mechanics, the electric fields are operators as explained by second quantization and Fock states. Each electrical field operator can further be expressed in terms of modes representing the wave behavior and amplitude operators, which are typically represented by the dimensionless creation and annihilation operators. In this theory, the four ports of the beam splitter are represented by a photon number state and the action of a creation operation is . The following is a simplified version of Ref. The relation between the classical field amplitudes , and produced by the beam splitter is translated into the same relation of the corresponding quantum creation (or annihilation) operators , and , so that where the transfer matrix is given in classical lossless beam splitter section above: Since is unitary, , i.e. This is equivalent to saying that if we start from the vacuum state and add a photon in port a to produce then the beam splitter creates a superposition on the outputs of The probabilities for the photon to exit at ports c and d are therefore and , as might be expected. Likewise, for any input state and the output is Using the multi-binomial theorem, this can be written where and the is a binomial coefficient and it is to be understood that the coefficient is zero if etc. The transmission/reflection coefficient factor in the last equation may be written in terms of the reduced parameters that ensure unitarity: where it can be seen that if the beam splitter is 50:50 then and the only factor that depends on j is the term. This factor causes interesting interference cancellations. For example, if and the beam splitter is 50:50, then where the term has cancelled. Therefore the output states always have even numbers of photons in each arm. 
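The unitarity constraint and the even-photon-number output described above can be checked numerically. The following is a minimal sketch, not part of the original article, assuming one common convention (the "symmetric" 50:50 beam splitter with a π/2 phase on reflection); the function name and parameters are illustrative only.

import numpy as np

def beam_splitter(theta=np.pi / 4, phase=np.pi / 2):
    # 2x2 lossless beam-splitter transfer matrix; theta = pi/4 gives a 50:50 split,
    # and `phase` is the reflection phase (pi/2 in the symmetric convention assumed here).
    t = np.cos(theta)                       # transmission amplitude
    r = np.sin(theta) * np.exp(1j * phase)  # reflection amplitude
    return np.array([[t, r],
                     [r, t]])

B = beam_splitter()

# Energy conservation: the transfer matrix must be unitary.
assert np.allclose(B.conj().T @ B, np.eye(2))

# Two indistinguishable photons, one in each input port: the amplitude for one
# photon leaving each output port is t*t + r*r, which vanishes for this 50:50 case,
# so the photons always leave together (even photon numbers in each arm).
t, r = B[0, 0], B[0, 1]
print(abs(t * t + r * r))  # ~0.0

Other phase conventions, such as the dielectric beam splitter mentioned above, give the same unitarity property but different relative phases between the outputs.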
A famous example of this is the Hong–Ou–Mandel effect, in which the input has , the output is always or , i.e. the probability of output with a photon in each mode (a coincidence event) is zero. Note that this is true for all types of 50:50 beam splitter irrespective of the details of the phases, and the photons need only be indistinguishable. This contrasts with the classical result, in which equal output in both arms for equal inputs on a 50:50 beam splitter does appear for specific beam splitter phases (e.g. a symmetric beam splitter ), and for other phases where the output goes to one arm (e.g. the dielectric beam splitter ) the output is always in the same arm, not random in either arm as is the case here. From the correspondence principle we might expect the quantum results to tend to the classical one in the limits of large n, but the appearance of large numbers of indistinguishable photons at the input is a non-classical state that does not correspond to a classical field pattern, which instead produces a statistical mixture of different known as Poissonian light. Rigorous derivation is given in the Fearn–Loudon 1987 paper and extended in Ref to include statistical mixtures with the density matrix. Non-symmetric beam-splitter In general, for a non-symmetric beam-splitter, namely a beam-splitter for which the transmission and reflection coefficients are not equal, one can define an angle such that where and are the reflection and transmission coefficients. Then the unitary operation associated with the beam-splitter is then Application for quantum computing In 2000 Knill, Laflamme and Milburn (KLM protocol) proved that it is possible to create a universal quantum computer solely with beam splitters, phase shifters, photodetectors and single photon sources. The states that form a qubit in this protocol are the one-photon states of two modes, i.e. the states |01⟩ and |10⟩ in the occupation number representation (Fock state) of two modes. Using these resources it is possible to implement any single qubit gate and 2-qubit probabilistic gates. The beam splitter is an essential component in this scheme since it is the only one that creates entanglement between the Fock states. Similar settings exist for continuous-variable quantum information processing. In fact, it is possible to simulate arbitrary Gaussian (Bogoliubov) transformations of a quantum state of light by means of beam splitters, phase shifters and photodetectors, given two-mode squeezed vacuum states are available as a prior resource only (this setting hence shares certain similarities with a Gaussian counterpart of the KLM protocol). The building block of this simulation procedure is the fact that a beam splitter is equivalent to a squeezing transformation under partial time reversal. Diffractive beam splitter Reflection beam splitters Reflection beam splitters reflect parts of the incident radiation in different directions. These partial beams show exactly the same intensity. Typically, reflection beam splitters are made of metal and have a broadband spectral characteristic. Due to their compact design, beam splitters of this type are particularly easy to install in infrared detectors. At this application, the radiation enters through the aperture opening of the detector and is split into several beams of equal intensity but different directions by internal highly reflective microstructures. Each beam hits a sensor element with an upstream optical filter. 
Particularly in NDIR gas analysis, this design enables measurement with only one beam with a minimal beam cross-section, which significantly increases the interference immunity of the measurement. See also Power dividers and directional couplers References Mirrors Optical components Microscopy
Beam splitter
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
2,951
[ "Glass engineering and science", "Optical components", "Components", "Microscopy" ]
582,440
https://en.wikipedia.org/wiki/Luhn%20algorithm
The Luhn algorithm or Luhn formula, also known as the "modulus 10" or "mod 10" algorithm, named after its creator, IBM scientist Hans Peter Luhn, is a simple check digit formula used to validate a variety of identification numbers. It is described in US patent 2950048A. The algorithm is in the public domain and is in wide use today. It is specified in ISO/IEC 7812-1. It is not intended to be a cryptographically secure hash function; it was designed to protect against accidental errors, not malicious attacks. Most credit card numbers and many government identification numbers use the algorithm as a simple method of distinguishing valid numbers from mistyped or otherwise incorrect numbers. Description The check digit is computed as follows: Drop the check digit from the number (if it's already present). This leaves the payload. Start with the payload digits. Moving from right to left, double every second digit, starting from the last digit. If doubling a digit results in a value > 9, subtract 9 from it (or sum its digits). Sum all the resulting digits (including the ones that were not doubled). The check digit is calculated by (10 - (s mod 10)) mod 10, where s is the sum from step 3. This is the smallest number (possibly zero) that must be added to s to make a multiple of 10. An equivalent formula is (10 - s) mod 10, but note that such a formula will not work in all environments due to differences in how negative numbers are handled by the modulo operation. Example for computing check digit Assume an example of an account number 1789372997 (just the "payload", check digit not yet included): The sum of the resulting digits is 56. The check digit is equal to (10 - (56 mod 10)) mod 10 = 4. This makes the full account number read 17893729974. Example for validating check digit Drop the check digit (last digit) of the number to validate. (e.g. 17893729974 → 1789372997) Calculate the check digit (see above) Compare your result with the original check digit. If both numbers match, the result is valid. Strengths and weaknesses The Luhn algorithm will detect all single-digit errors, as well as almost all transpositions of adjacent digits. It will not, however, detect transposition of the two-digit sequence 09 to 90 (or vice versa). It will detect most of the possible twin errors (it will not detect 22 ↔ 55, 33 ↔ 66 or 44 ↔ 77). Other, more complex check-digit algorithms (such as the Verhoeff algorithm and the Damm algorithm) can detect more transcription errors. The Luhn mod N algorithm is an extension that supports non-numerical strings. Because the algorithm operates on the digits in a right-to-left manner and zero digits affect the result only if they cause a shift in position, zero-padding the beginning of a string of numbers does not affect the calculation. Therefore, systems that pad to a specific number of digits (by converting 1234 to 0001234 for instance) can perform Luhn validation before or after the padding and achieve the same result. The algorithm appeared in a United States Patent for a simple, hand-held, mechanical device for computing the checksum. The device took the mod 10 sum by mechanical means. The substitution digits, that is, the results of the double and reduce procedure, were not produced mechanically. Rather, the digits were marked in their permuted order on the body of the machine. Pseudocode implementation The following function takes a card number, including the check digit, as an array of integers and outputs true if the check digit is correct, false otherwise.
function isValid(cardNumber[1..length])
    sum := 0
    parity := length mod 2
    for i from 1 to (length - 1) do
        if i mod 2 == parity then
            sum := sum + cardNumber[i]
        elseif cardNumber[i] > 4 then
            sum := sum + 2 * cardNumber[i] - 9
        else
            sum := sum + 2 * cardNumber[i]
        end if
    end for
    return cardNumber[length] == ((10 - (sum mod 10)) mod 10)
end function
Uses The Luhn algorithm is used in a variety of systems, including: Credit card numbers IMEI numbers CUSIP numbers for North American financial instruments National Provider Identifier numbers in the United States Canadian social insurance numbers Israeli ID numbers South African ID numbers South African Tax reference numbers Swedish national identification numbers Swedish Corporate Identity Numbers (OrgNr) Greek Social Security Numbers (ΑΜΚΑ) ICCID of SIM cards European patent application numbers Survey codes appearing on McDonald's, Taco Bell, and Tractor Supply Co. receipts United States Postal Service package tracking numbers use a modified Luhn algorithm Italian VAT numbers (Partita Iva) References External links Luhn test of credit card numbers on Rosetta Code: Luhn algorithm/formula implementation in 160 programming languages Modular arithmetic Checksum algorithms Error detection and correction 1954 introductions Articles with example pseudocode Management cybernetics
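As an illustration only, and not part of the original article, the same validation can be written in Python roughly as follows; the function name is arbitrary.

def luhn_is_valid(number: str) -> bool:
    # `number` is the full identifier, check digit included, as a digit string.
    digits = [int(c) for c in number]
    check_digit = digits[-1]
    payload = digits[:-1]
    total = 0
    # Double every second digit of the payload, starting from its rightmost digit.
    for i, d in enumerate(reversed(payload)):
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return check_digit == (10 - (total % 10)) % 10

# Example from the text: payload 1789372997 has sum 56 and check digit 4.
assert luhn_is_valid("17893729974")
assert not luhn_is_valid("17893729975")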
Luhn algorithm
[ "Mathematics", "Engineering" ]
1,055
[ "Reliability engineering", "Error detection and correction", "Arithmetic", "Modular arithmetic", "Number theory" ]
582,453
https://en.wikipedia.org/wiki/Age%20of%20Aquarius
The Age of Aquarius, in astrology, is either the current or forthcoming astrological age, depending on the method of calculation. Astrologers maintain that an astrological age is a product of the Earth's slow precessional rotation and lasts for 2,160 years, on average (one 25,920 year period of precession, or great year, divided by 12 zodiac signs equals a 2,160 year astrological age). There are various methods of calculating the boundaries of an astrological age. In Sun-sign astrology, the first sign is Aries, followed by Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio, Sagittarius, Capricorn, Aquarius, and Pisces, whereupon the cycle returns to Aries and through the zodiacal signs again. Astrological ages proceed in the opposite direction. Therefore, the Age of Aquarius follows the Age of Pisces. Overview The approximate 2,160 years for each age corresponds to the average time it takes for the vernal equinox to move from one constellation of the zodiac into the next. This average can be computed by dividing the Earth's 25,800 year gyroscopic precession period by 12, the number of zodiacal signs. This is only a rough calculation, as the length of time it takes for a complete precession is currently increasing. A more accurate set of figures is 25,772 years for a complete cycle and 2,147.5 years per astrological age, assuming a constant precession rate. According to various astrologers' calculations, approximate dates for entering the age of Aquarius range from (Terry MacKinnell) to (John Addey). Astrologers do not agree on when the Aquarian age will start or even if it has already started. lists various references from mainly astrological sources for the start of the Age of Aquarius. Based on Campion's summary, most published materials on the subject state that the Age of Aquarius arrived in the 20th century (29 claims), with the 24th century in second place with 12 claimants. Astrological ages are taken to be associated with the precession of the equinoxes. The slow wobble of the Earth's rotation axis on the celestial sphere is independent of the diurnal rotation of the Earth on its own axis and the annual revolution of the Earth around the Sun. Traditionally this 25,800 year-long cycle is calibrated, for the purposes of determining astrological ages, by the perceived location of the Sun in one of the 12 zodiac constellations at the vernal (Spring) equinox, which corresponds to the moment the Sun is perceived as crossing the celestial equator, marking the start of spring in the Northern Hemisphere each year. Roughly every 2,150 years the Sun's position at the time of the vernal equinox will have moved into a new zodiacal constellation. In 1929 the International Astronomical Union defined the edges of the 88 official constellations. The edge established between Pisces and Aquarius officially locates the beginning of the Aquarian Age around Many astrologers dispute this approach because of the varying sizes and overlap between the zodiacal constellations. They prefer the long-established convention of equally-sized signs, spaced every 30 degrees along the ecliptic, which are named after what were the 12 background zodiacal constellations when tropical astrology was codified Astrological meaning Astrologers believe that an astrological age affects humanity, possibly by influencing the rise and fall of civilizations or cultural tendencies. 
Traditionally, Aquarius is associated with electricity, computers, flight, democracy, freedom, humanitarianism, idealism, modernization, nervous disorders, rebellion, nonconformity, philanthropy, veracity, perseverance, humanity and irresolution. Among other dates, one view is that the age of Aquarius arrived around 1844, with the harbinger of Siyyid ʿAlí Muḥammad (1819–1850), who founded Bábism. Some astrologers have promoted the view that, although no one knows when the Aquarian age begins, the American Revolution, the Industrial Revolution, and the discovery of electricity are all attributable to Aquarian influence. They make a number of predictions about the trends that they believe will develop in the Aquarian age. Proponents of medieval astrology suggest that the Pisces world, where religion is the opiate of the masses, will be replaced in the Aquarian age by a world ruled by secretive, power-hungry elites seeking absolute power over others; that knowledge in the Aquarian age will only be valued for its ability to win wars; that knowledge and science will be abused, not industry and trade; and that the Aquarian age will be another dark age in which religion is considered offensive. Another view suggests that the rise of scientific rationalism, combined with the fall of religious influence, the increasing focus on human rights since the 1780s, the exponential growth of technology, plus the advent of flight and space travel, are evidence of the dawning of the age of Aquarius. A "wave" theory of the shifting great ages suggests that the age of Aquarius will not arrive on a given date, but is instead emerging in influence over many years, similar to how the tide rises gradually, by small increments, rather than surging forward all at once. Rudolf Steiner believed that the age of Aquarius will arrive in 3573. In Steiner's approach, each age is exactly 2,160 years. Based on this structure, the world has been in the age of Pisces since 1413. Rudolf Steiner had spoken about two great spiritual events: the return of Christ in the ethereal world (and not in a physical body), because people must develop their faculties until they can reach the ethereal world; and the incarnation of Ahriman, Zoroaster's "destructive spirit" that will try to block the development of humanity. In an article about feminism published in the French newspaper La Fronde on 26 February 1890, August Vandekerkhove stated: "About March, 21st this year the cycle of Aquarius will start. Aquarius is the house of the woman". He added that it is in this age that the woman will be "equal" to the man. Gnostic philosopher Samael Aun Weor declared 4 February 1962 to be the beginning of the "age of Aquarius", heralded by the alignment of the first six planets, the Sun, the Moon and the constellation Aquarius. Psychoanalyst Carl Jung mentions the "age of Aquarius" in his book Aion, believing that the "age of Aquarius" will "constellate the problem of the union of the opposites". In accordance with prominent astrologers, Jung believed the "age of Aquarius" will be a dark and spiritually deficient time for humanity, writing that "it will no longer be possible to write off evil as the mere privation of good; its real existence will have to be recognized in the age of Aquarius".
According to Jung's interpretation of astrology, the "age of Pisces" began with the birth and death of Christ, associating the ichthys (colloquially known as the "Jesus fish") with the symbol of Pisces; following the "age of Pisces" would be the "age of Aquarius", the spiritually deficient age before the arrival of the Antichrist. Common cultural associations The expression "age of Aquarius" in popular culture usually refers to the heyday of the hippie and New Age movements in the 1960s and 1970s. The 1967 musical Hair, with its opening song "Aquarius" and the line "This is the dawning of the Age of Aquarius", brought the Aquarian age concept to the attention of audiences worldwide. However, the song further defines this dawning of the age within the first lines: "When the Moon is in the seventh house and Jupiter aligns with Mars, then peace will guide the planets and love will steer the stars". Astrologer Neil Spencer denounced the lyrics as "astrological gibberish", noting that Jupiter aligns with Mars several times a year and the Moon is in the 7th house for two hours every day. The Woodstock music festival was billed as "an Aquarian exposition". See also Footnotes References External links Astrological ages New Age 1960s fads and trends
Age of Aquarius
[ "Physics" ]
1,722
[ "Spacetime", "Astrological ages", "Physical quantities", "Time" ]
582,473
https://en.wikipedia.org/wiki/Oogenesis
Oogenesis () or ovogenesis is the differentiation of the ovum (egg cell) into a cell competent to further develop when fertilized. It is developed from the primary oocyte by maturation. Oogenesis is initiated in the embryonic stage. Oogenesis in non-human mammals In mammals, the first part of oogenesis starts in the germinal epithelium, which gives rise to the development of ovarian follicles, the functional unit of the ovary. Oogenesis consists of several sub-processes: oocytogenesis, ootidogenesis, and finally maturation to form an ovum (oogenesis proper). Folliculogenesis is a separate sub-process that accompanies and supports all three oogenetic sub-processes. Oogonium —(Oocytogenesis)—> Primary Oocyte —(Meiosis I)—> First Polar body (Discarded afterward) + Secondary oocyte —(Meiosis II)—> Second Polar Body (Discarded afterward) + Ovum Oocyte meiosis, important to all animal life cycles yet unlike all other instances of animal cell division, occurs completely without the aid of spindle-coordinating centrosomes. The creation of oogonia The creation of oogonia traditionally does not belong to oogenesis proper, but, instead, to the common process of gametogenesis, which, in the female human, begins with the processes of folliculogenesis, oocytogenesis, and ootidogenesis. Oogonia enter meiosis during embryonic development, becoming oocytes. Meiosis begins with DNA replication and meiotic crossing over. It then stops in early prophase. Maintenance of meiotic arrest Mammalian oocytes are maintained in meiotic prophase arrest for a very long time—months in mice, years in humans. Initially, the arrest is due to lack of sufficient cell cycle proteins to allow meiotic progression. However, as the oocyte grows, these proteins are synthesized, and meiotic arrest becomes dependent on cyclic AMP. The cyclic AMP is generated by the oocyte by adenylyl cyclase in the oocyte membrane. The adenylyl cyclase is kept active by a constitutively active G-protein-coupled receptor known as GPR3 and a G-protein, Gs, also present in the oocyte membrane. Maintenance of meiotic arrest also depends on the presence of a multilayered complex of cells, known as a follicle, that surrounds the oocyte. Removal of the oocyte from the follicle causes meiosis to progress in the oocyte. The cells that comprise the follicle, known as granulosa cells, are connected to each other by proteins known as gap junctions, that allow small molecules to pass between the cells. The granulosa cells produce a small molecule, cyclic GMP, that diffuses into the oocyte through the gap junctions. In the oocyte, cyclic GMP prevents the breakdown of cyclic AMP by the phosphodiesterase PDE3, and thus maintains meiotic arrest. The cyclic GMP is produced by the guanylyl cyclase NPR2. Reinitiation of meiosis and stimulation of ovulation by luteinizing hormone As follicles grow, they acquire receptors for luteinizing hormone, a pituitary hormone that reinitiates meiosis in the oocyte and causes ovulation of a fertilizable egg. Luteinizing hormone acts on receptors in the outer layers of granulosa cells of the follicle, causing a decrease in cyclic GMP in the granulosa cells. Because the granulosa cells and oocyte are connected by gap junctions, cyclic GMP also decreases in the oocyte, causing meiosis to resume. Meiosis then proceeds to second metaphase, where it pauses again until fertilization. Luteinizing hormone also stimulates gene expression leading to ovulation. 
Human oogenesis Oogenesis Oogenesis starts with the process of developing primary oocytes, which occurs via the transformation of oogonia into primary [oocyte]s, a process called oocytogenesis. From one single oogonium, only one mature oocyte will rise, with 3 other cells called polar bodies. Oocytogenesis is complete either before or shortly after birth. Number of primary oocytes It is commonly believed that, when oocytogenesis is complete, no additional primary oocytes are created, in contrast to the male process of spermatogenesis, where gametocytes are continuously created. In other words, primary oocytes reach their maximum development at ~20 weeks of gestational age, when approximately seven million primary oocytes have been created; however, at birth, this number has already been reduced to approximately 1-2 million per ovary. At puberty, the number of oocytes decreases even more to reach about 60,000 to 80,000 per ovary, and only about 500 mature oocytes will be produced during a woman's life, the others will undergo atresia (degeneration). Two publications have challenged the belief that a finite number of oocytes are set around the time of birth generation in adult mammalian ovaries by putative germ cells in bone marrow and peripheral blood. The renewal of ovarian follicles from germline stem cells (originating from bone marrow and peripheral blood) has been reported in the postnatal mouse ovary. In contrast, DNA clock measurements do not indicate ongoing oogenesis during human females' lifetimes. Thus, further experiments are required to determine the true dynamics of small follicle formation. Ootidogenesis The succeeding phase of ootidogenesis occurs when the primary oocyte develops into an ootid. This is achieved by the process of meiosis. In fact, a primary oocyte is, by its biological definition, a cell whose primary function is to divide by the process of meiosis. However, although this process begins at prenatal age, it stops at prophase I. In late fetal life, all oocytes, still primary oocytes, have halted at this stage of development, called the dictyate. After menarche, these cells then continue to develop, although only a few do so every menstrual cycle. Meiosis I Meiosis I of ootidogenesis begins during embryonic development, but halts in the diplotene stage of prophase I until puberty. The mouse oocyte in the dictyate (prolonged diplotene) stage actively repairs DNA damage, whereas DNA repair is not detectable in the pre-dictyate (leptotene, zygotene and pachytene) stages of meiosis. For those primary oocytes that continue to develop in each menstrual cycle, however, synapsis occurs and tetrads form, enabling chromosomal crossover to occur. As a result of meiosis I, the primary oocyte has now developed into the secondary oocyte. Meiosis II Immediately after meiosis I, the haploid secondary oocyte initiates meiosis II. However, this process is also halted at the metaphase II stage until fertilization, if such should ever occur. If the egg is not fertilized, it is disintegrated and released (menstruation) and the secondary oocyte does not complete meiosis II (and does not become an ovum). When meiosis II has completed, an ootid and another polar body have now been created. The polar body is small in size. Ovarian cycle The ovarian cycle is divided into several phases: Follicologenesis: Synchronously with ootidogenesis, the ovarian follicle surrounding the ootid has developed from a primordial follicle to a preovulatory one. 
The primary follicle takes four months to become a preantral follicle, two months to become antral, and then passes to a mature (Graafian) follicle. In the primary follicle, the cells lining the oocyte change from flat to cuboidal and begin to proliferate, increasing the metabolic activity of the oocyte and follicular cells, which release glycoproteins and acidic proteoglycans that will form the zona pellucida. In the preantral secondary follicle, internal and external theca cells begin to form. Aromatase, produced by follicular cells, transforms androgens produced by the inner theca into estrogens under the stimulation of FSH. LH stimulates theca cells to produce androgens. In the antral follicle, there is an antrum containing follicular fluid, which contains estrogen, allowing the passage from the antral follicle to the Graafian follicle. The growing follicular antrum displaces the oocyte, which takes an eccentric position; the oocyte remains surrounded by the zona pellucida and by follicular cells that form the cumulus oophorus. The innermost ones are called corona radiata cells. At this stage, the oocyte produces cortical granules containing acid glycoproteins. Dominant follicle selection: The follicle with the most FSH receptors is favored, simultaneously inducing the death of the other follicles (3-10 antral follicles enter this phase each month). Estrogen at low concentrations inhibits further production of FSH by the pituitary gland through negative feedback, so the follicles left behind accumulate androgens in the follicular antrum instead of estrogens. Graafian follicle: Estrogen at high concentrations induces LH release, with the peak of LH called the LH surge, which induces the stages that lead to follicle rupture. LH receptors also appear on follicular cells, which stimulate the oocyte to become a secondary oocyte, arrested in metaphase II and waiting for fertilization. LH also stimulates cumulus oophorus cells to release progesterone. Ovulation: the follicle bursts and the oocyte is released together with the zona pellucida and corona radiata cells. The ovarian surface is thinned where the follicle bursts, and the oocyte with its attached cells emerges through the stigma. The oocyte is captured by the uterine tube, where fertilization can take place in the ampulla. Formation of the corpus luteum: From the remaining structures of the follicle, the corpus luteum is formed. At first, there is a clot, which is then replaced by loose connective tissue; the cells that form solid cords are the follicular cells (granulosa lutein cells) and the cells of the theca (theca lutein cells). The corpus luteum increases the concentration of progesterone and is constantly stimulated by LH. If the egg is not fertilized, the corpus luteum degenerates (corpus albicans); if the egg implants, the corpus luteum remains until about three months of pregnancy, when its function is taken over by the placenta (production of progesterone and estrogen). The LH necessary to keep the corpus luteum alive is then replaced by human chorionic gonadotropin. Uterine cycle The uterine cycle occurs parallel to the ovarian cycle and is induced by estrogen and progesterone. The endometrium, formed by a simple columnar epithelium with simple tubular uterine glands and underlying connective tissue, has a functional superficial layer (divided into a spongy layer and a compact layer) and a deeper basal layer, which is always maintained. It presents four phases: Proliferative phase: From the 5th to the 14th day of the ovarian cycle, it is conditioned by estrogens.
The functional layer of the uterus is restored, with mitotic division of the basal layer. Secretive phase: from the 14th to the 27th day of the ovarian cycle, influenced by the progesterone produced by the corpus luteum. Cells become hypertrophic, and tubular glands begin to produce glycogen Ischemic phase: beginning of the menstrual phase from 27 to 28 days  Regressive or desquamative phase from 1 to 5 days, the spiral-shaped arteries undergo ischemia, and the functional layer detaches If, instead, there is fertilization, the uterine mucosa is modified to accommodate the fertilized egg, and the secretive phase is maintained. Maturation into ovum Both polar bodies disintegrate at the end of Meiosis II, leaving only the ootid, which then eventually undergoes maturation into a mature ovum. The function of forming polar bodies is to discard the extra haploid sets of chromosomes that have resulted as a consequence of meiosis. In vitro maturation In vitro maturation (IVM) is the technique of letting ovarian follicles mature in vitro. It can potentially be performed before an IVF. In such cases, ovarian hyperstimulation is not essential. Rather, oocytes can mature outside the body prior to IVF. Hence, no (or at least a lower dose of) gonadotropins have to be injected in the body. Immature eggs have been grown until maturation in vitro at a 10% survival rate, but the technique is not yet clinically available. With this technique, cryopreserved ovarian tissue could possibly be used to make oocytes that can directly undergo in vitro fertilization. In vitro oogenesis By definition it means, to recapitulate mammalian oogenesis and producing fertilizable oocytes in vitro.it is a complex process involving several different cell types, precise follicular cell-oocyte reciprocal interactions, a variety of nutrients and combinations of cytokines, and precise growth factors and hormones depending on the developmental stage. In 2016, two papers published by Morohaku et al. and Hikabe et al. reported in vitro procedures that appear to reproduce efficiently these conditions allowing for the production, completely in a dish, of a relatively large number of oocytes that are fertilizable and capable of giving rise to viable offspring in the mouse. This technique can be mainly benefited in cancer patients where in today's condition their ovarian tissue is cryopreserved for preservation of fertility. Alternatively to the autologous transplantation, the development of culture systems that support oocyte development from the primordial follicle stage represent a valid strategy to restore fertility. Over time, many studies have been conducted with the aim to optimize the characteristics of ovarian tissue culture systems and to better support the three main phases: 1) activation of primordial follicles; 2) isolation and culture of growing preantral follicles; 3) removal from the follicle environment and maturation of oocyte cumulus complexes. While complete oocyte in vitro development has been achieved in mouse, with the production of live offspring, the goal of obtaining oocytes of sufficient quality to support embryo development has not been completely reached into higher mammals despite decades of effort. Ovarian aging BRCA1 and ATM proteins are employed in repair of DNA double-strand break during meiosis. These proteins appear to have a critical role in resisting ovarian aging. 
However, homologous recombinational repair of DNA double-strand breaks mediated by BRCA1 and ATM weakens with age in oocytes of humans and other species. Women with BRCA1 mutations have lower ovarian reserves and experience earlier menopause than women without these mutations. Even in woman without specific BRCA1 mutations, ovarian aging is associated with depletion of ovarian reserves leading to menopause, but at a slower rate than in those with such mutations. Since older premenopausal women ordinarily have normal progeny, their capability for meiotic recombinational repair appears to be sufficient to prevent deterioration of their germline despite the reduction in ovarian reserve. DNA damages may arise in the germline during the decades long period in humans between early oocytogenesis and the stage of meiosis in which homologous chromosomes are effectively paired (dictyate stage). It has been suggested that such DNA damages may be removed, in large part, by mechanisms dependent on chromosome pairing, such as homologous recombination. Oogenesis in non-mammals Some algae and the oomycetes produce eggs in oogonia. In the brown alga Fucus, all four egg cells survive oogenesis, which is an exception to the rule that generally only one product of female meiosis survives to maturity. In plants, oogenesis occurs inside the female gametophyte via mitosis. In many plants such as bryophytes, ferns, and gymnosperms, egg cells are formed in archegonia. In flowering plants, the female gametophyte has been reduced to an eight-celled embryo sac within the ovule inside the ovary of the flower. Oogenesis occurs within the embryo sac and leads to the formation of a single egg cell per ovule. In ascaris, the oocyte does not even begin meiosis until the sperm touches it, in contrast to mammals, where meiosis is completed in the estrus cycle. In female Drosophila flies, genetic recombination occurs during meiosis. This recombination is associated with formation of DNA double-strand breaks and the repair of these breaks. The repair process leads to crossover recombinants as well as at least three times as many noncrossover recombinants (e.g. arising by gene conversion without crossover). See also Anisogamy Archegonium Evolution of sexual reproduction Female infertility Female reproductive system Meiosis Oncofertility Oogonium Oocyte Origin and function of meiosis Sexual reproduction Spermatogenesis References Cho WK, Stern S, Biggers JD. 1974. Inhibitory effect of dibutyryl cAMP on mouse oocyte maturation in vitro. J Exp Zool.187:383-386 Bibliography Manandhar G, Schatten H and Sutovsky P (2005). Centrosome reduction during gametogenesis and its significance. Biol Reprod, 72(1)2-13. External links Reproductive Physiology Developmental biology Genetics Human female endocrine system Meiosis
Oogenesis
[ "Biology" ]
3,949
[ "Behavior", "Developmental biology", "Genetics", "Reproduction", "Meiosis", "Molecular genetics", "Cellular processes" ]
582,527
https://en.wikipedia.org/wiki/Host%20controller%20interface%20%28USB%2C%20Firewire%29
A USB and Firewire Host Controller Interface (UFHC) is a register-level interface that enables a host controller for USB or IEEE 1394 hardware to communicate with a host controller driver in software. The driver software is typically provided with an operating system of a personal computer, but may also be implemented by application-specific devices such as a microcontroller. On the expansion card or motherboard controller, this involves much custom logic, with digital logic engines in the motherboard's controller chip, plus analog circuitry managing the high-speed differential signals. On the software side, it requires a device driver (called a Host Controller Driver, or HCD). IEEE 1394 Open Host Controller Interface Open Host Controller Interface (OHCI) is an open standard. When applied to an IEEE 1394 (also known as FireWire; i.LINK or Lynx) card, OHCI means that the card supports a standard interface to the PC and can be used by the OHCI IEEE 1394 drivers that come with all modern operating systems. Because the card has a standard OHCI interface, the OS does not need to know in advance exactly who makes the card or how it works; it can safely assume that the card understands the set of well-defined commands that are defined in the standard protocol. USB Open Host Controller Interface The OHCI standard for USB is similar to the OHCI standard for IEEE 1394, but supports USB 1.1 (full and low speeds) only; so as a result its register interface looks completely different. Compared with UHCI, it moves more intelligence into the controller, and thus is accordingly much more efficient; this was part of the motivation for defining it. If a computer provides non-x86 USB 1.1, or x86 USB 1.1 from a USB controller that is not made by Intel or VIA, it probably uses OHCI (e.g. OHCI is common on add-in PCI Cards based on an NEC chipset). It has many fewer intellectual property restrictions than UHCI. It only supports 32-bit memory addressing, so it requires an IOMMU or a computationally expensive bounce buffer to work with a 64-bit operating system. OHCI interfaces to the rest of the computer only with memory-mapped I/O. Universal Host Controller Interface Universal Host Controller Interface (UHCI) is a proprietary interface created by Intel for USB 1.x (full and low speeds). It requires a license from Intel. A USB controller using UHCI does little in hardware and requires a software UHCI driver to do much of the work of managing the USB bus. It only supports 32-bit memory addressing, so it requires an IOMMU or a computationally expensive bounce buffer to work with a 64-bit operating system. UHCI is configured with port-mapped I/O and memory-mapped I/O, and also requires memory-mapped I/O for status updates and for data buffers needed to hold data that needs to be sent or data that was received. Enhanced Host Controller Interface The Enhanced Host Controller Interface (EHCI) is a high-speed controller standard applicable to USB 2.0. UHCI- and OHCI-based systems, as existed previously, entailed greater complexity and costs than necessary. Consequently, the USB Implementers Forum (USB-IF) insisted on a public specification for EHCI. Intel hosted EHCI conformance-testing and this helped to prevent the incursion of proprietary features. Originally a PC providing high-speed ports had two controllers, one handling low- and full-speed devices and the second handling high-speed devices. Typically such a system had EHCI and either OHCI or UHCI drivers. 
The UHCI driver provides low- and full-speed interfaces for Intel or VIA chipsets' USB host controllers on the motherboard, or for any VIA discrete host controllers attached to the computer's expansion bus. The OHCI driver provides low- and full-speed functions for USB ports of all other motherboard chipset vendors' integrated USB host controllers or discrete host controllers attached to the computer's expansion bus. The EHCI driver provides high-speed functions for USB ports on the motherboard or on a discrete USB controller. More recent hardware routes all ports through an internal "rate-matching" hub (RMH). The RMH converts the traffic of any directly connected full-speed or low-speed ports between the high-speed traffic presented to the EHCI controller and the full-speed or low-speed traffic those ports expect, allowing the EHCI controller to handle these devices. The EHCI software interface specification defines both 32-bit and 64-bit versions of its data structures, so it does not need a bounce buffer or IOMMU to work with a 64-bit operating system if a rate-matching hub is implemented to provide full-speed and low-speed connectivity instead of companion controllers using either the UHCI specification or OHCI specification, both of which are 32-bit only specifications. Extensible Host Controller Interface Extensible Host Controller Interface (xHCI) is the newest host controller standard; it improves speed, power efficiency and virtualization over its predecessors. The goal was also to define a single USB host controller to replace UHCI/OHCI/EHCI. It supports all USB device speeds (USB 3.1 SuperSpeed+, USB 3.0 SuperSpeed, USB 2.0 Low-, Full-, and High-speed, USB 1.1 Low- and Full-speed). Virtual Host Controller Interface Virtual Host Controller Interface (VHCI) refers to a virtual controller that may export virtual USB devices not backed by physical devices. For instance, on Linux, VHCI controllers are used to expose USB devices from other machines, attached using the USB/IP protocol. USB4 Host Interface The USB4 Host Interface is defined in the USB4 Specification. It allows the operating system to manage USB4 host routing for USB, DisplayPort, PCI Express, Thunderbolt or host-to-host communication. The ASM4242 USB4 host controller has passed USB-IF certification. See also Advanced Host Controller Interface (AHCI) Non-Volatile Memory Host Controller Interface (NVMHCI) Wireless USB (WHCI 1.0) RAID Controller Host adapter LPCIO References External links An OHCI for USB standard document from Compaq, Microsoft and National Semiconductor Linux kernel source: OHCI and EHCI documentation Intel EHCI Specification Intel xHCI Specification Computer hardware standards USB
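As a hedged illustration, not taken from the article, an operating system commonly tells these controller types apart by the PCI class code of the device: USB host controllers conventionally use class 0x0C, subclass 0x03, and the programming-interface byte distinguishes UHCI, OHCI, EHCI and xHCI. The lookup below is only a sketch; the function and constant names are hypothetical, not a real driver API.

USB_PROG_IF = {
    0x00: "UHCI",  # USB 1.x, Intel/VIA
    0x10: "OHCI",  # USB 1.1
    0x20: "EHCI",  # USB 2.0 high speed
    0x30: "xHCI",  # USB 3.x and later
}

def usb_controller_type(class_code: int, subclass: int, prog_if: int) -> str:
    # Map a PCI (class, subclass, programming interface) triple to a host controller type.
    if class_code == 0x0C and subclass == 0x03:
        return USB_PROG_IF.get(prog_if, "other/vendor-specific USB controller")
    return "not a USB host controller"

print(usb_controller_type(0x0C, 0x03, 0x20))  # EHCI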
Host controller interface (USB, Firewire)
[ "Technology" ]
1,338
[ "Computer standards", "Computer hardware standards" ]
582,530
https://en.wikipedia.org/wiki/Lefschetz%20fixed-point%20theorem
In mathematics, the Lefschetz fixed-point theorem is a formula that counts the fixed points of a continuous mapping from a compact topological space to itself by means of traces of the induced mappings on the homology groups of . It is named after Solomon Lefschetz, who first stated it in 1926. The counting is subject to an imputed multiplicity at a fixed point called the fixed-point index. A weak version of the theorem is enough to show that a mapping without any fixed point must have rather special topological properties (like a rotation of a circle). Formal statement For a formal statement of the theorem, let be a continuous map from a compact triangulable space to itself. Define the Lefschetz number of by the alternating (finite) sum of the matrix traces of the linear maps induced by on , the singular homology groups of with rational coefficients. A simple version of the Lefschetz fixed-point theorem states: if then has at least one fixed point, i.e., there exists at least one in such that . In fact, since the Lefschetz number has been defined at the homology level, the conclusion can be extended to say that any map homotopic to has a fixed point as well. Note however that the converse is not true in general: may be zero even if has fixed points, as is the case for the identity map on odd-dimensional spheres. Sketch of a proof First, by applying the simplicial approximation theorem, one shows that if has no fixed points, then (possibly after subdividing ) is homotopic to a fixed-point-free simplicial map (i.e., it sends each simplex to a different simplex). This means that the diagonal values of the matrices of the linear maps induced on the simplicial chain complex of must be all be zero. Then one notes that, in general, the Lefschetz number can also be computed using the alternating sum of the matrix traces of the aforementioned linear maps (this is true for almost exactly the same reason that the Euler characteristic has a definition in terms of homology groups; see below for the relation to the Euler characteristic). In the particular case of a fixed-point-free simplicial map, all of the diagonal values are zero, and thus the traces are all zero. Lefschetz–Hopf theorem A stronger form of the theorem, also known as the Lefschetz–Hopf theorem, states that, if has only finitely many fixed points, then where is the set of fixed points of , and denotes the index of the fixed point . From this theorem one deduces the Poincaré–Hopf theorem for vector fields, since every vector field on compact differential manifold induce flow in a natural way. For every is continuous mapping homotopic to identity (thus have same Lefschetz number) and for small indices of fixed points equals to indices of zeroes of vector field. Relation to the Euler characteristic The Lefschetz number of the identity map on a finite CW complex can be easily computed by realizing that each can be thought of as an identity matrix, and so each trace term is simply the dimension of the appropriate homology group. Thus the Lefschetz number of the identity map is equal to the alternating sum of the Betti numbers of the space, which in turn is equal to the Euler characteristic . Thus we have Relation to the Brouwer fixed-point theorem The Lefschetz fixed-point theorem generalizes the Brouwer fixed-point theorem, which states that every continuous map from the -dimensional closed unit disk to must have at least one fixed point. 
This can be seen as follows: is compact and triangulable, all its homology groups except are zero, and every continuous map induces the identity map , whose trace is one; all this together implies that is non-zero for any continuous map . Historical context Lefschetz presented his fixed-point theorem in . Lefschetz's focus was not on fixed points of maps, but rather on what are now called coincidence points of maps. Given two maps and from an orientable manifold to an orientable manifold of the same dimension, the Lefschetz coincidence number of and is defined as where is as above, is the homomorphism induced by on the cohomology groups with rational coefficients, and and are the Poincaré duality isomorphisms for and , respectively. Lefschetz proved that if the coincidence number is nonzero, then and have a coincidence point. He noted in his paper that letting and letting be the identity map gives a simpler result, which is now known as the fixed-point theorem. Frobenius Let be a variety defined over the finite field with elements and let be the base change of to the algebraic closure of . The Frobenius endomorphism of (often the geometric Frobenius, or just the Frobenius), denoted by , maps a point with coordinates to the point with coordinates . Thus the fixed points of are exactly the points of with coordinates in ; the set of such points is denoted by . The Lefschetz trace formula holds in this context, and reads: This formula involves the trace of the Frobenius on the étale cohomology, with compact supports, of with values in the field of -adic numbers, where is a prime coprime to . If is smooth and equidimensional, this formula can be rewritten in terms of the arithmetic Frobenius , which acts as the inverse of on cohomology: This formula involves usual cohomology, rather than cohomology with compact supports. The Lefschetz trace formula can also be generalized to algebraic stacks over finite fields. See also Fixed-point theorems Lefschetz zeta function Holomorphic Lefschetz fixed-point formula References Fixed-point theorems Theory of continuous functions Theorems in algebraic topology
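As a small illustration, not part of the original article, once the matrices of the induced maps on rational homology are known, the Lefschetz number is simply an alternating sum of traces. The sketch below assumes those matrices are given as input; the sphere examples use the standard facts that the zeroth and second rational homology groups of the 2-sphere are one-dimensional, the first is zero, and the antipodal map has degree -1.

import numpy as np

def lefschetz_number(induced_maps):
    # induced_maps[k] is the matrix of the map induced on the k-th rational homology group.
    return sum((-1) ** k * np.trace(M) for k, M in enumerate(induced_maps))

# Identity map on the 2-sphere: L = 1 - 0 + 1 = 2, the Euler characteristic.
print(lefschetz_number([np.eye(1), np.zeros((0, 0)), np.eye(1)]))   # 2.0

# Antipodal map on the 2-sphere acts as +1 on H_0 and -1 on H_2:
# L = 1 - 0 - 1 = 0, consistent with the map having no fixed points.
print(lefschetz_number([np.eye(1), np.zeros((0, 0)), -np.eye(1)]))  # 0.0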
Lefschetz fixed-point theorem
[ "Mathematics" ]
1,240
[ "Theorems in mathematical analysis", "Theory of continuous functions", "Fixed-point theorems", "Theorems in topology", "Topology", "Theorems in algebraic topology" ]
582,691
https://en.wikipedia.org/wiki/Retraction%20in%20academic%20publishing
In academic publishing, a retraction is a mechanism by which a published paper in an academic journal is flagged for being seriously flawed to the extent that their results and conclusions can no longer be relied upon. Retracted articles are not removed from the published literature but marked as retracted. In some cases it may be necessary to remove an article from publication, such as when the article is clearly defamatory, violates personal privacy, is the subject of a court order, or might pose a serious health risk to the general public. Procedure A retraction may be initiated by the editors of a journal, or by the author(s) of the papers (or their institution). Retractions are typically accompanied by a retraction notice written by the editors or authors explaining the reason for the retraction. Such notices may also include a note from the authors with apologies for the previous error and/or expressions of gratitude to persons who disclosed the error to the author. Retractions must not be confused with small corrections in published articles. There have been numerous examples of retracted scientific publications. Retraction Watch provides updates on new retractions, and discusses general issues in relation to retractions. History A 2011 paper in the Journal of Medical Ethics attempted to quantify retraction rates in PubMed over time to determine if the rate was increasing, even while taking into account the increased number of overall publications occurring each year. The author found that the rate of increase in retractions was greater than the rate of increase in publications. Moreover, the author notes the following:"It is particularly striking that the number of papers retracted for fraud increased more than sevenfold in the 6 years between 2004 and 2009. During the same period, the number of papers retracted for a scientific mistake did not even double..." (p. 251). Although the author suggests that his findings may indeed indicate a recent increase in scientific fraud, he also acknowledges other possibilities. For example, increased rates of fraud in recent years may simply indicate that journals are doing a better job of policing the scientific literature than they have in the past. Furthermore, because retractions occur for a very small percentage of overall publications (fewer than 1 in 1,000 articles), a few scientists who are willing to commit large amounts of fraud can highly impact retraction rates. For example, the author points out that Jan Hendrik Schön fabricated results in 15 retracted papers in the dataset he reviewed, all of which were retracted in 2002 and 2003, "so he alone was responsible for 56% of papers retracted for fraud in 2002—2003" (p 252). During the COVID-19 pandemic, academia had seen a quick increase in fast-track peer-review articles dealing with SARS-CoV-2 problems. As a result, a number of papers have been retracted made "Retraction Tsunami" due to quality and/or data issues, leading many experts to ponder not just the quality of peer review but also standards of retraction practices. Retracted studies may continue to be cited. This may happen in cases where scholars are unaware of the retraction, in particular when the retraction occurs long after the original publication. The number of journal articles being retracted had risen from about 1,600 in 2013 to 10,000 in 2023. Most of the retractions in 2023 were contributed by Hindawi journals. 
The significant number of retractions involving Chinese co-authors—over 17,000 since 2021, including 8,000 from Hindawi journals—has led China to launch a nationwide audit addressing retractions and research misconduct. Alternative versions of retraction Retraction with replacement A small percentage of retracted papers are flawed only because of unintentional errors in the authors' work. Rather than removing the entire article, retraction with replacement is a newer practice that helps authors avoid being seen as dishonest for mistakes that were not made deliberately. This method allows the authors to fix the mistakes in the original paper and submit an edited version to take its place. The journal can decide to retract the original paper and then upload the fixed version online, usually with a notice stating "Retraction and Replacement" or "Correction" on the article page. For example, JAMA will post the edited version with a retraction and replacement notice, along with a link to the original article, while Research Evaluation will use the term "correction", with a link posted on the updated article referring to the old article. Self-retraction Self-retraction is a request from an author and/or co-authors to retract their own published work. Self-retraction by an author is recommended because once a paper is retracted by the journal, investigations can begin that affect the authors' reputations. Authors who retract their own work on their own terms show more integrity and honesty, as they are owning up to their own mistakes, just like the authors mentioned in The Wall Street Journal have done. Scientists at times have been asked to retract work even though the work is exact and bold; the root cause of the problem should be looked into to avoid such retractions. A system to distinguish "good" papers from "bad" ones would be beneficial to researchers. This system may save the reputation of scientists and researchers. Most researchers publish honest work, and sometimes simple mistakes happen to be overlooked by the peer review process. Retraction should not be for simple spelling errors, but for inaccurate, skewed, and fraudulent data. For example, new technologies are being developed, in a culture of transparency, that make it possible to record false claims. Another proposed solution is for researchers to mark "self-citations", since citations otherwise look identical when they are classified in databases; recommending a common database for evaluating researchers' own work can help lessen retractions. Notable retractions Retraction for error 2013 - Study on the Mediterranean diet published in the New England Journal of Medicine and widely covered by media was retracted due to unreported non-random assignments. This was part of a larger effort verifying proper randomization in thousands of studies by anesthesiologist John Carlisle, who found problems in about 2% of those analyzed. 2012 - Séralini affair - Article reporting an increase in tumors among rats fed genetically modified corn and the herbicide RoundUp was retracted due to criticism of its experimental design. According to the editor of the journal, a "more in-depth look at the raw data revealed that no definitive conclusions can be reached with this small sample size". 2003 Retracted Science article on ecstasy. See Retracted article on neurotoxicity of ecstasy.
Retraction for fraud or misconduct 2021 An article studying the open source community by Qiushi Wu and Kangjie Lu at the University of Minnesota was withdrawn after the Linux Foundation discovered that the researchers submitted patches for the Linux kernel with intentional bugs and without obtaining appropriate consent. 2020 On January 8, 2020, Russian journals retracted more than 800 articles after a large-scale investigation conducted by the Russian Academy of Sciences (RAS) following claims of unethical publications. 2018 On 11 April 2019, two articles on DNA damage by Abderrahmane Kaidi of the University of Bristol, one published in Science in 2010 and another in Nature in 2013, were retracted following evidence of data fabrication. 2017 Five articles in the field of consumer behavior and marketing research, by Brian Wansink at Cornell University, came under scrutiny after peers pointed out inconsistencies in the data. Wansink had written a blog post about asking a graduate student to "salvage" conclusions. Cornell University launched an investigation, which determined in 2018 that Wansink had committed academic misconduct. Wansink resigned. Wansink has since had 18 of his research papers retracted as similar issues were found in other publications. 2014 An article by Haruko Obokata et al. on STAP cells, a method of inducing a cell to become a stem cell, was proven to be falsified. Originally published in Nature, it was retracted later that year. It generated much controversy, and after an institutional investigation, one of the authors committed suicide. 2011 Eight journal articles authored by Duke University cancer researcher Anil Potti and others, which describe genomic signatures of cancer prognosis and predictors of response to cancer treatment, were retracted in 2011 and 2012. The retraction notices generally state that the results of the analyses described in the articles could not be reproduced. In November 2015, the Office of Research Integrity (ORI) found that Potti had engaged in research misconduct. 2010 A 1998 paper by Andrew Wakefield proposing that the MMR vaccine might cause autism, which was responsible for the MMR vaccine controversy, was retracted because "the claims in the original paper that children were "consecutively referred" and that investigations were "approved" by the local ethics committee have been proven to be false." 2009 Numerous papers written by Scott Reuben from 1996 to 2009 were retracted after it was discovered he never actually conducted any of the trials he claimed to have run. 2007 Retraction of several articles written by social psychologist Jennifer Lerner and colleagues from journals including Personality and Social Psychology Bulletin and Biological Psychiatry. 2006 Retraction of Patient-specific embryonic stem cells derived from human SCNT blastocysts, written by Hwang Woo-Suk. Fabrications in the field of stem cell research led to 'indictment on embezzlement and bioethics law violations linked to faked stem cell research'. 2003 Numerous articles with questionable data from physicist Jan Hendrik Schön were retracted from many journals, including both Science and Nature. 2002 Retraction of announced discovery of elements 116 and 118. See Livermorium, Victor Ninov. 1991 Thereza Imanishi-Kari, who worked with David Baltimore, published a 1986 article in the journal Cell on immunology, which showed unexpected results on how the immune system rearranges its genes to produce antibodies against antigens it encounters for the first time. 
Margot O'Toole, a postdoctoral researcher for Imanishi-Kari, claimed that she could not reproduce Imanishi-Kari's results and alleged that Imanishi-Kari had fabricated the data. After a major investigation, the paper was retracted when the National Institutes of Health concluded that data in the 1986 Imanishi-Kari article had been falsified. Five years later, in 1996, an expert panel appointed by the federal government found no evidence of scientific fraud and cleared Imanishi-Kari of misconduct, although the paper was not reinstated. 1982 John Darsee fabricated results in the Cardiac Research Laboratory of Eugene Braunwald at Harvard in the early 1980s. Initially regarded as brilliant by his supervisor, he was caught out by fellow researchers in the same laboratory. Retraction for ethical violations 2019 An article by Wendy Rogers (Macquarie University, Australia) and colleagues in BMJ Open called for the mass retraction of more than 400 scientific papers on organ transplantation, amid fears the organs were obtained unethically from Chinese prisoners. Wendy Rogers said the journals, researchers and clinicians who used these studies were complicit in these methods of organ trafficking. According to the study, the transplant research community has failed to enforce ethical standards regarding the use of organs from executed prisoners, and papers based on such organs continue to be published. Such widespread ethical violations in research may have many unpredictable consequences for science. In 2019, PLOS ONE also retracted 21 articles related to this incident. 2017 The journal Liver International retracted a Chinese study of liver transplantation because 564 livers grafted in the course of the research over 4 years could not be traced. Experts pointed out that it was implausible that a hospital could have had so many freely donated livers for transplantation, given the small number of donors in China at the time. Retraction over data provenance 2020 On 22 May 2020, during the COVID-19 pandemic, an article was published in The Lancet which claimed to find evidence, based on a database of COVID-19 patients, that hydroxychloroquine and chloroquine increase the chance of patients dying in hospital as well as the chance of ventricular arrhythmia. Medical researchers and newspapers expressed suspicions about the validity of the data, provided by Surgisphere, which was founded by one of the authors of the study. The article was formally retracted on 4 June 2020, at the request of the lead author Mandeep Mehra. Retraction over public relations issues 2016 On March 4, 2016, an article in PLOS ONE about the functioning of the human hand was retracted due to outrage on social media over a reference to "Creator" inside the paper (#CreatorGate). 1896 Jose Rizal was said to have issued a letter of retraction regarding his novels and other published articles against the Roman Catholic Church, see José Rizal: Retraction controversy. See also Fabrication (science) Post-publication peer review Scientific misconduct Sokal affair Erratum Correction (newspaper) References Further reading Scientific misconduct Academic publishing Publishing Error
Retraction in academic publishing
[ "Technology" ]
2,713
[ "Scientific misconduct", "Ethics of science and technology" ]
582,702
https://en.wikipedia.org/wiki/Quasistatic%20process
In thermodynamics, a quasi-static process, also known as a quasi-equilibrium process (from Latin quasi, meaning ‘as if’), is a thermodynamic process that happens slowly enough for the system to remain in internal physical (but not necessarily chemical) thermodynamic equilibrium. An example of this is quasi-static expansion of a mixture of hydrogen and oxygen gas, where the volume of the system changes so slowly that the pressure remains uniform throughout the system at each instant of time during the process. Such an idealized process is a succession of physical equilibrium states, characterized by infinite slowness. Only in a quasi-static thermodynamic process can we exactly define intensive quantities (such as pressure, temperature, specific volume, specific entropy) of the system at any instant during the whole process; otherwise, since no internal equilibrium is established, different parts of the system would have different values of these quantities, so a single value per quantity may not be sufficient to represent the whole system. In other words, when an equation for a change in a state function contains P or T, it implies a quasi-static process. Relation to reversible process While all reversible processes are quasi-static, most authors do not require a general quasi-static process to maintain equilibrium between system and surroundings and avoid dissipation, which are defining characteristics of a reversible process. For example, quasi-static compression of a system by a piston subject to friction is irreversible; although the system is always in internal thermal equilibrium, the friction ensures the generation of dissipative entropy, which goes against the definition of reversibility. Any engineer would remember to include friction when calculating the dissipative entropy generation. An example of a quasi-static process that is not idealizable as reversible is slow heat transfer between two bodies at two finitely different temperatures, where the heat transfer rate is controlled by a poorly conductive partition between the two bodies. In this case, no matter how slowly the process takes place, the state of the composite system consisting of the two bodies is far from equilibrium, since thermal equilibrium for this composite system requires that the two bodies be at the same temperature. Nevertheless, the entropy change for each body can be calculated using the Clausius equality for reversible heat transfer. PV-work in various quasi-static processes Constant pressure: Isobaric processes, where the work done by the system is W = ∫P dV = P(V2 - V1). Constant volume: Isochoric processes, where W = 0. Constant temperature: Isothermal processes, where P (pressure) varies with V (volume) via PV = P1V1 = constant, so W = P1V1 ln(V2/V1). Polytropic processes, where PV^n = constant, so W = (P1V1 - P2V2)/(n - 1). See also Entropy Reversible process (thermodynamics) References Thermodynamic processes Statistical mechanics
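As an illustration of the isothermal case above, the following short Python sketch (not part of the original article; the numerical values are arbitrary assumptions) numerically integrates W = ∫P dV for a quasi-static isothermal expansion of an ideal gas and compares the result with the closed-form expression W = nRT ln(V2/V1).

    import math

    # Assumed illustrative values (not from the article): 1 mol of ideal gas at 300 K
    n, R, T = 1.0, 8.314, 300.0   # mol, J/(mol*K), K
    V1, V2 = 0.010, 0.020         # m^3; a slow, quasi-static doubling of the volume

    # Numerical integration of W = integral of P dV with P = nRT/V (ideal gas, constant T)
    steps = 100_000
    dV = (V2 - V1) / steps
    W_numeric = sum(n * R * T / (V1 + (i + 0.5) * dV) * dV for i in range(steps))

    # Closed-form result for a quasi-static isothermal process
    W_closed = n * R * T * math.log(V2 / V1)

    print(W_numeric, W_closed)    # both are approximately 1728.8 J

The two results agree because, under the quasi-static idealization, the pressure is well defined at every instant, so the work integral can be evaluated along the equilibrium equation of state.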
Quasistatic process
[ "Physics", "Chemistry" ]
554
[ "Thermodynamic processes", "Statistical mechanics", "Thermodynamics" ]
582,770
https://en.wikipedia.org/wiki/Particle%20number
In thermodynamics, the particle number (symbol N) of a thermodynamic system is the number of constituent particles in that system. The particle number is a fundamental thermodynamic property which is conjugate to the chemical potential. Unlike most physical quantities, the particle number is a dimensionless quantity, specifically a countable quantity. It is an extensive property, as it is directly proportional to the size of the system under consideration and thus meaningful only for closed systems. A constituent particle is one that cannot be broken into smaller pieces at the scale of energy kT involved in the process (where k is the Boltzmann constant and T is the temperature). For example, in a thermodynamic system consisting of a piston containing water vapour, the particle number is the number of water molecules in the system. The meaning of constituent particles, and thereby of particle numbers, is thus temperature-dependent. Determining the particle number The concept of particle number plays a major role in theoretical considerations. In situations where the actual particle number of a given thermodynamical system needs to be determined, mainly in chemistry, it is not practically possible to measure it directly by counting the particles. If the material is homogeneous and has a known amount of substance n expressed in moles, the particle number N can be found by the relation N = nNA, where NA is the Avogadro constant. Particle number density A related intensive system parameter is the particle number density (or particle number concentration, PNC), a quantity of kind volumetric number density obtained by dividing the particle number of a system by its volume. This parameter is often denoted by the lower-case letter n. In quantum mechanics In quantum mechanical processes, the total number of particles may not be preserved. The concept is therefore generalized to the particle number operator, that is, the observable that counts the number of constituent particles. In quantum field theory, the particle number operator (see Fock state) is conjugate to the phase of the classical wave (see coherent state). In air quality One measure of air pollution used in air quality standards is the atmospheric concentration of particulate matter. This measure is usually expressed in μg/m3 (micrograms per cubic metre). In the current EU emission norms for cars, vans, and trucks and in the upcoming EU emission norm for non-road mobile machinery, particle number measurements and limits are defined, commonly referred to as PN, with units [#/km] or [#/kWh]. In this case, PN expresses a quantity of particles per unit distance (or work). References Thermodynamics Dimensionless numbers of thermodynamics Countable quantities State functions
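A minimal worked Python sketch of the relation N = nNA quoted above (the sample values are illustrative assumptions, not figures from the article): for a homogeneous sample with a known amount of substance, the particle number and the particle number density follow directly.

    N_A = 6.02214076e23          # Avogadro constant, 1/mol (exact by the SI definition)

    n_moles = 2.0                # assumed amount of substance, mol
    volume = 0.5                 # assumed system volume, m^3

    N = n_moles * N_A            # particle number, a dimensionless countable quantity
    number_density = N / volume  # particle number density, 1/m^3

    print(N)                     # approximately 1.204e24 particles
    print(number_density)        # approximately 2.409e24 particles per cubic metre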
Particle number
[ "Physics", "Chemistry", "Mathematics" ]
553
[ "State functions", "Scalar physical quantities", "Thermodynamic properties", "Physical quantities", "Dimensionless numbers of thermodynamics", "Thermodynamics", "Dimensionless quantities", "Countable quantities", "Dynamical systems" ]
582,780
https://en.wikipedia.org/wiki/Standard%20atmosphere%20%28unit%29
The standard atmosphere (symbol: atm) is a unit of pressure defined as 101,325 Pa. It is sometimes used as a reference pressure or standard pressure. It is approximately equal to Earth's average atmospheric pressure at sea level. History The standard atmosphere was originally defined as the pressure exerted by a 760 mm column of mercury at 0 °C and standard gravity (gn = 9.80665 m/s2). It was used as a reference condition for physical and chemical properties, and the definition of the centigrade temperature scale set 100 °C as the boiling point of water at this pressure. In 1954, the 10th General Conference on Weights and Measures (CGPM) adopted standard atmosphere for general use and affirmed its definition of being precisely equal to 1,013,250 dynes per square centimetre (101,325 Pa). This defined pressure in a way that is independent of the properties of any particular substance. In addition, the CGPM noted that there had been some misapprehension that the previous definition (from the 9th CGPM) "led some physicists to believe that this definition of the standard atmosphere was valid only for accurate work in thermometry." In chemistry and in various industries, the reference pressure referred to in standard temperature and pressure was commonly 1 atm (101.325 kPa) prior to 1982, but standards have since diverged; in 1982, the International Union of Pure and Applied Chemistry recommended that for the purposes of specifying the physical properties of substances, standard pressure should be precisely 100 kPa (1 bar). Pressure units and equivalencies A pressure of 1 atm can also be stated as: ≈ 1.033 kgf/cm2 ≈ 10.33 m H2O ≈ 760 mmHg ≈ 29.92 inHg ≈ 406.8 in H2O ≈ 2,116 pounds-force per square foot (lbf/ft2) The notation ata has been used to indicate an absolute pressure measured in either standard atmospheres (atm) or technical atmospheres (at). See also International Standard Atmosphere References Units of pressure Atmospheric pressure
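The equivalencies listed above follow from the defined value 1 atm = 101,325 Pa. The short Python sketch below is an illustrative check, not part of the article; the conversion factors are the standard pascal values of the other units.

    ATM_PA = 101325.0            # 1 atm in pascals (defined value)

    # Standard values of the other pressure units, expressed in pascals
    units_in_pa = {
        "kgf/cm2": 98066.5,      # technical atmosphere
        "m H2O": 9806.65,        # metre of water (conventional)
        "mmHg": 133.322,         # millimetre of mercury (conventional)
        "inHg": 3386.39,         # inch of mercury
        "in H2O": 249.089,       # inch of water
        "lbf/ft2": 47.8803,      # pound-force per square foot
    }

    for name, pa in units_in_pa.items():
        print(f"1 atm = {ATM_PA / pa:.4g} {name}")
    # prints roughly 1.033 kgf/cm2, 10.33 m H2O, 760 mmHg, 29.92 inHg, 406.8 in H2O and 2116 lbf/ft2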
Standard atmosphere (unit)
[ "Physics", "Mathematics" ]
381
[ "Physical quantities", "Quantity", "Units of pressure", "Meteorological quantities", "Atmospheric pressure", "Units of measurement" ]
582,796
https://en.wikipedia.org/wiki/Fractional%20CIO
A fractional chief information officer differs from a traditional chief information officer in that they serve as a working member of a company's executive management team as a contractor and may or may not serve on the company's board of directors. A fractional CIO, also known as a part-time CIO, parachute CIO, or CIO on-demand, is an experienced, multi-faceted professional who serves as the part-time chief information officer of a small or medium-sized business that otherwise could not afford or would not need a full-time executive to hold the position of chief information officer. A virtual CIO or vCIO may have a similar role, but the term virtual suggests that the individual is not present in the organisation on the same basis as an employed executive. An interim CIO is generally retained for a limited time period, often to oversee a specific project or to cover an interregnum. As with traditional CIOs, a fractional CIO often helps with technology roadmaps, business process improvements, and business technology strategy. The key business benefit of retaining a fractional CIO is that they provide the same expertise and capability as a full-time CIO without the overhead and benefits costs associated with adding another top-level executive. To achieve the best results, however, care must be taken to ensure that the skills of the fractional CIO align primarily with the needs of the business and are not weighted too heavily toward technical expertise. Fractional CIOs typically serve several companies and may or may not engage in the day-to-day management of a company's IT staff or other resources. See also Fractional work Management occupations People in information technology
Fractional CIO
[ "Technology" ]
346
[ "People in information technology", "Information technology" ]
583,073
https://en.wikipedia.org/wiki/Segmentation%20%28biology%29
Segmentation in biology is the division of some animal and plant body plans into a linear series of repetitive segments that may or may not be interconnected to each other. This article focuses on the segmentation of animal body plans, specifically using the examples of the taxa Arthropoda, Chordata, and Annelida. These three groups form segments by using a "growth zone" to direct and define the segments. While all three have a generally segmented body plan and use a growth zone, they use different mechanisms for generating this patterning. Even within these groups, different organisms have different mechanisms for segmenting the body. Segmentation of the body plan is important for allowing free movement and development of certain body parts. It also allows for regeneration in specific individuals. Definition Segmentation is a difficult process to satisfactorily define. Many taxa (for example the molluscs) have some form of serial repetition in their units but are not conventionally thought of as segmented. Segmented animals are those considered to have organs that were repeated, or to have a body composed of self-similar units, but usually it is the parts of an organism that are referred to as being segmented. Embryology Segmentation in animals typically falls into three types, characteristic of different arthropods, vertebrates, and annelids. Arthropods such as the fruit fly form segments from a field of equivalent cells based on transcription factor gradients. Vertebrates like the zebrafish use oscillating gene expression to define segments known as somites. Annelids such as the leech use smaller blast cells budded off from large teloblast cells to define segments. Arthropods Although Drosophila segmentation is not representative of the arthropod phylum in general, it is the most highly studied. Early screens to identify genes involved in cuticle development led to the discovery of a class of genes that was necessary for proper segmentation of the Drosophila embryo. To properly segment the Drosophila embryo, the anterior-posterior axis is defined by maternally supplied transcripts giving rise to gradients of these proteins. This gradient then defines the expression pattern for gap genes, which set up the boundaries between the different segments. The gradients produced from gap gene expression then define the expression pattern for the pair-rule genes. The pair-rule genes are mostly transcription factors, expressed in regular stripes down the length of the embryo. These transcription factors then regulate the expression of segment polarity genes, which define the polarity of each segment. Boundaries and identities of each segment are later defined. Within the arthropods, the body wall, nervous system, kidneys, muscles and body cavity are segmented, as are the appendages (when they are present). Some of these elements (e.g. musculature) are not segmented in their sister taxon, the onychophora. Annelids: Leech While not as well studied as in Drosophila and zebrafish, segmentation in the leech has been described as “budding” segmentation. Early divisions within the leech embryo result in teloblast cells, which are stem cells that divide asymmetrically to create bandlets of blast cells. Furthermore, there are five different teloblast lineages (N, M, O, P, and Q), with one set for each side of the midline. The N and Q lineages contribute two blast cells for each segment, while the M, O, and P lineages only contribute one cell per segment. 
Finally, the number of segments within the embryo is defined by the number of divisions and blast cells. Segmentation appears to be regulated by the gene Hedgehog, suggesting its common evolutionary origin in the ancestor of arthropods and annelids. Within the annelids, as with the arthropods, the body wall, nervous system, kidneys, muscles and body cavity are generally segmented. However, this is not true for all of the traits all of the time: many lack segmentation in the body wall, coelom and musculature. Chordates Although perhaps not as well understood as Drosophila, the embryological process of segmentation has been studied in many vertebrate groups, such as fish (Zebrafish, Medaka), reptiles (Corn Snake), birds (Chicken), and mammals (Mouse). Segmentation in chordates is characterized as the formation of a pair of somites on either side of the midline. This is often referred to as somitogenesis. In vertebrates, segmentation is most often explained in terms of the clock and wavefront model. The "clock" refers to the periodic oscillation in abundance of specific gene products, such as members of the Hairy and Enhancer of Split (Hes) gene family. Expression starts at the posterior end of the embryo and moves towards the anterior, creating travelling waves of gene expression. The "wavefront" is where clock oscillations arrest, initiating gene expression that leads to the patterning of somite boundaries. The position of the wavefront is defined by a decreasing posterior-to-anterior gradient of FGF signalling. In higher vertebrates including Mouse and Chick, (but not Zebrafish), the wavefront also depends upon an opposing anterior-to-posterior decreasing gradient of retinoic acid which limits the anterior spreading of FGF8; retinoic acid repression of Fgf8 gene expression defines the wavefront as the point at which the concentrations of both retinoic acid and diffusible FGF8 protein are at their lowest. Cells at this point will mature and form a pair of somites. The interaction of other signaling molecules, such as myogenic regulatory factors, with this gradient promotes the development of other structures, such as muscles, across the basic segments. Lower vertebrates such as zebrafish do not require retinoic acid repression of caudal Fgf8 for somitogenesis due to differences in gastrulation and neuromesodermal progenitor function compared to higher vertebrates. Other taxa In other taxa, there is some evidence of segmentation in some organs, but this segmentation is not pervasive to the full list of organs mentioned above for arthropods and annelids. One might think of the serially repeated units in many Cycloneuralia, or the segmented body armature of the chitons (which is not accompanied by a segmented coelom). Origin Segmentation can be seen as originating in two ways. To caricature, the 'amplification' pathway would involve a single-segment ancestral organism becoming segmented by repeating itself. This seems implausible, and the 'parcellization' framework is generally preferred – where existing organization of organ systems is 'formalized' from loosely defined packets into more rigid segments. As such, organisms with a loosely defined metamerism, whether internal (as some molluscs) or external (as onychophora), can be seen as 'precursors' to eusegmented organisms such as annelids or arthropods. See also References Developmental biology
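The clock-and-wavefront mechanism described above can be caricatured in a few lines of code. The Python sketch below is a deliberately simplified toy model, not a description of the real biology: the phase clock, the parameter values, and the two-state output are all assumptions made for illustration (actual somitogenesis involves Hes-family oscillators and opposing FGF and retinoic-acid gradients). A clock ticks in every cell until a posteriorly moving wavefront arrests it, and the phase frozen at arrest lays down a repeating, segment-like pattern.

    # Toy parameters (illustrative assumptions, not biological measurements)
    clock_period = 4.0       # time units per oscillation of the segmentation "clock"
    wavefront_speed = 1.0    # axis positions swept per time unit by the wavefront
    n_cells = 40             # cells along the anterior-posterior axis

    # Each cell's clock phase is frozen at the moment the wavefront passes it.
    # Cells frozen in the first half of a cycle are marked "A", the rest "P",
    # which yields alternating blocks, i.e. a striped, segment-like pattern.
    pattern = []
    for cell in range(n_cells):
        arrest_time = cell / wavefront_speed                       # when the wavefront reaches this cell
        frozen_phase = (arrest_time % clock_period) / clock_period
        pattern.append("A" if frozen_phase < 0.5 else "P")

    print("".join(pattern))   # AAPPAAPP... : one "segment" per clock cycle

In this toy model the segment length equals the clock period times the wavefront speed, so a slower wavefront or a faster clock gives shorter segments, mirroring the qualitative behaviour attributed to the clock-and-wavefront picture.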
Segmentation (biology)
[ "Biology" ]
1,490
[ "Behavior", "Developmental biology", "Reproduction" ]
583,104
https://en.wikipedia.org/wiki/Orthosie%20%28moon%29
Orthosie , also known as , is a natural satellite of Jupiter. It was discovered by a team of astronomers from the University of Hawaii led by Scott S. Sheppard in 2001, and given the temporary designation . Orthosie is about 2 kilometres in diameter, and orbits Jupiter at an average distance of 21,075,662 km in 625.07 days, at an inclination of 146.46° to the ecliptic (143° to Jupiter's equator), in a retrograde direction and with an eccentricity of 0.3376. It was named in August 2003 after Orthosie, the Greek goddess of prosperity and one of the Horae. The Horae (Hours) were daughters of Zeus and Themis. Orthosie belongs to the Ananke group. References Ananke group Moons of Jupiter Irregular satellites Discoveries by Scott S. Sheppard Discoveries by David C. Jewitt Discoveries by Yanga R. Fernandez 20011211 Moons with a retrograde orbit
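The orbital figures quoted above are mutually consistent, as a quick Kepler's-third-law check shows. The Python sketch below is an illustrative calculation, not part of the article; it uses a standard value for Jupiter's gravitational parameter and treats the quoted average distance as the semi-major axis.

    import math

    GM_JUPITER = 1.26687e17      # Jupiter's gravitational parameter, m^3/s^2 (standard value)
    a = 21_075_662e3             # quoted average orbital distance, converted to metres

    period_seconds = 2 * math.pi * math.sqrt(a**3 / GM_JUPITER)
    period_days = period_seconds / 86400

    print(round(period_days, 1)) # about 625 days, matching the quoted 625.07-day period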
Orthosie (moon)
[ "Astronomy" ]
207
[ "Astronomy stubs", "Planetary science stubs" ]
583,362
https://en.wikipedia.org/wiki/4015%20Wilson%E2%80%93Harrington
4015 Wilson–Harrington is an active asteroid known both as comet 107P/Wilson–Harrington and as asteroid 4015 Wilson–Harrington. It passed from Earth on 20 July 2022 and then passed perihelion (closest approach to the Sun) on 24 August 2022. It seldom gets brighter than apparent magnitude 16. It will return to perihelion on 25 November 2026. This near-Earth object is considered both an Apollo asteroid with the designation 4015 Wilson–Harrington and a periodic comet known as Comet Wilson–Harrington or 107P/Wilson–Harrington. It was initially discovered in 1949 as a comet and then lost to further observations. Thirty years later it was rediscovered as an asteroid, after which it took over a decade to determine that these observations were of the same object. Therefore, it has both a comet designation and an asteroid designation, and with a name length of 17 characters it is currently the asteroid with the longest name, having one more character than the 16-character limit imposed by the IAU. The comet was discovered on 19 November 1949, by Albert G. Wilson and Robert G. Harrington at Palomar Observatory. Only three photographic observations were obtained and the comet was lost (insufficient observations to determine a precise enough orbit to know where to look for future appearances of the comet.) On 15 November 1979, an apparent Mars-crosser asteroid was found by Eleanor F. Helin, also of Palomar Observatory. It received the designation 1979 VA, and when re-observed on 20 December 1988, received the permanent number 4015. On 13 August 1992, it was reported that asteroid (4015) 1979 VA and comet 107P/Wilson–Harrington were the same object. By then, enough observations of the asteroid had accumulated to obtain a fairly precise orbit, and the search of old photographic plates for prediscovery images turned up the 1949 plates with the images of the lost comet. Although the 1949 images show cometary features, all subsequent images show only a stellar image, suggesting it might be an inactive comet that undergoes only infrequent outbursts. The eccentricity is 0.624, which is somewhat higher than that of a typical asteroid-belt minor planet and more typical of periodic comets. Its Minimum Orbit Intersection Distance (MOID) of less than 0.05 AU and its large size make it a potentially hazardous asteroid (PHA). There are only eight other objects that are cross-listed as both comets and asteroids: 2060 Chiron (95P/Chiron), 7968 Elst–Pizarro (133P/Elst–Pizarro), 60558 Echeclus (174P/Echeclus), 118401 LINEAR (176P/LINEAR), (282P/2003 BM80), (288P/2006 VW139), (362P/2008 GO98), and (433P/2005 QN173). As a dual status object, astrometric observations of 4015 Wilson–Harrington should be reported under the minor planet designation. A flyby of 4015 Wilson–Harrington was formerly planned by Deep Space 1. It was also considered for the NEAR mission. See also Marco Polo (spacecraft) List of asteroids visited by spacecraft References External links Cometography.com: Wilson–Harrington 107P/(4015) Wilson-Harrington – Seiichi Yoshida @ aerith.net 004015 004015 004015 0107 004015 Discoveries by Eleanor F. Helin Named minor planets 004015 19791115 Recovered astronomical objects
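The dates and eccentricity quoted above already pin down the rough shape of the orbit. The Python sketch below is a back-of-the-envelope check, not part of the article: it estimates the semi-major axis from the interval between the two quoted perihelion passages using Kepler's third law, then derives perihelion and aphelion distances from the quoted eccentricity of 0.624.

    from datetime import date

    # Quoted perihelion passages and orbital eccentricity
    t1, t2 = date(2022, 8, 24), date(2026, 11, 25)
    e = 0.624

    period_years = (t2 - t1).days / 365.25   # about 4.25 years between perihelia
    a_au = period_years ** (2 / 3)           # Kepler's third law in heliocentric units (AU, years)

    perihelion = a_au * (1 - e)              # about 0.99 AU
    aphelion = a_au * (1 + e)                # about 4.3 AU, in the outer main-belt region

    print(round(a_au, 2), round(perihelion, 2), round(aphelion, 2))

A perihelion just inside 1 AU together with a semi-major axis well above 1 AU is consistent with the object's classification as an Apollo asteroid.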
4015 Wilson–Harrington
[ "Astronomy" ]
738
[ "Recovered astronomical objects", "Astronomical objects" ]
583,438
https://en.wikipedia.org/wiki/Genetic%20transformation
In molecular biology and genetics, transformation is the genetic alteration of a cell resulting from the direct uptake and incorporation of exogenous genetic material from its surroundings through the cell membrane(s). For transformation to take place, the recipient bacterium must be in a state of competence, which might occur in nature as a time-limited response to environmental conditions such as starvation and cell density, and may also be induced in a laboratory. Transformation is one of three processes that lead to horizontal gene transfer, in which exogenous genetic material passes from one bacterium to another, the other two being conjugation (transfer of genetic material between two bacterial cells in direct contact) and transduction (injection of foreign DNA by a bacteriophage virus into the host bacterium). In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium. As of 2014 about 80 species of bacteria were known to be capable of transformation, about evenly divided between Gram-positive and Gram-negative bacteria; the number might be an overestimate since several of the reports are supported by single papers. "Transformation" may also be used to describe the insertion of new genetic material into nonbacterial cells, including animal and plant cells; however, because "transformation" has a special meaning in relation to animal cells, indicating progression to a cancerous state, the process is usually called "transfection". History Transformation in bacteria was first demonstrated in 1928 by the British bacteriologist Frederick Griffith. Griffith was interested in determining whether injections of heat-killed bacteria could be used to vaccinate mice against pneumonia. However, he discovered that a non-virulent strain of Streptococcus pneumoniae could be made virulent after being exposed to heat-killed virulent strains. Griffith hypothesized that some "transforming principle" from the heat-killed strain was responsible for making the harmless strain virulent. In 1944 this "transforming principle" was identified as being genetic by Oswald Avery, Colin MacLeod, and Maclyn McCarty. They isolated DNA from a virulent strain of S. pneumoniae and using just this DNA were able to make a harmless strain virulent. They called this uptake and incorporation of DNA by bacteria "transformation" (See Avery-MacLeod-McCarty experiment) The results of Avery et al.'s experiments were at first skeptically received by the scientific community and it was not until the development of genetic markers and the discovery of other methods of genetic transfer (conjugation in 1947 and transduction in 1953) by Joshua Lederberg that Avery's experiments were accepted. It was originally thought that Escherichia coli, a commonly used laboratory organism, was refractory to transformation. However, in 1970, Morton Mandel and Akiko Higa showed that E. coli may be induced to take up DNA from bacteriophage λ without the use of helper phage after treatment with calcium chloride solution. Two years later in 1972, Stanley Norman Cohen, Annie Chang and Leslie Hsu showed that treatment is also effective for transformation of plasmid DNA. The method of transformation by Mandel and Higa was later improved upon by Douglas Hanahan. The discovery of artificially induced competence in E. 
coli created an efficient and convenient procedure for transforming bacteria which allows for simpler molecular cloning methods in biotechnology and research, and it is now a routinely used laboratory procedure. Transformation using electroporation was developed in the late 1980s, increasing the efficiency of in-vitro transformation and increasing the number of bacterial strains that could be transformed. Transformation of animal and plant cells was also investigated with the first transgenic mouse being created by injecting a gene for a rat growth hormone into a mouse embryo in 1982. In 1897 a bacterium that caused plant tumors, Agrobacterium tumefaciens, was discovered and in the early 1970s the tumor-inducing agent was found to be a DNA plasmid called the Ti plasmid. By removing the genes in the plasmid that caused the tumor and adding in novel genes, researchers were able to infect plants with A. tumefaciens and let the bacteria insert their chosen DNA into the genomes of the plants. Not all plant cells are susceptible to infection by A. tumefaciens, so other methods were developed, including electroporation and micro-injection. Particle bombardment was made possible with the invention of the Biolistic Particle Delivery System (gene gun) by John Sanford in the 1980s. Definitions Transformation is one of three forms of horizontal gene transfer that occur in nature among bacteria, in which DNA encoding for a trait passes from one bacterium to another and is integrated into the recipient genome by homologous recombination; the other two are transduction, carried out by means of a bacteriophage, and conjugation, in which a gene is passed through direct contact between bacteria. In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium. Competence refers to a temporary state of being able to take up exogenous DNA from the environment; it may be induced in a laboratory. It appears to be an ancient process inherited from a common prokaryotic ancestor that is a beneficial adaptation for promoting recombinational repair of DNA damage, especially damage acquired under stressful conditions. Natural genetic transformation appears to be an adaptation for repair of DNA damage that also generates genetic diversity. Transformation has been studied in medically important Gram-negative bacteria species such as Helicobacter pylori, Legionella pneumophila, Neisseria meningitidis, Neisseria gonorrhoeae, Haemophilus influenzae and Vibrio cholerae. It has also been studied in Gram-negative species found in soil such as Pseudomonas stutzeri, Acinetobacter baylyi, and Gram-negative plant pathogens such as Ralstonia solanacearum and Xylella fastidiosa. Transformation among Gram-positive bacteria has been studied in medically important species such as Streptococcus pneumoniae, Streptococcus mutans, Staphylococcus aureus and Streptococcus sanguinis and in Gram-positive soil bacterium Bacillus subtilis. It has also been reported in at least 30 species of Pseudomonadota distributed in several different classes. The best studied Pseudomonadota with respect to transformation are the medically important human pathogens Neisseria gonorrhoeae, Haemophilus influenzae, and Helicobacter pylori. 
"Transformation" may also be used to describe the insertion of new genetic material into nonbacterial cells, including animal and plant cells; however, because "transformation" has a special meaning in relation to animal cells, indicating progression to a cancerous state, the process is usually called "transfection". Natural competence and transformation Naturally competent bacteria carry sets of genes that provide the protein machinery to bring DNA across the cell membrane(s). The transport of the exogenous DNA into the cells may require proteins that are involved in the assembly of type IV pili and type II secretion system, as well as DNA translocase complex at the cytoplasmic membrane. Due to the differences in structure of the cell envelope between Gram-positive and Gram-negative bacteria, there are some differences in the mechanisms of DNA uptake in these cells, however most of them share common features that involve related proteins. The DNA first binds to the surface of the competent cells on a DNA receptor, and passes through the cytoplasmic membrane via DNA translocase. Only single-stranded DNA may pass through, the other strand being degraded by nucleases in the process. The translocated single-stranded DNA may then be integrated into the bacterial chromosomes by a RecA-dependent process. In Gram-negative cells, due to the presence of an extra membrane, the DNA requires the presence of a channel formed by secretins on the outer membrane. Pilin may be required for competence, but its role is uncertain. The uptake of DNA is generally non-sequence specific, although in some species the presence of specific DNA uptake sequences may facilitate efficient DNA uptake. Natural transformation Natural transformation is a bacterial adaptation for DNA transfer that depends on the expression of numerous bacterial genes whose products appear to be responsible for this process. In general, transformation is a complex, energy-requiring developmental process. In order for a bacterium to bind, take up and recombine exogenous DNA into its chromosome, it must become competent, that is, enter a special physiological state. Competence development in Bacillus subtilis requires expression of about 40 genes. The DNA integrated into the host chromosome is usually (but with rare exceptions) derived from another bacterium of the same species, and is thus homologous to the resident chromosome. In B. subtilis the length of the transferred DNA is greater than 1271 kb (more than 1 million bases). The length transferred is likely double stranded DNA and is often more than a third of the total chromosome length of 4215 kb. It appears that about 7-9% of the recipient cells take up an entire chromosome. The capacity for natural transformation appears to occur in a number of prokaryotes, and thus far 67 prokaryotic species (in seven different phyla) are known to undergo this process. Competence for transformation is typically induced by high cell density and/or nutritional limitation, conditions associated with the stationary phase of bacterial growth. Transformation in Haemophilus influenzae occurs most efficiently at the end of exponential growth as bacterial growth approaches stationary phase. Transformation in Streptococcus mutans, as well as in many other streptococci, occurs at high cell density and is associated with biofilm formation. Competence in B. subtilis is induced toward the end of logarithmic growth, especially under conditions of amino acid limitation. 
Similarly, in Micrococcus luteus (a representative of the less well studied Actinomycetota phylum), competence develops during the mid-late exponential growth phase and is also triggered by amino acid starvation. By releasing intact host and plasmid DNA, certain bacteriophages are thought to contribute to transformation. Transformation, as an adaptation for DNA repair Competence is specifically induced by DNA damaging conditions. For instance, transformation is induced in Streptococcus pneumoniae by the DNA damaging agents mitomycin C (a DNA cross-linking agent) and fluoroquinolone (a topoisomerase inhibitor that causes double-strand breaks). In B. subtilis, transformation is increased by UV light, a DNA damaging agent. In Helicobacter pylori, ciprofloxacin, which interacts with DNA gyrase and introduces double-strand breaks, induces expression of competence genes, thus enhancing the frequency of transformation. Using Legionella pneumophila, Charpentier et al. tested 64 toxic molecules to determine which of these induce competence. Of these, only six, all DNA damaging agents, caused strong induction. These DNA damaging agents were mitomycin C (which causes DNA inter-strand crosslinks), norfloxacin, ofloxacin and nalidixic acid (inhibitors of DNA gyrase that cause double-strand breaks), bicyclomycin (causes single- and double-strand breaks), and hydroxyurea (induces DNA base oxidation). UV light also induced competence in L. pneumophila. Charpentier et al. suggested that competence for transformation probably evolved as a DNA damage response. Logarithmically growing bacteria differ from stationary phase bacteria with respect to the number of genome copies present in the cell, and this has implications for the capability to carry out an important DNA repair process. During logarithmic growth, two or more copies of any particular region of the chromosome may be present in a bacterial cell, as cell division is not precisely matched with chromosome replication. The process of homologous recombinational repair (HRR) is a key DNA repair process that is especially effective for repairing double-strand damages, such as double-strand breaks. This process depends on a second homologous chromosome in addition to the damaged chromosome. During logarithmic growth, DNA damage in one chromosome may be repaired by HRR using sequence information from the other homologous chromosome. Once cells approach stationary phase, however, they typically have just one copy of the chromosome, and HRR requires input of homologous template from outside the cell by transformation. To test whether the adaptive function of transformation is repair of DNA damages, a series of experiments was carried out using B. subtilis irradiated by UV light as the damaging agent (reviewed by Michod et al. and Bernstein et al.). The results of these experiments indicated that transforming DNA acts to repair potentially lethal DNA damages introduced by UV light in the recipient DNA. The particular process responsible for repair was likely HRR. Transformation in bacteria can be viewed as a primitive sexual process, since it involves interaction of homologous DNA from two individuals to form recombinant DNA that is passed on to succeeding generations. Bacterial transformation in prokaryotes may have been the ancestral process that gave rise to meiotic sexual reproduction in eukaryotes (see Evolution of sexual reproduction; Meiosis.) 
Methods and mechanisms of transformation in laboratory Bacterial Artificial competence can be induced in laboratory procedures that involve making the cell passively permeable to DNA by exposing it to conditions that do not normally occur in nature. Typically the cells are incubated in a solution containing divalent cations (often calcium chloride) under cold conditions, before being exposed to a heat pulse (heat shock). Calcium chloride partially disrupts the cell membrane, which allows the recombinant DNA to enter the host cell. Cells that are able to take up the DNA are called competent cells. It has been found that growth of Gram-negative bacteria in 20 mM Mg reduces the number of protein-to-lipopolysaccharide bonds by increasing the ratio of ionic to covalent bonds, which increases membrane fluidity, facilitating transformation. The role of lipopolysaccharides here are verified from the observation that shorter O-side chains are more effectively transformed – perhaps because of improved DNA accessibility. The surface of bacteria such as E. coli is negatively charged due to phospholipids and lipopolysaccharides on its cell surface, and the DNA is also negatively charged. One function of the divalent cation therefore would be to shield the charges by coordinating the phosphate groups and other negative charges, thereby allowing a DNA molecule to adhere to the cell surface. DNA entry into E. coli cells is through channels known as zones of adhesion or Bayer's junction, with a typical cell carrying as many as 400 such zones. Their role was established when cobalamine (which also uses these channels) was found to competitively inhibit DNA uptake. Another type of channel implicated in DNA uptake consists of poly (HB):poly P:Ca. In this poly (HB) is envisioned to wrap around DNA (itself a polyphosphate), and is carried in a shield formed by Ca ions. It is suggested that exposing the cells to divalent cations in cold condition may also change or weaken the cell surface structure, making it more permeable to DNA. The heat-pulse is thought to create a thermal imbalance across the cell membrane, which forces the DNA to enter the cells through either cell pores or the damaged cell wall. Electroporation is another method of promoting competence. In this method the cells are briefly shocked with an electric field of 10-20 kV/cm, which is thought to create holes in the cell membrane through which the plasmid DNA may enter. After the electric shock, the holes are rapidly closed by the cell's membrane-repair mechanisms. Yeast Most species of yeast, including Saccharomyces cerevisiae, may be transformed by exogenous DNA in the environment. Several methods have been developed to facilitate this transformation at high frequency in the lab. Yeast cells may be treated with enzymes to degrade their cell walls, yielding spheroplasts. These cells are very fragile but take up foreign DNA at a high rate. Exposing intact yeast cells to alkali cations such as those of caesium or lithium allows the cells to take up plasmid DNA. Later protocols adapted this transformation method, using lithium acetate, polyethylene glycol, and single-stranded DNA. In these protocols, the single-stranded DNA preferentially binds to the yeast cell wall, preventing plasmid DNA from doing so and leaving it available for transformation. Electroporation: Formation of transient holes in the cell membranes using electric shock; this allows DNA to enter as described above for bacteria. 
Enzymatic digestion or agitation with glass beads may also be used to transform yeast cells. Efficiency – Different yeast genera and species take up foreign DNA with different efficiencies. Also, most transformation protocols have been developed for baker's yeast, S. cerevisiae, and thus may not be optimal for other species. Even within one species, different strains have different transformation efficiencies, sometimes different by three orders of magnitude. For instance, when S. cerevisiae strains were transformed with 10 ug of plasmid YEp13, the strain DKD-5D-H yielded between 550 and 3115 colonies while strain OS1 yielded fewer than five colonies. Plants A number of methods are available to transfer DNA into plant cells. Some vector-mediated methods are: Agrobacterium-mediated transformation is the easiest and most simple plant transformation. Plant tissue (often leaves) are cut into small pieces, e.g. 10x10mm, and soaked for ten minutes in a fluid containing suspended Agrobacterium. The bacteria will attach to many of the plant cells exposed by the cut. The plant cells secrete wound-related phenolic compounds which in turn act to upregulate the virulence operon of the Agrobacterium. The virulence operon includes many genes that encode for proteins that are part of a Type IV secretion system that exports from the bacterium proteins and DNA (delineated by specific recognition motifs called border sequences and excised as a single strand from the virulence plasmid) into the plant cell through a structure called a pilus. The transferred DNA (called T-DNA) is piloted to the plant cell nucleus by nuclear localization signals present in the Agrobacterium protein VirD2, which is covalently attached to the end of the T-DNA at the Right border (RB). Exactly how the T-DNA is integrated into the host plant genomic DNA is an active area of plant biology research. Assuming that a selection marker (such as an antibiotic resistance gene) was included in the T-DNA, the transformed plant tissue can be cultured on selective media to produce shoots. The shoots are then transferred to a different medium to promote root formation. Once roots begin to grow from the transgenic shoot, the plants can be transferred to soil to complete a normal life cycle (make seeds). The seeds from this first plant (called the T1, for first transgenic generation) can be planted on a selective (containing an antibiotic), or if an herbicide resistance gene was used, could alternatively be planted in soil, then later treated with herbicide to kill wildtype segregants. Some plants species, such as Arabidopsis thaliana can be transformed by dipping the flowers or whole plant, into a suspension of Agrobacterium tumefaciens, typically strain C58 (C=Cherry, 58=1958, the year in which this particular strain of A. tumefaciens was isolated from a cherry tree in an orchard at Cornell University in Ithaca, New York). Though many plants remain recalcitrant to transformation by this method, research is ongoing that continues to add to the list the species that have been successfully modified in this manner. Viral transformation (transduction): Package the desired genetic material into a suitable plant virus and allow this modified virus to infect the plant. If the genetic material is DNA, it can recombine with the chromosomes to produce transformant cells. However, genomes of most plant viruses consist of single stranded RNA which replicates in the cytoplasm of infected cell. 
For such genomes this method is a form of transfection and not a real transformation, since the inserted genes never reach the nucleus of the cell and do not integrate into the host genome. The progeny of the infected plants is virus-free and also free of the inserted gene. Some vector-less methods include: Gene gun: Also referred to as particle bombardment, microprojectile bombardment, or biolistics. Particles of gold or tungsten are coated with DNA and then shot into young plant cells or plant embryos. Some genetic material will stay in the cells and transform them. This method also allows transformation of plant plastids. The transformation efficiency is lower than in Agrobacterium-mediated transformation, but most plants can be transformed with this method. Electroporation: Formation of transient holes in cell membranes using electric pulses of high field strength; this allows DNA to enter as described above for bacteria. Fungi There are some methods to produce transgenic fungi most of them being analogous to those used for plants. However, fungi have to be treated differently due to some of their microscopic and biochemical traits: A major issue is the dikaryotic state that parts of some fungi are in; dikaryotic cells contain two haploid nuclei, one of each parent fungus. If only one of these gets transformed, which is the rule, the percentage of transformed nuclei decreases after each sporulation. Fungal cell walls are quite thick hindering DNA uptake so (partial) removal is often required; complete degradation, which is sometimes necessary, yields protoplasts. Mycelial fungi consist of filamentous hyphae, which are, if at all, separated by internal cell walls interrupted by pores big enough to enable nutrients and organelles, sometimes even nuclei, to travel through each hypha. As a result, individual cells usually cannot be separated. This is problematic as neighbouring transformed cells may render untransformed ones immune to selection treatments, e.g. by delivering nutrients or proteins for antibiotic resistance. Additionally, growth (and thereby mitosis) of these fungi exclusively occurs at the tip of their hyphae which can also deliver issues. As stated earlier, an array of methods used for plant transformation do also work in fungi: Agrobacterium is not only capable of infecting plants but also fungi, however, unlike plants, fungi do not secrete the phenolic compounds necessary to trigger Agrobacterium so that they have to be added, e.g. in the form of acetosyringone. Thanks to development of an expression system for small RNAs in fungi the introduction of a CRISPR/CAS9-system in fungal cells became possible. In 2016 the USDA declared that it will not regulate a white button mushroom strain edited with CRISPR/CAS9 to prevent fruit body browning causing a broad discussion about placing CRISPR/CAS9-edited crops on the market. Physical methods like electroporation, biolistics ("gene gun"), sonoporation that uses cavitation of gas bubbles produced by ultrasound to penetrate the cell membrane, etc. are also applicable to fungi. Animals Introduction of DNA into animal cells is usually called transfection, and is discussed in the corresponding article. Practical aspects of transformation in molecular biology The discovery of artificially induced competence in bacteria allow bacteria such as Escherichia coli to be used as a convenient host for the manipulation of DNA as well as expressing proteins. Typically plasmids are used for transformation in E. coli. 
In order to be stably maintained in the cell, a plasmid DNA molecule must contain an origin of replication, which allows it to be replicated in the cell independently of the replication of the cell's own chromosome. The efficiency with which a competent culture can take up exogenous DNA and express its genes is known as transformation efficiency and is measured in colony forming units (cfu) per μg of DNA used. A transformation efficiency of 1×10^8 cfu/μg for a small plasmid like pUC19 is roughly equivalent to 1 in 2000 molecules of the plasmid used being transformed. In calcium chloride transformation, the cells are prepared by chilling cells in the presence of Ca2+ (in CaCl2 solution), making the cells permeable to plasmid DNA. The cells are incubated on ice with the DNA, and then briefly heat-shocked (e.g., at 42 °C for 30–120 seconds). This method works very well for circular plasmid DNA. Non-commercial preparations should normally give 10^6 to 10^7 transformants per microgram of plasmid; a poor preparation will be about 10^4/μg or less, but a good preparation of competent cells can give up to ~10^8 colonies per microgram of plasmid. Protocols, however, exist for making supercompetent cells that may yield a transformation efficiency of over 10^9. The chemical method, however, usually does not work well for linear DNA, such as fragments of chromosomal DNA, probably because the cell's native exonuclease enzymes rapidly degrade linear DNA. In contrast, cells that are naturally competent are usually transformed more efficiently with linear DNA than with plasmid DNA. The transformation efficiency using the chemical method decreases with plasmid size, and electroporation therefore may be a more effective method for the uptake of large plasmid DNA. Cells used in electroporation should be prepared first by washing in cold double-distilled water to remove charged particles that may create sparks during the electroporation process. Selection and screening in plasmid transformation Because transformation usually produces a mixture of relatively few transformed cells and an abundance of non-transformed cells, a method is necessary to select for the cells that have acquired the plasmid. The plasmid therefore requires a selectable marker such that those cells without the plasmid may be killed or have their growth arrested. Antibiotic resistance is the most commonly used marker for prokaryotes. The transforming plasmid contains a gene that confers resistance to an antibiotic that the bacteria are otherwise sensitive to. The mixture of treated cells is cultured on media that contain the antibiotic so that only transformed cells are able to grow. Another method of selection is the use of certain auxotrophic markers that can compensate for an inability to metabolise certain amino acids, nucleotides, or sugars. This method requires the use of suitably mutated strains that are deficient in the synthesis or utility of a particular biomolecule, and the transformed cells are cultured in a medium that allows only cells containing the plasmid to grow. In a cloning experiment, a gene may be inserted into a plasmid used for transformation. However, in such an experiment, not all the plasmids may contain a successfully inserted gene. Additional techniques may therefore be employed to further screen for transformed cells that contain plasmid with the insert. Reporter genes can be used as markers, such as the lacZ gene which codes for β-galactosidase used in blue-white screening. 
This method of screening relies on the principle of α-complementation, where a fragment of the lacZ gene (lacZα) in the plasmid can complement another mutant lacZ gene (lacZΔM15) in the cell. Both genes by themselves produce non-functional peptides; however, when expressed together, as when a plasmid containing lacZα is transformed into lacZΔM15 cells, they form a functional β-galactosidase. The presence of an active β-galactosidase may be detected when cells are grown on plates containing X-gal, forming characteristic blue colonies. However, the multiple cloning site, where a gene of interest may be ligated into the plasmid vector, is located within the lacZα gene. Successful ligation therefore disrupts the lacZα gene, and no functional β-galactosidase can form, resulting in white colonies. Cells containing a successfully ligated insert can then be easily distinguished by their white coloration from the unsuccessful blue ones. Other commonly used reporter genes are green fluorescent protein (GFP), which produces cells that glow green under blue light, and the enzyme luciferase, which catalyzes a reaction with luciferin to emit light. The recombinant DNA may also be detected using other methods such as nucleic acid hybridization with a radioactive RNA probe, while cells that express the desired protein from the plasmid may also be detected using immunological methods. References External links Bacterial Transformation (a Flash Animation) "Ready, aim, fire!" At the Max Planck Institute for Molecular Plant Physiology in Potsdam-Golm plant cells are 'bombarded' using a particle gun Gene delivery Modification of genetic information Molecular biology
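As a rough check of the transformation-efficiency figure quoted earlier in this article (1×10^8 cfu/μg for a small plasmid such as pUC19, said to correspond to roughly 1 in 2000 plasmid molecules being transformed), the Python sketch below estimates how many plasmid molecules are present in 1 μg of pUC19 and what fraction of them give rise to colonies. The plasmid length and the average mass per base pair are common textbook approximations used here as assumptions.

    AVOGADRO = 6.022e23      # molecules per mole
    PUC19_BP = 2686          # length of pUC19 in base pairs (commonly cited value)
    MASS_PER_BP = 650.0      # approximate g/mol per base pair of double-stranded DNA

    plasmid_mw = PUC19_BP * MASS_PER_BP                  # about 1.75e6 g/mol
    molecules_per_ug = (1e-6 / plasmid_mw) * AVOGADRO    # about 3.4e11 molecules in 1 ug

    efficiency = 1e8                                     # cfu per ug, as quoted in the text
    fraction_transformed = efficiency / molecules_per_ug

    print(f"{molecules_per_ug:.2e} molecules per ug")
    print(f"about 1 in {round(1 / fraction_transformed)} molecules transformed")

Under these assumptions the answer comes out to roughly one transformant per few thousand plasmid molecules, the same order of magnitude as the figure quoted in the text; the exact ratio depends on the assumed mass per base pair and on the physical form of the plasmid.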
Genetic transformation
[ "Chemistry", "Biology" ]
6,127
[ "Genetics techniques", "Modification of genetic information", "Molecular biology techniques", "Molecular genetics", "Molecular biology", "Biochemistry", "Gene delivery" ]
583,514
https://en.wikipedia.org/wiki/Automatic%20terminal%20information%20service
Automatic terminal information service, or ATIS, is a continuous broadcast of recorded aeronautical information in busier terminal areas. ATIS broadcasts contain essential information, such as current weather information, active runways, available approaches, and any other information required by the pilots, such as important NOTAMs. Pilots usually listen to an available ATIS broadcast before contacting the local control unit, which reduces the controllers' workload and relieves frequency congestion. ATIS was developed and adopted by the FAA in the mid-1960s and internationally (under the direction of ICAO) beginning in 1974. Before the adoption of ATIS, this information was routinely disseminated to each aircraft separately, increasing controller workload during periods of high traffic density. In the U.S., ATIS will include (in this order): the airport or facility name; a phonetic letter code; time of the latest weather observation in UTC; weather information, consisting of wind direction and velocity, visibility, obstructions to vision, sky condition, temperature, dew point, altimeter setting, density altitude advisory if appropriate; and other pertinent remarks, including runway in use. If it exists, the weather observation includes remarks of lightning, cumulonimbus, and towering cumulus clouds. Additionally, ATIS may contain man-portable air-defense systems (MANPADS) alert and advisory, reported unauthorized laser illumination events, instrument or visual approaches in use, departure runways, taxiway closures, new or temporary changes to runway length, runway condition and codes, other optional information, and advisories. The recording is updated in fixed intervals or when there is a significant change in the information, such as a change in the active runway. It is given a letter designation (alpha, bravo, charlie, etc.) from the ICAO spelling alphabet. The letter progresses through the alphabet with every update and starts at alpha after a break in service of twelve hours or more. When contacting the local control unit, pilots indicate their information <letter>, where <letter> is the ATIS identification letter of the ATIS transmission the pilot received. This helps the ATC controller verify that the pilot has current information. Many airports also employ the use of data-link ATIS (D-ATIS, introduced in 1996). D-ATIS is a text-based, digitally transmitted version of the ATIS audio broadcast. It is accessed via a data link service such as the ACARS and displayed on an electronic display in the aircraft. D-ATIS is incorporated on the aircraft as part of its electronic system, such as an EFB or an FMS. D-ATIS may be incorporated into the core ATIS system or be realized as a separate system with a data interface between voice ATIS and D-ATIS. The ATIS is not to be confused with the METAR, which will not contain certain information such as the runway in use. Sample messages Example at a General Aviation airport in the UK (Gloucestershire Airport) International Airport Example 1 See METAR for a more in-depth explanation of aviation weather messages and terminology. Example 2 This example was recorded on 11 July 2016 at London Stansted Airport during which time there were ongoing maintenance works taking place on the taxiway surface in a part of the airport near the cargo terminal; the ATIS broadcast reflects this. 
Example 3 This message was recorded at Manchester International Airport on the 9th of August 2019 See also METAR Air traffic control Automated airport weather station References External links Canada ATIS frequencies UK ATIS frequencies Sydney Australia live web-based ATIS EUROCONTROL > ATM Performance > EATM > ACARS > Overview Digital Automatic Terminal Information Service (D-ATIS) by ARINC Digital Automatic Terminal Information Service SkyVector: Flight Planning Abbreviated U.S. airport pop-up ATIS information Telecommunications-related introductions in the 1960s Telecommunications-related introductions in 1974 Air traffic control Airport infrastructure
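The letter-sequencing rule described above (advance one ICAO letter per update, wrap around, and restart at Alpha after a break in service of twelve hours or more) is simple enough to express directly. The sketch below is purely illustrative and does not reflect any operational ATIS system.

```python
ICAO_ALPHABET = [
    "Alpha", "Bravo", "Charlie", "Delta", "Echo", "Foxtrot", "Golf", "Hotel",
    "India", "Juliett", "Kilo", "Lima", "Mike", "November", "Oscar", "Papa",
    "Quebec", "Romeo", "Sierra", "Tango", "Uniform", "Victor", "Whiskey",
    "X-ray", "Yankee", "Zulu",
]

def next_information_letter(current, hours_since_last_broadcast):
    """Advance the ATIS information letter; restart at Alpha after a
    break in service of twelve hours or more."""
    if current is None or hours_since_last_broadcast >= 12:
        return ICAO_ALPHABET[0]
    i = ICAO_ALPHABET.index(current)
    return ICAO_ALPHABET[(i + 1) % len(ICAO_ALPHABET)]

print(next_information_letter("Zulu", 0.5))   # Alpha (wraps around the alphabet)
print(next_information_letter("Golf", 14.0))  # Alpha (restart after a long break)
```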
Automatic terminal information service
[ "Engineering" ]
792
[ "Airport infrastructure", "Aerospace engineering" ]
583,532
https://en.wikipedia.org/wiki/Function%20type
In computer science and mathematical logic, a function type (or arrow type or exponential) is the type of a variable or parameter to which a function has or can be assigned, or an argument or result type of a higher-order function taking or returning a function. A function type depends on the type of the parameters and the result type of the function (it, or more accurately the unapplied type constructor · → ·, is a higher-kinded type). In theoretical settings and programming languages where functions are defined in curried form, such as the simply typed lambda calculus, a function type depends on exactly two types, the domain A and the range B. Here a function type is often denoted A → B, following mathematical convention, or B^A, based on there existing exactly B^A (exponentially many) set-theoretic functions mapping A to B in the category of sets. The class of such maps or functions is called the exponential object. The act of currying makes the function type adjoint to the product type; this is explored in detail in the article on currying. The function type can be considered to be a special case of the dependent product type, which, among other properties, encompasses the idea of a polymorphic function. Programming languages The syntax used for function types in several programming languages can be summarized, including an example type signature for the higher-order function composition function. When looking at the example type signature of, for example, C#, the type of the function is actually Func<Func<A,B>,Func<B,C>,Func<A,C>>. Due to type erasure in C++11's std::function, it is more common to use templates for higher order function parameters and type inference (auto) for closures. Denotational semantics The function type in programming languages does not correspond to the space of all set-theoretic functions. Given the countably infinite type of natural numbers as the domain and the booleans as range, there are an uncountably infinite number (2^ℵ₀ = 𝔠) of set-theoretic functions between them. Clearly this space of functions is larger than the number of functions that can be defined in any programming language, as there exist only countably many programs (a program being a finite sequence of a finite number of symbols) and one of the set-theoretic functions effectively solves the halting problem. Denotational semantics concerns itself with finding more appropriate models (called domains) to model programming language concepts such as function types. It turns out that restricting expression to the set of computable functions is not sufficient either if the programming language allows writing non-terminating computations (which is the case if the programming language is Turing complete). Expression must be restricted to the so-called continuous functions (corresponding to continuity in the Scott topology, not continuity in the real analytical sense). Even then, the set of continuous functions contains the parallel-or function, which cannot be correctly defined in all programming languages. See also Cartesian closed category Currying Exponential object, category-theoretic equivalent First-class function Function space, set-theoretic equivalent References Homotopy Type Theory: Univalent Foundations of Mathematics, The Univalent Foundations Program, Institute for Advanced Study. See section 1.2. Data types Subroutines Type theory
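The composition signature discussed above can be written down concretely. The sketch below uses Python's typing module purely for illustration (it is not taken from the article's own language comparison), showing that compose has the function type (A → B, B → C) → (A → C).

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# compose : (A -> B, B -> C) -> (A -> C), i.e. a higher-order function whose
# parameters and result are themselves function types.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    def h(x: A) -> C:
        return g(f(x))
    return h

# Example: (str -> int) composed with (int -> bool) yields (str -> bool).
is_long = compose(len, lambda n: n > 10)
print(is_long("hello, world"))  # True
```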
Function type
[ "Mathematics" ]
695
[ "Type theory", "Mathematical logic", "Mathematical structures", "Mathematical objects" ]
583,598
https://en.wikipedia.org/wiki/Oxygen%20cycle
Oxygen cycle refers to the movement of oxygen through the atmosphere (air), biosphere (plants and animals) and the lithosphere (the Earth’s crust). The oxygen cycle demonstrates how free oxygen is made available in each of these regions, as well as how it is used. The oxygen cycle is the biogeochemical cycle of oxygen atoms between different oxidation states in ions, oxides, and molecules through redox reactions within and between the spheres/reservoirs of the planet Earth. The word oxygen in the literature typically refers to the most common oxygen allotrope, elemental/diatomic oxygen (O2), as it is a common product or reactant of many biogeochemical redox reactions within the cycle. Processes within the oxygen cycle are considered to be biological or geological and are evaluated as either a source (O2 production) or sink (O2 consumption). Oxygen is one of the most common elements on Earth and represents a large portion of each main reservoir. By far the largest reservoir of Earth's oxygen is within the silicate and oxide minerals of the crust and mantle (99.5% by weight). The Earth's atmosphere, hydrosphere, and biosphere together hold less than 0.05% of the Earth's total mass of oxygen. Besides O2, additional oxygen atoms are present in various forms spread throughout the surface reservoirs in the molecules of biomass, H2O, CO2, HNO3, NO, NO2, CO, H2O2, O3, SO2, H2SO4, MgO, CaO, Al2O3, SiO2, and PO4. Atmosphere The atmosphere is 21% oxygen by volume, which equates to a total of roughly 34 × 10¹⁸ mol of oxygen. Other oxygen-containing molecules in the atmosphere include ozone (O3), carbon dioxide (CO2), water vapor (H2O), and sulphur and nitrogen oxides (SO2, NO, N2O, etc.). Biosphere The biosphere is 22% oxygen by volume, present mainly as a component of organic molecules (CxHxNxOx) and water. Hydrosphere The hydrosphere is 33% oxygen by volume, present mainly as a component of water molecules, with dissolved molecules including free oxygen and carbonic acids (HxCO3). Lithosphere The lithosphere is 46.6% oxygen by volume, present mainly as silica minerals (SiO2) and other oxide minerals. Sources and sinks While there are many abiotic sources and sinks for O2, the presence of the profuse concentration of free oxygen in modern Earth's atmosphere and ocean is attributed to O2 production from the biological process of oxygenic photosynthesis in conjunction with a biological sink known as the biological pump and a geologic process of carbon burial involving plate tectonics. Biology is the main driver of O2 flux on modern Earth, and the evolution of oxygenic photosynthesis by bacteria, which is discussed as part of the Great Oxygenation Event, is thought to be directly responsible for the conditions permitting the development and existence of all complex eukaryotic metabolism. Biological production The main source of atmospheric free oxygen is photosynthesis, which produces sugars and free oxygen from carbon dioxide and water: 6CO2 + 6H2O + light energy → C6H12O6 + 6O2. Photosynthesizing organisms include the plant life of the land areas, as well as the phytoplankton of the oceans. The tiny marine cyanobacterium Prochlorococcus was discovered in 1986 and accounts for up to half of the photosynthesis of the open oceans. Abiotic production An additional source of atmospheric free oxygen comes from photolysis, whereby high-energy ultraviolet radiation breaks down atmospheric water and nitrous oxide into component atoms. 
The free hydrogen and nitrogen atoms escape into space, leaving O2 in the atmosphere: 2H2O + energy → 4H + O2, and 2N2O + energy → 4N + O2. Biological consumption The main way free oxygen is lost from the atmosphere is via respiration and decay, mechanisms in which animal life and bacteria consume oxygen and release carbon dioxide. Capacities and fluxes The following tables offer estimates of oxygen cycle reservoir capacities and fluxes. These numbers are based primarily on estimates from (Walker, J. C. G.). More recent research indicates that ocean life (marine primary production) is actually responsible for more than half the total oxygen production on Earth. Table 2: Annual gain and loss of atmospheric oxygen (units of 10¹⁰ kg O2 per year) Ozone The presence of atmospheric oxygen has led to the formation of ozone (O3) and the ozone layer within the stratosphere: O + O2 → O3 The ozone layer is extremely important to modern life as it absorbs harmful ultraviolet radiation: O3 + ultraviolet radiation → O2 + O See also Carbon cycle Nitrogen cycle Hydrogen cycle Dark oxygen References Further reading Ecology Chemical oceanography Photosynthesis Biogeochemical cycle Cycle
Oxygen cycle
[ "Chemistry", "Biology" ]
999
[ "Photosynthesis", "Ecology", "Chemical oceanography", "Biogeochemical cycle", "Biogeochemistry", "Biochemistry" ]
583,600
https://en.wikipedia.org/wiki/Theory%20of%20equations
In algebra, the theory of equations is the study of algebraic equations (also called "polynomial equations"), which are equations defined by a polynomial. The main problem of the theory of equations was to know when an algebraic equation has an algebraic solution. This problem was completely solved in 1830 by Évariste Galois, by introducing what is now called Galois theory. Before Galois, there was no clear distinction between the "theory of equations" and "algebra". Since then algebra has been dramatically enlarged to include many new subareas, and the theory of algebraic equations receives much less attention. Thus, the term "theory of equations" is mainly used in the context of the history of mathematics, to avoid confusion between old and new meanings of "algebra". History Until the end of the 19th century, "theory of equations" was almost synonymous with "algebra". For a long time, the main problem was to find the solutions of a single non-linear polynomial equation in a single unknown. The fact that a complex solution always exists is the fundamental theorem of algebra, which was proved only at the beginning of the 19th century and does not have a purely algebraic proof. Nevertheless, the main concern of the algebraists was to solve in terms of radicals, that is to express the solutions by a formula which is built with the four operations of arithmetics and with nth roots. This was done up to degree four during the 16th century. Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations. Gerolamo Cardano published them in his 1545 book Ars Magna, together with a solution for the quartic equations, discovered by his student Lodovico Ferrari. In 1572 Rafael Bombelli published his L'Algebra in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. The case of higher degrees remained open until the 19th century, when Paolo Ruffini gave an incomplete proof in 1799 that some fifth degree equations cannot be solved in radicals followed by Niels Henrik Abel's complete proof in 1824 (now known as the Abel–Ruffini theorem). Évariste Galois later introduced a theory (presently called Galois theory) to decide which equations are solvable by radicals. Further problems Other classical problems of the theory of equations are the following: Linear equations: this problem was solved during antiquity. Simultaneous linear equations: The general theoretical solution was provided by Gabriel Cramer in 1750. However devising efficient methods (algorithms) to solve these systems remains an active subject of research now called linear algebra. Finding the integer solutions of an equation or of a system of equations. These problems are now called Diophantine equations, which are considered a part of number theory (see also integer programming). Systems of polynomial equations: Because of their difficulty, these systems, with few exceptions, have been studied only since the second part of the 19th century. They have led to the development of algebraic geometry. See also Root-finding algorithm Properties of polynomial roots Quintic function References https://www.britannica.com/science/mathematics/Theory-of-equations Further reading Uspensky, James Victor, Theory of Equations (McGraw-Hill), 1963 Dickson, Leonard E., Elementary Theory of Equations (Internet Archive), originally 1914 History of algebra Polynomials Equations
Theory of equations
[ "Mathematics" ]
688
[ "History of algebra", "Polynomials", "Mathematical objects", "Equations", "Algebra" ]
583,637
https://en.wikipedia.org/wiki/Simplicial%20approximation%20theorem
In mathematics, the simplicial approximation theorem is a foundational result for algebraic topology, guaranteeing that continuous mappings can be (by a slight deformation) approximated by ones that are piecewise of the simplest kind. It applies to mappings between spaces that are built up from simplices—that is, finite simplicial complexes. The general continuous mapping between such spaces can be represented approximately by the type of mapping that is (affine-) linear on each simplex into another simplex, at the cost (i) of sufficient barycentric subdivision of the simplices of the domain, and (ii) replacement of the actual mapping by a homotopic one. This theorem was first proved by L.E.J. Brouwer, by use of the Lebesgue covering theorem (a result based on compactness). It served to put the homology theory of the time—the first decade of the twentieth century—on a rigorous basis, since it showed that the topological effect (on homology groups) of continuous mappings could in a given case be expressed in a finitary way. This must be seen against the background of a realisation at the time that continuity was in general compatible with the pathological, in some other areas. This initiated, one could say, the era of combinatorial topology. There is a further simplicial approximation theorem for homotopies, stating that a homotopy between continuous mappings can likewise be approximated by a combinatorial version. Formal statement of the theorem Let K and L be two simplicial complexes. A simplicial mapping f : K → L is called a simplicial approximation of a continuous function F : |K| → |L| if for every point x ∈ |K|, |f|(x) belongs to the minimal closed simplex of L containing the point F(x). If f is a simplicial approximation to a continuous map F, then the geometric realization of f, |f|, is necessarily homotopic to F. The simplicial approximation theorem states that given any continuous map F : |K| → |L| there exists a natural number n₀ such that for all n ≥ n₀ there exists a simplicial approximation f : Bdⁿ K → L to F (where Bd K denotes the barycentric subdivision of K, and Bdⁿ K denotes the result of applying barycentric subdivision n times); in other words, if K and L are simplicial complexes and F : |K| → |L| is a continuous function, then there is a subdivision K′ of K and a simplicial map f : K′ → L which is homotopic to F. Moreover, if ε is a positive continuous function, then there are subdivisions K′, L′ of K and L and a simplicial map f : K′ → L′ such that |f| is ε-homotopic to F; that is, there is a homotopy H from |f| to F such that the distance between H(x, t) and F(x) is less than ε(x) for all x and t. So, we may consider the simplicial approximation theorem as a piecewise linear analog of the Whitney approximation theorem. References Theory of continuous functions Simplicial sets Theorems in algebraic topology
Simplicial approximation theorem
[ "Mathematics" ]
572
[ "Theory of continuous functions", "Theorems in topology", "Basic concepts in set theory", "Topology", "Families of sets", "Simplicial sets", "Theorems in algebraic topology" ]
583,651
https://en.wikipedia.org/wiki/Barycentric%20subdivision
In mathematics, the barycentric subdivision is a standard way to subdivide a given simplex into smaller ones. Its extension on simplicial complexes is a canonical method to refine them. Therefore, the barycentric subdivision is an important tool in algebraic topology. Motivation The barycentric subdivision is an operation on simplicial complexes. In algebraic topology it is sometimes useful to replace the original spaces with simplicial complexes via triangulations: The substitution allows to assign combinatorial invariants as the Euler characteristic to the spaces. One can ask if there is an analogous way to replace the continuous functions defined on the topological spaces by functions that are linear on the simplices and which are homotopic to the original maps (see also simplicial approximation). In general, such an assignment requires a refinement of the given complex, meaning, one replaces bigger simplices by a union of smaller simplices. A standard way to effectuate such a refinement is the barycentric subdivision. Moreover, barycentric subdivision induces maps on homology groups and is helpful for computational concerns, see Excision and Mayer–Vietoris sequence. Definition Subdivision of simplicial complexes Let be a geometric simplicial complex. A complex is said to be a subdivision of if each simplex of is contained in a simplex of each simplex of is a finite union of simplices of These conditions imply that and equal as sets and as topological spaces, only the simplicial structure changes. Barycentric subdivision of a simplex For a simplex spanned by points , the barycenter is defined to be the point . To define the subdivision, we will consider a simplex as a simplicial complex that contains only one simplex of maximal dimension, namely the simplex itself. The barycentric subdivision of a simplex can be defined inductively by its dimension. For points, i.e. simplices of dimension 0, the barycentric subdivision is defined as the point itself. Suppose then for a simplex of dimension that its faces of dimension are already divided. Therefore, there exist simplices covering . The barycentric subdivision is then defined to be the geometric simplicial complex whose maximal simplices of dimension are each a convex hulls of for one pair for some , so there will be simplices covering . One can generalize the subdivision for simplicial complexes whose simplices are not all contained in a single simplex of maximal dimension, i.e. simplicial complexes that do not correspond geometrically to one simplex. This can be done by effectuating the steps described above simultaneously for every simplex of maximal dimension. The induction will then be based on the -th skeleton of the simplicial complex. It allows effectuating the subdivision more than once. Barycentric subdivision of a convex polytope The operation of barycentric subdivision can be applied to any convex polytope of any dimension, producing another convex polytope of the same dimension. In this version of barycentric subdivision, it is not necessary for the polytope to form a simplicial complex: it can have faces that are not simplices. This is the dual operation to omnitruncation. The vertices of the barycentric subdivision correspond to the faces of all dimensions of the original polytope. Two vertices are adjacent in the barycentric subdivision when they correspond to two faces of different dimensions with the lower-dimensional face included in the higher-dimensional face. 
The facets of the barycentric subdivision are simplices, corresponding to the flags of the original polytope. For instance, the barycentric subdivision of a cube, or of a regular octahedron, is the disdyakis dodecahedron. The degree-6, degree-4, and degree-8 vertices of the disdyakis dodecahedron correspond to the vertices, edges, and square facets of the cube, respectively. Properties Mesh Let a simplex and define . One way to measure the mesh of a geometric, simplicial complex is to take the maximal diameter of the simplices contained in the complex. Let be an - dimensional simplex that comes from the covering of obtained by the barycentric subdivision. Then, the following estimation holds: . Therefore, by applying barycentric subdivision sufficiently often, the largest edge can be made as small as desired. Homology For some statements in homology-theory one wishes to replace simplicial complexes by a subdivision. On the level of simplicial homology groups one requires a map from the homology-group of the original simplicial complex to the groups of the subdivided complex. Indeed it can be shown that for any subdivision of a finite simplicial complex there is a unique sequence of maps between the homology groups such that for each in the maps fulfills and such that the maps induces endomorphisms of chain complexes. Moreover, the induced map is an isomorphism: Subdivision does not change the homology of the complex. To compute the singular homology groups of a topological space one considers continuous functions where denotes the -dimensional-standard-simplex. In an analogous way as described for simplicial homology groups, barycentric subdivision can be interpreted as an endomorphism of singular chain complexes. Here again, there exists a subdivision operator sending a chain to a linear combination where the sum runs over all simplices that appear in the covering of by barycentric subdivision, and for all of such . This map also induces an endomorphism of chain complexes. Applications The barycentric subdivision can be applied on whole simplicial complexes as in the simplicial approximation theorem or it can be used to subdivide geometric simplices. Therefore it is crucial for statements in singular homology theory, see Mayer–Vietoris sequence and excision. Simplicial approximation Let , be abstract simplicial complexes above sets , . A simplicial map is a function which maps each simplex in onto a simplex in . By affin-linear extension on the simplices, induces a map between the geometric realizations of the complexes. Each point in a geometric complex lies in the inner of exactly one simplex, its support. Consider now a continuous map . A simplicial map is said to be a simplicial approximation of if and only if each is mapped by onto the support of in . If such an approximation exists, one can construct a homotopy transforming into by defining it on each simplex; there, it always exists, because simplices are contractible. The simplicial approximation theorem guarantees for every continuous function the existence of a simplicial approximation at least after refinement of , for instance by replacing by its iterated barycentric subdivision. The theorem plays an important role for certain statements in algebraic topology in order to reduce the behavior of continuous maps on those of simplicial maps, as for instance in Lefschetz's fixed-point theorem. Lefschetz's fixed-point theorem The Lefschetz number is a useful tool to find out whether a continuous function admits fixed-points. 
This data is computed as follows: Suppose that and are topological spaces that admit finite triangulations. A continuous map induces homomorphisms between its simplicial homology groups with coefficients in a field . These are linear maps between - vectorspaces, so their trace can be determined and their alternating sum is called the Lefschetz number of . If , this number is the Euler characteristic of . The fixpoint theorem states that whenever , has a fixed-point. In the proof this is first shown only for simplicial maps and then generalized for any continuous functions via the approximation theorem. Now, Brouwer's fixpoint theorem is a special case of this statement. Let is an endomorphism of the unit-ball. For all its homology groups vanish, and is always the identity, so , so has a fixpoint. Mayer–Vietoris sequence The Mayer–Vietoris sequence is often used to compute singular homology groups and gives rise to inductive arguments in topology. The related statement can be formulated as follows: Let an open cover of the topological space . There is an exact sequence where we consider singular homology groups, are embeddings and denotes the direct sum of abelian groups. For the construction of singular homology groups one considers continuous maps defined on the standard simplex . An obstacle in the proof of the theorem are maps such that their image is nor contained in neither in . This can be fixed using the subdivision operator: By considering the images of such maps as the sum of images of smaller simplices, lying in or one can show that the inclusion induces an isomorphism on homology which is needed to compare the homology groups. Excision Excision can be used to determine relative homology groups. It allows in certain cases to forget about subsets of topological spaces for their homology groups and therefore simplifies their computation: Let be a topological space and let be subsets, where is closed such that . Then the inclusion induces an isomorphism for all Again, in singular homology, maps may appear such that their image is not part of the subsets mentioned in the theorem. Analogously those can be understood as a sum of images of smaller simplices obtained by the barycentric subdivision. References Algebraic topology Geometric topology Triangulation (geometry) Simplicial homology
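To make the construction and the mesh-shrinking property discussed above concrete, here is a small numerical sketch for a 2-simplex (a triangle). It builds the six sub-triangles from the vertex–edge–triangle flags and checks the standard bound that each has diameter at most n/(n+1) = 2/3 times the diameter of the original simplex; the code is illustrative only.

```python
from itertools import permutations

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # vertices of a 2-simplex

def barycenter(pts):
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def diameter(pts):
    return max(dist(p, q) for p in pts for q in pts)

# Each maximal simplex of the barycentric subdivision corresponds to a flag
# vertex < edge < triangle; its vertices are the barycenters of those faces.
subdivision = []
for v0, v1, v2 in permutations(tri):
    subdivision.append([
        barycenter([v0]),          # barycenter of a vertex (the vertex itself)
        barycenter([v0, v1]),      # barycenter (midpoint) of an edge
        barycenter([v0, v1, v2]),  # barycenter (centroid) of the whole triangle
    ])

print(len(subdivision))                          # 6 = 3! maximal simplices
worst = max(diameter(s) for s in subdivision)
print(worst <= (2 / 3) * diameter(tri) + 1e-12)  # True: mesh shrinks by n/(n+1)
```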
Barycentric subdivision
[ "Mathematics" ]
2,026
[ "Triangulation (geometry)", "Planar graphs", "Algebraic topology", "Geometric topology", "Fields of abstract algebra", "Topology", "Planes (geometry)" ]
583,785
https://en.wikipedia.org/wiki/Tarski%27s%20undefinability%20theorem
Tarski's undefinability theorem, stated and proved by Alfred Tarski in 1933, is an important limitative result in mathematical logic, the foundations of mathematics, and in formal semantics. Informally, the theorem states that "arithmetical truth cannot be defined in arithmetic". The theorem applies more generally to any sufficiently strong formal system, showing that truth in the standard model of the system cannot be defined within the system. History In 1931, Kurt Gödel published the incompleteness theorems, which he proved in part by showing how to represent the syntax of formal logic within first-order arithmetic. Each expression of the formal language of arithmetic is assigned a distinct number. This procedure is known variously as Gödel numbering, coding and, more generally, as arithmetization. In particular, various sets of expressions are coded as sets of numbers. For various syntactic properties (such as being a formula, being a sentence, etc.), these sets are computable. Moreover, any computable set of numbers can be defined by some arithmetical formula. For example, there are formulas in the language of arithmetic defining the set of codes for arithmetic sentences, and for provable arithmetic sentences. The undefinability theorem shows that this encoding cannot be done for semantic concepts such as truth. It shows that no sufficiently rich interpreted language can represent its own semantics. A corollary is that any metalanguage capable of expressing the semantics of some object language (e.g. a predicate is definable in Zermelo-Fraenkel set theory for whether formulae in the language of Peano arithmetic are true in the standard model of arithmetic) must have expressive power exceeding that of the object language. The metalanguage includes primitive notions, axioms, and rules absent from the object language, so that there are theorems provable in the metalanguage not provable in the object language. The undefinability theorem is conventionally attributed to Alfred Tarski. Gödel also discovered the undefinability theorem in 1930, while proving his incompleteness theorems published in 1931, and well before the 1933 publication of Tarski's work (Murawski 1998). While Gödel never published anything bearing on his independent discovery of undefinability, he did describe it in a 1931 letter to John von Neumann. Tarski had obtained almost all results of his 1933 monograph "The Concept of Truth in the Languages of the Deductive Sciences" between 1929 and 1931, and spoke about them to Polish audiences. However, as he emphasized in the paper, the undefinability theorem was the only result he did not obtain earlier. According to the footnote to the undefinability theorem (Twierdzenie I) of the 1933 monograph, the theorem and the sketch of the proof were added to the monograph only after the manuscript had been sent to the printer in 1931. Tarski reports there that, when he presented the content of his monograph to the Warsaw Academy of Science on March 21, 1931, he expressed at this place only some conjectures, based partly on his own investigations and partly on Gödel's short report on the incompleteness theorems "Einige metamathematische Resultate über Entscheidungsdefinitheit und Widerspruchsfreiheit" [Some metamathematical results on the definiteness of decision and consistency], Austrian Academy of Sciences, Vienna, 1930. Statement We will first state a simplified version of Tarski's theorem, then state and prove in the next section the theorem Tarski proved in 1933. Let L be the language of first-order arithmetic. 
This is the theory of the natural numbers, including their addition and multiplication, axiomatized by the first-order Peano axioms. This is a "first-order" theory: the quantifiers extend over natural numbers, but not over sets or functions of natural numbers. The theory is strong enough to describe recursively defined integer functions such as exponentiation, factorials or the Fibonacci sequence. Let N be the standard structure for L, i.e. N consists of the ordinary set of natural numbers and their addition and multiplication. Each sentence in L can be interpreted in N and then becomes either true or false. Thus (L, N) is the "interpreted first-order language of arithmetic". Each formula φ in L has a Gödel number g(φ). This is a natural number that "encodes" φ. In that way, the language L can talk about formulas in L, not just about numbers. Let T denote the set of L-sentences true in N, and T* the set of Gödel numbers of the sentences in T. The following theorem answers the question: Can T* be defined by a formula of first-order arithmetic? Tarski's undefinability theorem: There is no L-formula True(n) that defines T*. That is, there is no L-formula True(n) such that for every L-sentence A, True(g(A)) ↔ A holds in N. Informally, the theorem says that the concept of truth of first-order arithmetic statements cannot be defined by a formula in first-order arithmetic. This implies a major limitation on the scope of "self-representation". It is possible to define a formula True(n) whose extension is T*, but only by drawing on a metalanguage whose expressive power goes beyond that of L. For example, a truth predicate for first-order arithmetic can be defined in second-order arithmetic. However, this formula would only be able to define a truth predicate for formulas in the original language L. To define a truth predicate for the metalanguage would require a still higher metametalanguage, and so on. To prove the theorem, we proceed by contradiction and assume that an L-formula True(n) exists which is true for the natural number n in N if and only if n is the Gödel number of a sentence in L that is true in N. We could then use True(n) to define a new L-formula S(m) which is true for the natural number m if and only if m is the Gödel number of a formula φ(x) (with a free variable x) such that φ(m) is false when interpreted in N (i.e. the formula φ(x), when applied to its own Gödel number, yields a false statement). If we now consider the Gödel number g of the formula S(m), and ask whether the sentence S(g) is true in N, we obtain a contradiction. (This is known as a diagonal argument.) The theorem is a corollary of Post's theorem about the arithmetical hierarchy, proved some years after Tarski (1933). A semantic proof of Tarski's theorem from Post's theorem is obtained by reductio ad absurdum as follows. Assuming T* is arithmetically definable, there is a natural number n such that T* is definable by a formula at level Σ⁰ₙ of the arithmetical hierarchy. However, T* is Σ⁰ₖ-hard for all k. Thus the arithmetical hierarchy collapses at level n, contradicting Post's theorem. General form Tarski proved a stronger theorem than the one stated above, using an entirely syntactical method. The resulting theorem applies to any formal language with negation, and with sufficient capability for self-reference that the diagonal lemma holds. First-order arithmetic satisfies these preconditions, but the theorem applies to much more general formal systems, such as ZFC. Tarski's undefinability theorem (general form): Let L be any interpreted formal language which includes negation and has a Gödel numbering g(φ) satisfying the diagonal lemma, i.e. 
for every L-formula B(x) (with one free variable x) there is a sentence A such that A ↔ B(g(A)) holds in N. Then there is no L-formula True(n) with the following property: for every L-sentence A, True(g(A)) ↔ A is true in N. The proof of Tarski's undefinability theorem in this form is again by reductio ad absurdum. Suppose that an L-formula True(n) as above existed, i.e., if A is a sentence of arithmetic, then True(g(A)) holds in N if and only if A holds in N. Hence for all A, the formula True(g(A)) ↔ A holds in N. But the diagonal lemma yields a counterexample to this equivalence, by giving a "liar" formula S such that S ↔ ¬True(g(S)) holds in N. This is a contradiction. QED. Discussion The formal machinery of the proof given above is wholly elementary except for the diagonalization which the diagonal lemma requires. The proof of the diagonal lemma is likewise surprisingly simple; for example, it does not invoke recursive functions in any way. The proof does assume that every L-formula has a Gödel number, but the specifics of a coding method are not required. Hence Tarski's theorem is much easier to motivate and prove than the more celebrated theorems of Gödel about the metamathematical properties of first-order arithmetic. Smullyan (1991, 2001) has argued forcefully that Tarski's undefinability theorem deserves much of the attention garnered by Gödel's incompleteness theorems. That the latter theorems have much to say about all of mathematics and more controversially, about a range of philosophical issues (e.g., Lucas 1961) is less than evident. Tarski's theorem, on the other hand, is not directly about mathematics but about the inherent limitations of any formal language sufficiently expressive to be of real interest. Such languages are necessarily capable of enough self-reference for the diagonal lemma to apply to them. The broader philosophical import of Tarski's theorem is more strikingly evident. An interpreted language is strongly-semantically-self-representational exactly when the language contains predicates and function symbols defining all the semantic concepts specific to the language. Hence the required functions include the "semantic valuation function" mapping a formula to its truth value and the "semantic denotation function" mapping a term to the object it denotes. Tarski's theorem then generalizes as follows: No sufficiently powerful language is strongly-semantically-self-representational. The undefinability theorem does not prevent truth in one theory from being defined in a stronger theory. For example, the set of (codes for) formulas of first-order Peano arithmetic that are true in N is definable by a formula in second order arithmetic. Similarly, the set of true formulas of the standard model of second order arithmetic (or n-th order arithmetic for any n) can be defined by a formula in first-order ZFC. See also References Primary sources English translation of Tarski's 1936 article. Further reading Mathematical logic Metatheorems Philosophy of logic Theorems in the foundations of mathematics Theories of truth
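The contradiction used in both proofs above can be written out compactly; in the notation of the statement (with True(n) the assumed truth predicate and g the Gödel numbering), the steps are:

```latex
% Sketch: assume a truth predicate and apply the diagonal lemma to its negation.
\begin{align*}
\text{Assumption:}\quad & \mathcal{N} \models \mathrm{True}(g(A)) \leftrightarrow A
  \quad\text{for every sentence } A,\\
\text{Diagonal lemma on } \neg\mathrm{True}(x)\text{:}\quad & \text{there is a sentence } S
  \text{ with } \mathcal{N} \models S \leftrightarrow \neg\mathrm{True}(g(S)),\\
\text{Assumption at } A = S\text{:}\quad & \mathcal{N} \models \mathrm{True}(g(S)) \leftrightarrow S,\\
\text{Together:}\quad & \mathcal{N} \models S \leftrightarrow \neg S,
  \quad\text{a contradiction.}
\end{align*}
```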
Tarski's undefinability theorem
[ "Mathematics" ]
2,106
[ "Foundations of mathematics", "Mathematical logic", "Mathematical problems", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
583,800
https://en.wikipedia.org/wiki/Therm
The therm (symbol: thm) is a non-SI unit of heat energy equal to 100,000 British thermal units (BTU), and approximately 105.5 megajoules, 29.3 kilowatt-hours, 25,200 kilocalories and 25.2 thermies. One therm is the energy content of approximately 100 cubic feet (2.8 m³) of natural gas at standard temperature and pressure. However, the BTU is not standardised worldwide, with slightly different values in the EU, UK, and United States, meaning that the energy content of the therm also varies by territory. Natural gas meters measure volume and not energy content, and given that the energy density varies with the mix of hydrocarbons in the natural gas, a "therm factor" is used by natural gas companies to convert the volume of gas used to its heat equivalent, usually being expressed in units of "therms per CCF" (CCF is an abbreviation for 100 standard cubic feet). A higher than average concentration of ethane, propane or butane will increase the therm factor, and the inclusion of non-flammable impurities, such as carbon dioxide or nitrogen, will reduce it. The Wobbe Index of a fuel gas is also sometimes used to quantify the amount of heat per unit volume burnt. Definitions Therm (EC) ≡ 100,000 BTU (ISO) = 105,506,000 joules ≈ 29.3072 kWh. The therm (EC) is often used by engineers in the US. Therm (US) ≡ 100,000 BTU (59 °F) = 105,480,400 joules ≈ 29.3001 kWh. Therm (UK) ≡ 105,505,585.257 joules ≈ 29.3071 kWh. Decatherm A decatherm or dekatherm (dth or Dth) is 10 therms, which is 1,000,000 British thermal units or 1.055 GJ. It is a combination of the prefix for 10 (deca, often with the US spelling "deka") and the energy unit therm. There is some ambiguity, as "decatherm" uses the prefix "d" to mean 10, where in metric the prefix "d" means "deci" or one-tenth, and the prefix "da" means "deca", or 10, though decatherm may use a capital "D". The energy content of 1,000 cubic feet (28 m³) of natural gas measured at standard conditions is approximately equal to one dekatherm. This unit of energy is used primarily to measure natural gas. Natural gas is a mixture of gases containing approximately 80% methane (CH4) and its heating value varies depending on the mix of different gases in the gas stream. The volume of natural gas with a heating value of one dekatherm is about 1,000 cubic feet (28 m³). Noncombustible carbon dioxide (CO2) lowers the heating value of natural gas. Heavier hydrocarbons such as ethane (C2H6), propane (C3H8), and butane (C4H10) increase its heating value. Since customers who buy natural gas are actually buying heat, gas distribution companies who bill by volume routinely adjust their rates to compensate for this. The company Texas Eastern Transmission Corporation, a natural gas pipeline company, started to use the unit dekatherm in about 1972. To simplify billing, Texas Eastern staff members coined the term dekatherm and proposed using calorimeters to measure and bill gas delivered to customers in dekatherms. This would eliminate the constant calculation of rate adjustments to dollar per 1000 cubic feet rates in order to assure that all customers received the same amount of heat per dollar. A settlement agreement reflecting the new billing procedure and settlement rates was filed in 1973. The Federal Power Commission issued an order approving the settlement agreement and the new tariff using dekatherms later that year. Other gas distribution companies also began to use this process. In spite of the need for adjustments, many companies continue to use standard cubic feet rather than dekatherms to measure and bill natural gas. 
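A minimal sketch of the volume-to-energy conversion described above, as it appears on a gas bill. The therm factor of 1.03 therms per CCF is an assumed illustrative value; a real bill uses the utility's own figure derived from gas analysis.

```python
THERM_FACTOR = 1.03     # assumed therms per CCF; varies with gas composition
MJ_PER_THERM = 105.5    # US therm, rounded

ccf_metered = 42.0      # metered volume in hundreds of standard cubic feet
therms = ccf_metered * THERM_FACTOR

print(f"{therms:.2f} therms")                       # energy billed
print(f"{therms * MJ_PER_THERM / 1000:.2f} GJ")     # same energy in gigajoules
print(f"{therms / 10:.2f} dekatherms")              # same energy in dekatherms
```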
Usage United Kingdom regulations were amended to replace therms with joules with effect from 1999, with natural gas usually retailed in the derived unit, kilowatt-hours. Despite this, the wholesale UK gas market trades in therms. In the United States, natural gas is commonly billed in CCFs (hundreds of cubic feet) or therms. Carbon footprint According to the EPA, burning one therm of natural gas produces on average about 5.3 kg (11.7 lb) of carbon dioxide. See also Barrel of oil equivalent A Cubic Mile of Oil References Units of energy
Therm
[ "Mathematics" ]
916
[ "Quantity", "Units of energy", "Units of measurement" ]
583,901
https://en.wikipedia.org/wiki/Combined%20cycle%20power%20plant
A combined cycle power plant is an assembly of heat engines that work in tandem from the same source of heat, converting it into mechanical energy. On land, when used to make electricity the most common type is called a combined cycle gas turbine (CCGT) plant, which is a kind of gas-fired power plant. The same principle is also used for marine propulsion, where it is called a combined gas and steam (COGAS) plant. Combining two or more thermodynamic cycles improves overall efficiency, which reduces fuel costs. The principle is that after completing its cycle in the first engine, the working fluid (the exhaust) is still hot enough that a second subsequent heat engine can extract energy from the heat in the exhaust. Usually the heat passes through a heat exchanger so that the two engines can use different working fluids. By generating power from multiple streams of work, the overall efficiency can be increased by 50–60%. That is, from an overall efficiency of the system of say 34% for a simple cycle, to as much as 64% net for the turbine alone in specified conditions for a combined cycle. Historical cycles Historically successful combined cycles have used mercury vapour turbines, magnetohydrodynamic generators and molten carbonate fuel cells, with steam plants for the low temperature "bottoming" cycle. Very low temperature bottoming cycles have been too costly due to the very large sizes of equipment needed to handle the large mass flows and small temperature differences. However, in cold climates it is common to sell hot power plant water for hot water and space heating. Vacuum-insulated piping can let this utility reach as far as 90 km. The approach is called "combined heat and power" (CHP). In stationary and marine power plants, a widely used combined cycle has a large gas turbine (operating by the Brayton cycle). The turbine's hot exhaust powers a steam power plant (operating by the Rankine cycle). This is a combined cycle gas turbine (CCGT) plant. These achieve a best-of-class real (see below) thermal efficiency of around 64% in base-load operation. In contrast, a single cycle steam power plant is limited to efficiencies from 35 to 42%. Many new power plants utilize CCGTs. Stationary CCGTs burn natural gas or synthesis gas from coal. Ships burn fuel oil. Multiple stage turbine or steam cycles can also be used, but CCGT plants have advantages for both electricity generation and marine power. The gas turbine cycle can often start very quickly, which gives immediate power. This avoids the need for separate expensive peaker plants, or lets a ship maneuver. Over time the secondary steam cycle will warm up, improving fuel efficiency and providing further power. In November 2013, the Fraunhofer Institute for Solar Energy Systems ISE assessed the levelised cost of energy for newly built power plants in the German electricity sector. They gave costs of between 78 and €100 /MWh for CCGT plants powered by natural gas. In addition the capital costs of combined cycle power is relatively low, at around $1000/kW, making it one of the cheapest types of generation to install. Basic combined cycle The thermodynamic cycle of the basic combined cycle consists of two power plant cycles. One is the Joule or Brayton cycle which is a gas turbine cycle and the other is the Rankine cycle which is a steam turbine cycle. The cycle 1-2-3-4-1 which is the gas turbine power plant cycle is the topping cycle. It depicts the heat and work transfer process taking place in the high temperature region. 
The cycle a-b-c-d-e-f-a which is the Rankine steam cycle takes place at a lower temperature and is known as the bottoming cycle. Transfer of heat energy from high temperature exhaust gas to water and steam takes place in a waste heat recovery boiler in the bottoming cycle. During the constant pressure process 4-1 the exhaust gases from the gas turbine reject heat. The feed water, wet and super heated steam absorb some of this heat in the process a-b, b-c and c-d. Steam generators The steam power plant takes its input heat from the high temperature exhaust gases from a gas turbine power plant. The steam thus generated can be used to drive a steam turbine. The Waste Heat Recovery Boiler (WHRB) has 3 sections: Economiser, evaporator and superheater. Cheng cycle The Cheng cycle is a simplified form of combined cycle where the steam turbine is eliminated by injecting steam directly into the combustion turbine. This has been used since the mid 1970s and allows recovery of waste heat with less total complexity, but at the loss of the additional power and redundancy of a true combined cycle system. It has no additional steam turbine or generator, and therefore it cannot be used as a backup or supplementary power. It is named after American professor D. Y. Cheng who patented the design in 1976. Design principles The efficiency of a heat engine, the fraction of input heat energy that can be converted to useful work, is limited by the temperature difference between the heat entering the engine and the exhaust heat leaving the engine. In a thermal power station, water is the working medium. High pressure steam requires strong, bulky components. High temperatures require expensive alloys made from nickel or cobalt, rather than inexpensive steel. These alloys limit practical steam temperatures to 655 °C while the lower temperature of a steam plant is fixed by the temperature of the cooling water. With these limits, a steam plant has a fixed upper efficiency of 35–42%. An open circuit gas turbine cycle has a compressor, a combustor and a turbine. For gas turbines the amount of metal that must withstand the high temperatures and pressures is small, and lower quantities of expensive materials can be used. In this type of cycle, the input temperature to the turbine (the firing temperature), is relatively high (900 to 1,400 °C). The output temperature of the flue gas is also high (450 to 650 °C). This is therefore high enough to provide heat for a second cycle which uses steam as the working fluid (a Rankine cycle). In a combined cycle power plant, the heat of the gas turbine's exhaust is used to generate steam by passing it through a heat recovery steam generator (HRSG) with a live steam temperature between 420 and 580 °C. The condenser of the Rankine cycle is usually cooled by water from a lake, river, sea or cooling towers. This temperature can be as low as 15 °C. Typical size Plant size is important in the cost of the plant. The larger plant sizes benefit from economies of scale (lower initial cost per kilowatt) and improved efficiency. For large-scale power generation, a typical set would be a 270 MW primary gas turbine coupled to a 130 MW secondary steam turbine, giving a total output of 400 MW. A typical power station might consist of between 1 and 6 such sets. Gas turbines for large-scale power generation are manufactured by at least four separate groups – General Electric, Siemens, Mitsubishi-Hitachi, and Ansaldo Energia. 
These groups are also developing, testing and/or marketing gas turbine sizes in excess of 300 MW (for 60 Hz applications) and 400 MW (for 50 Hz applications). Combined cycle units are made up of one or more such gas turbines, each with a waste heat steam generator arranged to supply steam to a single or multiple steam turbines, thus forming a combined cycle block or unit. Combined cycle block sizes offered by three major manufacturers (Alstom, General Electric and Siemens) can range anywhere from 50 MW to well over 1300 MW with costs approaching $670/kW. Unfired boiler The heat recovery boiler is item 5 in the COGAS figure shown above. Hot gas turbine exhaust enters the super heater, then passes through the evaporator and finally through the economiser section as it flows out from the boiler. Feed water comes in through the economizer and then exits after having attained saturation temperature in the water or steam circuit. Finally it flows through the evaporator and super heater. If the temperature of the gases entering the heat recovery boiler is higher, then the temperature of the exiting gases is also high. Dual pressure boiler In order to remove the maximum amount of heat from the gasses exiting the high temperature cycle, a dual pressure boiler is often employed. It has two water/steam drums. The low-pressure drum is connected to the low-pressure economizer or evaporator. The low-pressure steam is generated in the low temperature zone of the turbine exhaust gasses. The low-pressure steam is supplied to the low-temperature turbine. A super heater can be provided in the low-pressure circuit. Some part of the feed water from the low-pressure zone is transferred to the high-pressure economizer by a booster pump. This economizer heats up the water to its saturation temperature. This saturated water goes through the high-temperature zone of the boiler and is supplied to the high-pressure turbine. Supplementary firing The HRSG can be designed to burn supplementary fuel after the gas turbine. Supplementary burners are also called duct burners. Duct burning is possible because the turbine exhaust gas (flue gas) still contains some oxygen. Temperature limits at the gas turbine inlet force the turbine to use excess air, above the optimal stoichiometric ratio to burn the fuel. Often in gas turbine designs part of the compressed air flow bypasses the burner in order to cool the turbine blades. The turbine exhaust is already hot, so a regenerative air preheater is not required as in a conventional steam plant. However, a fresh air fan blowing directly into the duct permits a duct-burning steam plant to operate even when the gas turbine cannot. Without supplementary firing, the thermal efficiency of a combined cycle power plant is higher. But more flexible plant operations make a marine CCGT safer by permitting a ship to operate with equipment failures. A flexible stationary plant can make more money. Duct burning raises the flue temperature, which increases the quantity or temperature of the steam (e.g. to 84 bar, 525 degree Celsius). This improves the efficiency of the steam cycle. Supplementary firing lets the plant respond to fluctuations of electrical load, because duct burners can have very good efficiency with partial loads. It can enable higher steam production to compensate for the failure of another unit. Also, coal can be burned in the steam generator as an economical supplementary fuel. Supplementary firing can raise exhaust temperatures from 600 °C (GT exhaust) to 800 or even 1000 °C. 
Supplemental firing does not raise the efficiency of most combined cycles. For single boilers it can raise the efficiency if fired to 700–750 °C; for multiple boilers however, the flexibility of the plant should be the major attraction. "Maximum supplementary firing" is the condition when the maximum fuel is fired with the oxygen available in the gas turbine exhaust. Combined cycle advanced Rankine subatmospheric reheating Fuel for combined cycle power plants Combined cycle plants are usually powered by natural gas, although fuel oil, synthesis gas or other fuels can be used. The supplementary fuel may be natural gas, fuel oil, or coal. Biofuels can also be used. Integrated solar combined cycle power stations combine the energy harvested from solar radiation with another fuel to cut fuel costs and environmental impact (See: ISCC section). Many next generation nuclear power plants can use the higher temperature range of a Brayton top cycle, as well as the increase in thermal efficiency offered by a Rankine bottoming cycle. Where the extension of a gas pipeline is impractical or cannot be economically justified, electricity needs in remote areas can be met with small-scale combined cycle plants using renewable fuels. Instead of natural gas, these gasify and burn agricultural and forestry waste, which is often readily available in rural areas. Managing low-grade fuels in turbines Gas turbines burn mainly natural gas and light oil. Crude oil, residual, and some distillates contain corrosive components and as such require fuel treatment equipment. In addition, ash deposits from these fuels result in gas turbine deratings of up to 15%. They may still be economically attractive fuels however, particularly in combined-cycle plants. Sodium and potassium are removed from residual, crude and heavy distillates by a water washing procedure. A simpler and less expensive purification system will do the same job for light crude and light distillates. A magnesium additive system may also be needed to reduce the corrosive effects if vanadium is present. Fuels requiring such treatment must have a separate fuel-treatment plant and a system of accurate fuel monitoring to assure reliable, low-maintenance operation of gas turbines. Hydrogen Xcel Energy is going to build two natural gas power plants in the Midwestern United States that can mix 30% hydrogen with the natural gas. Intermountain Power Plant is being retrofitted to a natural gas/hydrogen power plant that can run on 30% hydrogen as well, and is scheduled to run on pure hydrogen by 2045. However others think low-carbon hydrogen should be used for things which are harder to decarbonize, such as making fertilizer, so there may not be enough for electricity generation. Configuration Combined-cycle systems can have single-shaft or multi-shaft configurations. Also, there are several configurations of steam systems. The most fuel-efficient power generation cycles use an unfired heat recovery steam generator (HRSG) with modular pre-engineered components. These unfired steam cycles are also the lowest in initial cost, and they are often part of a single shaft system that is installed as a unit. Supplementary-fired and multishaft combined-cycle systems are usually selected for specific fuels, applications or situations. For example, cogeneration combined-cycle systems sometimes need more heat, or higher temperatures, and electricity is a lower priority. 
Multishaft systems with supplementary firing can provide a wider range of temperatures or heat to electric power. Systems burning low quality fuels such as brown coal or peat might use relatively expensive closed-cycle helium turbines as the topping cycle to avoid even more expensive fuel processing and gasification that would be needed by a conventional gas turbine. A typical single-shaft system has one gas turbine, one steam turbine, one generator and one heat recovery steam generator (HRSG). The gas turbine and steam turbine are both coupled in tandem to a single electrical generator on a single shaft. This arrangement is simpler to operate, smaller, with a lower startup cost. Single-shaft arrangements can have less flexibility and reliability than multi-shaft systems. With some expense, there are ways to add operational flexibility: Most often, the operator desires to operate the gas turbine as a peaking plant. In these plants, the steam turbine's shaft can be disconnected with a synchro-self-shifting (SSS) clutch, for start up or for simple cycle operation of the gas turbine. Another less common set of options enable more heat or standalone operation of the steam turbine to increase reliability: Duct burning, perhaps with a fresh air blower in the duct and a clutch on the gas turbine side of the shaft. A multi-shaft system usually has only one steam system for up to three gas turbines. Having only one large steam turbine and heat sink has economies of scale and can have lower cost operations and maintenance. A larger steam turbine can also use higher pressures, for a more efficient steam cycle. However, a multi-shaft system is about 5% higher in initial cost. The overall plant size and the associated number of gas turbines required can also determine which type of plant is more economical. A collection of single shaft combined cycle power plants can be more costly to operate and maintain, because there are more pieces of equipment. However, it can save interest costs by letting a business add plant capacity as it is needed. Multiple-pressure reheat steam cycles are applied to combined-cycle systems with gas turbines with exhaust gas temperatures near 600 °C. Single- and multiple-pressure non-reheat steam cycles are applied to combined-cycle systems with gas turbines that have exhaust gas temperatures of 540 °C or less. Selection of the steam cycle for a specific application is determined by an economic evaluation that considers a plant's installed cost, fuel cost and quality, duty cycle, and the costs of interest, business risks, and operations and maintenance. Efficiency By combining both gas and steam cycles, high input temperatures and low output temperatures can be achieved. The efficiency of the cycles add, because they are powered by the same fuel source. So, a combined cycle plant has a thermodynamic cycle that operates between the gas-turbine's high firing temperature and the waste heat temperature from the condensers of the steam cycle. This large range means that the Carnot efficiency of the cycle is high. The actual efficiency, while lower than the Carnot efficiency, is still higher than that of either plant on its own. The electric efficiency of a combined cycle power station, if calculated as electric energy produced as a percentage of the lower heating value of the fuel consumed, can be over 60% when operating new, i.e. unaged, and at continuous output which are ideal conditions. 
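A minimal sketch of how the two cycles combine. The relation eta_cc = eta_GT + (1 - eta_GT) * eta_HRSG * eta_ST is the standard textbook approximation for an unfired topping/bottoming arrangement; the three component efficiencies below are assumed round values, not figures for any particular machine, and are chosen only to show how a combined figure near the 60% quoted above can arise.

```python
def combined_cycle_efficiency(eta_gt, eta_hrsg, eta_st):
    """Textbook approximation for an unfired combined cycle: the steam cycle
    recovers a fraction of the gas turbine's rejected heat."""
    return eta_gt + (1.0 - eta_gt) * eta_hrsg * eta_st

# Assumed round values for illustration only.
eta_gt = 0.40    # gas turbine (Brayton) efficiency
eta_hrsg = 0.90  # fraction of exhaust heat recovered by the HRSG
eta_st = 0.38    # steam turbine (Rankine) efficiency

eta_cc = combined_cycle_efficiency(eta_gt, eta_hrsg, eta_st)
print(f"Combined cycle efficiency ~ {eta_cc:.0%}")
# -> roughly 61%, of the same order as the ~60% figure quoted above
```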
As with single cycle thermal units, combined cycle units may also deliver low temperature heat energy for industrial processes, district heating and other uses. This is called cogeneration and such power plants are often referred to as a combined heat and power (CHP) plant. In general, combined cycle efficiencies in service are over 50% on a lower heating value and Gross Output basis. Most combined cycle units, especially the larger units, have peak, steady-state efficiencies on the LHV basis of 55 to 59%. A limitation of combined cycles is that efficiency is reduced when not running at continuous output. During start up, the second (steam) cycle takes time to come online, so efficiency is initially much lower until the second cycle is running, which can take an hour or more. Fuel heating value Heat engine efficiency can be based on the fuel Higher Heating Value (HHV), including the latent heat of vaporisation that would be recuperated in condensing boilers, or the Lower Heating Value (LHV), excluding it. The HHV of methane is 55.5 MJ/kg, compared to a LHV of 50.0 MJ/kg: an 11% increase. Boosting efficiency Efficiency of the turbine is increased when combustion can run hotter, so the working fluid expands more. Therefore, efficiency is limited by whether the first stage of turbine blades can survive higher temperatures. Cooling and materials research are continuing. A common technique, adopted from aircraft, is to pressurise hot-stage turbine blades with coolant. This is also bled off in proprietary ways to improve the aerodynamic efficiency of the turbine blades. Different vendors have experimented with different coolants. Air is common, but steam is increasingly used. Some vendors might now utilize single-crystal turbine blades in the hot section, a technique already common in military aircraft engines. The efficiency of CCGT and GT can also be boosted by pre-cooling combustion air. This increases its density, also increasing the expansion ratio of the turbine. This is practised in hot climates and also has the effect of increasing power output. It is achieved by evaporative cooling, using a water-moistened matrix placed in the turbine's inlet, or by using ice storage air conditioning. The latter has the advantage of greater improvements due to the lower temperatures available. Furthermore, ice storage can be used as a means of load control or load shifting, since ice can be made during periods of low power demand and, potentially in the future, during periods when other resources such as renewables are expected to be abundant. Combustion technology is a proprietary but very active area of research, because fuels, gasification and carburation all affect fuel efficiency. A typical focus is to combine aerodynamic and chemical computer simulations to find combustor designs that assure complete fuel burn-up, yet minimize both pollution and dilution of the hot exhaust gases. Some combustors inject other materials, such as air or steam, to reduce pollution by reducing the formation of nitrates and ozone. Another active area of research is the steam generator for the Rankine cycle. Typical plants already use a two-stage steam turbine, reheating the steam between the two stages. When the heat-exchangers' thermal conductivity can be improved, efficiency improves. As in nuclear reactors, tubes might be made thinner (e.g. from stronger or more corrosion-resistant steel). Another approach might use silicon carbide sandwiches, which do not corrode. There is also some development of modified Rankine cycles.
Two promising areas are ammonia/water mixtures, and turbines that utilize supercritical carbon dioxide. Modern CCGT plants also need software that is precisely tuned to every choice of fuel, equipment, temperature, humidity and pressure. When a plant is improved, the software becomes a moving target. CCGT software is also expensive to test, because actual time is limited on the multimillion-dollar prototypes of new CCGT plants. Testing usually simulates unusual fuels and conditions, but validates the simulations with selected data points measured on actual equipment. Competition There is active competition to reach higher efficiencies. Research aimed at turbine inlet temperature has led to even more efficient combined cycles. Nearly 60% LHV efficiency (54% HHV efficiency) was reached in the Baglan Bay power station, using a GE H-technology gas turbine with a NEM 3 pressure reheat boiler, using steam from the heat recovery steam generator (HRSG) to cool the turbine blades. In May 2011 Siemens AG announced they had achieved a 60.75% efficiency with a 578 megawatt SGT5-8000H gas turbine at the Irsching Power Station. The Chubu Electric’s Nishi-ku, Nagoya power plant 405 MW 7HA is expected to have 62% gross combined cycle efficiency. On April 28, 2016, the plant run by Électricité de France in Bouchain was certified by Guinness World Records as the world's most efficient combined cycle power plant at 62.22%. It uses a General Electric 9HA, that claimed 41.5% simple cycle efficiency and 61.4% in combined cycle mode, with a gas turbine output of 397 MW to 470 MW and a combined output of 592 MW to 701 MW. Its firing temperature is between , its overall pressure ratio is 21.8 to 1. In December 2016, Mitsubishi claimed a LHV efficiency of greater than 63% for some members of its J Series turbines. In December 2017, GE claimed 64% in its latest 826 MW HA plant, up from 63.7%. They said this was due to advances in additive manufacturing and combustion. Their press release said that they planned to achieve 65% by the early 2020s. Integrated gasification combined cycle (IGCC) An integrated gasification combined cycle, or IGCC, is a power plant using synthesis gas (syngas). Syngas can be produced from a number of sources, including coal and biomass. The system uses gas and steam turbines, the steam turbine operating from the heat left over from the gas turbine. This process can raise electricity generation efficiency to around 50%. Integrated solar combined cycle (ISCC) An Integrated Solar Combined Cycle (ISCC) is a hybrid technology in which a solar thermal field is integrated within a combined cycle plant. In ISCC plants, solar energy is used as an auxiliary heat supply, supporting the steam cycle, which results in increased generation capacity or a reduction of fossil fuel use. Thermodynamic benefits are that daily steam turbine startup losses are eliminated. Major factors limiting the load output of a combined cycle power plant are the allowed pressure and temperature transients of the steam turbine and the heat recovery steam generator waiting times to establish required steam chemistry conditions and warm-up times for the balance of plant and the main piping system. Those limitations also influence the fast start-up capability of the gas turbine by requiring waiting times. And waiting gas turbines consume gas. The solar component, if the plant is started after sunshine, or before, if there is heat storage, allows the preheat of the steam to the required conditions. 
That is, the plant is started faster and with less consumption of gas before achieving operating conditions. Economic benefits are that the solar component costs are 25% to 75% of those of a Solar Energy Generating Systems plant of the same collector surface. The first such system to come online was the Archimede combined cycle power plant, Italy in 2010, followed by Martin Next Generation Solar Energy Center in Florida, and in 2011 by the Kuraymat ISCC Power Plant in Egypt, Yazd power plant in Iran, Hassi R'mel in Algeria, and Ain Beni Mathar in Morocco. In Australia, CS Energy's Kogan Creek and Macquarie Generation's Liddell Power Station started construction of a solar Fresnel boost section (44 MW and 9 MW), but the projects never became active. Bottoming cycles In most successful combined cycles, the bottoming cycle for power is a conventional steam Rankine cycle. It is already common in cold climates (such as Finland) to drive community heating systems from a steam power plant's condenser heat. Such cogeneration systems can yield theoretical efficiencies above 95%. Bottoming cycles producing electricity from the steam condenser's heat exhaust are theoretically possible, but conventional turbines are uneconomically large. The small temperature differences between condensing steam and outside air or water require very large movements of mass to drive the turbines. Although not reduced to practice, a vortex of air can concentrate the mass flows for a bottoming cycle. Theoretical studies of the Vortex engine show that if built at scale it is an economical bottoming cycle for a large steam Rankine cycle power plant. Combined cycle hydrogen power plant A combined cycle hydrogen power plant is a combined cycle power plant fuelled with hydrogen. A green hydrogen combined cycle power plant is only about 40% efficient, after electrolysis and reburning for electricity, and is a viable option for longer-term energy storage compared with battery storage. Natural gas power plants could be converted to hydrogen power plants with minimal renovation, or could burn a blended mix of natural gas and hydrogen. Retrofitting natural gas power plants Natural gas power plants could be designed with a transition to hydrogen in mind by having wider inlet pipes to the burner to increase flow rates, because hydrogen is less dense than natural gas, and by using the right materials, because hydrogen can cause hydrogen embrittlement. Limitations Current electrolysis plants are not capable of providing the scale of hydrogen needed to supply a large scale power plant. On-site electrolysis may be needed, and storing large amounts of hydrogen could take up a lot of space if it is stored as compressed gas rather than as liquid hydrogen. Hydrogen embrittlement could happen in pipelines, but 316L stainless steel pipelines could handle compressed hydrogen above 50 bar, which is what compressed natural gas is piped at, or wider pipelines could be built for hydrogen. Polyethylene or fiber-reinforced polymer pipelines could also be used. Nitrogen oxides When hydrogen is burned as a fuel, no carbon dioxide is produced, but more nitrogen oxides (NOx) are formed because of the higher flame temperature of hydrogen; a selective catalytic reduction process could be implemented to break the NOx down into just nitrogen and water. The exhaust from burning hydrogen is water vapor, and it could be used as a diluent to lower the high flame temperature that creates the NOx.
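As a minimal back-of-the-envelope check of the roughly 40% round-trip figure mentioned above for a green-hydrogen combined cycle, the sketch below multiplies an assumed electrolyser efficiency by an assumed combined-cycle efficiency on hydrogen; both numbers are illustrative assumptions, not measured values for any plant.

```python
# Round trip: electricity -> hydrogen (electrolysis) -> electricity (CCGT).
electrolysis_eff = 0.70   # assumed electrolyser efficiency (LHV basis)
ccgt_eff_on_h2 = 0.58     # assumed combined-cycle efficiency burning hydrogen

round_trip = electrolysis_eff * ccgt_eff_on_h2
print(f"Round-trip efficiency ~ {round_trip:.0%}")
# -> roughly 41%, consistent with the ~40% figure quoted above
```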
Corrosion Corrosion of the turbine from the water vapor produced by the hydrogen flame could reduce plant life, or parts may need to be replaced more often. Fuel handling Hydrogen is the smallest and lightest element and can leak more easily at connection points and joints. Hydrogen diffuses quickly, which mitigates the risk of explosion. A hydrogen flame is also not as visible as a standard flame. Transition to a renewable power grid Wind and solar power are variable renewable energy sources that are not as consistent as base load generation. Hydrogen could support renewables by capturing excess energy through electrolysis when they produce too much, and by filling the gaps with that stored energy when they are not producing as much. See also Strategic natural gas reserve Green energy High-temperature electrolysis hydrogen economy Hydrogen fuel cell power plant Hydrogen fuel enhancement Hydrogen storage Underground hydrogen storage Blue hydrogen White hydrogen Intermountain Power Plant Smart grid Pumped-storage hydroelectricity Midcontinent Rift System Allam power cycle Cheng cycle Cogeneration Combined gas and steam Combined cycle hydrogen power plant Combined cycle powered railway locomotive Cost of electricity by source Heat recovery steam generator Hydrogen-cooled turbo generator Integrated gasification combined cycle Compound steam engine References Further reading Steam & Gas Turbines And Power Plant Engineering, R. Yadav, Sanjay, Rajay, Central Publishing House, Allahabad. Applied Thermodynamics, R. Yadav, Sanjay, Rajay, Central Publishing House, Allahabad. External links Thermodynamic cycles Mechanical engineering Power station technology Energy conversion Fuel technology Turbo generators Articles containing video clips
Combined cycle power plant
[ "Physics", "Engineering" ]
5,978
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
584,118
https://en.wikipedia.org/wiki/Rivalry%20%28economics%29
In economics, a good is said to be rivalrous or a rival if its consumption by one consumer prevents simultaneous consumption by other consumers, or if consumption by one party reduces the ability of another party to consume it. A good is considered non-rivalrous or non-rival if, for any level of production, the cost of providing it to a marginal (additional) individual is zero. A good is "anti-rivalrous" and "inclusive" if each person benefits more when other people consume it. A good can be placed along a continuum from rivalrous through non-rivalrous to anti-rivalrous. The distinction between rivalrous and non-rivalrous is sometimes referred to as jointness of supply or subtractable or non-subtractable. Economist Paul Samuelson made the distinction between private and public goods in 1954 by introducing the concept of nonrival consumption. Economist Richard Musgrave followed on and added rivalry and excludability as criteria for defining consumption goods in 1959 and 1969. Rivalry Most tangible goods, both durable and nondurable, are rival goods. A hammer is a durable rival good. One person's use of the hammer prevents others from using the hammer at the same time. However, the first user does not "use up" the hammer, meaning that some rival goods can still be shared through time. An apple is a nondurable rival good: once an apple is eaten, it is "used up" and can no longer be eaten by others. Non-tangible goods can also be rivalrous. Examples include the ownership of radio spectra and domain names. In more general terms, almost all private goods are rivalrous. Non-rivalry In contrast, non-rival goods may be consumed by one consumer without preventing simultaneous consumption by others. Most examples of non-rival goods are intangible. Broadcast television is an example of a non-rival good; when a consumer turns on a TV set, this does not prevent the TV in another consumer's house from working. The television itself is a rival good, but television broadcasts are non-rival goods. Other examples of non-rival goods include a beautiful scenic view, national defense, clean air, street lights, and public safety. More generally, most intellectual property is non-rival. In fact, certain types of intellectual property become more valuable as more people consume them (anti-rival). For example, the more people use a particular language, the more valuable that language becomes. Non-rivalry does not imply that the total production costs are low, but that the marginal production costs are zero. In reality, few goods are completely non-rival as rivalry can emerge at certain levels. For instance, use of public roads, the Internet, or police/law courts is non-rival up to a certain capacity, after which congestion means that each additional user decreases speed for others. For that, recent economic theory views rivalry as a continuum, not as a binary category, where many goods are somewhere between the two extremes of completely rival and completely non-rival. A perfectly non-rival good can be consumed simultaneously by an unlimited number of consumers. Anti-rivalry Goods are anti-rivalrous and inclusive if the consumer’s enjoyment increases with how many others consume the good. The concept was introduced by Steven Weber (2004), saying that when more people use free and open-source software, it becomes easier and more powerful for all users. Lessig noted that any natural language is anti-rivalrous, because its utility increases with how much it is used by others. 
Cooper noted that efforts to combat climate change are perversely anti-rivalrous: any country acting as a free rider will benefit from the efforts of others to combat this problem, even while not contributing itself. Types of goods based on rivalry in consumption and excludability Combining rivalry with excludability gives the standard four-way classification of goods: goods that are rivalrous and excludable are private goods (for example food or clothing); goods that are rivalrous but non-excludable are common-pool resources (such as fish stocks or timber); goods that are non-rivalrous but excludable are club goods (such as cable television or cinemas); and goods that are neither rivalrous nor excludable are public goods (such as national defense or broadcast television). See also The generalized network effect of microeconomics. Metcalfe's law Anti-rival good Rent-seeking Free-rider problem References Goods (economics) Rivalry
Rivalry (economics)
[ "Physics" ]
813
[ "Materials", "Goods (economics)", "Matter" ]
584,136
https://en.wikipedia.org/wiki/Topological%20abelian%20group
In mathematics, a topological abelian group, or TAG, is a topological group that is also an abelian group. That is, a TAG is both a group and a topological space, the group operations are continuous, and the group's binary operation is commutative. The theory of topological groups applies also to TAGs, but more can be done with TAGs. Locally compact TAGs, in particular, are used heavily in harmonic analysis. See also Protorus, a topological abelian group that is compact and connected References Fourier Analysis on Groups, by Walter Rudin. Abelian group theory Topology Topological groups
Topological abelian group
[ "Physics", "Mathematics" ]
121
[ "Space (mathematics)", "Topological spaces", "Topology stubs", "Topology", "Space", "Geometry", "Topological groups", "Spacetime" ]
584,207
https://en.wikipedia.org/wiki/Dichroic%20prism
A dichroic prism is a prism that splits light into two beams of differing wavelengths (colour). A trichroic prism assembly combines two dichroic prisms to split an image into 3 colours, typically as red, green and blue of the RGB colour model. They are usually constructed of one or more glass prisms with dichroic optical coatings that selectively reflect or transmit light depending on the light's wavelength. That is, certain surfaces within the prism act as dichroic filters. These are used as beam splitters in many optical instruments. (See: Dichroism, for the etymology of the term.) Applications in camcorders or digital cameras One common application of dichroic prisms is in some camcorders and high-quality digital cameras. A trichroic prism assembly is a combination of two dichroic prisms which are used to split an image into red, green, and blue components, which can be separately detected on three CCD arrays. A possible layout for the device is shown in the diagram. A light beam enters the first prism (A), and the blue component of the beam is reflected from a low-pass filter coating (F1) that reflects blue light (high-frequency), but transmits longer wavelengths (lower frequencies). The blue beam undergoes total internal reflection from the front of prism A and exits it through a side face. The remainder of the beam enters the second prism (B) and is split by a second filter coating (F2) which reflects red light but transmits shorter wavelengths. The red beam is also totally internally reflected due to a small air-gap between prisms A and B. The remaining green component of the beam travels through prism C. The trichroic prism assembly can be used in reverse to combine red, green and blue beams into a coloured image, and is used in this way in some projector devices. Assemblies with more than 3 beams are possible. Advantages of dichroic prism color separation When used for color separation, in an imaging system, this method has some advantages over other methods, such as the use of a Bayer filter. Most of those characteristics derive from the usage of dichroic filters and are in common with those. The advantages include: Minimal light absorption, most of the light is directed to one of the output beams. Better color separation than with most other filters. Easy to fabricate for any combination of pass bands. Does not require color interpolation (demosaicing) and thus avoids all of the false color artifacts commonly seen in demosaiced images. Disadvantages of dichroic prism color separation Since dichroic prisms use dichroic filters, the exact bandpass of each filter depends on the light incidence angle. Maximum lens numerical aperture might be restricted due to the geometry of the optical path inside the assembly. The exact bandpass depends on the lens numerical aperture, since this factor changes the average light incidence angle in the filters. Since some of the glass surfaces are at an angle against the incident beam some polarization by reflection effects can result. See also Thin-film optics Three-CCD camera DLP projector References Prisms (optics) Thin-film optics
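As a toy illustration of the colour separation described above, the following sketch routes a wavelength to the blue, red or green path according to two dichroic cutoffs; the cutoff wavelengths are assumed round values for illustration, not those of any real coating or commercial prism assembly.

```python
# Toy model of the trichroic split: surface F1 reflects shorter (bluer)
# wavelengths, surface F2 reflects longer (redder) wavelengths, and what
# remains continues through prism C as the green channel.
F1_CUTOFF_NM = 500   # assumed cutoff for the blue-reflecting coating
F2_CUTOFF_NM = 600   # assumed cutoff for the red-reflecting coating

def channel(wavelength_nm):
    if wavelength_nm < F1_CUTOFF_NM:
        return "blue path (reflected at F1)"
    if wavelength_nm > F2_CUTOFF_NM:
        return "red path (reflected at F2)"
    return "green path (transmitted through prism C)"

for wl in (450, 550, 650):
    print(wl, "nm ->", channel(wl))
```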
Dichroic prism
[ "Materials_science", "Mathematics" ]
670
[ "Thin-film optics", "Planes (geometry)", "Thin films" ]
584,238
https://en.wikipedia.org/wiki/Cladogenesis
Cladogenesis is an evolutionary splitting of a parent species into two distinct species, forming a clade. This event usually occurs when a few organisms end up in new, often distant areas or when environmental changes cause several extinctions, opening up ecological niches for the survivors and causing population bottlenecks and founder effects changing allele frequencies of diverging populations compared to their ancestral population. The events that cause these species to originally separate from each other over distant areas may still allow both of the species to have equal chances of surviving, reproducing, and even evolving to better suit their environments while still being two distinct species due to subsequent natural selection, mutations and genetic drift. Cladogenesis is in contrast to anagenesis, in which an ancestral species gradually accumulates change, and eventually, when enough is accumulated, the species is sufficiently distinct and different enough from its original starting form that it can be labeled as a new form - a new species. With anagenesis, the lineage in a phylogenetic tree does not split. To determine whether a speciation event is cladogenesis or anagenesis, researchers may use simulation, evidence from fossils, molecular evidence from the DNA of different living species, or modelling. It has however been debated whether the distinction between cladogenesis and anagenesis is necessary at all in evolutionary theory. See also Anagenesis Evolutionary biology Speciation References Evolutionary biology concepts Phylogenetics
Cladogenesis
[ "Biology" ]
280
[ "Bioinformatics", "Phylogenetics", "Taxonomy (biology)", "Evolutionary biology concepts" ]
584,240
https://en.wikipedia.org/wiki/Image%20intensifier
An image intensifier or image intensifier tube is a vacuum tube device for increasing the intensity of available light in an optical system to allow use under low-light conditions, such as at night, to facilitate visual imaging of low-light processes, such as fluorescence of materials in X-rays or gamma rays (X-ray image intensifier), or for conversion of non-visible light sources, such as near-infrared or short wave infrared to visible. They operate by converting photons of light into electrons, amplifying the electrons (usually with a microchannel plate), and then converting the amplified electrons back into photons for viewing. They are used in devices such as night-vision goggles. Introduction Image intensifier tubes (IITs) are optoelectronic devices that allow many devices, such as night vision devices and medical imaging devices, to function. They convert low levels of light from various wavelengths into visible quantities of light at a single wavelength. Operation Image intensifiers convert low levels of light photons into electrons, amplify those electrons, and then convert the electrons back into photons of light. Photons from a low-light source enter an objective lens which focuses an image into a photocathode. The photocathode releases electrons via the photoelectric effect as the incoming photons hit it. The electrons are accelerated through a high-voltage potential into a microchannel plate (MCP). Each high-energy electron that strikes the MCP causes the release of many electrons from the MCP in a process called secondary cascaded emission. The MCP is made up of thousands of tiny conductive channels, tilted at an angle away from normal to encourage more electron collisions and thus enhance the emission of secondary electrons in a controlled Electron avalanche. All the electrons move in a straight line due to the high-voltage difference across the plates, which preserves collimation, and where one or two electrons entered, thousands may emerge. A separate (lower) charge differential accelerates the secondary electrons from the MCP until they hit a phosphor screen at the other end of the intensifier, which releases a photon for every electron. The image on the phosphor screen is focused by an eyepiece lens. The amplification occurs at the microchannel plate stage via its secondary cascaded emission. The phosphor is usually green because the human eye is more sensitive to green than other colors and because historically the original material used to produce phosphor screens produced green light (hence the soldiers' nickname 'green TV' for image intensification devices). History The development of image intensifier tubes began during the 20th century, with continuous development since inception. Pioneering work The idea of an image tube was first proposed by G. Holst and H. De Boer in 1928, in the Netherlands , but early attempts to create one were not successful. It was not until 1934 that Holst, working for Philips, created the first successful infrared converter tube. This tube consisted of a photocathode in proximity to a fluorescent screen. Using a simple lens, an image was focused on the photocathode and a potential difference of several thousand volts was maintained across the tube, causing electrons dislodged from the photocathode by photons to strike the fluorescent screen. This caused the screen to light up with the image of the object focused onto the screen, however the image was non-inverting. 
With this image converter type tube, it was possible to view infrared light in real time, for the first time. Generation 0: early infrared electro-optical image converters Development continued in the US as well during the 1930s, and in the mid-1930s the first inverting image intensifier was developed at RCA. This tube used an electrostatic inverter to focus an image from a spherical cathode onto a spherical screen. (The choice of spheres was to reduce off-axial aberrations.) Subsequent development of this technology led directly to the first Generation 0 image intensifiers, which were used by the military during World War II to allow vision at night with infrared lighting for both shooting and personal night vision. The first military night vision device was introduced by the German army as early as 1939, having been in development since 1935. Early night vision devices based on these technologies were used by both sides in World War II. Unlike later technologies, early Generation 0 night vision devices were unable to significantly amplify the available ambient light and so, to be useful, required an infrared source. These devices used an S1 photocathode or "silver-oxygen-caesium" photocathode, discovered in 1930, which had a sensitivity of around 60 μA/lm (microamperes per lumen) and a quantum efficiency of around 1% in the ultraviolet region and around 0.5% in the infrared region. Of note, the S1 photocathode had sensitivity peaks in both the infrared and ultraviolet spectrum and, with sensitivity over 950 nm, was the only photocathode material that could be used to view infrared light above 950 nm. Solar blind converters Solar blind converters, also known as solar blind photocathodes, are specialized devices that detect ultraviolet (UV) light below 280 nanometers (nm) in wavelength. This UV range is termed "solar blind" because it is shorter than the wavelengths of sunlight that typically penetrate the Earth's atmosphere. Discovered in 1953 by Taft and Apker, solar blind photocathodes were initially developed using cesium telluride. Unlike night-vision technologies that are classified into "generations" based on their military applications, solar blind photocathodes do not fit into this categorization because their utility is not primarily military. Their ability to detect UV light in the solar blind range makes them useful for applications that require sensitivity to UV radiation without interference from visible sunlight. Generation 1: significant amplification With the discovery of more effective photocathode materials, which increased in both sensitivity and quantum efficiency, it became possible to achieve significant levels of gain over Generation 0 devices. In 1936, the S-11 cathode (cesium-antimony) was discovered by Gorlich, which provided sensitivity of approximately 80 μA/lm with a quantum efficiency of around 20%; this only included sensitivity in the visible region, with a threshold wavelength of approximately 650 nm. It was not until the development of the bialkali antimonide photocathodes (potassium-cesium-antimony and sodium-potassium-antimony) discovered by A.H. Sommer, and of his later multialkali (sodium-potassium-antimony-cesium) S20 photocathode, discovered by accident in 1956, that the tubes had both suitable infrared sensitivity and visible spectrum amplification to be useful militarily. The S20 photocathode has a sensitivity of around 150 to 200 μA/lm.
The additional sensitivity made these tubes usable with limited light, such as moonlight, while still being suitable for use with low-level infrared illumination. Cascade (passive) image intensifier tubes Although originally experimented with by the Germans in World War Two, it was not until the 1950s that the U.S. began conducting early experiments using multiple tubes in a "cascade", by coupling the output of an inverting tube to the input of another tube, which allowed for increased amplification of the object light being viewed. These experiments worked far better than expected and night vision devices based on these tubes were able to pick up faint starlight and produce a usable image. However, the size of these tubes, at 17 in (43 cm) long and 3.5 in (8.9 cm) in diameter, were too large to be suitable for military use. Known as "cascade" tubes, they provided the capability to produce the first truly passive night vision scopes. With the advent of fiber optic bundles in the 1960s, it was possible to connect smaller tubes together, which allowed for the first true Starlight scopes to be developed in 1964. Many of these tubes were used in the AN/PVS-2 rifle scope, which saw use in Vietnam. An alternative to the cascade tube explored in the mid 20th century involves optical feedback, with the output of the tube fed back into the input. This scheme has not been used in rifle scopes, but it has been used successfully in lab applications where larger image intensifier assemblies are acceptable. Generation 2: micro-channel plate Second generation image intensifiers use the same multialkali photocathode that the first generation tubes used, however by using thicker layers of the same materials, the S25 photocathode was developed, which provides extended red response and reduced blue response, making it more suitable for military applications. It has a typical sensitivity of around 230 μA/lm and a higher quantum efficiency than S20 photocathode material. Oxidation of the cesium to cesium oxide in later versions improved the sensitivity in a similar way to third generation photocathodes. The same technology that produced the fiber optic bundles that allowed the creation of cascade tubes, allowed, with a slight change in manufacturing, the production of micro-channel plates, or MCPs. The micro-channel plate is a thin glass wafer with a Nichrome electrode on either side across which a large potential difference of up to 1,000 volts is applied. The wafer is manufactured from many thousands of individual hollow glass fibers, aligned at a "bias" angle to the axis of the tube. The micro-channel plate fits between the photocathode and screen. Electrons that strike the side of the "micro-channel" as they pass through it elicit secondary electrons, which in turn elicit additional electrons as they too strike the walls, amplifying the signal. By using the MCP with a proximity focused tube, amplifications of up to 30,000 times with a single MCP layer were possible. By increasing the number of layers of MCP, additional amplification to well over 1,000,000 times could be achieved. Inversion of Generation 2 devices was achieved through one of two different ways. The Inverter tube uses electrostatic inversion, in the same manner as the first generation tubes did, with a MCP included. Proximity focused second generation tubes could also be inverted by using a fiber bundle with a 180 degree twist in it. 
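A minimal sketch of the secondary cascaded emission that gives the microchannel plate its gain. The secondary-electron yield per wall collision and the number of collisions are assumed round numbers chosen only so the product lands near the single-plate figure quoted above; real plates vary widely, and stacked (chevron or Z-stack) assemblies are run at lower gain per plate than a naive product would suggest.

```python
# Secondary cascaded emission in a single microchannel (illustrative only).
secondary_yield = 2       # assumed electrons released per wall collision
collisions = 15           # assumed number of wall collisions along the channel

gain_single_plate = secondary_yield ** collisions
print(f"Single-plate gain ~ {gain_single_plate:,}")
# -> ~32,768, of the same order as the ~30,000x quoted above; stacking plates
#    multiplies their (individually lower) gains to reach well over 1,000,000x.
```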
Generation 3: high sensitivity and improved frequency response While the third generation of tubes were fundamentally the same as the second generation, they possessed two significant differences. Firstly, they used a GaAs—CsO—AlGaAs photocathode, which is more sensitive in the 800 nm-900 nm range than second-generation photocathodes. Secondly, the photocathode exhibits negative electron affinity (NEA), which provides photoelectrons that are excited to the conduction band a free ride to the vacuum band as the Cesium Oxide layer at the edge of the photocathode causes sufficient band-bending. This makes the photocathode very efficient at creating photoelectrons from photons. The Achilles heel of third generation photocathodes, however, is that they are seriously degraded by positive ion poisoning. Due to the high electrostatic field stresses in the tube, and the operation of the MicroChannel Plate, this led to the failure of the photocathode within a short period - as little as 100 hours before photocathode sensitivity dropped below Gen2 levels. To protect the photocathode from positive ions and gases produced by the MCP, they introduced a thin film of sintered aluminium oxide attached to the MCP. The high sensitivity of this photocathode, greater than 900 μA/lm, allows more effective low light response, though this was offset by the thin film, which typically blocked up to 50% of electrons. Super second generation Although not formally recognized under the U.S. generation categories, Super Second Generation or SuperGen was developed in 1989 by Jacques Dupuy and Gerald Wolzak. This technology improved the tri-alkali photocathodes to more than double their sensitivity while also improving the microchannel plate by increasing the open-area ratio to 70% while reducing the noise level. This allowed second generation tubes, which are more economical to manufacture, to achieve comparable results to third generation image intensifier tubes. With sensitivities of the photocathodes approaching 700 μA/lm and extended frequency response to 950 nm, this technology continued to be developed outside of the U.S., notably by Photonis and now forms the basis for most non-US manufactured high-end night vision equipment. Generation 4 In 1998, the US company Litton developed the filmless image tube. These tubes were originally made for the Omni V contract and resulted in significant interest by the US military. However, the tubes suffered greatly from fragility during testing and, by 2002, the NVESD revoked the fourth generation designation for filmless tubes, at which time they simply became known as Gen III Filmless. These tubes are still produced for specialist uses, such as aviation and special operations; however, they are not used for weapon-mounted purposes. To overcome the ion-poisoning problems, they improved scrubbing techniques during manufacture of the MCP ( the primary source of positive ions in a wafer tube ) and implemented autogating, discovering that a sufficient period of autogating would cause positive ions to be ejected from the photocathode before they could cause photocathode poisoning. Generation III Filmless technology is still in production and use today, but officially, there is no Generation 4 of image intensifiers. Generation 3 thin film Also known as Generation 3 Omni VII and Generation 3+, following the issues experienced with generation IV technology, Thin Film technology became the standard for current image intensifier technology. 
In Thin Film image intensifiers, the thickness of the film is reduced from around 30 angstroms (standard) to around 10 angstroms, and the photocathode voltage is lowered. This causes fewer electrons to be stopped than with third generation tubes, while providing the benefits of a filmed tube. Generation 3 Thin Film technology is presently the standard for most image intensifiers used by the US military. 4G In 2014, French image tube manufacturer PHOTONIS released the first global, open performance specification: "4G". The specification had four main requirements that an image intensifier tube would have to meet: spectral sensitivity from below 400 nm to above 1000 nm; a minimum figure-of-merit of FOM1800; high-light resolution higher than 57 lp/mm; and a halo size of less than 0.7 mm. Terminology There are several common terms used for image intensifier tubes. Gating Electronic gating (or 'gating') is a means by which an image intensifier tube may be switched ON and OFF in a controlled manner. An electronically gated image intensifier tube functions like a camera shutter, allowing images to pass through when the electronic "gate" is enabled. The gating durations can be very short (nanoseconds or even picoseconds). This makes gated image intensifier tubes ideal candidates for use in research environments where very short duration events must be photographed. As an example, in order to assist engineers in designing more efficient combustion chambers, gated imaging tubes have been used to record very fast events such as the wavefront of burning fuel in an internal combustion engine. Often gating is used to synchronize imaging tubes to events whose start cannot be controlled or predicted. In such an instance, the gating operation may be synchronized to the start of an event using 'gating electronics', e.g. high-speed digital delay generators. The gating electronics allow a user to specify when the tube will turn on and off relative to the start of an event. There are many examples of the uses of gated imaging tubes. Because of the combination of the very high speeds at which a gated tube may operate and their light amplification capability, gated tubes can record specific portions of a beam of light. It is possible to capture only the portion of light reflected from a target, when a pulsed beam of light is fired at the target, by controlling the gating parameters. Gated-Pulsed-Active Night Vision (GPANV) devices are another example of an application that uses this technique. GPANV devices can allow a user to see objects of interest that are obscured behind vegetation, foliage, and/or mist. These devices are also useful for locating objects in deep water, where reflections of light off nearby particles from a continuous light source, such as a high brightness underwater floodlight, would otherwise obscure the image. ATG (auto-gating) Auto-gating is a feature found in many image intensifier tubes manufactured for military purposes after 2006, though it has been around for some time. Autogated tubes gate the image intensifier internally so as to control the amount of light that gets through to the microchannel plate. The gating occurs at high frequency; by varying the duty cycle to maintain a constant current draw from the microchannel plate, it is possible to operate the tube in brighter conditions, such as daylight, without damaging the tube or leading to premature failure.
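A minimal sketch of the duty-cycle idea behind auto-gating: the tube is switched on for only a fraction of each gating period, so the average light reaching the microchannel plate, and hence the current drawn from it, stays roughly constant as the scene brightens. All numbers are assumed and purely illustrative.

```python
# Auto-gating as duty-cycle control (illustrative sketch, arbitrary units).
target_photocathode_current = 1.0    # level the tube is regulated to

for scene_brightness in (1.0, 10.0, 100.0, 1000.0):
    # Duty cycle shrinks as the scene gets brighter, capped at fully on.
    duty_cycle = min(1.0, target_photocathode_current / scene_brightness)
    effective_exposure = scene_brightness * duty_cycle
    print(f"brightness {scene_brightness:7.1f} -> duty cycle {duty_cycle:6.1%}, "
          f"effective exposure {effective_exposure:.1f}")
```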
Auto-gating of image intensifiers is militarily valuable as it allows extended operational hours, giving enhanced vision during twilight while providing better support for soldiers who encounter rapidly changing lighting conditions, such as those assaulting a building. Sensitivity The sensitivity of an image intensifier tube is measured in microamperes per lumen (μA/lm). It defines how many electrons are produced per quantity of light that falls on the photocathode. This measurement should be made at a specific color temperature, such as "at a colour temperature of 2854 K". The color temperature at which this test is made tends to vary slightly between manufacturers. Additional measurements at specific wavelengths are usually also specified, especially for Gen2 devices, such as at 800 nm and 850 nm (infrared). Typically, the higher the value, the more sensitive the tube is to light. Resolution More accurately known as limiting resolution, tube resolution is measured in line pairs per millimeter or lp/mm. This is a measure of how many lines of varying intensity (light to dark) can be resolved within a millimeter of screen area. However, the limiting resolution is itself defined in terms of the modulation transfer function. For most tubes, the limiting resolution is defined as the point at which the modulation transfer function becomes three percent or less. The higher the value, the higher the resolution of the tube. An important consideration, however, is that this is based on a physical millimeter of screen and is not proportional to the screen size. As such, an 18 mm tube with a resolution of around 64 lp/mm has a higher overall resolution than an 8 mm tube with 72 lp/mm resolution. Resolution is usually measured at the centre and at the edge of the screen, and tubes often come with figures for both. Military Specification or milspec tubes only come with a criterion such as "> 64 lp/mm" or "Greater than 64 line pairs/millimeter". Gain The gain of a tube is typically measured using one of two units. The most common (SI) unit is cd·m−2·lx−1, i.e. candelas per meter squared per lux. The older convention is Fl/Fc (foot-lamberts per foot-candle). This creates issues with comparative gain measurements since neither is a pure ratio, although both are measured as a value of output intensity over input intensity. This creates ambiguity in the marketing of night vision devices, as the difference between the two measurements is effectively pi, or approximately 3.142x. This means that a gain of 10,000 cd/m2/lx is the same as approximately 31,416 Fl/Fc. MTBF (mean time between failure) This value, expressed in hours, gives an idea how long a tube typically should last. It is a reasonably common comparison point; however, it depends on many factors. The first is that tubes are constantly degrading. This means that over time, the tube will slowly produce less gain than it did when it was new. When the tube gain reaches 50% of its "new" gain level, the tube is considered to have failed, so the MTBF primarily reflects the time taken to reach that point in a tube's life. Additional considerations for the tube lifespan are the environment that the tube is being used in and the general level of illumination present in that environment, including bright moonlight and exposure to both artificial lighting and use during dusk/dawn periods, as exposure to brighter light reduces a tube's life significantly. Also, an MTBF only includes operational hours.
It is considered that turning a tube on or off does not contribute to reducing overall lifespan, so many civilians tend to turn their night vision equipment on only when they need to, to make the most of the tube's life. Military users tend to keep equipment on for longer periods of time, typically the entire time it is in use, with batteries being the primary concern rather than tube life. Typical examples of tube life are: First Generation: 1,000 hrs; Second Generation: 2,000 to 2,500 hrs; Third Generation: 10,000 to 15,000 hrs. Many recent high-end second-generation tubes now have MTBFs approaching 15,000 operational hours. MTF (modulation transfer function) The modulation transfer function of an image intensifier is a measure of the output amplitude of dark and light lines on the display for a given level of input from lines presented to the photocathode at different resolutions. It is usually given as a percentage at a given frequency (spacing) of light and dark lines. For example, if you look at white and black lines with an MTF of 99% at 2 lp/mm, then the output of the dark and light lines is going to be 99% as dark or light as looking at a black image or a white image. This value also decreases as the resolution increases. On the same tube, if the MTF at 16 and 32 lp/mm were 50% and 3%, then at 16 lp/mm the signal would be only half as bright/dark as the lines were at 2 lp/mm, and at 32 lp/mm the image of the lines would be only three percent as bright/dark as the lines were at 2 lp/mm. Additionally, since the limiting resolution is usually defined as the point at which the MTF is three percent or less, this would also be the maximum resolution of the tube. The MTF is affected by every part of an image intensifier tube's operation, and on a complete system it is also affected by the quality of the optics involved. Factors that affect the MTF include transition through any fiber plate or glass at the screen and the photocathode, and also through the tube and the microchannel plate itself. The higher the MTF at a given resolution, the better. See also References External links Historical information on IIT development and inception Discovery of other photocathode materials Several references are made to historical data noted in "Image Tubes" by Illes P Csorba Selected Papers on Image tubes Make Time for the Stars by Antony Cooke Optical devices Vacuum tubes
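For the two gain conventions discussed in the Gain section above, the conversion is a fixed factor of pi. The short sketch below assumes only the standard unit definitions (1 fL = 3.426 cd/m2 and 1 fc = 10.764 lx) and makes the conversion explicit; it is an illustration, not a manufacturer's procedure.

```python
import math

# gain [fL/fc] = pi * gain [cd/(m^2*lx)], since 1 fL = (1/pi) cd/ft^2.
def si_gain_to_fl_per_fc(gain_cd_m2_lx):
    return gain_cd_m2_lx * math.pi

def fl_per_fc_to_si_gain(gain_fl_fc):
    return gain_fl_fc / math.pi

print(round(si_gain_to_fl_per_fc(10_000)))   # -> 31416, matching the figure above
```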
Image intensifier
[ "Physics", "Materials_science", "Engineering" ]
4,832
[ "Glass engineering and science", "Optical devices", "Vacuum tubes", "Vacuum", "Matter" ]
584,310
https://en.wikipedia.org/wiki/Octahedral%20number
In number theory, an octahedral number is a figurate number that represents the number of spheres in an octahedron formed from close-packed spheres. The nth octahedral number O_n can be obtained by the formula: O_n = n(2n^2 + 1)/3 = (2n^3 + n)/3. The first few octahedral numbers are: 1, 6, 19, 44, 85, 146, 231, 344, 489, 670, 891. Properties and applications The octahedral numbers have a generating function x(x + 1)^2/(1 − x)^4 = O_1 x + O_2 x^2 + O_3 x^3 + ... Sir Frederick Pollock conjectured in 1850 that every positive integer is the sum of at most 7 octahedral numbers. This statement, the Pollock octahedral numbers conjecture, has been proven true for all but finitely many numbers. In chemistry, octahedral numbers may be used to describe the numbers of atoms in octahedral clusters; in this context they are called magic numbers. Relation to other figurate numbers Square pyramids An octahedral packing of spheres may be partitioned into two square pyramids, one upside-down underneath the other, by splitting it along a square cross-section. Therefore, the octahedral number can be obtained by adding two consecutive square pyramidal numbers together: O_n = P_{n−1} + P_n, where P_n = n(n + 1)(2n + 1)/6 is the nth square pyramidal number. Tetrahedra If O_n is the nth octahedral number and T_n = n(n + 1)(n + 2)/6 is the nth tetrahedral number then O_n + 4T_{n−1} = T_{2n−1}. This represents the geometric fact that gluing a tetrahedron onto each of four non-adjacent faces of an octahedron produces a tetrahedron of twice the size. Another relation between octahedral numbers and tetrahedral numbers is also possible, based on the fact that an octahedron may be divided into four tetrahedra each having two adjacent original faces (or alternatively, based on the fact that each square pyramidal number is the sum of two tetrahedral numbers): O_n = T_n + 2T_{n−1} + T_{n−2}. Cubes If two tetrahedra are attached to opposite faces of an octahedron, the result is a rhombohedron. The number of close-packed spheres in the rhombohedron is a cube, justifying the equation O_n + 2T_{n−1} = n^3. Centered squares The difference between two consecutive octahedral numbers is a centered square number: O_n − O_{n−1} = n^2 + (n − 1)^2. Therefore, an octahedral number also represents the number of points in a square pyramid formed by stacking centered squares; for this reason, in his book Arithmeticorum libri duo (1575), Francesco Maurolico called these numbers "pyramides quadratae secundae". The number of cubes in an octahedron formed by stacking centered squares is a centered octahedral number, the sum of two consecutive octahedral numbers. These numbers are 1, 7, 25, 63, 129, 231, 377, 575, 833, 1159, 1561, 2047, 2625, ... given by the formula (2n − 1)(2n^2 − 2n + 3)/3 for n = 1, 2, 3, ... History The first study of octahedral numbers appears to have been by René Descartes, around 1630, in his De solidorum elementis. Prior to Descartes, figurate numbers had been studied by the ancient Greeks and by Johann Faulhaber, but only for polygonal numbers, pyramidal numbers, and cubes. Descartes introduced the study of figurate numbers based on the Platonic solids and some of the semiregular polyhedra; his work included the octahedral numbers. However, De solidorum elementis was lost, and not rediscovered until 1860. In the meantime, octahedral numbers had been studied again by other mathematicians, including Friedrich Wilhelm Marpurg in 1774, Georg Simon Klügel in 1808, and Sir Frederick Pollock in 1850. References External links Figurate numbers
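The closed form and the relations stated above can be checked numerically; the following sketch is such a check, added here purely for verification and not part of the original article text.

```python
def octahedral(n):
    """n-th octahedral number, O(n) = n*(2*n**2 + 1) / 3."""
    return n * (2 * n * n + 1) // 3

def square_pyramidal(n):
    return n * (n + 1) * (2 * n + 1) // 6

def tetrahedral(n):
    return n * (n + 1) * (n + 2) // 6

for n in range(1, 12):
    assert octahedral(n) == square_pyramidal(n - 1) + square_pyramidal(n)  # two square pyramids
    assert octahedral(n) + 2 * tetrahedral(n - 1) == n ** 3                # rhombohedron is a cube
    assert octahedral(n) - octahedral(n - 1) == n * n + (n - 1) ** 2       # centered square difference

print([octahedral(n) for n in range(1, 12)])
# -> [1, 6, 19, 44, 85, 146, 231, 344, 489, 670, 891]
```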
Octahedral number
[ "Mathematics" ]
765
[ "Figurate numbers", "Mathematical objects", "Numbers" ]
584,362
https://en.wikipedia.org/wiki/Ballard%20Locks
The Hiram M. Chittenden Locks, or Ballard Locks, is a complex of locks at the west end of Salmon Bay in Seattle, Washington's Lake Washington Ship Canal, between the neighborhoods of Ballard to the north and Magnolia to the south. The Ballard Locks carry more boat traffic than any other lock in the U.S., and the locks, along with the fish ladder and the surrounding Carl S. English Jr. Botanical Gardens, attract more than one million visitors annually, making it one of Seattle's top tourist attractions. The construction of the locks profoundly reshaped the topography of Seattle and the surrounding area, lowering the water level of Lake Washington and Lake Union by , adding miles of new waterfront land, reversing the flow of rivers, and leaving piers in the eastern half of Salmon Bay high and dry. The Locks are listed on the National Register of Historic Places and have been designated by the American Society of Civil Engineers as a National Historic Civil Engineering Landmark. Prior to construction As early as 1854, there was discussion of building a navigable connection between Lake Washington and Puget Sound for the purpose of transporting logs, milled lumber, and fishing vessels. Thirteen years later, the United States Navy endorsed a canal project, which included a plan for building a naval shipyard on Lake Washington. In 1891 the U.S. Army Corps of Engineers started planning the project. Some preliminary work was begun in 1906, and work began in earnest five years later under the command of Hiram M. Chittenden. The delays in canal planning and construction resulted in the US Navy building the Puget Sound Naval Shipyard in Bremerton, Washington, which is located across the Sound from Seattle. Construction In early 1909, the Washington State Legislature appropriated $250,000, placed under the control of the Corps of Engineers, for excavation of the canal between Lake Union and Lake Washington. In June 1910, the US Congress gave its approval for the lock, on the condition that the rest of the canals along the route be paid for locally. Construction was then delayed by legal challenges, mainly by mill owners in Ballard who feared property damage and loss of waterfront in Salmon Bay, and by Lake Washington property owners. Under Major James. B. Cavanaugh, Chittenden's replacement as Seattle District Commander, construction of the Ballard, or Government, Locks connecting Salmon Bay to Shilshole Bay began in 1911, proceeding without further controversy or legal entanglements. In July 1912, the Locks gates were closed for the first time, turning Salmon Bay from saltwater to freshwater. The first ship passed through the locks on August 3, 1916. On August 25, 1916, the temporary dam at Montlake was breached. During the following three months, Lake Washington drained, lowering the water level by and drying up more than of wetlands, as well as drying up the Black River and cutting off the Cedar River salmon run. The Cedar River was rerouted into Lake Washington to provide sufficient water flow for operating the Locks. Additionally the White River was rerouted into the Puyallup River. The Cedar and White Rivers both originally flowed into the Duwamish causing frequent flooding. The rerouting of the rivers opened up huge lowland areas for development but significantly disrupted the Duwamish salmon runs. To rectify this problem, salmon runs were reintroduced allowing the fish to migrate through the locks. The locks officially opened for boat traffic on May 8, 1917. 
The total cost of the project to that point was $3.5 million, with $2.5 million having come from the federal government and the rest from local governments. To allow for the intended boat traffic, three bridges were removed along the ship canal route, at Latona Avenue, Fremont, Stone Way. The Ballard and Fremont Bridges were completed in 1917, followed by the University Bridge in 1919, and Montlake Bridge in 1925. The University Bridge was improved in 1932, and in 1934 the Lake Washington Ship Canal project was declared complete. While generally a success, the project was not without its problems. Salt water began to make its way upstream toward Lake Union, requiring a system of siphons and flushing mechanisms. Because the Cedar River was the main water source both for the lakes and locks and for Seattle's potable water, at times there were problems maintaining an adequate water supply to maintain lake level and operate the locks. Conversely, with several rivers redirected, flooding worsened throughout the watershed. That last problem was exacerbated by logging, and at times during storms the locks had to be opened just to allow water to flow out. Function The locks and associated facilities serve three purposes: To maintain the water level of the fresh water Lake Washington and Lake Union at above sea level, or more specifically, above Puget Sound's mean low tide. To prevent the mixing of sea water from Puget Sound with the fresh water of the lakes (saltwater intrusion). To move boats from the water level of the lakes to the water level of Puget Sound, and vice versa. The complex includes two locks, (small) and (large). The complex also includes a spillway with six gates to assist in water-level control. A fish ladder is integrated into the locks for migration of anadromous fish, notably salmon. The grounds feature a visitors center, as well as the Carl S. English Jr. Botanical Gardens. Operated by the US Army Corps of Engineers, the locks were formally opened on July 4, 1917, although the first ship passed on August 3, 1916. They were named after US Army Major Hiram M. Chittenden, the Seattle District Engineer for the Corps of Engineers from April 1906 to September 1908. They were added to the National Register of Historic Places in 1978. Vessels passing from the freshwater Lakes Washington and Union to Puget Sound enter the lock chamber through the open upper gates (A in the accompanying diagram). The lower gates (B) and the draining valve (D) are closed. The vessel is assisted by the lockwall attendants who assure it is tied down and ready for the chamber to be drained. Next, the upper gates (A) and the filling valve (C) are closed and the draining valve (D) is opened allowing water to drain via gravity out to Puget Sound. When the water pressure is equal on both sides of the gate, the lower gates (B) are opened, allowing the vessels to leave the lock chamber. The process is reversed for upstream locking. Locks The complex includes two locks. Using the small lock when boat traffic is low conserves fresh water during summer, when the lakes receive less inflow. Having two locks also allows one of the locks to be drained for maintenance without blocking all boat traffic. The large lock is drained for approximately 2-weeks, usually in November, and the small lock is drained for about the same period, usually in March. The locks can elevate a vessel , from the level of Puget Sound at a very low tide to the level of freshwater Salmon Bay, in 10–15 minutes. 
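The gate and valve sequence described above can be summarised step by step; the sketch below is only an illustrative restatement of that sequence using the same A/B/C/D labels from the diagram, not actual control logic for the locks.

```python
# Illustrative downstream locking sequence (Salmon Bay toward Puget Sound),
# using the labels above: A upper gates, B lower gates, C filling valve,
# D draining valve.
downstream_locking = [
    "open upper gates A (lower gates B and draining valve D closed)",
    "vessel enters the chamber and is tied off by the lockwall attendants",
    "close upper gates A and filling valve C",
    "open draining valve D; water drains by gravity toward Puget Sound",
    "when the levels match, open lower gates B and release the vessel",
]

# Upstream locking runs the equivalent steps in the opposite direction.
for step_number, step in enumerate(downstream_locking, start=1):
    print(f"{step_number}. {step}")
```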
The locks handle both pleasure boats and commercial vessels, ranging from kayaks to fishing boats returning from the Bering Sea to cargo ships. Over 1 million tons of cargo, fuel, building materials, and seafood products pass through the locks each year. Spillway South of the small lock is a spillway dam with tainter gates used to regulate the freshwater levels of the ship canal and lakes. The gates on the dam release or store water to maintain the lake within a range of above sea level. Maintaining this lake level is necessary for floating bridges, mooring facilities, and vessel clearances under bridges. "Smolt flumes" in the spillway help young salmon to pass safely downstream. Higher water levels are maintained in the summer to accommodate recreation as well as to allow the lakes to act as a water storage basin in anticipation of drought conditions. Salt water barrier If excessive salt water were allowed to migrate into Salmon Bay, the salt could eventually damage the freshwater ecosystem. To prevent this, a basin was dredged just above (east of) the large lock. The heavier salt water settles into the basin and drains through a pipe discharging downstream of the locks area. In 1975, the saltwater drain was modified to divert some salt water from the basin to the fish ladder, where it is added via a diffuser to the fish ladder attraction water; see below. To further restrict saltwater intrusion, in 1966, a hinged barrier was installed just upstream of the large lock. This hollow metal barrier is filled with air to remain in the upright position, blocking the heavier salt water. When necessary to accommodate deep-draft vessels, the barrier is flooded and sinks to the bottom of the chamber. Fish ladder The fish ladder at the Chittenden locks is unusual—materials published by the federal government say "unique"—in being located where salt and fresh water meet. Normally, fish ladders are located entirely within fresh water. Pacific salmon are anadromous; they hatch in lakes, rivers, and streams—or, nowadays, fish hatcheries—migrate to sea, and only at the end of their life return to fresh water to spawn. Prior to the Locks' construction, no significant salmon runs existed here, as there was only a small drainage stream from Lake Union into Salmon Bay. In order to provide enough water to operate the Locks, the Cedar River was rerouted into Lake Washington (which was lowered 9 feet). The Cedar River originally flowed into the Duwamish River along with the White River from the south. The White River was rerouted into the Puyallup River. The Cedar and White Rivers did support significant salmon runs but also created severe flooding conditions for the early settlers. The rerouting of these two major rivers was a mixed blessing: while it reduced flood threats, the Duwamish River salmon runs were decimated. To rectify this situation, salmon runs were rerouted through the Locks, which included introducing a major run of sockeye salmon using stock from Baker River, Washington. The ladder was designed to use attraction water: fresh water flowing swiftly out the bottom of the fish ladder, in the direction opposite which anadromous fish migrate at the end of their lives. However, the attraction water from this first ladder was not effective. Instead, most salmon used the locks. This made them an easy target for predators like Herschel the sea lion; also, many were injured by hitting the walls and gates of the locks, or by hitting boat propellers.
The Corps rebuilt the fish ladder in 1976 by increasing the flow of attraction water and adding more weirs: most weirs are now one foot higher than the previous one. The old fish ladder had only 10 "steps"; the new one has 21. A diffuser well mixes salt water gradually into the last 10 weirs. As a part of the rebuilding, the Corps also added an underground chamber with a viewing gallery. The fish approaching the ladder smell the attraction water, recognizing the scent of Lake Washington and its tributaries. They enter the ladder, and either jump over each of the 21 weirs or swim through tunnel-like openings. They exit the ladder into the fresh water of Salmon Bay. They continue following the waterway to the lake, river, or stream where they were born. Once there, the females lay eggs, which the males fertilize. Most salmon die shortly after spawning. The offspring remain in the fresh water until they are ready to migrate to the ocean as smolts. In a few years, the surviving adults return, climb the fish ladder, and reach their spawning ground to continue the life cycle. Of the millions of young fish born, only a relative few survive to adulthood. Causes of death include natural predators, commercial and sport fishing, disease, low stream flows, poor water quality, flooding, and concentrated developments along streams and lakes. Visitors to the locks can observe the salmon through windows as they progress along their route. Although the viewing area is open year-round, the "peak" viewing time is during spawning season, from about the beginning of July through mid-August. A public art work, commissioned by the Seattle Arts Commission, provides literary interpretation of the experience through recordings of Seattle poet Judith Roche's "Salmon Suite," a sequence of five poems tied to the annual migratory sequence of the fish. Migratory fish Among the species of salmonids migrating routinely through the ladder at the Chittenden Locks are Chinook (king) salmon (Oncorhynchus tshawytscha), Coho (silver) salmon (Oncorhynchus kisutch), and Sockeye (red) salmon (Oncorhynchus nerka). Sockeye primarily migrate up the Cedar River to spawn and most end up at the Landsberg Dam Hatchery. Chinook and Coho migrate up the Issaquah Creek and most end up at the Issaquah Hatchery. Steelhead (Oncorhynchus mykiss) once migrated through the Locks, but none have been seen in years. The run is considered functionally extinct. Notes External links US Army Corps of Engineers, Seattle District: Lake Washington Ship Canal and Hiram M. Chittenden Locks Corps of Engineers Foundation http://www.ballardlocks.org 1917 establishments in Washington (state) Ballard, Seattle Dams on the National Register of Historic Places in Washington (state) Historic Civil Engineering Landmarks Locks on the National Register of Historic Places in Washington (state) National Register of Historic Places in Seattle Transport infrastructure completed in 1916 Transportation buildings and structures in Seattle Transportation buildings and structures on the National Register of Historic Places in Washington (state) United States Army Corps of Engineers Water transport in Seattle
Ballard Locks
[ "Engineering" ]
2,767
[ "Engineering units and formations", "United States Army Corps of Engineers", "Civil engineering", "Historic Civil Engineering Landmarks" ]
584,406
https://en.wikipedia.org/wiki/Edge-transitive%20graph
In the mathematical field of graph theory, an edge-transitive graph is a graph G such that, given any two edges e1 and e2 of G, there is an automorphism of G that maps e1 to e2. In other words, a graph is edge-transitive if its automorphism group acts transitively on its edges. Examples and properties The number of connected simple edge-transitive graphs on n vertices is 1, 1, 2, 3, 4, 6, 5, 8, 9, 13, 7, 19, 10, 16, 25, 26, 12, 28 ... Edge-transitive graphs include all symmetric graphs, such as the graph of the vertices and edges of the cube. Symmetric graphs are also vertex-transitive (if they are connected), but in general edge-transitive graphs need not be vertex-transitive. Every connected edge-transitive graph that is not vertex-transitive must be bipartite (and hence can be colored with only two colors), and either semi-symmetric or biregular. Examples of edge- but not vertex-transitive graphs include the complete bipartite graphs Km,n where m ≠ n, which include the star graphs K1,n. For graphs on n vertices, there are (n-1)/2 such graphs for odd n and (n-2)/2 for even n. Additional edge-transitive graphs which are not symmetric can be formed as subgraphs of these complete bipartite graphs in certain cases. Subgraphs of complete bipartite graphs Km,n exist when m and n share a factor greater than 2. When the greatest common factor is 2, subgraphs exist when 2n/m is even or if m=4 and n is an odd multiple of 6. So edge-transitive subgraphs exist for K3,6, K4,6 and K5,10 but not K4,10. An alternative construction for some edge-transitive graphs is to add vertices to the midpoints of edges of a symmetric graph with v vertices and e edges, creating a bipartite graph with e vertices of degree 2, and v of degree 2e/v. An edge-transitive graph that is also regular, but still not vertex-transitive, is called semi-symmetric. The Gray graph, a cubic graph on 54 vertices, is an example of a regular graph which is edge-transitive but not vertex-transitive. The Folkman graph, a quartic graph on 20 vertices, is the smallest such graph. The vertex connectivity of an edge-transitive graph always equals its minimum degree. See also Edge-transitive (in geometry) References External links Graph families Algebraic graph theory
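For small graphs, the definition can be checked directly by brute force: enumerate every vertex permutation, keep the ones that preserve the edge set (the automorphisms), and test whether they can carry one edge onto every other. The Python sketch below is only an illustration of that idea (its cost grows factorially with the number of vertices); the example graphs at the end, a star and a path, are chosen here for demonstration and are not taken from the article.

```python
from itertools import permutations

def is_edge_transitive(vertices, edges):
    """Brute-force edge-transitivity test for a small simple graph."""
    vertices = list(vertices)
    edge_set = {frozenset(e) for e in edges}

    # Automorphisms = vertex permutations that map the edge set onto itself.
    automorphisms = []
    for perm in permutations(vertices):
        mapping = dict(zip(vertices, perm))
        image = {frozenset(mapping[x] for x in e) for e in edge_set}
        if image == edge_set:
            automorphisms.append(mapping)

    # Since automorphisms form a group, edge-transitivity is equivalent to the
    # orbit of any single edge under the group covering the whole edge set.
    u, v = tuple(next(iter(edge_set)))
    orbit = {frozenset((m[u], m[v])) for m in automorphisms}
    return orbit == edge_set

# The star K1,3 is edge-transitive; the path on four vertices is not.
print(is_edge_transitive([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)]))   # True
print(is_edge_transitive([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]))   # False
```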
Edge-transitive graph
[ "Mathematics" ]
540
[ "Mathematical relations", "Graph theory", "Algebra", "Algebraic graph theory" ]
584,438
https://en.wikipedia.org/wiki/Winner%27s%20curse
The winner's curse is a phenomenon that may occur in common value auctions, where all bidders have the same (ex post) value for an item but receive different private (ex ante) signals about this value and wherein the winner is the bidder with the most optimistic evaluation of the asset and therefore will tend to overestimate and overpay. Accordingly, the winner will be "cursed" in one of two ways: either the winning bid will exceed the value of the auctioned asset making the winner worse off in absolute terms, or the value of the asset will be less than the bidder anticipated, so the bidder may garner a net gain but will be worse off than anticipated. However, an actual overpayment will generally occur only if the winner fails to account for the winner's curse when bidding (an outcome that, according to the revenue equivalence theorem, need never occur). The winner’s curse phenomenon was first addressed in 1971 by three Atlantic Richfield petroleum engineers who claimed that oil companies suffered unexpectedly low returns "year after year" in early Outer Continental Shelf oil lease auctions. Outer Continental Shelf auctions are common value auctions, where the value of the oil in the ground is essentially the same to all bidders. Explanation In a common value auction, the auctioned item is of roughly equal value to all bidders, but the bidders don't know the item's market value when they bid. Each player independently estimates the value of the item before bidding. The winner of an auction is the bidder who submits the highest bid. Since the auctioned item is worth roughly the same to all bidders, they are distinguished only by their respective estimates of the market value. The winner, then, is the bidder making the highest estimate. If we assume that the average bid is accurate, then the highest bidder overestimates the item's value. Thus, the auction's winner is likely to overpay. More formally, this result is obtained using conditional expectation. We are interested in a bidder's expected value from the auction (the expected value of the item, minus the expected price) conditioned on the assumption that the bidder wins the auction. It turns out that for a bidder's true estimate the expected value is negative, meaning that on average the winning bidder is overpaying. Savvy bidders will avoid the winner's curse by bid shading, or placing a bid that is below their ex ante estimation of the value of the item for sale—but equal to their ex post belief about the value of the item, given that they win the auction. The key point is that winning the auction is bad news about the value of the item for the winner. It means that he or she was the most optimistic and, if bidders are correct in their estimations on average, that too much was paid. Therefore savvy bidders revise their ex ante estimations downwards to take account of this effect. The severity of the winner's curse increases with the number of bidders. This is because the more bidders, the more likely it is that some of them have overestimated the auctioned item's value. In technical terms, the winner's expected estimate is the value of the nth order statistic, which increases as the number of bidders increases. There is often confusion that the winner's curse applies to the winners of all auctions. However, it is worth repeating here that for auctions with private value (i.e. when the item is desired independently of its value in the market), winner's curse does not arise. 
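The conditional-expectation argument can be made concrete with a small Monte Carlo sketch. The numbers below are arbitrary assumptions chosen only for illustration (a common value of 100, normally distributed signal noise, and bidders who naively bid their own signals); it is not drawn from any of the studies cited in this article.

```python
import random

# Toy common-value auction: every bidder values the item at TRUE_VALUE,
# but each only observes a noisy private signal and naively bids that signal.
TRUE_VALUE = 100.0
SIGNAL_NOISE = 20.0          # assumed standard deviation of the signal error
N_AUCTIONS = 100_000

def simulate(n_bidders: int) -> float:
    """Average profit of the winning (highest-signal) bidder."""
    total_profit = 0.0
    for _ in range(N_AUCTIONS):
        signals = [random.gauss(TRUE_VALUE, SIGNAL_NOISE) for _ in range(n_bidders)]
        winning_bid = max(signals)          # first-price auction, naive bidding
        total_profit += TRUE_VALUE - winning_bid
    return total_profit / N_AUCTIONS

for n in (2, 5, 10, 20):
    print(f"{n:2d} bidders: average winner's profit = {simulate(n):7.2f}")
# Expected pattern: the average profit is negative and becomes more negative
# as the number of bidders grows -- the winner's curse.
```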
Similarly, there may be occasions when the average bid is too low relative to exterior market conditions e.g. a dealer recognizing an antique or other collectible as highly saleable elsewhere when other bidders do not have the necessary expertise. Examples Since most auctions involve at least some amount of common value, and some degree of uncertainty about that common value, the winner's curse is an important phenomenon. In the 1950s, when the term winner's curse was first coined, there was no accurate method to estimate the potential value of an offshore oil field. So if, for example, an oil field had an actual intrinsic value of $10 million, oil companies might guess its value to be anywhere from $5 million to $20 million. The company who wrongly estimated at $20 million and placed a bid at that level would win the auction, and later find that it was not worth as much. Other auctions where the winner's curse is significant: Spectrum auctions in which companies bid on licenses to use portions of the electromagnetic spectrum. Here, the uncertainty would come from, for example, estimating the value of the cell phone market in New York City. IPOs, in which bidders need to estimate what the market value of a company's stock will be. Pay per click advertising online, in which advertisers gain higher ranking if they bid higher amounts per click from a search engine user. Federal offshore oil leases: the term winner's curse was originated in a paper published in the Journal of Petroleum Technology, volume 23, 1971, pages 641-653. The authors were Capen, Clapp & Campbell. Free agency in professional sports. Related uses The term winner's curse is also used in statistics to refer to the regression toward the mean phenomenon, particularly in genome-wide association studies and epidemiology. In studies involving many tests on one sample of the full population, the consequent stringent standards for significance make it likely that the first person to report a significant test (the winner) will also report an effect size much larger than is likely to be seen in subsequent replication studies. See also Buyer's remorse Wisdom of the crowd Proteus phenomenon War of attrition (game) Pyrrhic victory Vickrey auction Auction theory Paul Milgrom Robert B. Wilson References Further reading External links www.gametheory.net — applet demonstrating the winner's curse. www.techcentralstation.com — article explaining the winner's curse in the context of the Google IPO. The Winner's Curse in Baseball — article on how the winner's curse affects bidding for free agents. Auction theory Curses
Winner's curse
[ "Mathematics" ]
1,301
[ "Game theory", "Auction theory" ]
584,470
https://en.wikipedia.org/wiki/Henry%20Draper
Henry Draper (March 7, 1837 – November 20, 1882) was an American medical doctor and amateur astronomer. He is best known today as a pioneer of astrophotography. Life and work Henry Draper's father, John William Draper, was an accomplished doctor, chemist, botanist, and professor at New York University; he was also the first to photograph the moon through a telescope (1840). Draper's mother was Antonia Caetana de Paiva Pereira Gardner, daughter of the personal physician to the Emperor of Brazil. His niece, Antonia Maury, was also an astronomer. He graduated from New York University School of Medicine, at the age of 20, in 1857. He worked first as a physician at Bellevue Hospital, and later as both a professor and dean of medicine at New York University (NYU). On May 31, 1862, he joined S Company, 12th New York Infantry Regiment as a surgeon along with his brother John Christopher, who joined as an assistant surgeon. They served until October 8, 1862. In 1867 he married Mary Anna Palmer, a wealthy socialite who collaborated with him in his astronomy work. Draper was one of the pioneers of the use of astrophotography. In 1872, he took a stellar spectrum that showed absorption lines. Others, such as Joseph Fraunhofer, Lewis Morris Rutherfurd and Angelo Secchi, preceded him in that ambition. He resigned his chair in the medical department in 1873 to allow for more time for original research. He directed an expedition to photograph the 1874 transit of Venus, and was the first to photograph the Orion Nebula, on September 30, 1880. Using his 11 inch Clark Brothers photographic refractor, he took a 50-minute exposure. He photographed the spectrum of Jupiter in 1880. The Henry Draper Observatory, where he took his much-admired photographs of the moon, was in Hastings-on-Hudson, New York. Today the building functions as the Hastings-on-Hudson Historical Society. Draper received numerous awards, including honorary LL.D. law degrees from NYU and the University of Wisconsin–Madison in 1882, a Congressional medal for directing the U.S. expedition to photograph the 1874 transit of Venus, and election to both the National Academy of Sciences and the Astronomische Gesellschaft. In addition, he held memberships in the American Photographic Society, the American Philosophical Society, the American Academy of Arts and Sciences, and the American Association for the Advancement of Science. After his untimely death from double pleurisy, his widow Mary Anna Draper funded the Henry Draper Medal for outstanding contributions to astrophysics and a telescope, which was used to prepare the Henry Draper Catalog of stellar spectra. This historical Henry Draper telescope is now at the Toruń Centre for Astronomy (Nicolaus Copernicus University) at Piwnice, Poland. The small crater Draper on the Moon is named in his honor. Selected works The Changes of Blood-Cells in the Spleen, thesis, 1858. A Text-Book on Chemistry, 1866 revision of his father's 1846 text. Pages 1–10 of the revision give an overview of the history of chemistry. Are there other inhabited worlds?, 1866. Delusions of Medicine, Charms, talismans, amulets, astrology, and mesmerism, 1873. The Discovery of Oxygen in the Sun by Photography, 1877, American Journal of Science and Arts.
See also Andrew Ainslie Common Henry Draper Medal Henry Draper Catalogue References Further reading External links National Academy of Sciences Biographical Memoir 1837 births 1882 deaths 19th-century American photographers American astronomers Astrophotographers Members of the United States National Academy of Sciences New York University Grossman School of Medicine alumni New York University faculty American people of Brazilian descent Photographers from New York (state) Physicians from New York City Scientists from New York (state) Bellevue Hospital physicians Union army surgeons
Henry Draper
[ "Astronomy" ]
781
[ "People associated with astronomy", "Astrophotographers" ]
584,504
https://en.wikipedia.org/wiki/Metaphase
Metaphase is a stage of mitosis in the eukaryotic cell cycle in which chromosomes are at their second-most condensed and coiled stage (they are at their most condensed in anaphase). These chromosomes, carrying genetic information, align at the equator of the cell between the spindle poles at the metaphase plate, before being separated into each of the two daughter nuclei. This alignment marks the beginning of metaphase. Metaphase accounts for approximately 4% of the cell cycle's duration. In metaphase, microtubules from both duplicated centrosomes on opposite poles of the cell have completed attachment to kinetochores on condensed chromosomes. The centromeres of the chromosomes convene themselves on the metaphase plate, an imaginary line that is equidistant from the two spindle poles. This even alignment is due to the counterbalance of the pulling powers generated by the opposing kinetochore microtubules, analogous to a tug-of-war between two people of equal strength, ending with the destruction of cyclin B. In order to prevent deleterious nondisjunction events, a key cell cycle checkpoint, the spindle checkpoint, verifies this evenly balanced alignment and ensures that every kinetochore is properly attached to a bundle of microtubules and is under balanced bipolar tension. Sister chromatids require active separase to hydrolyze the cohesin that binds them together prior to progression to anaphase. Any unattached or improperly attached kinetochores generate signals that prevent the activation of the anaphase promoting complex (cyclosome or APC/C), a ubiquitin ligase which targets securin and cyclin B for degradation via the proteasome. As long as securin and cyclin B remain active, separase remains inactive, preventing premature progression to anaphase. Metaphase in cytogenetics and cancer studies The analysis of metaphase chromosomes is one of the main tools of classical cytogenetics and cancer studies. Chromosomes are condensed (thickened) and highly coiled in metaphase, which makes them most suitable for visual analysis. Metaphase chromosomes make the classical picture of chromosomes (karyotype). For classical cytogenetic analyses, cells are grown in short-term culture and arrested in metaphase using a mitotic inhibitor. Further they are used for slide preparation and banding (staining) of chromosomes to be visualised under a microscope to study the structure and number of chromosomes (karyotype). Staining of the slides, often with Giemsa (G banding) or Quinacrine, produces a pattern of up to several hundred bands in total. Normal metaphase spreads are used in methods like FISH and as a hybridization matrix for comparative genomic hybridization (CGH) experiments. Malignant cells from solid tumors or leukemia samples can also be used for cytogenetic analysis to generate metaphase preparations. Inspection of the stained metaphase chromosomes allows the determination of numerical and structural changes in the tumor cell genome, for example, losses of chromosomal segments or translocations, which may lead to chimeric oncogenes, such as bcr-abl in chronic myelogenous leukemia. References External links Mitosis Cell cycle
Metaphase
[ "Biology" ]
709
[ "Cell cycle", "Cellular processes", "Mitosis" ]
584,584
https://en.wikipedia.org/wiki/Otto%20Struve
Otto Lyudvigovich Struve (12 August 1897 – 6 April 1963) was a Russian-American astronomer of Baltic German origin. Otto was the descendant of famous astronomers of the Struve family; he was the son of Ludwig Struve, grandson of Otto Wilhelm von Struve and great-grandson of Friedrich Georg Wilhelm von Struve. He was also the nephew of Karl Hermann Struve. With more than 900 journal articles and books, Struve was one of the most distinguished and prolific astronomers of the mid-20th century. He served as director of Yerkes, McDonald, Leuschner and National Radio Astronomy Observatories and is credited with raising worldwide prestige and building schools of talented scientists at Yerkes and McDonald observatories. In particular, he hired Subrahmanyan Chandrasekhar and Gerhard Herzberg who later became Nobel Prize winners. Struve's research was mostly focused on binary and variable stars, stellar rotation and interstellar matter. He was one of the few eminent astronomers in the pre-Space Age era to publicly express a belief that extraterrestrial intelligence was abundant, and so was an early advocate of the search for extraterrestrial life. Early years in Russia Struve was born in 1897 in Kharkov, the largest city of Sloboda Ukraine, then Russian Empire (now Ukraine), as the first child of Ludwig Struve and Elizaveta Khristoforovna Struve (1874–1964). His father was a member of the extensive political and scientific Struve family of Baltic Germans who were prominent in 19th-century Russia. His astronomy experience started early: from the age of eight, he was accompanying his father in the telescope tower and from 10 carried out some minor observations, despite his fear of the dark spaces. After having received home education, at the age of 12, Struve started attending a school in Kharkov and showed mathematical talents. Otto was the first child of the Struve family in Russia who attended a Russian-speaking rather than German-speaking school, and was bilingual in German and Russian. After graduating in 1914, he continued his astronomy work. In June 1914, Struve took part in preparations for observation of a total solar eclipse (August 21, 1914) and later used that experience and results for his master's degree work defended in 1919 at Kharkov University. Struve entered the Imperial Kharkov University in 1915, at the time of political unrest and wars in Russia. In the beginning of 1916, just having finished the first semester, he interrupted his studies and enlisted in a military artillery school in St. Petersburg. He passed an accelerated training program, and in February 1917, was sent to the Turkish front. After the Treaty of Brest-Litovsk was signed, Struve returned to Kharkov for a year between spring 1918 and spring 1919 and completed a full university course. In June 1919, he received a certificate signed by the rector of Kharkov University stating that Struve would stay with the university to prepare for a professorship at the department of astronomy and geodesics. During that time, Struve also worked at the "workshop school of precision mechanics" and obtained a license as a workshop trainer. The workshop was organized by his father with the goal of creating a tradition of astronomy engineering in Russia. Such a tradition was non-existent, and foreign engineers were personally invited from abroad for high-quality mechanical work. Moving to the United States The German origin of the Struves and Otto Struve's military history with the White Russian Army took their toll.
To avoid repression by the Bolsheviks, his family had to move from Kharkov to Sevastopol which was still under control of the White Army. There, a series of tragedies took away most of the family: the youngest sister Elizabeth drowned, brother Werner died from tuberculosis, and his father died from a stroke on November 4, 1920. Whereas his mother and sister chose to return to Kharkov, on November 16–17, 1920, Otto followed the escaping Wrangel's Army. With a military transport, he escaped from Sevastopol to Turkey. He never returned to Russia. He was later invited several times to conferences in the Soviet Union, but for various reasons declined to attend. During the year and a half that Otto spent in exile in Gallipoli and later in Constantinople, he became an impoverished refugee, eating at relief agencies and taking any job he could find. For some time, he worked as a woodcutter, residing with fellow Russian officers, often 6 people in a tent. One night, a neighboring tent was hit by lightning, killing everyone inside. Struve wrote to his uncle Hermann Struve in Germany for assistance, without knowing that his uncle had died a few months earlier, on August 12, 1920. However, the widow of Hermann, Eva Struve, contacted Paul Guthnick, her late husband's successor at the Berlin-Babelsberg Observatory. Germany itself was suffering after the wars, and there was little chance to obtain a position for a Russian there. Therefore, Guthnick wrote, on December 25, 1920, to the director of Yerkes Observatory in Williams Bay, Wisconsin, Edwin B. Frost asking a position for Struve. He received a reply on January 27, 1921, where Frost promised to do his best. On March 2, 1921, Frost wrote to Struve, offering him a position at Yerkes. Given his situation in Turkey, it was a lucky chance that Struve received that letter. On March 11, Struve sent a reply, thanking Frost for the offer and accepting it. The letter was formally written in English but with German grammar, revealing the poor English proficiency of Struve (when they later met in US, they spoke in German). Struve also acknowledged that he had no experience in spectral astrophysics. Nevertheless, when applying for his position, Frost mentioned that "I am perfectly willing to take him on his lineage. We regard Otto Struve as a first-class spectroscopist and astrophysicist", and that his degree in Kharkov was equivalent to a doctoral degree (which Struve never claimed and which was hardly so). It took several months to arrange for travel documents and funding. In late August 1921, Struve received his visa and travel tickets at the US Consulate in Turkey. In September, he boarded S.S. Hog Island and on October 7, 1921, arrived in New York. He was met there, put on the train, and two days later arrived in Chicago. Life in the United States In late 1921, Struve began working as a stellar spectroscopy assistant at Yerkes with a monthly salary of $75, starting with taking a training course. The observatory was in decline and Struve was alone in class. Three more students joined him in 1922, but only for a summer, and only one of those continued later. There were no lectures, and the students were learning by reading, practice and discussions with professors. Struve proved to be a quick learner and talented scientist. Five months after arrival, he made his first discovery of a pulsating star at Gamma Ursae Minoris and wrote an article on it in September 1922. 
He was spending more time with observations than anyone at Yerkes, trying every telescope available there, and also making weather observations at Williams Bay. On October 24, 1922, he discovered the asteroid 991 McDonalda and on November 14 of the same year, another asteroid 992 Swasey. As early as December 1923, Struve defended his PhD thesis on short-period spectroscopic double stars at the University of Chicago. Frost helped him in waiving some required PhD examinations, e.g. in French and German, stating that Struve had done ample reading of scientific literature back in Russia, and was fluent in those languages. Struve then became an instructor (January 1924), assistant professor (1927) and full professor (1932) at the university. His rapid promotion was again assisted by Frost, who also used job-offer letters from other observatories to Struve as proof that Struve was a highly valued scientist who must be kept at the University of Chicago. Between 1932 and 1947, Struve headed Yerkes Observatory; from 1939 to 1950 he acted as a founding director of the McDonald Observatory, and from July 1, 1952, to 1962 served as the first director of the National Radio Astronomy Observatory at University of Virginia. All those years, he remained in America except for conferences and an 8-month sabbatical leave to the University of Cambridge between August 1928 and May 1929. He applied for and won a Guggenheim Fellowship to cover his travel to, and living expenses in, Cambridge. While in Cambridge, Struve mostly worked on interstellar matter; he also went on a short trip to Leiden to meet Jan Oort. Struve was a highly successful administrator who brought fame to Yerkes Observatory and rebuilt the astronomy department of the University of Chicago. In particular, he gradually renewed the scientific staff, dismissing stagnated permanent researchers who were not making significant contributions to science but were occupying the faculty positions. The process was difficult. Struve used to arrive first and leave last from the observatory, taking notes on working hours of staff which he then used in his bureaucratic moves. In replacement, he hired several young and talented researchers who later became world-famous scientists. Those included Subrahmanyan Chandrasekhar (Nobel Prize in Physics in 1983), Gerard Kuiper (protagonist of the famous Kuiper Prize), Bengt Strömgren, Gerhard Herzberg (Nobel Prize in Chemistry in 1971), William Wilson Morgan and Jesse L. Greenstein. After World War II, he also invited a number of leading European researchers, such as Pol Swings, Jan Oort (father of radio astronomy), Marcel Minnaert, H. C. van der Hulst and Albrecht Unsöld. As most of them were foreigners, their appointment met strong opposition from the science officials for various reasons, such as taking jobs from Americans during the Great Depression. India-born Chandrasekhar, who spent a month in the Soviet Union in 1934, was also suspected of Communist connections. Struve spent extraordinary efforts defending and justifying each case, and those efforts paid off in building the scientific school at Yerkes and University of Chicago. For example, Chandrasekhar spent his entire career as a scientist and administrator at the University of Chicago, assisting Struve and eventually replacing him as president of the American Astronomical Society (from 1949) and as the Editor in Chief of the Astrophysical Journal. By the late 1940s, many young researchers whom Struve invited to Yerkes became established scientists. 
This created friction, as they did not want to follow his every word and were building their own careers. In 1947, Struve resigned as director of Yerkes Observatory and became chairman of the astronomy department at Berkeley and director of the Leuschner Observatory. He was succeeded by Kuiper at Yerkes; their relations were strained at times because of Struve's tendencies to keep control of Yerkes management. There were also rumors of similar strains between Struve and Chandrasekhar, but they were always dispersed by the latter, who insisted that Struve always kept scientific relations with his colleagues above the administrative ones. One reason for Struve's move to Berkeley was his tiredness of bureaucracy. In Berkeley, he was spending more time with personal research and students than ever before. Research In 1937, Struve discovered a phenomenon which was later named the Struve-Sahade effect (S-S effect), that is the apparent weakness of lines of the secondary star in massive binary stars when the secondary is receding. This effect poses problems for the accurate reconstruction of the separated primary and secondary spectra. The same year, he discovered interstellar hydrogen in ionized form. By 1959, Struve had published more than 900 journal articles and books, making him one of the most prolific astronomers (probably only Ernst Öpik published more, with 1,094 items). Many of those works aimed at popularizing astronomy. In particular, he published 39 articles (and 10 other items) in Popular Astronomy (1923–1951, the journal was discontinued in 1951), 154 in Sky and Telescope (1941–1963, the journal was started in 1941) and 83 reviews of books and works by other astronomers. Most of his co-authored scientific articles were co-authored with Pol Swings and were dedicated to spectroscopical studies of peculiar stars. To explain his interest in this topic, Struve once noted that he had never seen a spectrum of a star where he couldn't find anything to work on. Struve's major discoveries were detection of stellar rotation and dependence of the rotational speed on the stellar spectral class (temperature). They spurred the development of stellar evolution theory. In addition to stellar rotation, he also studied Stark effect in stellar spectra, that is broadening of the spectral lines by the electric field in the stellar atmosphere. He also worked on the turbulence of stellar atmosphere and expanding shells around stars. This topic required a larger telescope than those available to him. Therefore, between 1933 and 1939, he built an 82-inch telescope at the McDonald Observatory, which was then the second largest telescope in the world (after the Mt. Wilson telescope). Views on extraterrestrial life Struve's belief in the widespread existence of life and intelligence in the Universe stemmed from his studies of slow-rotating stars. Many stars, including the Sun, spin at a much lower rate than was predicted by contemporary theories of early stellar evolution. The reason for this, claimed Struve, was that they were surrounded by planetary systems which had carried away much of the stars' original angular momentum. So numerous were the slow-spinning stars that Struve estimated, in 1960, there might be as many as 50 billion planets in our Galaxy alone. As to how many might harbor intelligent life, he wrote: An intrinsically improbable event may become highly probable if the number of events is very great. ... 
[I]t is probable that a good many of the billions of planets in the Milky Way support intelligent forms of life. To me this conclusion is of great philosophical interest. I believe that science has reached the point where it is necessary to take into account the action of intelligent beings, in addition to the classical laws of physics. Personal life, family relations, and late years Struve had a younger brother and two sisters, all of whom died in Russia in their youth: Werner (1903–1920), Yadviga (1901–1924) and Elizabeth (1911–1920). The last death in 1924 left his mother with no close relatives in Russia. After Struve arranged visa documents, she immigrated to the US in January 1925. Remarkably, his mother began working in astronomy in the US and assisted in the processing of measurements. She lived with Struve even after his marriage. On May 25, 1925, Struve married Mary Martha Lanning, who considered herself a musician but worked as a secretary at Yerkes. Lanning was slightly older than Struve and had been previously married. They had no children; thus the famous Struve astronomical dynasty came to an end. Other branches of the Struve family besides the line of Otto Wilhelm von Struve continued, but yielded no distinguished scientists. On October 26, 1927, Struve became a naturalized US citizen. At that time, he was fluent in spoken and written English, but had a slight German accent which remained with him for life. Even after marriage, Struve continued working days and nights, something that his non-scientist wife could not fully accept. Although they remained together, their relations were cold in later years. Struve's health deteriorated in the late 1950s. He was suffering from hepatitis, first contracted back in Russia and Turkey. In 1956, while using a telescope at Mount Wilson, Struve had a bad fall, breaking several ribs and cracking two vertebrae. He was hospitalized for about two months and had to wear a body cast for a month after recovery. He was permanently hospitalized around 1963 and died on April 6, 1963, in Berkeley. He was survived by his mother and wife. His mother died on October 1, 1964, at the age of ninety. Mary was discovered dead on August 5, 1966, but was estimated to have died in July 1966. In 1925, Struve met his cousin, the astronomer Georg Hermann Struve, at the Lick Observatory. In the 1930s, they met again at Yerkes Observatory and reanalyzed observations of the complex multiple star system Zeta Cancri by their grandfather Otto Wilhelm von Struve. Personal qualities Struve was often described as a big and intimidating man. According to him, in his fifties, he was six feet tall, weighed 192 pounds, had gray hair and eyes. Struve was also known as persistent, dedicated and demanding, both to himself and others – the qualities preserved in his family for generations. He was first to arrive at the observatory, often working until late evening in the office, and then spending nights with a telescope. "He had only one interest and concern, namely, that astronomy should be developed and pushed to the maximum that was possible". As a result, Struve was usually overworked, developed insomnia and often appeared as if in a daze after a 2–3 hour sleep. Struve was hardly a good teacher: because of his devotion to research and frequent trips, he missed up to two-thirds of his lectures. Nobody was allowed to take his place and students had to do personal research in his absence. Yet, he kept the highest standards of knowledge at the qualifying exams.
On the other hand, his infrequent appearances magnetized many students with his passion for astronomy. During his early years at Yerkes, he developed the practice of looking with one eye into the microscope of a micrograph instrument and with the other at the nearby numerical table. This probably resulted in his eyes looking in slightly different directions. Although he did not care much about himself, Struve worried about people. His first paper published in Russian was titled "Aid to Russian Scientists". The Civil War brought suffering to most scientific families in Russia. Frost, Struve and George Van Biesbroeck formed a "Committee for Relief of Russian Astronomers" and organized sending packages of food and clothing. The funds and clothing came from astronomers from all over the US. During the Great Depression, he was concerned about hiring foreigners when many Americans were jobless. Around that time, the wife of his deputy George Van Biesbroeck wrote a letter to Belgium, mentioning how Yerkes Observatory was being run by two Europeans. The letter was published and it upset Struve. Eventually, Van Biesbroeck was replaced by American-born W. W. Morgan. Awards and honors Struve was elected to both the United States National Academy of Sciences and the American Philosophical Society in 1937. He was elected to the American Academy of Arts and Sciences in 1942. He received the Gold Medal of the Royal Astronomical Society (1944), the Bruce Medal (1948), the Henry Draper Medal of the National Academy of Sciences (1949) and the Henry Norris Russell Lectureship of the American Astronomical Society (1957). His Royal Society medal was the fourth (after Friedrich Georg Wilhelm, Otto Wilhelm and Hermann Struve) and the last received by the Struves. The asteroid 768 Struveana was named in honor of Otto Wilhelm von Struve, Friedrich Georg Wilhelm Struve and Karl Hermann Struve; and a lunar crater was named for three other astronomers of the Struve family: Friedrich Georg Wilhelm, Otto Wilhelm and Otto. The 82-inch telescope which Struve used in his research at McDonald Observatory was named after him in 1966, three years after his death, whereas the asteroid 2227 Otto Struve bore Struve's name from its discovery on October 13, 1955. In 1925, Struve began reviewing articles for the Astrophysical Journal and from 1932 to 1947, acted as its Editor in Chief. From 1946 till 1949 he was president of the American Astronomical Society. Between 1948 and 1952, he was vice-president and between 1952 and 1955 president of the International Astronomical Union. In April 1954, he was elected a Fellow of the Royal Society. In 1950 he became a foreign member of the Royal Netherlands Academy of Arts and Sciences. Between 1939 and 1961, he received honorary doctorate degrees from nine universities in Europe and America. See also Strömgren sphere References External links Guide to the Otto Struve Papers at The Bancroft Library Tells of Quest to Learn What's Between Stars 10 July 1933 Chicago Tribune Interview Literature Балышев М.А. Отто Людвигович Струве (1897-1963). Москва: Наука, 2008. 526 с. Artemenko T., Balyshev M., Vavilova I. The Struve dynasty in the history of astronomy in Ukraine. Kinematics and Physics of Celestial Bodies. 2009. Vol. 25 (3). P. 153-167. Балышев М.А. Из истории Харьковской обсерватории: биографические очерки. В Кн.: 200 лет астрономии в Харьковском университете / Под. ред. проф. Ю.Г.Шкуратова. Харьков: Издательский центр ХНУ имени В.Н.Каразина, 2008. С. 99-154. Балишев М.А.
Наукова біографія академіка О.Л.Струве: проблеми відтворення, аналіз бібліографії та джерел (2008). Наука і наукознавство. 2008. №2. С. 111-120. Балышев М.А. Sic transit gloria mundi: Жизнь и творчество Отто Людвиговича Струве (1897-1963). Историко-астрономические исследования / Институт истории естествознания и техники им. С.И. Вавилова РАН. Москва: Наука, 2007. Т.ХХХІІ. С. 138-206. Балышев М.А. Отто Людвигович Струве. Curriculum vitae: историко-биографическое исследование (2005). Харьков: СПДФО Яковлева, 2005. 150 с. Балышев М.А. Отто Людвигович Струве. Документально-биографический очерк. UNIVERSITATES. Наука и Просвещение. 2004. №3. С. 30-39. 1897 births 1963 deaths American people of Baltic German descent American people of German-Russian descent White Russian emigrants to the United States Ukrainian astrophysicists American astronomers Discoverers of asteroids Otto University of Chicago faculty National University of Kharkiv alumni Recipients of the Gold Medal of the Royal Astronomical Society Fellows of the American Physical Society Fellows of the Royal Society Members of the Royal Netherlands Academy of Arts and Sciences Naturalized citizens of the United States Academic staff of Kharkiv Observatory Deaths from falls Accidental deaths in California Astronomers from the Russian Empire Presidents of the International Astronomical Union Members of the Royal Swedish Academy of Sciences Russian scientists
Otto Struve
[ "Astronomy" ]
5,194
[ "Astronomers", "Presidents of the International Astronomical Union" ]
584,602
https://en.wikipedia.org/wiki/Dinitrogen%20pentoxide
Dinitrogen pentoxide (also known as nitrogen pentoxide or nitric anhydride) is the chemical compound with the formula N2O5. It is one of the binary nitrogen oxides, a family of compounds that contain only nitrogen and oxygen. It exists as colourless crystals that sublime slightly above room temperature, yielding a colorless gas. Dinitrogen pentoxide is an unstable and potentially dangerous oxidizer that once was used as a reagent when dissolved in chloroform for nitrations but has largely been superseded by nitronium tetrafluoroborate (NO2BF4). N2O5 is a rare example of a compound that adopts two structures depending on the conditions. The solid is a salt, nitronium nitrate, consisting of separate nitronium cations (NO2+) and nitrate anions (NO3−); but in the gas phase and under some other conditions it is a covalently-bound molecule. History N2O5 was first reported by Deville in 1840, who prepared it by treating silver nitrate (AgNO3) with chlorine. Structure and physical properties Pure solid N2O5 is a salt, consisting of separated linear nitronium ions and planar trigonal nitrate anions. Both nitrogen centers have oxidation state +5. It crystallizes in the space group D (C6/mmc) with Z = 2, with the anions in the D3h sites and the cations in D3d sites. The vapor pressure P as a function of temperature T is about 48 torr at 0 °C, 424 torr at 25 °C, and 760 torr at 32 °C (9 °C below the melting point). In the gas phase, or when dissolved in nonpolar solvents such as carbon tetrachloride, the compound exists as covalently-bonded N2O5 molecules. In the gas phase, theoretical calculations for the minimum-energy configuration indicate that the O–N–O angle in each wing is about 134° and the N–O–N angle at the central oxygen is about 112°. In that configuration, the two NO2 groups are rotated about 35° around the bonds to the central oxygen, away from the N–O–N plane. The molecule thus has a propeller shape, with one axis of 180° rotational symmetry (C2). When gaseous N2O5 is cooled rapidly ("quenched"), one can obtain the metastable molecular form, which exothermically converts to the ionic form above −70 °C. Gaseous N2O5 absorbs ultraviolet light with dissociation into the free radicals nitrogen dioxide and nitrogen trioxide (uncharged nitrate). The absorption spectrum has a broad band with maximum at wavelength 160 nm. Preparation A recommended laboratory synthesis entails dehydrating nitric acid (HNO3) with phosphorus(V) oxide. Another laboratory process is the reaction of lithium nitrate and bromine pentafluoride, in the ratio exceeding 3:1. The reaction first forms nitryl fluoride that reacts further with the lithium nitrate. The compound can also be created in the gas phase by reacting nitrogen dioxide or dinitrogen tetroxide with ozone. However, the product catalyzes the rapid decomposition of ozone. Dinitrogen pentoxide is also formed when a mixture of oxygen and nitrogen is passed through an electric discharge. Another route is the reactions of phosphoryl chloride or nitryl chloride with silver nitrate. Reactions Dinitrogen pentoxide reacts with water (hydrolyses) to produce nitric acid. Thus, dinitrogen pentoxide is the anhydride of nitric acid. Solutions of dinitrogen pentoxide in nitric acid can be seen as nitric acid with more than 100% concentration. The phase diagram of the system − shows the well-known negative azeotrope at 60% (that is, 70% ), a positive azeotrope at 85.7% (100% ), and another negative one at 87.5% ("102% ").
The reaction with hydrogen chloride also gives nitric acid and nitryl chloride (NO2Cl). Dinitrogen pentoxide eventually decomposes at room temperature into NO2 and O2. Decomposition is negligible if the solid is kept at 0 °C, in suitably inert containers. Dinitrogen pentoxide reacts with ammonia to give several products, including nitrous oxide, ammonium nitrate, nitramide and ammonium dinitramide, depending on reaction conditions. Decomposition of dinitrogen pentoxide at high temperatures At high temperatures, dinitrogen pentoxide is decomposed in two successive stoichiometric steps. In the shock wave, N2O5 decomposes stoichiometrically into nitrogen dioxide and oxygen. At temperatures of 600 K and higher, nitrogen dioxide is unstable with respect to nitric oxide and oxygen. The thermal decomposition of 0.1 mM nitrogen dioxide at 1000 K is known to require about two seconds. Decomposition of dinitrogen pentoxide in carbon tetrachloride at 30 °C Apart from the decomposition of N2O5 at high temperatures, it can also be decomposed in carbon tetrachloride at 30 °C. Both N2O5 and NO2 are soluble in carbon tetrachloride and remain in solution, while oxygen is insoluble and escapes. The volume of the oxygen formed in the reaction can be measured in a gas burette. The decomposition can therefore be followed by measuring the quantity of oxygen produced over time, because the decomposition is the only source of that oxygen. The decomposition of N2O5 in carbon tetrachloride follows a first-order rate law: the rate is proportional to the remaining N2O5 concentration, −d[N2O5]/dt = k[N2O5]. Decomposition of nitrogen pentoxide in the presence of nitric oxide N2O5 can also be decomposed in the presence of nitric oxide. The rate of this initial reaction between dinitrogen pentoxide and nitric oxide is that of the elementary unimolecular decomposition. Applications Nitration of organic compounds Dinitrogen pentoxide, for example as a solution in chloroform, has been used as a reagent to introduce the nitro functionality in organic compounds. This nitration reaction is represented as ArH + N2O5 → ArNO2 + HNO3, where Ar represents an arene moiety. The reactivity of the nitronium ion can be further enhanced with strong acids that generate the "super-electrophile". In this use, N2O5 has been largely replaced by nitronium tetrafluoroborate. This salt retains the high reactivity of the nitronium ion, but it is thermally stable, decomposing at about 180 °C (into NO2F and BF3). Dinitrogen pentoxide is relevant to the preparation of explosives. Atmospheric occurrence In the atmosphere, dinitrogen pentoxide is an important reservoir of the species that are responsible for ozone depletion: its formation provides a null cycle in which those species are temporarily held in an unreactive state. Mixing ratios of several parts per billion by volume have been observed in polluted regions of the nighttime troposphere. Dinitrogen pentoxide has also been observed in the stratosphere at similar levels, the reservoir formation having been postulated in considering the puzzling observations of a sudden drop in stratospheric NO2 levels above 50 °N, the so-called 'Noxon cliff'. Variations in N2O5 reactivity in aerosols can result in significant losses in tropospheric ozone, hydroxyl radicals, and NOx concentrations. Two important reactions of N2O5 in atmospheric aerosols are hydrolysis to form nitric acid and reaction with halide ions, particularly chloride, to form nitryl chloride molecules which may serve as precursors to reactive chlorine atoms in the atmosphere. Hazards N2O5 is a strong oxidizer that forms explosive mixtures with organic compounds and ammonium salts.
The decomposition of dinitrogen pentoxide produces the highly toxic nitrogen dioxide gas. References Cited sources Nitrogen oxides Acid anhydrides Acidic oxides Nitrates Nitronium compounds
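To connect the first-order rate law mentioned above to numbers, the short sketch below integrates the law for an assumed, purely illustrative rate constant and starting concentration; neither value is a measured datum from this article.

```python
import math

# Illustrative first-order decay of N2O5, e.g. in carbon tetrachloride solution.
# k and c0 are assumed values chosen only for demonstration.
k = 6.2e-4          # rate constant, 1/s (assumed)
c0 = 0.100          # initial concentration, mol/L (assumed)

half_life = math.log(2) / k        # t_1/2 = ln 2 / k, independent of c0
print(f"half-life ≈ {half_life:.0f} s")

# Closed-form integrated rate law: [N2O5](t) = [N2O5]_0 * exp(-k * t)
for t in range(0, 3601, 600):      # every 10 minutes for an hour
    c = c0 * math.exp(-k * t)
    print(f"t = {t:5d} s   [N2O5] = {c:.4f} mol/L")
```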
Dinitrogen pentoxide
[ "Chemistry" ]
1,586
[ "Nitronium compounds", "Nitrates", "Oxidizing agents", "Salts" ]
584,617
https://en.wikipedia.org/wiki/Small%20interfering%20RNA
Small interfering RNA (siRNA), sometimes known as short interfering RNA or silencing RNA, is a class of double-stranded non-coding RNA molecules, typically 20–24 base pairs in length, similar to microRNA (miRNA), and operating within the RNA interference (RNAi) pathway. It interferes with the expression of specific genes with complementary nucleotide sequences by degrading messenger RNA (mRNA) after transcription, preventing translation. It was discovered in 1998 by Andrew Fire at the Carnegie Institution for Science in Washington, D.C. and Craig Mello at the University of Massachusetts in Worcester. Structure Naturally occurring siRNAs have a well-defined structure that is a short (usually 20 to 24-bp) double-stranded RNA (dsRNA) with phosphorylated 5' ends and hydroxylated 3' ends with two overhanging nucleotides. The Dicer enzyme catalyzes production of siRNAs from long dsRNAs and small hairpin RNAs. siRNAs can also be introduced into cells by transfection. Since in principle any gene can be knocked down by a synthetic siRNA with a complementary sequence, siRNAs are an important tool for validating gene function and drug targeting in the post-genomic era. History In 1998, Andrew Fire at Carnegie Institution for Science in Washington DC and Craig Mello at University of Massachusetts in Worcester discovered the RNAi mechanism while working on gene expression in the nematode Caenorhabditis elegans. They won the Nobel Prize for their research with RNAi in 2006. siRNAs and their role in post-transcriptional gene silencing (PTGS) were discovered in plants by David Baulcombe's group at the Sainsbury Laboratory in Norwich, England and reported in Science in 1999. Thomas Tuschl and colleagues soon reported in Nature that synthetic siRNAs could induce RNAi in mammalian cells. In 2001, the expression of a specific gene was successfully silenced by introducing chemically synthesized siRNA into mammalian cells (Tuschl et al.) These discoveries led to a surge in interest in harnessing RNAi for biomedical research and drug development. Significant developments in siRNA therapies have been made with both organic (carbon-based) and inorganic (non-carbon-based) nanoparticles, which have been successful in drug delivery to the brain, offering promising methods to deliver therapeutics into human subjects. However, human applications of siRNA have faced significant limitations. One of these is off-targeting. There is also a possibility that these therapies can trigger innate immunity. Animal models have not been successful in accurately representing the extent of this response in humans. Hence, studying the effects of siRNA therapies has been a challenge. In recent years, siRNA therapies have been approved and new methods have been established to overcome these challenges. There are approved therapies available for commercial use and several currently in the pipeline awaiting approval. Mechanism The mechanism by which natural siRNA causes gene silencing through repression of translation occurs as follows: Long dsRNA (which can come from hairpin, complementary RNAs, and RNA-dependent RNA polymerases) is cleaved by an endo-ribonuclease called Dicer. Dicer cuts the long dsRNA to form short interfering RNA or siRNA; this is what enables the molecules to form the RNA-Induced Silencing Complex (RISC). Once siRNA enters the cell, it gets incorporated into other proteins to form the RISC. Once the siRNA is part of the RISC complex, the siRNA is unwound to form single-stranded siRNA.
The strand that is thermodynamically less stable due to its base pairing at the 5' end is chosen to remain part of the RISC complex. The single-stranded siRNA which is part of the RISC complex can now scan for and find a complementary mRNA. Once the single-stranded siRNA (part of the RISC complex) binds to its target mRNA, it induces mRNA cleavage. The mRNA is now cut and recognized as abnormal by the cell. This causes degradation of the mRNA and in turn no translation of the mRNA into amino acids and then proteins, thus silencing the gene that encodes that mRNA. siRNA is also similar to miRNA; however, miRNAs are derived from shorter stem-loop RNA products. miRNAs typically silence genes by repression of translation and have broader specificity of action, while siRNAs typically work with higher specificity by cleaving the mRNA before translation, with 100% complementarity. RNAi induction using siRNAs or their biosynthetic precursors Gene knockdown by transfection of exogenous siRNA is often unsatisfactory because the effect is only transient, especially in rapidly dividing cells. This may be overcome by creating an expression vector for the siRNA. The siRNA sequence is modified to introduce a short loop between the two strands. The resulting transcript is a short hairpin RNA (shRNA), which can be processed into a functional siRNA by Dicer in its usual fashion. Typical transcription cassettes use an RNA polymerase III promoter (e.g., U6 or H1) to direct the transcription of small nuclear RNAs (snRNAs) (U6 is involved in RNA splicing; H1 is the RNase component of human RNase P). It is theorized that the resulting siRNA transcript is then processed by Dicer. The gene knockdown efficiency can also be improved by using cell squeezing. The activity of siRNAs in RNAi is largely dependent on their ability to bind to the RNA-induced silencing complex (RISC). Binding of the duplex siRNA to RISC is followed by unwinding and cleavage of the sense strand with endonucleases. The remaining anti-sense strand-RISC complex can then bind to target mRNAs for initiating transcriptional silencing. RNA activation It has been found that dsRNA can also activate gene expression, a mechanism that has been termed "small RNA-induced gene activation" or RNAa. It has been shown that dsRNAs targeting gene promoters induce potent transcriptional activation of associated genes. RNAa was demonstrated in human cells using synthetic dsRNAs, termed "small activating RNAs" (saRNAs). It is currently not known how conserved RNAa is in other organisms. One report in the Aedes aegypti mosquito has shown there is some evidence for RNAa, which can be achieved by short or long dsRNAs targeting promoter regions. Post-transcriptional gene silencing The siRNA-induced post transcriptional gene silencing is initiated by the assembly of the RNA-induced silencing complex (RISC). The complex silences certain gene expression by cleaving the mRNA molecules encoding the target genes. To begin the process, one of the two siRNA strands, the guide strand (anti-sense strand), will be loaded into the RISC while the other strand, the passenger strand (sense strand), is degraded. Certain Dicer enzymes may be responsible for loading the guide strand into RISC. Then, the siRNA scans for and directs RISC to the perfectly complementary sequence on the mRNA molecules. The cleavage of the mRNA molecules is thought to be catalyzed by the Piwi domain of Argonaute proteins of the RISC.
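As a toy illustration of the base-pairing complementarity on which this target recognition relies, the sketch below derives an antisense guide strand as the reverse complement of a chosen mRNA target site. The 21-nt sequence is an invented example, and real siRNA design involves many further rules (GC content, thermodynamic asymmetry, overhangs, off-target screening) that are ignored here.

```python
# Toy sketch: derive an antisense (guide) strand for a chosen mRNA target site.
# The target sequence is an invented example, not a validated siRNA site.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence written 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

target_site = "AUGGCUACGUAGCUAGCUAAG"          # hypothetical 21-nt site, 5'->3'
guide_strand = reverse_complement(target_site)  # antisense strand, 5'->3'

print("target (mRNA) 5'->3':", target_site)
print("guide strand  5'->3':", guide_strand)
# Synthetic duplexes additionally carry short 3' overhangs (e.g. two nucleotides),
# mirroring the structure described earlier in the article.
```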
The mRNA molecule is then cut precisely by cleaving the phosphodiester bond between the target nucleotides that are paired to siRNA residues 10 and 11, counting from the 5' end. This cleavage results in mRNA fragments that are further degraded by cellular exonucleases. The 5' fragment is degraded from its 3' end by the exosome, while the 3' fragment is degraded from its 5' end by 5'-3' exoribonuclease 1 (XRN1). Dissociation of the target mRNA strand from RISC after the cleavage allows more mRNA to be silenced. This dissociation process is likely to be promoted by extrinsic factors driven by ATP hydrolysis. Sometimes cleavage of the target mRNA molecule does not occur. In some cases, the endonucleolytic cleavage of the phosphodiester backbone may be suppressed by mismatches of siRNA and target mRNA near the cleavage site. Other times, the Argonaute proteins of the RISC lack endonuclease activity even when the target mRNA and siRNA are perfectly paired. In such cases, gene expression will be silenced by an miRNA-induced mechanism instead. Piwi-interacting RNAs are responsible for the silencing of transposons and are not siRNAs. PIWI-interacting RNAs (piRNAs) are a recently discovered class of small non-coding RNAs (ncRNAs) with a length of 21-35 nucleotides. They play a role in gene expression regulation, transposon silencing, and viral infection inhibition. Once considered the "dark matter" of ncRNAs, piRNAs have emerged as important players in multiple cellular functions in different organisms.
Transcriptional gene silencing
Many model organisms, such as plants (Arabidopsis thaliana), yeast (Saccharomyces cerevisiae), flies (Drosophila melanogaster) and worms (C. elegans), have been used to study small non-coding RNA-directed transcriptional gene silencing. In human cells, RNA-directed transcriptional gene silencing was observed a decade ago when exogenous siRNAs silenced a transgenic elongation factor 1 α promoter driving a Green Fluorescent Protein (GFP) reporter gene. The main mechanisms of transcriptional gene silencing (TGS) involving the RNAi machinery include DNA methylation, histone post-translational modifications, and subsequent chromatin remodeling around the target gene into a heterochromatic state. siRNAs can be incorporated into an RNA-induced transcriptional silencing (RITS) complex. An active RITS complex will trigger the formation of heterochromatin around DNA matching the siRNA, effectively silencing the genes in that region of the DNA.
Applications: Allele-specific gene silencing
One of the potent applications of siRNAs is the ability to distinguish a target from a non-target sequence differing by a single nucleotide. This approach is considered therapeutically crucial for silencing dominant gain-of-function (GOF) disorders, where the disease-causing mutant allele differs from the wild-type allele by a single nucleotide (nt). siRNAs with the capability to distinguish a single-nt difference are termed allele-specific siRNAs. ASP-RNAi is an innovative category of RNAi with the objective of suppressing the dominant mutant allele while sparing expression of the corresponding normal allele, exploiting the single-nucleotide difference between the two. ASP-siRNAs are potentially a novel and better remedial alternative for the treatment of autosomal dominant genetic disorders, especially in cases where wild-type allele expression is crucial for organism survival, such as Huntington disease (HD), DYT1 dystonia (Gonzalez-Alegre et al.
2003, 2005), Alzheimer's disease (Sierant et al. 2011), Parkinson's disease (PD) (Takahashi et al. 2015), amyotrophic lateral sclerosis (ALS) (Schwarz et al. 2006), and Machado–Joseph disease (Alves et al. 2008). Their therapeutic potential has also been assessed for various skin disorders like epidermolysis bullosa simplex (Atkinson et al. 2011), epidermolytic palmoplantar keratoderma (EPPK) (Lyu et al. 2016), and lattice corneal dystrophy type I (LCDI) (Courtney et al. 2014).
Challenges: avoiding nonspecific effects
RNAi intersects with a number of other pathways; it is therefore not surprising that, on occasion, nonspecific effects are triggered by the experimental introduction of an siRNA. When a mammalian cell encounters a double-stranded RNA such as an siRNA, it may mistake it for a viral by-product and mount an immune response. Furthermore, because structurally related microRNAs modulate gene expression largely via incompletely complementary base-pair interactions with a target mRNA, the introduction of an siRNA may cause unintended off-targeting. Chemical modifications of siRNA may also alter its thermodynamic properties and result in a loss of single-nucleotide specificity.
Innate immunity
Introduction of too much siRNA can result in nonspecific events due to activation of innate immune responses. Most evidence to date suggests that this is probably due to activation of the dsRNA sensor PKR, although retinoic acid-inducible gene I (RIG-I) may also be involved. The induction of cytokines via toll-like receptor 7 (TLR7) has also been described. Chemical modification of siRNA is employed to reduce the activation of the innate immune response for gene function and therapeutic applications. One promising method of reducing the nonspecific effects is to convert the siRNA into a microRNA. MicroRNAs occur naturally, and by harnessing this endogenous pathway it should be possible to achieve similar gene knockdown at comparatively low concentrations of the resulting siRNAs. This should minimize nonspecific effects.
Off-targeting
Off-targeting is another challenge to the use of siRNAs as a gene knockdown tool. Here, genes with incomplete complementarity are inadvertently downregulated by the siRNA (in effect, the siRNA acts as an miRNA), leading to problems in data interpretation and potential toxicity. This, however, can be partly addressed by designing appropriate control experiments, and siRNA design algorithms are currently being developed to produce siRNAs free from off-targeting. Genome-wide expression analysis, e.g., by microarray technology, can then be used to verify this and further refine the algorithms. A 2006 paper from the laboratory of Dr. Khvorova implicates 6- or 7-basepair-long stretches from position 2 onward in the siRNA matching with 3'UTR regions in off-targeted genes. A tool for siRNA off-target prediction is available at http://crdd.osdd.net/servers/aspsirna/asptar.php and is published as the ASPsiRNA resource.
Adaptive immune responses
Plain RNAs may be poor immunogens, but antibodies can easily be created against RNA-protein complexes, and such antibodies are found in many autoimmune diseases. There have not yet been reports of antibodies against siRNA bound to proteins. Some methods for siRNA delivery attach polyethylene glycol (PEG) to the oligonucleotide, reducing excretion and improving circulating half-life.
However recently a large Phase III trial of PEGylated RNA aptamer against factor IX had to be discontinued by Regado Biosciences because of a severe anaphylactic reaction to the PEG part of the RNA. This reaction led to death in some cases and raises significant concerns about siRNA delivery when PEGylated oligonucleotides are involved. Saturation of the RNAi machinery siRNAs transfection into cells typically lowers the expression of many genes, however, the upregulation of genes is also observed. The upregulation of gene expression can partially be explained by the predicted gene targets of endogenous miRNAs. Computational analyses of more than 150 siRNA transfection experiments support a model where exogenous siRNAs can saturate the endogenous RNAi machinery, resulting in the de-repression of endogenous miRNA-regulated genes. Thus, while siRNAs can produce unwanted off-target effects, i.e. unintended downregulation of mRNAs via a partial sequence match between the siRNA and target, the saturation of RNAi machinery is another distinct nonspecific effect, which involves the de-repression of miRNA-regulated genes and results in similar problems in data interpretation and potential toxicity. Chemical modification siRNAs have been chemically modified to enhance their therapeutic properties, Short interfering RNA (siRNA) must be delivered to the site of action in the cells of target tissues in order for RNAi to fulfill its therapeutic promise. A detailed database of all such chemical modifications is manually curated as siRNAmod in scientific literature. Chemical modification of siRNA can also inadvertently result in loss of single-nucleotide specificity. Therapeutic applications and challenges Given the ability to knock down, in essence, any gene of interest, RNAi via siRNAs has generated a great deal of interest in both basic and applied biology. One of the biggest challenges to siRNA and RNAi based therapeutics is intracellular delivery. siRNA also has weak stability and pharmacokinetic behavior. Delivery of siRNA via nanoparticles has shown promise. siRNA oligos in vivo are vulnerable to degradation by plasma and tissue endonucleases and exonucleases and have shown only mild effectiveness in localized delivery sites, such as the human eye. Delivering pure DNA to target organisms is challenging because its large size and structure prevents it from diffusing readily across membranes. siRNA oligos circumvent this problem due to their small size of 21-23 oligos. This allows delivery via nano-scale delivery vehicles called nanovectors. A good nanovector for siRNA delivery should protect siRNA from degradation, enrich siRNA in the target organ and facilitate the cellular uptake of siRNA. The three main groups of siRNA nanovectors are: lipid based, non-lipid organic-based, and inorganic. Lipid based nanovectors are excellent for delivering siRNA to solid tumors, but other cancers may require different non-lipid based organic nanovectors such as cyclodextrin based nanoparticles. siRNAs delivered via lipid based nanoparticles have been shown to have therapeutic potential for central nervous system (CNS) disorders. Central nervous disorders are not uncommon, but the blood brain barrier (BBB) often blocks access of potential therapeutics to the brain. siRNAs that target and silence efflux proteins on the BBB surface have been shown to create an increase in BBB permeability. siRNA delivered via lipid based nanoparticles is able to cross the BBB completely. 
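To make the seed-based off-target screening idea discussed above concrete, the following is a minimal, hypothetical sketch rather than any published design algorithm: it extracts the seed region (positions 2-8) of a candidate 21-nt guide strand and flags transcripts whose 3'UTR contains a site complementary to that seed. The guide sequence, gene names and 3'UTR fragments are all invented for illustration.

```python
# Minimal, illustrative seed-match screen for siRNA off-target candidates.
# The guide "seed" (positions 2-8 from the 5' end) pairs with the target RNA,
# so we search candidate 3'UTRs for the reverse complement of that seed.
# All sequences below are invented for demonstration purposes.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def seed(guide: str, start: int = 2, end: int = 8) -> str:
    """Extract the seed region (1-based positions 2-8) of a guide strand."""
    return guide[start - 1:end]

def off_target_hits(guide: str, utr_by_gene: dict) -> list:
    """Return genes whose 3'UTR contains a site complementary to the seed."""
    site = reverse_complement(seed(guide))   # sequence the seed would pair with
    return [gene for gene, utr in utr_by_gene.items() if site in utr]

if __name__ == "__main__":
    guide = "UAGCGUAAUCGGCUAAGCUAU"           # hypothetical 21-nt guide strand
    utrs = {                                  # hypothetical 3'UTR fragments
        "GENE_A": "AAUCGUAGCGUAAUCGGAAAC",
        "GENE_B": "GGGCCCAUUACGCUAGCCGAUUA",  # contains a seed-complementary site
    }
    print("Potential off-targets:", off_target_hits(guide, utrs))
```

Real design pipelines additionally weigh the thermodynamic asymmetry of the duplex ends, GC content, and genome-wide alignments, none of which this toy screen attempts.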
A huge difficulty in siRNA delivery is the problem of off-targeting. Since genes are read in both directions, there exists a possibility that even if the intended antisense siRNA strand is read and knocks out the target mRNA, the sense siRNA strand may target another protein involved in another function. Phase I results of the first two therapeutic RNAi trials (indicated for age-related macular degeneration, aka AMD) reported at the end of 2005 that siRNAs are well tolerated and have suitable pharmacokinetic properties. In a phase 1 clinical trial, 41 patients with advanced cancer metastasised to liver were administered RNAi delivered through lipid nanoparticles. The RNAi targeted two genes encoding key proteins in the growth of the cancer cells, vascular endothelial growth factor, (VEGF), and kinesin spindle protein (KSP). The results showed clinical benefits, with the cancer either stabilized after six months, or regression of metastasis in some of the patients. Pharmacodynamic analysis of biopsy samples from the patients revealed the presence of the RNAi constructs in the samples, proving that the molecules reached the intended target. Proof of concept trials have indicated that Ebola-targeted siRNAs may be effective as post-exposure prophylaxis in humans, with 100% of non-human primates surviving a lethal dose of Zaire Ebolavirus, the most lethal strain. Legal categorization and legal issues in a near future Currently, SiRNA are currently chemically synthesized and so, are legally categorized inside EU and in USA as simple medicinal products. But as bioengineered siRNA (BERAs) are in development, these would be classified as biological medicinal products, at least in EU. The development of the BERAs technology raises the question of the categorization of drugs having the same mechanism of action but being produced chemically or biologically. This lack of consistency should be addressed. Intracellular delivery There is great potential for RNA interference (RNAi) to be used therapeutically to reversibly silence any gene. For RNAi to realize its therapeutic potential, small interfering RNA (siRNA) must be delivered to the site of action in the cells of target tissues. But finding safe and efficient delivery mechanisms is a major obstacle to achieving the full potential of siRNA-based therapies.  Unmodified siRNA is unstable in the bloodstream, has the potential to cause immunogenicity, and has difficulty readily navigating cell membranes. As a result, chemical alterations and/or delivery tools are needed to safely transfer siRNA to its site of action. There are three main techniques of delivery for siRNA that differ on efficiency and toxicity. Transfection In this technique siRNA first must be designed against the target gene. Once the siRNA is configured against the gene it has to be effectively delivered through a transfection protocol. Delivery is usually done by cationic liposomes, polymer nanoparticles, and lipid conjugation. This method is advantageous because it can deliver siRNA to most types of cells, has high efficiency and reproducibility, and is offered commercially. The most common commercial reagents for transfection of siRNA are Lipofectamine and Neon Transfection. However, it is not compatible with all cell types and has low in vivo efficiency. Electroporation Electrical pulses are also used to intracellularly deliver siRNA into cells. The cell membrane is made of phospholipids which makes it susceptible to an electric field. 
When quick but powerful electrical pulses are initiated the lipid molecules reorient themselves, while undergoing thermal phase transitions because of heating. This results in the making of hydrophilic pores and localized perturbations in the lipid bilayer cell membrane also causing a temporary loss of semipermeability. This allows for the escape of many intracellular contents, such as ions and metabolites as well as the simultaneous uptake of drugs, molecular probes, and nucleic acids. For cells that are difficult to transfect electroporation is advantageous however cell death is more probable under this technique. This method has been used to deliver siRNA targeting VEGF into the xenografted tumors in nude mice, which resulted in a significant suppression of tumor growth. Viral-mediated delivery The gene silencing effects of transfected designed siRNA are generally transient, but this difficulty can be overcome through an RNAi approach. Delivering this siRNA from DNA templates can be done through several recombinant viral vectors based on retrovirus, adeno-associated virus, adenovirus, and lentivirus. The latter is the most efficient virus that stably delivers siRNA to target cells as it can transduce nondividing cells as well as directly target the nucleus. These specific viral vectors have been synthesized to effectively facilitate siRNA that is not viable for transfection into cells. Another aspect is that in some cases synthetic viral vectors can integrate siRNA into the cell genome which allows for stable expression of siRNA and long-term gene knockdown. This technique is advantageous because it is in vivo and effective for difficult to transfect cell. However problems arise because it can trigger antiviral responses in some cell types leading to mutagenic and immunogenic effects. This method has potential use in gene silencing of the central nervous system for the treatment of Huntington's disease. Therapies A decade after the discovery of RNAi mechanism in 1993, the pharmaceutical sector heavily invested in the research and development of siRNA therapy. There are several advantages that this therapy has over small molecules and antibodies. It can be administered quarterly or every six months. Another advantage is that, unlike small molecule and monoclonal antibodies that need to recognize specific conformation of a protein, siRNA functions by Watson-Crick basepairing with mRNA. Therefore, any target molecule that needs to be treated with high affinity and specificity can be selected if the right nucleotide sequence is available. One of the biggest challenges researchers needed to overcome was the identification and establishment of a delivery system through which the therapies would enter the body. And that the immune system often mistakes the RNAi therapies as remnants of infectious agents, which can trigger an immune response. Animal models did not accurately represent the degree of immune response that was seen in humans and despite the promise in the treatment investors divested away from RNAi. However, there were a few companies that continued with the development of RNAi therapy for humans. Alnylam Pharmaceuticals, Sirna Therapeutics and Dicerna Pharmaceuticals are few of the companies still working on bringing RNAi therapies to market. It was learned that almost all siRNA therapies administered in the bloodstream accumulated in the liver. That is why most of the early drug targets were diseases that affected the liver. 
Repeated developmental work also shed light on improving the chemical composition of the RNA molecule to reduce the immune response, subsequently causing little to no side effects. Listed below are some of approved therapies or therapies in pipeline. Alnylam Pharmaceuticals In 2018, Alnylam Pharmaceuticals became the first company to have a siRNA therapy approved by the FDA. Onpattro (patisiran) was approved for the treatment of polyneuropathy of hereditary transthyretin-mediated (hATTR) amyloidosis in adults. hATTR is a rare, progressively debilitating condition. During hATTR amyloidosis, misfolded transthyretin (TTR) protein is deposited in the extracellular space. Under typical folding conditions, TTR tetramers are made up of four monomers. Hereditary ATTR amyloidosis is caused by a fault or mutation in the transthyretin (TTR) gene which is inherited. Changing just one amino-acid changes the tetrameric transthyretin proteins, resulting in unstable tetrameric transthyretin protein that aggregates in monomers and forms insoluble extracellular amyloid deposits. Amyloid buildup in various organ systems causes cardiomyopathy, polyneuropathy, gastrointestinal dysfunction. It affects 50,000 people worldwide. To deliver the drug directly to the liver, siRNA is encased in a lipid nanoparticle. The siRNA molecule halts the production of amyloid proteins by interfering with the RNA production of abnormal TTR proteins. This prevents the accumulation of these proteins in different organs of the body and helps the patients manage this disease. Traditionally, liver transplantation has been the standard treatment for hereditary transthyretin amyloidosis, however its effectiveness may be limited by the persistent deposition of wild-type transthyretin amyloid after transplantation. There are also small molecule medications that provide temporary relief. Before Onpattro was released, the treatment options for hATTR were limited. After the approval of Onpattro, FDA awarded Alnylam with the Breakthrough Therapy Designation, which is given to drugs that are intended to treat a serious condition and are a substantial improvement over any available therapy. It was also awarded Orphan Drug Designations given to those treatments that are intended to safely treat conditions affecting less than 200,000 people. Along with Onpattro, another RNA interference therapeutic drug has also been discovered (Partisiran) which has property of inhibiting hepatic synthesis of transthyretin. Target messenger RNA (mRNA) is cleaved as a result by tiny interfering RNAs coupled to the RNA-induced silencing complex. Patisiran, an investigational RNAi therapeutic drug, uses this process to decrease the production of mutant and wild-type transthyretin by cleaving on 3-untranslated region of transthyretin mRNA. In 2019, FDA approved the second RNAi therapy, Givlaari (givosiran) used to treat acute hepatic porphyria (AHP). The disease is caused due to the accumulation of toxic porphobilinogen (PBG) molecules which are formed during the production of heme. These molecules accumulate in different organs and this can lead to the symptoms or attacks of AHP. Givlaari is an siRNA drug that downregulates the expression of aminolevulinic acid synthase 1 (ALAS1), a liver enzyme involved in an early step in heme production. The downregulation of ALAS1 lowers the levels of neurotoxic intermediates that cause AHP symptoms. Years of research has led to a greater understanding of siRNA therapies beyond those affecting the liver. 
As of 2019, Alnylam Pharmaceuticals was involved in therapies that may treat amyloidosis and CNS disorders like Huntington's disease and Alzheimer's disease. They have also partnered with Regeneron Pharmaceuticals to develop therapies for CNS, eye and liver diseases. As of 2020, ONPATTRO and GIVLAARI, were available for commercial application, and two siRNAs, lumasiran (ALN-GO1) and inclisiran, have been submitted for new drug application to the FDA. Several siRNAs are undergoing phase 3 clinical studies, and more candidates are in the early developmental stage. In 2020, Alnylam and Vir pharmaceuticals announced a partnership and have started working on a RNAi therapy that would treat severe cases of COVID-19. Other companies that have had success in developing a pipeline of siRNA therapies are Dicerna Pharmaceuticals, partnered Eli Lilly and Company and Arrowhead Pharmaceuticals partnered with Johnson and Johnson. Several other big pharmaceutical companies such as Amgen and AstraZeneca have also invested heavily in siRNA therapies as they see the potential success of this area of biological drugs. See also Gene knockdown Gene silencing Oligonucleotide synthesis EsiRNA NatsiRNA Viroid VIRsiRNAdb CRISPR Dharmacon Persomics References Further reading External links RNA small interfering RNA Molecular biology Non-coding RNA
Small interfering RNA
[ "Chemistry", "Biology" ]
6,346
[ "Biochemistry", "Molecular biology" ]
584,651
https://en.wikipedia.org/wiki/Trusted%20third%20party
In cryptography, a trusted third party (TTP) is an entity which facilitates interactions between two parties who both trust the third party; the third party reviews all critical transaction communications between the parties, based on the ease of creating fraudulent digital content. In TTP models, the relying parties use this trust to secure their own interactions. TTPs are common in any number of commercial transactions and in cryptographic digital transactions as well as cryptographic protocols, for example, a certificate authority (CA) would issue a digital certificate to one of the two parties in the next example. The CA then becomes the TTP to that certificate's issuance. Likewise transactions that need a third party recordation would also need a third-party repository service of some kind. 'Trusted' means that a system needs to be trusted to act in your interests, but it has the option (either at will or involuntarily) to act against your interests. 'Trusted' also means that there is no way to verify if that system is operating in your interests, hence the need to trust it. Corollary: if a system can be verified to operate in your interests, it would not need your trust. And if it can be shown to operate against your interests one would not use it. An example Suppose Alice and Bob wish to communicate securely – they may choose to use cryptography. Without ever having met Bob, Alice may need to obtain a key to use to encrypt messages to him. In this case, a TTP is a third party who may have previously seen Bob (in person), or is otherwise willing to vouch for that this key (typically in a public key certificate) belongs to the person indicated in that certificate, in this case, Bob. Let's call this third person Trent. Trent gives Bob's key to Alice, who then uses it to send secure messages to Bob. Alice can trust this key to be Bob's if she trusts Trent. In such discussions, it is simply assumed that she has valid reasons to do so (of course there is the issue of Alice and Bob being able to properly identify Trent as Trent and not someone impersonating Trent). Actual practice How to arrange for (trustable) third parties of this type is an unsolved problem. So long as there are motives of greed, politics, revenge, etc., those who perform (or supervise) work done by such an entity will provide potential loopholes through which the necessary trust may leak. The problem, perhaps an unsolvable one, is ancient and notorious. That large impersonal corporations make promises of accuracy in their attestations of the correctness of a claimed public-key-to-user correspondence (e.g., by a certificate authority as a part of a public key infrastructure) changes little. As in many environments, the strength of trust is as weak as its weakest link. When the infrastructure of a trusted CA is breached the whole chain of trust is broken. The 2011 incident at CA DigiNotar broke the trust of the Dutch government's PKI, and is a textbook example of the weaknesses of the system and the effects of it. As Bruce Schneier has pointed out, after the 2013 mass surveillance disclosures, no third party should in fact ever be trusted. The PGP cryptosystem includes a variant of the TTP in the form of the web of trust. PGP users digitally sign each other's certificates and are instructed to do so only if they are confident the person and the public key belong together. A key signing party is one way of combining a get-together with some certificate signing. 
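The Alice-Bob-Trent arrangement described above can be sketched in a few lines of code. This is only an illustration of the idea, not of any real certificate format: it assumes the third-party pyca/cryptography package, and the names and message layout are invented for the example. Trent signs a statement binding the name "Bob" to Bob's public key, and Alice accepts Bob's key only if Trent's signature verifies.

```python
# Toy illustration of a trusted third party (TTP) vouching for a public key.
# Requires the third-party "cryptography" package (pyca/cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Trent (the TTP) and Bob each hold a long-term key pair.
trent_key = Ed25519PrivateKey.generate()
bob_key = Ed25519PrivateKey.generate()

# Trent signs a statement binding the name "Bob" to Bob's public key
# (a stand-in for issuing a certificate).
bob_pub_bytes = bob_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
statement = b"subject=Bob;key=" + bob_pub_bytes
voucher = trent_key.sign(statement)

# Alice already trusts Trent's public key; she accepts Bob's key only if
# Trent's signature over the binding statement verifies.
trent_pub = trent_key.public_key()
try:
    trent_pub.verify(voucher, statement)
    print("Alice accepts this key as Bob's (vouched for by Trent).")
except InvalidSignature:
    print("Alice rejects the key: Trent's voucher does not check out.")
```

In a real public key infrastructure the signed statement would be an X.509 certificate, and Alice would also check validity periods and revocation status.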
Nonetheless, doubt and caution remain sensible as nothing prevents some users from being careless in signing others' certificates. Trusting humans, or their organizational creations, can be risky. For example, in financial matters, bonding companies have yet to find a way to avoid losses in the real world. Parallels outside cryptography Outside cryptography, the law in many places makes provision for trusted third parties upon whose claims one may rely. For instance, a notary public acts as a trusted third party for authenticating or acknowledging signatures on documents. A TTP's role in cryptography is much the same, at least in principle. A certificate authority partially fills such a notary function, attesting to the identity of a key's owner, but not to whether the party was mentally aware or was apparently free from duress (nor does the certificate authority attest to the date of the signature). See also Direct Anonymous Attestation Double-spending Trusted computing base References Public-key cryptography Computational trust
Trusted third party
[ "Engineering" ]
950
[ "Cybersecurity engineering", "Computational trust" ]
584,652
https://en.wikipedia.org/wiki/Thermomicrobia
The Thermomicrobia is a group of thermophilic green non-sulfur bacteria. Based on species Thermomicrobium roseum (type species) and Sphaerobacter thermophilus, this bacteria class has the following description: The class Thermomicrobia subdivides into two orders with validly published names: Thermomicrobiales Garrity and Holt 2001 and Sphaerobacterales Stackebrandt, Rainey and Ward-Rainey 1997. Gram negative. Pleomorphic, non-motile, non-spore-forming rods. Non-sporulating. No diamino acid present. No peptidoglycan in significant amount. Atypical proteinaceous cell walls. Hyper-thermophilic, optimum growth temperature at 70-75 °C. Obligatory aerobic and chemoorganotrophic. As thermophilic bacteria, members of this class are usually found in environments which are distant from human activity. However, they have features like improved growth in antibiotics and CO oxidizing activity, making them interesting topics of research (e.g. for biotechnology application). History In 1973, a strain of rose-pink thermophilic bacteria was isolated from Toadstool Spring in Yellowstone National Park, which was later named Thermomicrobium roseum and proposed as a novel species of the novel genus Thermomicrobium. At that time the genus was categorized under family Achromobacteraceae, but it became a distinct phylum by 2001. In 2004, it was proposed, on the basis of an analysis of genetic affiliations, that the Thermomicrobia should more properly be reclassified as a class belonging to the phylum Chloroflexota (formerly Chloroflexi). The bacteria Sphaerobacter thermophilus originally described as an Actinobacteria is now considered a Thermomicrobia. In the same year, another strain of rose-pink thermophilic bacteria was isolated from Yellowstone National Park, which was named Thermobaculum terrenum. Later analysis based on genome put this species under Thermomicrobia class. However, the current standing of Thermobaculum terrenum is disputed. In 2012, a thermo-tolerant nitrite-oxidizing bacterium was isolated from a bioreactor, which was named Nitrolancetus hollandica and proposed as a novel species later in 2014. While it has nitrite-oxidizing activity, which is unique in the Thermomicrobia class, it is placed under the Thermomicrobia class based on 16s rRNA phylogeny. In 2014, two thermophilic, Gram-positive, rod-shaped, non-spore-forming bacteria (strains KI3T and KI4T) isolated from geothermally heated biofilms growing on a tumulus in the Kilauea Iki pit crater on the flank of Kilauea Volcano (Hawai'i) were proposed as representatives of new species based on 16s rRNA phylogeny. The KI3T strain, later named as Thermomicrobium carboxidum, is closely related to Thermomicrobium roseum. The KI4T strain, later named as Thermorudis peleae, was proposed as a type strain of new genus Thermorudis. In 2015, a thermophilic bacteria strain WKT50.2 isolated from geothermal soil in Waitike (New Zealand) was proposed to be a novel species, later named Thermorudis pharmacophila. Phylogenic analysis based on 16s rRNA place it within Thermomicrobia class, as close relative to Thermorudis peleae. Characteristics Living environment Members of the class Thermomicrobia are broadly distributed across a wide range of both aquatic and terrestrial habitats. 
Thermomicrobium roseum was found in geothermally heated hot springs, Thermorudis pharmacophila and Thermobaculum terrenum in heated soils, and Thermomicrobium carboxidum and Thermorudis peleae in heated sediments. In addition, Sphaerobacter thermophilus was found in sewage sludge that went through thermophilic treatment. The common features of their habitats include temperatures of around 65–75 °C and a pH of around 6.0–8.0 (except for Nitrolancea hollandica, which grows at around 40 °C).
Metabolism
Members of the class Thermomicrobia vary in their basic metabolism. Nitrolancetus hollandica has nitrifying activity that utilizes NO2− as an energy source, which is unique in the whole Chloroflexota phylum. Thermomicrobium spp. and Sphaerobacter thermophilus have constitutive CO-oxidizing activity not found in other species in this class. However, species of this class do share some features, as listed below: All members except Thermobaculum terrenum are unable to utilize some common monosaccharides (e.g. glucose, fructose) as a sole carbon source. The mechanisms behind this inability are currently unknown.
Antibiotic resistance
Members of the class Thermomicrobia exhibit a certain level of resistance against metronidazole and/or trimethoprim, which are clinically relevant for humans. Thermomicrobium carboxidum and Thermorudis peleae show resistance against both of those antibiotics, while Sphaerobacter thermophilus shows resistance against only metronidazole. Interestingly, Thermomicrobium roseum and Thermorudis pharmacophila show increased growth in the presence of both metronidazole and trimethoprim, a rare trait even among antibiotic-resistant bacteria. The mechanisms behind this are currently undocumented, and further study is required on this topic.
Cell envelope structure
Members of the class Thermomicrobia give varying Gram-staining results. Thermomicrobium roseum, Sphaerobacter thermophilus and Thermorudis pharmacophila are reported to be Gram-negative and to have a typical layered diderm cell envelope structure. However, their cell envelope composition is atypical compared to that of typical Gram-negative bacteria. The cell envelope of Thermomicrobium roseum lacks a significant amount of peptidoglycan, which is fundamental for typical Gram-negative bacteria, while being rich in protein. The membrane lipids of Thermomicrobium roseum are mostly long-chain diols instead of the glycerol-based lipids commonly found in bacteria. The same feature was found in Sphaerobacter thermophilus and Thermorudis pharmacophila. It was suggested that the high-protein and diol-based lipid composition is responsible for the heat resistance of these bacteria. Meanwhile, other members of the class Thermomicrobia are reported to be Gram-positive and to have a typical monoderm cell envelope. There are some possible explanations of the inconsistent Gram-staining results within the class. For Thermorudis pharmacophila, a possible explanation suggested by Houghton et al. is that it is actually an atypical monoderm bacterium, because its cell envelope contains amino acids usually associated with Gram-positive bacteria, it reacts to KOH, vancomycin and ampicillin, and it lacks genes responsible for diderm formation. It is also suggested that further study is required to resolve this problem, since inconsistent reports of cell envelope structure are found across the whole Chloroflexota phylum.
Phylogeny Taxonomy The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). Order Sphaerobacterales Stackebrandt, Rainey & Ward-Rainey 1997 Family Sphaerobacteraceae Stackebrandt, Rainey & Ward-Rainey 1997 Genus Sphaerobacter Demharter et al. 1989 S. thermophilus Demharter et al. 1989 Genus Nitrolancea Sorokin et al. 2014 N. hollandica Sorokin et al. 2014 "Ca. N. copahuensis" Spieck et al. 2020 Order Thermomicrobiales Garrity & Holt 2002 Family Thermomicrobiaceae Garrity & Holt 2002 Genus Thermalbibacter Zhao et al. 2023 T. longus Zhao et al. 2023 Genus Thermomicrobium Jackson, Ramaley & Meinschein 1973 T. carboxidum King & King 2014 T. roseum Jackson, Ramaley & Meinschein 1973 Genus Thermorudis King & King 2014 T. peleae King & King 2014 T. pharmacophila Houghton et al. 2015 See also List of bacteria genera List of bacterial orders Notes References Bacteria Bacteria classes
Thermomicrobia
[ "Biology" ]
1,919
[ "Prokaryotes", "Microorganisms", "Bacteria" ]
584,777
https://en.wikipedia.org/wiki/Thermophobia
Thermophobia (adjective: thermophobic) is intolerance for high temperatures by either inorganic materials or organisms. The term has a number of specific usages. In pharmacy, a thermophobic foam consisting of 0.1% betamethasone valerate was found to be at least as effective as conventional remedies for treating dandruff. In addition, the foam is non-greasy and does not irritate the scalp. Another use of thermophobic material is in treating hyperhidrosis of the axilla and the palm: a thermophobic foam named Bettamousse, developed by Mipharm, an Italian company, was found to treat hyperhidrosis effectively. In biology, some bacteria are thermophobic, such as Mycobacterium leprae, which causes leprosy. A thermophobic response in living organisms is a negative response to higher temperatures. In physics, thermophobia is the motion of particles in mixtures (solutions, suspensions, etc.) towards areas of lower temperature, a particular case of thermophoresis. In medicine, thermophobia refers to a sensory dysfunction, a sensation of abnormal heat, which may be associated with, e.g., hyperthyroidism. See also Heat intolerance References Temperature Physiology
Thermophobia
[ "Physics", "Chemistry", "Biology" ]
285
[ "Scalar physical quantities", "Thermodynamic properties", "Temperature", "Physical quantities", "Physiology", "SI base quantities", "Intensive quantities", "Thermodynamics", "Wikipedia categories named after physical quantities" ]
584,820
https://en.wikipedia.org/wiki/Source%20port
A source port is a software project based on the source code of a game engine that allows the game to be played on operating systems or computing platforms with which the game was not originally compatible. Description Source ports are often created by fans after the original developer hands over the maintenance support for a game by releasing its source code to the public (see List of commercial video games with later released source code). In some cases, the source code used to create a source port must be obtained through reverse engineering, in situations where the original source was never formally released by the game's developers. The term was coined after the release of the source code to Doom. Due to copyright issues concerning the sound library used by the original DOS version, id Software released only the source code to the Linux version of the game. Since the majority of Doom players were DOS users the first step for a fan project was to port the Linux source code to DOS. A source port typically only includes the engine portion of the game and requires that the data files of the game in question already be present on users' systems. Source ports share the similarity with unofficial patches that both don't change the original gameplay as such projects are by definition mods. However many source ports add support for gameplay mods, which is usually optional (e.g. DarkPlaces consists of a source port engine and a gameplay mod that are even distributed separately). While the primary goal of any source port is compatibility with newer hardware, many projects support other enhancements. Common examples of additions include support for higher video resolutions and different aspect ratios, hardware accelerated renderers (OpenGL and/or Direct3D), enhanced input support (including the ability to map controls onto additional input devices), 3D character models (in case of 2.5D games), higher resolution textures, support to replace MIDI with digital audio (MP3, Ogg Vorbis, etc.), and enhanced multiplayer support using the Internet. Several source ports have been created for various games specifically to address online multiplayer support. Most older games were not created to take advantage of the Internet and the low latency, high bandwidth Internet connections available to computer gamers today. Furthermore, old games may use outdated network protocols to create multiplayer connections, such as IPX protocol, instead of Internet Protocol. Another problem was games that required a specific IP address for connecting with another player. This requirement made it difficult to quickly find a group of strangers to play with — the way that online games are most commonly played today. To address this shortcoming, specific source ports such as Skulltag added "lobbies", which are basically integrated chat rooms in which players can meet and post the location of games they are hosting or may wish to join. Similar facilities may be found in newer games and online game services such as Valve's Steam, Blizzard's battle.net, and GameSpy Arcade. Alternatives If the source code of a software is not available, alternative approaches to achieve portability are Emulation, Engine remakes, and Static recompilation. Notable source ports See also Enhanced remake Game engine recreation Static recompilation Unofficial patch List of commercial video games with later released source code Fork (software development) References External links Software maintenance Software release Unofficial adaptations
Source port
[ "Engineering" ]
660
[ "Software engineering", "Software maintenance" ]
584,842
https://en.wikipedia.org/wiki/Bellatrix
Bellatrix is the third-brightest star in the constellation of Orion, positioned 5° west of the red supergiant Betelgeuse (Alpha Orionis). It has the Bayer designation γ Orionis, which is Latinized to Gamma Orionis. With a slightly variable magnitude of around 1.6, it is typically the 25th-brightest star in the night sky. Located at a distance of 250 light-years from the Sun, it is a blue giant star around 7.7 times as massive as the sun with 5.75 times its diameter. Nomenclature The traditional name Bellatrix is from the Latin bellātrix "female warrior". It first appeared in the works of Abu Ma'shar al-Balkhi and Johannes Hispalensis, where it originally referred to Capella, but was transferred to Gamma Orionis by the Vienna school of astronomers in the 15th century, and appeared in contemporary reprints of the Alfonsine tables. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Bellatrix for this star. It is now so entered in the IAU Catalog of Star Names. The designation of Bellatrix as γ Orionis (Latinized to Gamma Orionis) was made by Johann Bayer in 1603. The "gamma" designation is commonly given to the third-brightest star in each constellation. Standard star Bellatrix has been used as both a photometric and spectral standard star, but both characteristics have been shown to be unreliable. In 1963, Bellatrix was included with a set of bright stars used to define the UBV magnitude system. These are used for comparison with other stars to check for variability, and so by definition, the apparent magnitude of Bellatrix was set to 1.64. However, when an all-sky photometry survey was carried out in 1988, this star was suspected to be variable. It was measured ranging in apparent magnitude from 1.59 to 1.64, and appears to be a low amplitude, possibly irregular variable. Physical properties The spectral types for O and early B stars were defined more rigorously in 1971 and Bellatrix was used as a standard for the B2 III type. The expected brightness of Bellatrix from this spectral type is about one magnitude brighter than calculated from its apparent magnitude and Hipparcos distance. Analysis of the observed characteristics of the star indicate that it should be a B2 main sequence star, not the giant that it appears from its spectral type. Close analysis of high resolution spectra suggest that it is a spectroscopic binary composed of two similar stars less luminous than a B2 giant. Bellatrix is a massive star with about 7.7 times the mass and 5.8 times the radius of the Sun. It has an estimated age of approximately 25 million years—old enough for a star of this mass to consume the hydrogen at its core and begin to evolve away from the main sequence into a giant star. The effective temperature of the outer envelope of this star is , which is considerably hotter than the 5,778 K on the Sun. This high temperature gives this star the blue-white hue that occurs with B-type stars. It shows a projected rotational velocity of around 52 km/s. Bellatrix may have sufficient mass to end its life in a supernova explosion. Companions Bellatrix was thought to belong to the Orion OB1 association of stars that share a common motion through space, along with the stars of Orion's Belt: Alnitak (Zeta Orionis), Alnilam (Epsilon Orionis), and Mintaka (Delta Orionis). 
However, this is no longer believed to be the case, as Bellatrix is now known to be much closer than the rest of the group. It is not known to have a stellar companion, although researchers Maria-Fernanda Nieva and Norbert Przybilla raised the possibility it might be a spectroscopic binary. A 2011 search for nearby companions failed to conclusively find any objects that share a proper motion with Bellatrix. Three nearby candidates were all found to be background stars. Some researchers suspected that Bellatrix was a member of the 32 Orionis group. They proposed that the 32 Ori group should instead be termed the Bellatrix Cluster on the basis that the sky position and distance of Bellatrix are similar to those of the 32 Ori group. The proper motion of Bellatrix deviates significantly from the mean motion of the group, leaving its membership in question. However, it may be possible to reconcile membership if the divergent velocity is the result of an unseen companion. For example, a face-on orbit with a black hole companion orbiting from the star with a period measured in centuries could account for the discrepancy. Etymology and cultural significance Bellatrix was also called the Amazon Star, which Richard Hinckley Allen proposed came from a loose translation of the Arabic name Al Najīd, the Conqueror. A c.1275 Arabic celestial globe records the name as المرزم "the lion". Bellatrix is one of the four navigational stars in Orion that are used for celestial navigation. In the 17th century catalogue of stars in the Calendarium of Al Achsasi al Mouakket, this star was designated Menkib al Jauza al Aisr, which was translated into Latin as Humerus Sinister Gigantis (The Left Shoulder of the Giant). The Wardaman people of northern Australia know Bellatrix as Banjan, the sparkling pigment used in ceremonies conducted by Rigel the Red Kangaroo Leader in a songline when Orion is high in the sky. The other stars of Orion are his ceremonial tools and entourage. Betelgeuse is Ya-jungin "Owl Eyes Flicking", watching the ceremonies. To the Inuit, the appearance of Betelgeuse and Bellatrix high in the southern sky after sunset marked the beginning of spring and lengthening days in late February and early March. The two stars were known as Akuttujuuk "those (two) placed far apart", referring to the distance between them, mainly to people from North Baffin Island and Melville Peninsula. See also List of brightest stars List of nearest bright stars Historical brightest stars References External links Spectrum of Bellatrix Navigational Stars B-type giants Suspected variables Orion (constellation) Orionis, Gamma 1790 Durchmusterung objects Orionis, 24 035468 025336 Stars with proper names
Bellatrix
[ "Astronomy" ]
1,347
[ "Constellations", "Orion (constellation)" ]
584,887
https://en.wikipedia.org/wiki/Optical%20coating
An optical coating is one or more thin layers of material deposited on an optical component such as a lens, prism or mirror, which alters the way in which the optic reflects and transmits light. These coatings have become a key technology in the field of optics. One type of optical coating is an anti-reflective coating, which reduces unwanted reflections from surfaces, and is commonly used on spectacle and camera lenses. Another type is the high-reflector coating, which can be used to produce mirrors that reflect greater than 99.99% of the light that falls on them. More complex optical coatings exhibit high reflection over some range of wavelengths, and anti-reflection over another range, allowing the production of dichroic thin-film filters. Types of coating The simplest optical coatings are thin layers of metals, such as aluminium, which are deposited on glass substrates to make mirror surfaces, a process known as silvering. The metal used determines the reflection characteristics of the mirror; aluminium is the cheapest and most common coating, and yields a reflectivity of around 88%-92% over the visible spectrum. More expensive is silver, which has a reflectivity of 95%-99% even into the far infrared, but suffers from decreasing reflectivity (<90%) in the blue and ultraviolet spectral regions. Most expensive is gold, which gives excellent (98%-99%) reflectivity throughout the infrared, but limited reflectivity at wavelengths shorter than 550 nm, resulting in the typical gold colour. By controlling the thickness and density of metal coatings, it is possible to decrease the reflectivity and increase the transmission of the surface, resulting in a half-silvered mirror. These are sometimes used as "one-way mirrors". The other major type of optical coating is the dielectric coating (i.e. using materials with a different refractive index to the substrate). These are constructed from thin layers of materials such as magnesium fluoride, calcium fluoride, and various metal oxides, which are deposited onto the optical substrate. By careful choice of the exact composition, thickness, and number of these layers, it is possible to tailor the reflectivity and transmitivity of the coating to produce almost any desired characteristic. Reflection coefficients of surfaces can be reduced to less than 0.2%, producing an antireflection (AR) coating. Conversely, the reflectivity can be increased to greater than 99.99%, producing a high-reflector (HR) coating. The level of reflectivity can also be tuned to any particular value, for instance to produce a mirror that reflects 90% and transmits 10% of the light that falls on it, over some range of wavelengths. Such mirrors are often used as beamsplitters, and as output couplers in lasers. Alternatively, the coating can be designed such that the mirror reflects light only in a narrow band of wavelengths, producing an optical filter. The versatility of dielectric coatings leads to their use in many scientific optical instruments (such as lasers, optical microscopes, refracting telescopes, and interferometers) as well as consumer devices such as binoculars, spectacles, and photographic lenses. Dielectric layers are sometimes applied over top of metal films, either to provide a protective layer (as in silicon dioxide over aluminium), or to enhance the reflectivity of the metal film. Metal and dielectric combinations are also used to make advanced coatings that cannot be made any other way. 
One example is the so-called "perfect mirror", which exhibits high (but not perfect) reflection, with unusually low sensitivity to wavelength, angle, and polarization.
Antireflection coatings
Antireflection coatings are used to reduce reflection from surfaces. Whenever a ray of light moves from one medium to another (such as when light enters a sheet of glass after travelling through air), some portion of the light is reflected from the surface (known as the interface) between the two media. A number of different effects are used to reduce reflection. The simplest is to use a thin layer of material at the interface, with an index of refraction between those of the two media. The reflection is minimized when n1 = √(n0 nS), where n1 is the index of the thin layer, and n0 and nS are the indices of the two media (for example, air and the substrate). The optimum refractive indices for multiple coating layers at angles of incidence other than 0° are given by Moreno et al. (2005). Such coatings can reduce the reflection for ordinary glass from about 4% per surface to around 2%. These were the first type of antireflection coating known, having been discovered by Lord Rayleigh in 1886. He found that old, slightly tarnished pieces of glass transmitted more light than new, clean pieces due to this effect. Practical antireflection coatings rely on an intermediate layer not only for its direct reduction of the reflection coefficient, but also for the interference effect of a thin layer. If the layer's thickness is controlled precisely such that it is exactly one-quarter of the wavelength of the light in the layer (a quarter-wave coating), the reflections from the front and back sides of the thin layer will destructively interfere and cancel each other. In practice, the performance of a simple one-layer interference coating is limited by the fact that the reflections only exactly cancel for one wavelength of light at one angle, and by difficulties finding suitable materials. For ordinary glass (n≈1.5), the optimum coating index is n≈1.23. Few useful substances have the required refractive index. Magnesium fluoride (MgF2) is often used, since it is hard-wearing and can be easily applied to substrates using physical vapour deposition, even though its index is higher than desirable (n=1.38); a quarter-wave MgF2 layer designed for 550 nm light is therefore about 550/(4 × 1.38) ≈ 100 nm thick. With such coatings, reflection as low as 1% can be achieved on common glass, and better results can be obtained on higher index media. Further reduction is possible by using multiple coating layers, designed such that reflections from the surfaces undergo maximum destructive interference. By using two or more layers, broadband antireflection coatings which cover the visible range (400-700 nm) with maximum reflectivities of less than 0.5% are commonly achievable. Reflection in narrower wavelength bands can be as low as 0.1%. Alternatively, a series of layers with small differences in refractive index can be used to create a broadband antireflective coating by means of a refractive index gradient.
High-reflection coatings
High-reflection (HR) coatings work the opposite way to antireflection coatings. The general idea is usually based on a periodic layer system composed of two materials, one with a high index, such as zinc sulfide (n=2.32) or titanium dioxide (n=2.4), and one with a low index, such as magnesium fluoride (n=1.38) or silicon dioxide (n=1.49).
This periodic system significantly enhances the reflectivity of the surface in the certain wavelength range called band-stop, whose width is determined by the ratio of the two used indices only (for quarter-wave systems), while the maximum reflectivity increases up to almost 100% with a number of layers in the stack. The thicknesses of the layers are generally quarter-wave (then they yield to the broadest high reflection band in comparison to the non-quarter-wave systems composed from the same materials), this time designed such that reflected beams constructively interfere with one another to maximize reflection and minimize transmission. The best of these coatings built-up from deposited dielectric lossless materials on perfectly smooth surfaces can reach reflectivities greater than 99.999% (over a fairly narrow range of wavelengths). Common HR coatings can achieve 99.9% reflectivity over a broad wavelength range (tens of nanometers in the visible spectrum range). As for AR coatings, HR coatings are affected by the incidence angle of the light. When used away from normal incidence, the reflective range shifts to shorter wavelengths, and becomes polarization dependent. This effect can be exploited to produce coatings that polarize a light beam. By manipulating the exact thickness and composition of the layers in the reflective stack, the reflection characteristics can be tuned to a particular application, and may incorporate both high-reflective and anti-reflective wavelength regions. The coating can be designed as a long- or short-pass filter, a bandpass or notch filter, or a mirror with a specific reflectivity (useful in lasers). For example, the dichroic prism assembly used in some cameras requires two dielectric coatings, one long-wavelength pass filter reflecting light below 500 nm (to separate the blue component of the light), and one short-pass filter to reflect red light, above 600 nm wavelength. The remaining transmitted light is the green component. Extreme ultraviolet coatings In the EUV portion of the spectrum (wavelengths shorter than about 30 nm) nearly all materials absorb strongly, making it difficult to focus or otherwise manipulate light in this wavelength range. Telescopes such as TRACE or EIT that form images with EUV light use multilayer mirrors that are constructed of hundreds of alternating layers of a high-mass metal such as molybdenum or tungsten, and a low-mass spacer such as silicon, vacuum deposited onto a substrate such as glass. Each layer pair is designed to have a thickness equal to half the wavelength of light to be reflected. Constructive interference between scattered light from each layer causes the mirror to reflect EUV light of the desired wavelength as would a normal metal mirror in visible light. Using multilayer optics it is possible to reflect up to 70% of incident EUV light (at a particular wavelength chosen when the mirror is constructed). Transparent conductive coatings Transparent conductive coatings are used in applications where it is important that the coating conduct electricity or dissipate static charge. Conductive coatings are used to protect the aperture from electromagnetic interference, while dissipative coatings are used to prevent the build-up of static electricity. Transparent conductive coatings are also used extensively to provide electrodes in situations where light is required to pass, for example in flat panel display technologies and in many photoelectrochemical experiments. 
A common substance used in transparent conductive coatings is indium tin oxide (ITO). ITO is not very optically transparent, however. The layers must be thin to provide substantial transparency, particularly at the blue end of the spectrum. Using ITO, sheet resistances of 20 to 10,000 ohms per square can be achieved. An ITO coating may be combined with an antireflective coating to further improve transmittance. Other TCOs (Transparent Conductive Oxides) include AZO (Aluminium doped Zinc Oxide), which offers much better UV transmission than ITO. A special class of transparent conductive coatings applies to infrared films for theater-air military optics where IR transparent windows need to have (Radar) stealth (Stealth technology) properties. These are known as RAITs (Radar Attenuating / Infrared Transmitting) and include materials such as boron doped DLC (Diamond-like carbon). Phase correction coatings The multiple internal reflections in roof prisms cause a polarization-dependent phase-lag of the transmitted light, in a manner similar to a Fresnel rhomb. This must be suppressed by multilayer phase-correction coatings applied to one of the roof surfaces to avoid unwanted interference effects and a loss of contrast in the image. Dielectric phase-correction prism coatings are applied in a vacuum chamber with maybe 30 different superimposed vapor coating layers deposits, making it a complex production process. In a roof prism without a phase-correcting coating, s-polarized and p-polarized light each acquire a different geometric phase as they pass through the upper prism. When the two polarized components are recombined, interference between the s-polarized and p-polarized light results in a different intensity distribution perpendicular to the roof edge as compared to that along the roof edge. This effect reduces contrast and resolution in the image perpendicular to the roof edge, producing an inferior image compared to that from a porro prism erecting system. This roof edge diffraction effect may also be seen as a diffraction spike perpendicular to the roof edge generated by bright points in the image. In technical optics, such a phase is also known as the Pancharatnam phase, and in quantum physics an equivalent phenomenon is known as the Berry phase. This effect can be seen in the elongation of the Airy disk in the direction perpendicular to the crest of the roof as this is a diffraction from the discontinuity at the roof crest. The unwanted interference effects are suppressed by vapour-depositing a special dielectric coating known as a phase-compensating coating on the roof surfaces of the roof prism. These phase-correction coating or P-coating on the roof surfaces was developed in 1988 by Adolf Weyrauch at Carl Zeiss Other manufacturers followed soon, and since then phase-correction coatings are used across the board in medium and high-quality roof prism binoculars. This coating corrects for the difference in geometric phase between s- and p-polarized light so both have effectively the same phase shift, preventing image-degrading interference. From a technical point of view, the phase-correction coating layer does not correct the actual phase shift, but rather the partial polarization of the light that results from total reflection. Such a correction can always only be made for a selected wavelength and for a specific angle of incidence; however, it is possible to approximately correct a roof prism for polychromatic light by superimposing several layers. 
In this way, since the 1990s, roof prism binoculars have also achieved resolution values that were previously only achievable with porro prisms. The presence of a phase-correction coating can be checked on unopened binoculars using two polarization filters.
Fano-resonant optical coatings
Fano-resonant optical coatings (FROCs) represent a new category of optical coatings. FROCs exhibit the photonic Fano resonance by coupling a broadband nanocavity, which serves as the continuum, with a narrowband Fabry–Perot nanocavity, representing the discrete state. The interference between these two resonances manifests as an asymmetric Fano-resonance line-shape. FROCs are considered a separate category of optical coatings because they exhibit optical properties that cannot be reproduced using other optical coatings. Most notably, semi-transparent FROCs act as a beam-splitting filter that reflects and transmits the same color, a property that cannot be achieved with transmission filters, dielectric mirrors, or semi-transparent metals. FROCs also offer remarkable structural coloring properties, as they can produce colors across a wide color gamut with both high brightness and high purity. Moreover, the dependence of color on the angle of incident light can be controlled through the dielectric cavity material, making FROCs adaptable for applications requiring either angle-independent or angle-dependent coloring. This includes decorative purposes and anti-counterfeit measures. FROCs have been used as both monolithic spectrum splitters and selective solar absorbers, which makes them suitable for hybrid solar-thermal energy generation. They can be designed to reflect specific wavelength ranges, aligning with the energy band gap of photovoltaic cells, while absorbing the remaining solar spectrum. This enables higher photovoltaic efficiency at elevated optical concentrations by reducing the photovoltaic cell's temperature. The reduced temperature also increases the cell's lifetime. Additionally, their low infrared emissivity minimizes thermal losses, increasing the system's overall optothermal efficiency.
Sources
Hecht, Eugene. Chapter 9, Optics, 2nd ed. (1990), Addison Wesley.
I. Moreno, et al., "Thin-film spatial filters", Optics Letters, 30, 914–916 (2005).
C. Clark, et al., "Two-color Mach 3 IR coating for TAMD systems", Proc. SPIE, vol. 4375, p. 307–314 (2001).
References
See also
List of telescope parts and construction
Thin-film optics
Optical coating
[ "Materials_science", "Mathematics" ]
3,359
[ "Thin-film optics", "Planes (geometry)", "Thin films" ]
584,911
https://en.wikipedia.org/wiki/External%20ballistics
External ballistics or exterior ballistics is the part of ballistics that deals with the behavior of a projectile in flight. The projectile may be powered or un-powered, guided or unguided, spin or fin stabilized, flying through an atmosphere or in the vacuum of space, but most certainly flying under the influence of a gravitational field. Gun-launched projectiles may be unpowered, deriving all their velocity from the propellant's ignition until the projectile exits the gun barrel. However, exterior ballistics analysis also deals with the trajectories of rocket-assisted gun-launched projectiles and gun-launched rockets; and rockets that acquire all their trajectory velocity from the interior ballistics of their on-board propulsion system, either a rocket motor or air-breathing engine, both during their boost phase and after motor burnout. External ballistics is also concerned with the free-flight of other projectiles, such as balls, arrows etc. Forces acting on the projectile When in flight, the main or major forces acting on the projectile are gravity, drag, and if present, wind; if in powered flight, thrust; and if guided, the forces imparted by the control surfaces. In small arms external ballistics applications, gravity imparts a downward acceleration on the projectile, causing it to drop from the line-of-sight. Drag, or the air resistance, decelerates the projectile with a force proportional to the square of the velocity. Wind makes the projectile deviate from its trajectory. During flight, gravity, drag, and wind have a major impact on the path of the projectile, and must be accounted for when predicting how the projectile will travel. For medium to longer ranges and flight times, besides gravity, air resistance and wind, several intermediate or meso variables described in the external factors paragraph have to be taken into account for small arms. Meso variables can become significant for firearms users that have to deal with angled shot scenarios or extended ranges, but are seldom relevant at common hunting and target shooting distances. For long to very long small arms target ranges and flight times, minor effects and forces such as the ones described in the long range factors paragraph become important and have to be taken into account. The practical effects of these minor variables are generally irrelevant for most firearms users, since normal group scatter at short and medium ranges prevails over the influence these effects exert on projectile trajectories. At extremely long ranges, artillery must fire projectiles along trajectories that are not even approximately straight; they are closer to parabolic, although air resistance affects this. Extreme long range projectiles are subject to significant deflections, depending on circumstances, from the line toward the target; and all external factors and long range factors must be taken into account when aiming. In very large-calibre artillery cases, like the Paris Gun, very subtle effects that are not covered in this article can further refine aiming solutions. In the case of ballistic missiles, the altitudes involved have a significant effect as well, with part of the flight taking place in a near-vacuum well above a rotating Earth, steadily moving the target from where it was at launch time. 
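To make the relative size of these forces concrete, the following minimal Python sketch evaluates the instantaneous drag deceleration of a point-mass projectile under the standard quadratic drag law and compares it with gravity. The bullet mass, diameter, drag coefficient and velocity used here are illustrative assumptions, not data for any particular cartridge.

```python
import math

RHO_AIR = 1.225  # kg/m^3, ISA sea-level air density
G = 9.80665      # m/s^2, standard gravity

def drag_deceleration(v_mps, mass_kg, diameter_m, cd):
    """Deceleration from aerodynamic drag, using F = 0.5 * rho * v^2 * Cd * A."""
    frontal_area = math.pi * (diameter_m / 2.0) ** 2
    drag_force = 0.5 * RHO_AIR * v_mps ** 2 * cd * frontal_area
    return drag_force / mass_kg

# Illustrative values only: roughly a 10 g, 7.8 mm bullet at 800 m/s with Cd = 0.3.
decel = drag_deceleration(v_mps=800.0, mass_kg=0.010, diameter_m=0.0078, cd=0.3)
print(f"drag deceleration ≈ {decel:.0f} m/s^2 versus g = {G} m/s^2")
```

At typical rifle muzzle velocities the drag deceleration computed this way is tens of times larger than gravity, which is consistent with the emphasis the rest of this article places on modeling drag accurately.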
Stabilizing non-spherical projectiles during flight Two methods can be employed to stabilize non-spherical projectiles during flight: Projectiles like arrows or arrow like sabots such as the M829 Armor-Piercing, Fin-Stabilized, Discarding Sabot (APFSDS) achieve stability by forcing their center of pressure (CP) behind their center of mass (CM) with tail surfaces. The CP behind the CM condition yields stable projectile flight, meaning the projectile will not overturn during flight through the atmosphere due to aerodynamic forces. Projectiles like small arms bullets and artillery shells must deal with their CP being in front of their CM, which destabilizes these projectiles during flight. To stabilize such projectiles the projectile is spun around its longitudinal (leading to trailing) axis. The spinning mass creates gyroscopic forces that keep the bullet's length axis resistant to the destabilizing overturning torque of the CP being in front of the CM. Main effects in external ballistics Projectile/bullet drop and projectile path The effect of gravity on a projectile in flight is often referred to as projectile drop or bullet drop. It is important to understand the effect of gravity when zeroing the sighting components of a gun. To plan for projectile drop and compensate properly, one must understand parabolic shaped trajectories. Projectile/bullet drop In order for a projectile to impact any distant target, the barrel must be inclined to a positive elevation angle relative to the target. This is due to the fact that the projectile will begin to respond to the effects of gravity the instant it is free from the mechanical constraints of the bore. The imaginary line down the center axis of the bore and out to infinity is called the line of departure and is the line on which the projectile leaves the barrel. Due to the effects of gravity a projectile can never impact a target higher than the line of departure. When a positively inclined projectile travels downrange, it arcs below the line of departure as it is being deflected off its initial path by gravity. Projectile/Bullet drop is defined as the vertical distance of the projectile below the line of departure from the bore. Even when the line of departure is tilted upward or downward, projectile drop is still defined as the distance between the bullet and the line of departure at any point along the trajectory. Projectile drop does not describe the actual trajectory of the projectile. Knowledge of projectile drop however is useful when conducting a direct comparison of two different projectiles regarding the shape of their trajectories, comparing the effects of variables such as velocity and drag behavior. Projectile/bullet path For hitting a distant target an appropriate positive elevation angle is required that is achieved by angling the line of sight from the shooter's eye through the centerline of the sighting system downward toward the line of departure. This can be accomplished by simply adjusting the sights down mechanically, or by securing the entire sighting system to a sloped mounting having a known downward slope, or by a combination of both. This procedure has the effect of elevating the muzzle when the barrel must be subsequently raised to align the sights with the target. A projectile leaving a muzzle at a given elevation angle follows a ballistic trajectory whose characteristics are dependent upon various factors such as muzzle velocity, gravity, and aerodynamic drag. 
This ballistic trajectory is referred to as the bullet path. If the projectile is spin stabilized, aerodynamic forces will also predictably arc the trajectory slightly to the right, if the rifling employs "right-hand twist." Some barrels are cut with left-hand twist, and the bullet will arc to the left, as a result. Therefore, to compensate for this path deviation, the sights also have to be adjusted left or right, respectively. A constant wind also predictably affects the bullet path, pushing it slightly left or right, and a little bit more up and down, depending on the wind direction. The magnitude of these deviations are also affected by whether the bullet is on the upward or downward slope of the trajectory, due to a phenomenon called "yaw of repose," where a spinning bullet tends to steadily and predictably align slightly off center from its point mass trajectory. Nevertheless, each of these trajectory perturbations are predictable once the projectile aerodynamic coefficients are established, through a combination of detailed analytical modeling and test range measurements. Projectile/bullet path analysis is of great use to shooters because it allows them to establish ballistic tables that will predict how much vertical elevation and horizontal deflection corrections must be applied to the sight line for shots at various known distances. The most detailed ballistic tables are developed for long range artillery and are based on six-degree-of-freedom trajectory analysis, which accounts for aerodynamic behavior along the three axial directions—elevation, range, and deflection—and the three rotational directions—pitch, yaw, and spin. For small arms applications, trajectory modeling can often be simplified to calculations involving only four of these degrees-of-freedom, lumping the effects of pitch, yaw and spin into the effect of a yaw-of-repose to account for trajectory deflection. Once detailed range tables are established, shooters can relatively quickly adjust sights based on the range to target, wind, air temperature and humidity, and other geometric considerations, such as terrain elevation differences. Projectile path values are determined by both the sight height, or the distance of the line of sight above the bore centerline, and the range at which the sights are zeroed, which in turn determines the elevation angle. A projectile following a ballistic trajectory has both forward and vertical motion. Forward motion is slowed due to air resistance, and in point mass modeling the vertical motion is dependent on a combination of the elevation angle and gravity. Initially, the projectile is rising with respect to the line of sight or the horizontal sighting plane. The projectile eventually reaches its apex (highest point in the trajectory parabola) where the vertical speed component decays to zero under the effect of gravity, and then begins to descend, eventually impacting the earth. The farther the distance to the intended target, the greater the elevation angle and the higher the apex. The projectile path crosses the horizontal sighting plane two times. The point closest to the gun occurs while the bullet is climbing through the line of sight and is called the near zero. The second point occurs as the projectile is descending through the line of sight. It is called the far zero and defines the current sight in distance for the gun. Projectile path is described numerically as distances above or below the horizontal sighting plane at various points along the trajectory. 
This is in contrast to projectile drop which is referenced to the plane containing the line of departure regardless of the elevation angle. Since each of these two parameters uses a different reference datum, significant confusion can result because even though a projectile is tracking well below the line of departure it can still be gaining actual and significant height with respect to the line of sight as well as the surface of the Earth in the case of a horizontal or near horizontal shot taken over flat terrain. Maximum point-blank range and battle zero Knowledge of the projectile drop and path has some practical uses to shooters even if it does not describe the actual trajectory of the projectile. For example, if the vertical projectile position over a certain range reach is within the vertical height of the target area the shooter wants to hit, the point of aim does not necessarily need to be adjusted over that range; the projectile is considered to have a sufficiently flat point-blank range trajectory for that particular target. Also known as "battle zero", maximum point-blank range is also of importance to the military. Soldiers are instructed to fire at any target within this range by simply placing their weapon's sights on the center of mass of the enemy target. Any errors in range estimation are tactically irrelevant, as a well-aimed shot will hit the torso of the enemy soldier. The current trend for elevated sights and higher-velocity cartridges in assault rifles is in part due to a desire to extend the maximum point-blank range, which makes the rifle easier to use. Drag resistance Mathematical models, such as computational fluid dynamics, are used for calculating the effects of drag or air resistance; they are quite complex and not yet completely reliable, but research is ongoing. The most reliable method, therefore, of establishing the necessary projectile aerodynamic properties to properly describe flight trajectories is by empirical measurement. Fixed drag curve models generated for standard-shaped projectiles Use of ballistics tables or ballistics software based on the Mayevski/Siacci method and G1 drag model, introduced in 1881, are the most common method used to work with external ballistics. Projectiles are described by a ballistic coefficient, or BC, which combines the air resistance of the bullet shape (the drag coefficient) and its sectional density (a function of mass and bullet diameter). The deceleration due to drag that a projectile with mass m, velocity v, and diameter d will experience is proportional to 1/BC, 1/m, v² and d². The BC gives the ratio of ballistic efficiency compared to the standard G1 projectile, which is a fictitious projectile with a flat base, a length of 3.28 calibers/diameters, and a 2 calibers/diameters radius tangential curve for the point. The G1 standard projectile originates from the "C" standard reference projectile defined by the German steel, ammunition and armaments manufacturer Krupp in 1881. The G1 model standard projectile has a BC of 1. The French Gâvre Commission decided to use this projectile as their first reference projectile, giving the G1 name. Sporting bullets, with a calibre d ranging from 0.177 to 0.50 inches (4.50 to 12.7 mm), have G1 BC's in the range 0.12 to slightly over 1.00, with 1.00 being the most aerodynamic, and 0.12 being the least. 
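As a rough illustration of how these quantities combine, the sketch below computes a sectional density and a G1 ballistic coefficient using the conventional small-arms units (bullet weight in grains, diameter in inches) and the standard relation BC = SD / i, where i is the form factor relative to the G1 reference shape discussed further below. The example weight, diameter and form factor are invented for illustration.

```python
def sectional_density(weight_grains, diameter_inches):
    """Sectional density in lb/in^2: bullet mass (in pounds) over the diameter squared."""
    return (weight_grains / 7000.0) / diameter_inches ** 2

def g1_ballistic_coefficient(weight_grains, diameter_inches, form_factor_i):
    """BC = SD / i, where i expresses the bullet's drag relative to the G1 reference."""
    return sectional_density(weight_grains, diameter_inches) / form_factor_i

# Invented example: a 168 gr, 0.308 in bullet with an assumed form factor of 0.5.
bc = g1_ballistic_coefficient(168.0, 0.308, 0.5)
print(f"G1 BC ≈ {bc:.3f}")
```

A form factor below 1 means the bullet has less drag than the G1 reference shape, so its BC exceeds its sectional density; a blunter bullet with a form factor above 1 has a BC below its sectional density.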
Very-low-drag bullets with BC's ≥ 1.10 can be designed and produced on CNC precision lathes out of mono-metal rods, but they often have to be fired from custom made full bore rifles with special barrels. Sectional density is a very important aspect of a projectile or bullet, and is for a round projectile like a bullet the ratio of bullet mass to frontal surface area (half the bullet diameter, squared, times pi). Since, for a given bullet shape, frontal surface increases as the square of the calibre, and mass increases as the cube of the diameter, sectional density grows linearly with bore diameter. Since BC combines shape and sectional density, a half scale model of the G1 projectile will have a BC of 0.5, and a quarter scale model will have a BC of 0.25. Since different projectile shapes will respond differently to changes in velocity (particularly between supersonic and subsonic velocities), a BC provided by a bullet manufacturer will be an average BC that represents the common range of velocities for that bullet. For rifle bullets, this will probably be a supersonic velocity; for pistol bullets it will probably be subsonic. For projectiles that travel through the supersonic, transonic and subsonic flight regimes BC is not well approximated by a single constant, but is considered to be a function BC(M) of the Mach number M; here M equals the projectile velocity divided by the speed of sound. During the flight of the projectile M will decrease, and therefore (in most cases) the BC will also decrease. Most ballistic tables or software take for granted that one specific drag function correctly describes the drag and hence the flight characteristics of a bullet related to its ballistic coefficient. Those models do not differentiate between wadcutter, flat-based, spitzer, boat-tail, very-low-drag, etc. bullet types or shapes. They assume one invariable drag function as indicated by the published BC. Several drag curve models optimized for several standard projectile shapes are however available. The resulting fixed drag curve models for several standard projectile shapes or types are referred to as the:
G1 or Ingalls (flatbase with 2 caliber (blunt) nose ogive - by far the most popular)
G2 (Aberdeen J projectile)
G5 (short 7.5° boat-tail, 6.19 calibers long tangent ogive)
G6 (flatbase, 6 calibers long secant ogive)
G7 (long 7.5° boat-tail, 10 calibers tangent ogive, preferred by some manufacturers for very-low-drag bullets)
G8 (flatbase, 10 calibers long secant ogive)
GL (blunt lead nose)
How different speed regimes affect .338 calibre rifle bullets can be seen in the .338 Lapua Magnum product brochure, which gives Doppler radar-established G1 BC data. The reason for publishing data like in this brochure is that the Siacci/Mayevski G1 model cannot be tuned for the drag behavior of a specific projectile whose shape significantly deviates from the used reference projectile shape. Some ballistic software designers, who based their programs on the Siacci/Mayevski G1 model, give the user the possibility to enter several different G1 BC constants for different speed regimes to calculate ballistic predictions that more closely match a bullet's flight behavior at longer ranges compared to calculations that use only one BC constant. The above example illustrates the central problem fixed drag curve models have.
These models will only yield satisfactory accurate predictions as long as the projectile of interest has the same shape as the reference projectile or a shape that closely resembles the reference projectile. Any deviation from the reference projectile shape will result in less accurate predictions. How much a projectile deviates from the applied reference projectile is mathematically expressed by the form factor (i). The form factor can be used to compare the drag experienced by a projectile of interest to the drag experienced by the employed reference projectile at a given velocity (range). The problem that the actual drag curve of a projectile can significantly deviate from the fixed drag curve of any employed reference projectile systematically limits the traditional drag resistance modeling approach. The relative simplicity however makes that it can be explained to and understood by the general shooting public and hence is also popular amongst ballistic software prediction developers and bullet manufacturers that want to market their products. More advanced drag models Pejsa model Another attempt at building a ballistic calculator is the model presented in 1980 by Dr. Arthur J. Pejsa. Dr. Pejsa claims on his website that his method was consistently capable of predicting (supersonic) rifle bullet trajectories within 2.5 mm (0.1 in) and bullet velocities within 0.3 m/s (1 ft/s) out to 914 m (1,000 yd) in theory. The Pejsa model is a closed-form solution. The Pejsa model can predict a projectile within a given flight regime (for example the supersonic flight regime) with only two velocity measurements, a distance between said velocity measurements, and a slope or deceleration constant factor. The model allows the drag curve to change slopes (true/calibrate) or curvature at three different points. Down range velocity measurement data can be provided around key inflection points allowing for more accurate calculations of the projectile retardation rate, very similar to a Mach vs CD table. The Pejsa model allows the slope factor to be tuned to account for subtle differences in the retardation rate of different bullet shapes and sizes. It ranges from 0.1 (flat-nose bullets) to 0.9 (very-low-drag bullets). If this slope or deceleration constant factor is unknown a default value of 0.5 is used. With the help of test firing measurements the slope constant for a particular bullet/rifle system/shooter combination can be determined. These test firings should preferably be executed at 60% and for extreme long range ballistic predictions also at 80% to 90% of the supersonic range of the projectiles of interest, staying away from erratic transonic effects. With this the Pejsa model can easily be tuned. A practical downside of the Pejsa model is that accurate projectile specific down range velocity measurements to provide these better predictions can not be easily performed by the vast majority of shooting enthusiasts. An average retardation coefficient can be calculated for any given slope constant factor if velocity data points are known and distance between said velocity measurements is known. Obviously this is true only within the same flight regime. With velocity actual speed is meant, as velocity is a vector quantity and speed is the magnitude of the velocity vector. Because the power function does not have constant curvature a simple chord average cannot be used. The Pejsa model uses a weighted average retardation coefficient weighted at 0.25 range. The closer velocity is more heavily weighted. 
The retardation coefficient is measured in feet whereas range is measured in yards, hence 0.25 × 3.0 = 0.75; in some places 0.8 rather than 0.75 is used. The 0.8 comes from rounding in order to allow easy entry on hand calculators. Since the Pejsa model does not use a simple chord weighted average, two velocity measurements are used to find the chord average retardation coefficient at midrange between the two velocity measurement points, limiting it to short range accuracy. In order to find the starting retardation coefficient Dr. Pejsa provides two separate equations in his two books. The first involves the power function. The second equation is identical to the one used to find the weighted average at R / 4; add N × (R/2), where R is the range in feet, to the chord average retardation coefficient at midrange, where N is the slope constant factor. After the starting retardation coefficient is found, the opposite procedure is used in order to find the weighted average at R / 4: the starting retardation coefficient minus N × (R/4). In other words, N is used as the slope of the chord line. Dr. Pejsa states that he expanded his drop formula in a power series in order to prove that the weighted average retardation coefficient at R / 4 was a good approximation. For this Dr. Pejsa compared the power series expansion of his drop formula to some other unnamed drop formula's power expansion to reach his conclusions. The fourth term in both power series matched when the retardation coefficient at 0.25 range was used in Pejsa's drop formula. The fourth term was also the first term to use N. The higher terms involving N were insignificant and disappeared at N = 0.36, which according to Dr. Pejsa was a lucky coincidence making for an exceedingly accurate linear approximation, especially for values of N around 0.36. If a retardation coefficient function is used, exact average values for any N can be obtained, because from calculus it is trivial to find the average of any integrable function. Dr. Pejsa states that the retardation coefficient can be modeled by C × V^N, where C is a fitting coefficient which disappears during the derivation of the drop formula and N the slope constant factor. The retardation coefficient equals the velocity squared divided by the retardation rate A. Using an average retardation coefficient allows the Pejsa model to be a closed-form expression within a given flight regime. In order to allow the use of a G1 ballistic coefficient rather than velocity data, Dr. Pejsa provided two reference drag curves. The first reference drag curve is based purely on the Siacci/Mayevski retardation rate function. The second reference drag curve is adjusted to equal the Siacci/Mayevski retardation rate function at a projectile velocity of 2600 fps (792.5 m/s) using a .30-06 Springfield Cartridge, Ball, Caliber .30 M2 rifle spitzer bullet with a slope or deceleration constant factor of 0.5 in the supersonic flight regime. In other flight regimes the second Pejsa reference drag curve model uses slope constant factors of 0.0 or -4.0. These deceleration constant factors can be verified by backing out Pejsa's formulas (the drag curve segments fit the form V^(2 − N) / C and the retardation coefficient curve segments fit the form V² / (V^(2 − N) / C) = C × V^N, where C is a fitting coefficient).
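As a minimal numerical illustration of the retardation coefficient idea, and not of Pejsa's actual weighted-average procedure, the sketch below backs out an average retardation coefficient from two measured velocities over a known distance under the simplifying assumption that the coefficient is constant on that interval (so the slope factor N plays no role). The chronograph figures are invented.

```python
import math

def average_retardation_coefficient(v0_fps, v1_fps, distance_ft):
    """Average retardation coefficient F (in feet) over one measurement interval.

    With F defined as v^2 divided by the retardation rate A, a constant F
    gives dv/dx = -v / F, hence v(x) = v0 * exp(-x / F) and
    F = distance / ln(v0 / v1).
    """
    if not v0_fps > v1_fps > 0:
        raise ValueError("expected positive, decreasing velocities")
    return distance_ft / math.log(v0_fps / v1_fps)

# Invented measurements: 3000 ft/s at the muzzle, 2550 ft/s at 600 ft (200 yd).
f_avg = average_retardation_coefficient(3000.0, 2550.0, 600.0)
print(f"average retardation coefficient ≈ {f_avg:.0f} ft")
```

Pejsa's own method then corrects such chord averages with the slope constant N and the 0.25-range weighting described above, which is what lets a single closed-form expression cover a whole flight regime.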
The empirical test data Pejsa used to determine the exact shape of his chosen reference drag curve and pre-defined mathematical function that returns the retardation coefficient at a given Mach number was provided by the US military for the Cartridge, Ball, Caliber .30 M2 bullet. The calculation of the retardation coefficient function also involves air density, which Pejsa did not mention explicitly. The Siacci/Mayevski G1 model uses a deceleration parametrization defined at standard conditions (60 °F, 30 inHg and 67% humidity, air density ρ = 1.2209 kg/m³). Dr. Pejsa suggests using the second drag curve because the Siacci/Mayevski G1 drag curve does not provide a good fit for modern spitzer bullets. To obtain relevant retardation coefficients for optimal long range modeling Dr. Pejsa suggested using accurate projectile-specific down range velocity measurement data for a particular projectile to empirically derive the average retardation coefficient, rather than using a reference drag curve derived average retardation coefficient. Further he suggested using ammunition with reduced propellant loads to empirically test actual projectile flight behavior at lower velocities. When working with reduced propellant loads, utmost care must be taken to avoid dangerous or catastrophic conditions (detonations) which can occur when firing experimental loads in firearms.
Manges model
Although not as well known as the Pejsa model, an additional alternative ballistic model was presented in 1989 by Colonel Duff Manges (U.S. Army, Retired) at the American Defense Preparedness Association (ADPA) 11th International Ballistic Symposium held at the Brussels Congress Center, Brussels, Belgium, May 9–11, 1989. A paper titled "Closed Form Trajectory Solutions for Direct Fire Weapons Systems" appears in the proceedings, Volume 1, Propulsion Dynamics, Launch Dynamics, Flight Dynamics, pages 665–674. Originally conceived to model projectile drag for 120 mm tank gun ammunition, the novel drag coefficient formula has been applied subsequently to ballistic trajectories of center-fired rifle ammunition with results comparable to those claimed for the Pejsa model. The Manges model uses a first principles theoretical approach that eschews "G" curves and "ballistic coefficients" based on the standard G1 and other similarity curves. The theoretical description has three main parts. The first is to develop and solve a formulation of the two dimensional differential equations of motion governing flat trajectories of point mass projectiles by defining mathematically a set of quadratures that permit closed form solutions for the trajectory differential equations of motion. The second is to generate a sequence of successive approximation drag coefficient functions that converge rapidly to actual observed drag data. The vacuum trajectory, simplified aerodynamic, d'Antonio, and Euler drag law models are special cases. The Manges drag law thereby provides a unifying influence with respect to earlier models used to obtain two dimensional closed form solutions to the point-mass equations of motion. The third is to describe a least squares fitting procedure for obtaining the new drag functions from observed experimental data. The author claims that results show excellent agreement with six degree of freedom numerical calculations for modern tank ammunition and available published firing tables for center-fired rifle ammunition having a wide variety of shapes and sizes.
A Microsoft Excel application has been authored that uses least squares fits of wind tunnel acquired tabular drag coefficients. Alternatively, manufacturer supplied ballistic trajectory data, or Doppler acquired velocity data can be fitted as well to calibrate the model. The Excel application then employs custom macroinstructions to calculate the trajectory variables of interest. A modified 4th order Runge–Kutta integration algorithm is used. Like Pejsa, Colonel Manges claims center-fired rifle accuracies to the nearest one tenth of an inch for bullet position, and nearest foot per second for the projectile velocity. The Proceedings of the 11th International Ballistic Symposium are available through the National Defense Industrial Association (NDIA) at the website http://www.ndia.org/Resources/Pages/Publication_Catalog.aspx . Six degrees of freedom model There are also advanced professional ballistic models like PRODAS available. These are based on six degrees of freedom (6 DoF) calculations. 6 DoF modeling accounts for x, y, and z position in space along with the projectiles pitch, yaw, and roll rates. 6 DoF modeling needs such elaborate data input, knowledge of the employed projectiles and expensive data collection and verification methods that it is impractical for non-professional ballisticians, but not impossible for the curious, computer literate, and mathematically inclined. Semi-empirical aeroprediction models have been developed that reduced extensive test range data on a wide variety of projectile shapes, normalizing dimensional input geometries to calibers; accounting for nose length and radius, body length, and boattail size, and allowing the full set of 6-dof aerodynamic coefficients to be estimated. Early research on spin-stabilized aeroprediction software resulted in the SPINNER computer program. The FINNER aeroprediction code calculates 6-dof inputs for fin stabilized projectiles. Solids modeling software that determines the projectile parameters of mass, center of gravity, axial and transverse moments of inertia necessary for stability analysis are also readily available, and simple to computer program. Finally, algorithms for 6-dof numerical integration suitable to a 4th order Runge-Kutta are readily available. All that is required for the amateur ballistician to investigate the finer analytical details of projectile trajectories, along with bullet nutation and precession behavior, is computer programming determination. Nevertheless, for the small arms enthusiast, aside from academic curiosity, one will discover that being able to predict trajectories to 6-dof accuracy is probably not of practical significance compared to more simplified point mass trajectories based on published bullet ballistic coefficients. 6 DoF is generally used by the aerospace and defense industry and military organizations that study the ballistic behavior of a limited number of (intended) military issue projectiles. Calculated 6 DoF trends can be incorporated as correction tables in more conventional ballistic software applications. Though 6 DoF modeling and software applications are used by professional well equipped organizations for decades, the computing power restrictions of mobile computing devices like (ruggedized) personal digital assistants, tablet computers or smartphones impaired field use as calculations generally have to be done on the fly. 
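For readers who want to experiment, the same Runge-Kutta machinery mentioned above can be applied to the much simpler point-mass case. The sketch below integrates a two-dimensional point-mass trajectory (quadratic drag plus gravity, no wind, no spin or yaw effects) with a classical 4th-order Runge-Kutta step; the constant drag coefficient and the projectile data are illustrative assumptions, so the output should not be read as a prediction for any real load.

```python
import math

RHO = 1.225      # kg/m^3, ISA sea-level air density
G = 9.80665      # m/s^2

def point_mass_step(state, dt, k):
    """One classical RK4 step of the 2-D point-mass equations of motion."""
    def deriv(s):
        x, y, vx, vy = s
        speed = math.hypot(vx, vy)
        return (vx, vy, -k * speed * vx, -G - k * speed * vy)

    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * d for s, d in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * d for s, d in zip(state, k2)))
    k4 = deriv(tuple(s + dt * d for s, d in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def fly(v0, elev_deg, mass, diameter, cd, t_end=1.5, dt=0.001):
    """Integrate and return (downrange x [m], height y [m], speed [m/s]) at t_end."""
    area = math.pi * (diameter / 2.0) ** 2
    k = 0.5 * RHO * cd * area / mass          # drag acceleration = k * speed * velocity
    state = (0.0, 0.0,
             v0 * math.cos(math.radians(elev_deg)),
             v0 * math.sin(math.radians(elev_deg)))
    t = 0.0
    while t < t_end:
        state = point_mass_step(state, dt, k)
        t += dt
    x, y, vx, vy = state
    return x, y, math.hypot(vx, vy)

# Illustrative, made-up projectile: 9.7 g, 7.82 mm, constant Cd = 0.30, 800 m/s.
print(fly(v0=800.0, elev_deg=0.1, mass=0.0097, diameter=0.00782, cd=0.30))
```

A full 6 DoF code replaces this four-component state with position, velocity, orientation and angular-rate vectors and adds the aerodynamic moments, but the integration scheme itself is the same.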
In 2016 the Scandinavian ammunition manufacturer Nammo Lapua Oy released a 6 DoF calculation model based ballistic free software named Lapua Ballistics. The software is distributed as a mobile app only and available for Android and iOS devices. The employed 6 DoF model is however limited to Lapua bullets as a 6 DoF solver needs bullet specific drag coefficient (Cd)/Doppler radar data and geometric dimensions of the projectile(s) of interest. For other bullets the Lapua Ballistics solver is limited to and based on G1 or G7 ballistic coefficients and the Mayevski/Siacci method. Artillery software suites Military organizations have developed ballistic models like the NATO Armament Ballistic Kernel (NABK) for fire-control systems for artillery like the SG2 Shareable (Fire Control) Software Suite (S4) from the NATO Army Armaments Group (NAAG). The NATO Armament Ballistic Kernel is a 4-DoF modified point mass model. This is a compromise between a simple point mass model and a computationally intensive 6-DoF model. A six- and seven-degree-of-freedom standard called BALCO has also been developed within NATO working groups. BALCO is a trajectory simulation program based on the mathematical model defined by the NATO Standardization Recommendation 4618. The primary goal of BALCO is to compute high-fidelity trajectories for both conventional axisymmetric and precision-guided projectiles featuring control surfaces. The BALCO trajectory model is a FORTRAN 2003 program that implements the following features: 6/7‐DoF equations of motion 7th‐order Runge‐Kutta‐Fehlberg integration Earth models Atmosphere models Aerodynamic models Thrust and Base Burn models Actuator models The predictions these models yield are subject to comparison study. Doppler radar measurements For the precise establishment of drag or air resistance effects on projectiles, Doppler radar measurements are required. Weibel 1000e or Infinition BR-1001 Doppler radars are used by governments, professional ballisticians, defence forces and a few ammunition manufacturers to obtain real-world data of the flight behavior of projectiles of their interest. Correctly established state of the art Doppler radar measurements can determine the flight behavior of projectiles as small as airgun pellets in three-dimensional space to within a few millimetres accuracy. The gathered data regarding the projectile deceleration can be derived and expressed in several ways, such as ballistic coefficients (BC) or drag coefficients (Cd). Because a spinning projectile experiences both precession and nutation about its center of gravity as it flies, further data reduction of doppler radar measurements is required to separate yaw induced drag and lift coefficients from the zero yaw drag coefficient, in order to make measurements fully applicable to 6-dof trajectory analysis. Doppler radar measurement results for a lathe-turned monolithic solid .50 BMG very-low-drag bullet (Lost River J40 .510-773 grain monolithic solid bullet / twist rate 1:15 in) look like this: The initial rise in the BC value is attributed to a projectile's always present yaw and precession out of the bore. The test results were obtained from many shots not just a single shot. The bullet was assigned 1.062 for its BC number by the bullet's manufacturer Lost River Ballistic Technologies. 
Doppler radar measurement results for a Lapua GB528 Scenar 19.44 g (300 gr) 8.59 mm (0.338 in) calibre very-low-drag bullet look like this: This tested bullet experiences its maximum drag coefficient when entering the transonic flight regime around Mach 1.200. With the help of Doppler radar measurements projectile specific drag models can be established that are most useful when shooting at extended ranges where the bullet speed slows to the transonic speed region near the speed of sound. This is where the projectile drag predicted by mathematic modeling can significantly depart from the actual drag experienced by the projectile. Further Doppler radar measurements are used to study subtle in-flight effects of various bullet constructions. Governments, professional ballisticians, defence forces and ammunition manufacturers can supplement Doppler radar measurements with measurements gathered by telemetry probes fitted to larger projectiles. General trends in drag or ballistic coefficient In general, a pointed projectile will have a better drag coefficient (Cd) or ballistic coefficient (BC) than a round nosed bullet, and a round nosed bullet will have a better Cd or BC than a flat point bullet. Large radius curves, resulting in a shallower point angle, will produce lower drags, particularly at supersonic velocities. Hollow point bullets behave much like a flat point of the same point diameter. Projectiles designed for supersonic use often have a slightly tapered base at the rear, called a boat tail, which reduces air resistance in flight. The usefulness of a "tapered rear" for long-range firing was well established already by early 1870s, but technological difficulties prevented their wide adoption before well into 20th century. Cannelures, which are recessed rings around the projectile used to crimp the projectile securely into the case, will cause an increase in drag. Analytical software was developed by the Ballistics Research Laboratory – later called the Army Research Laboratory – which reduced actual test range data to parametric relationships for projectile drag coefficient prediction. Large caliber artillery also employ drag reduction mechanisms in addition to streamlining geometry. Rocket-assisted projectiles employ a small rocket motor that ignites upon muzzle exit providing additional thrust to overcome aerodynamic drag. Rocket assist is most effective with subsonic artillery projectiles. For supersonic long range artillery, where base drag dominates, base bleed is employed. Base bleed is a form of a gas generator that does not provide significant thrust, but rather fills the low-pressure area behind the projectile with gas, effectively reducing the base drag and the overall projectile drag coefficient. Transonic problem A projectile fired at supersonic muzzle velocity will at some point slow to approach the speed of sound. At the transonic region (about Mach 1.2–0.8) the centre of pressure (CP) of most non spherical projectiles shifts forward as the projectile decelerates. That CP shift affects the (dynamic) stability of the projectile. If the projectile is not well stabilized, it cannot remain pointing forward through the transonic region (the projectile starts to exhibit an unwanted precession or coning motion called limit cycle yaw that, if not damped out, can eventually end in uncontrollable tumbling along the length axis). 
However, even if the projectile has sufficient stability (static and dynamic) to be able to fly through the transonic region and stays pointing forward, it is still affected. The erratic and sudden CP shift and (temporary) decrease of dynamic stability can cause significant dispersion (and hence significant accuracy decay), even if the projectile's flight becomes well behaved again when it enters the subsonic region. This makes accurately predicting the ballistic behavior of projectiles in the transonic region very difficult. Because of this, marksmen normally restrict themselves to engaging targets close enough that the projectile is still supersonic. In 2015, the American ballistician Bryan Litz introduced the "Extended Long Range" concept to define rifle shooting at ranges where supersonically fired (rifle) bullets enter the transonic region. According to Litz, "Extended Long Range starts whenever the bullet slows to its transonic range. As the bullet slows down to approach Mach 1, it starts to encounter transonic effects, which are more complex and difficult to account for, compared to the supersonic range where the bullet is relatively well-behaved." The ambient air density has a significant effect on dynamic stability during transonic transition. Though the ambient air density is a variable environmental factor, adverse transonic transition effects can be mitigated better when a projectile travels through less dense air than through denser air. Projectile or bullet length also affects limit cycle yaw. Longer projectiles experience more limit cycle yaw than shorter projectiles of the same diameter. Another feature of projectile design that has been identified as having an effect on the unwanted limit cycle yaw motion is the chamfer at the base of the projectile. At the very base, or heel, of a projectile or bullet there is a chamfer, or radius. The presence of this radius causes the projectile to fly with greater limit cycle yaw angles. Rifling can also have a subtle effect on limit cycle yaw. In general, faster spinning projectiles experience less limit cycle yaw.
Research into guided projectiles
To circumvent the transonic problems encountered by spin-stabilized projectiles, projectiles can theoretically be guided during flight. The Sandia National Laboratories announced in January 2012 that it had researched and test-fired 4-inch (102 mm) long prototype dart-like, self-guided bullets for small-caliber, smooth-bore firearms that could hit laser-designated targets at distances of more than a mile (about 1,610 meters or 1760 yards). These projectiles are not spin stabilized and the flight path can be steered within limits by an electromagnetic actuator operating 30 times per second. The researchers also claim they have video of the bullet radically pitching as it exits the barrel and pitching less as it flies down range, a disputed phenomenon known to long-range firearms experts as "going to sleep". Because the bullet's motions settle the longer it is in flight, accuracy improves at longer ranges, Sandia researcher Red Jones said. "Nobody had ever seen that, but we've got high-speed video photography that shows that it's true," he said. Recent testing indicates it may be approaching, or may already have achieved, initial operational capability.
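Since the practical consequence of the transonic problem is a range limit, a simple and often useful calculation is to estimate where a given load leaves comfortably supersonic flight. The sketch below scans a table of downrange velocities (as obtained from a ballistic solver, chronograph or Doppler data) and reports the first tabulated range at which the speed falls below a chosen multiple of the speed of sound; the velocity table here is invented for illustration.

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air, m/s."""
    return 331.3 + 0.606 * temp_c

def transonic_onset(range_velocity_pairs, temp_c=15.0, mach_limit=1.2):
    """Return the first tabulated range (m) at which speed drops below mach_limit."""
    threshold = mach_limit * speed_of_sound(temp_c)
    for rng_m, v_mps in range_velocity_pairs:
        if v_mps < threshold:
            return rng_m
    return None  # still above the limit at the last tabulated range

# Invented velocity table: (range in m, remaining velocity in m/s).
table = [(0, 830), (300, 700), (600, 585), (900, 480), (1200, 400), (1500, 330)]
print(transonic_onset(table))  # 1200 for the default 15 °C and Mach 1.2 limit
```

Mach 1.2 is used as the default limit here because, as noted above, transonic effects typically begin around that speed; a more cautious shooter might choose a higher limit.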
Testing the predictive qualities of software
Due to the practical inability to know in advance and compensate for all the variables of flight, no software simulation, however advanced, will yield predictions that will always perfectly match real world trajectories. It is however possible to obtain predictions that are very close to actual flight behavior.
Empirical measurement method
Ballistic prediction computer programs intended for (extreme) long ranges can be evaluated by conducting field tests at the supersonic to subsonic transition range (the last 10 to 20% of the supersonic range of the rifle/cartridge/bullet combination). For a typical .338 Lapua Magnum rifle for example, shooting standard 16.2 gram (250 gr) Lapua Scenar GB488 bullets at 905 m/s (2969 ft/s) muzzle velocity, field testing of the software should be done at ≈ 1200-1300 meters (1312-1422 yd) under International Standard Atmosphere sea level conditions (air density ρ = 1.225 kg/m³). To check how well the software predicts the trajectory at shorter to medium range, field tests at 20, 40 and 60% of the supersonic range have to be conducted. At those shorter to medium ranges, transonic problems and hence erratic bullet flight should not occur, and the BC is less likely to be transient. Testing the predictive qualities of software at (extreme) long ranges is expensive because it consumes ammunition; the actual muzzle velocity of all shots fired must be measured to be able to make statistically dependable statements. Sample groups of fewer than 24 shots may not obtain the desired statistically significant confidence interval.
Doppler radar measurement method
Governments, professional ballisticians, defence forces and a few ammunition manufacturers use Doppler radars and/or telemetry probes fitted to larger projectiles to obtain precise real world data regarding the flight behavior of the specific projectiles of their interest and thereupon compare the gathered real world data against the predictions calculated by ballistic computer programs. The normal shooting or aerodynamics enthusiast, however, has no access to such expensive professional measurement devices. Authorities and projectile manufacturers are generally reluctant to share the results of Doppler radar tests and the test derived drag coefficients (Cd) of projectiles with the general public. Around 2020 more affordable but less capable (amateur) Doppler radar equipment to determine free flight drag coefficients became available for the general public. In January 2009, the Scandinavian ammunition manufacturer Nammo/Lapua published Doppler radar test-derived drag coefficient data for most of their rifle projectiles. In 2015 the US ammunition manufacturer Berger Bullets announced the use of Doppler radar in unison with PRODAS 6 DoF software to generate trajectory solutions. In 2016 US ammunition manufacturer Hornady announced the use of Doppler radar derived drag data in software utilizing a modified point mass model to generate trajectory solutions. With the measurement derived Cd data engineers can create algorithms that utilize both known mathematical ballistic models as well as test specific, tabular data in unison. When used by predictive software like QuickTARGET Unlimited, Lapua Edition, Lapua Ballistics or Hornady 4DOF the Doppler radar test-derived drag coefficient data can be used for more accurate external ballistic predictions.
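Returning to the empirical field-test recipe above, the check distances it prescribes are easy to derive once the supersonic range of a load has been estimated with a ballistic program. The small helper below returns the 20, 40 and 60% check distances plus a long-range check at 85% of the supersonic range, a point inside the prescribed last 10 to 20%; the supersonic range used in the example is an assumed figure.

```python
def field_test_distances(supersonic_range_m):
    """Check distances (m) for validating a ballistic solver against field data."""
    fractions = (0.20, 0.40, 0.60, 0.85)  # 0.85 falls in the last 10-20% band
    return [round(f * supersonic_range_m) for f in fractions]

# Assumed example: a load estimated to stay supersonic out to about 1400 m.
print(field_test_distances(1400))  # [280, 560, 840, 1190]
```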
Some of the Lapua-provided drag coefficient data shows drastic increases in the measured drag around or below the Mach 1 flight velocity region. This behavior was observed for most of the measured small calibre bullets, and not so much for the larger calibre bullets. This implies some (mostly smaller calibre) rifle bullets exhibited more limit cycle yaw (coning and/or tumbling) in the transonic/subsonic flight velocity regime. The information regarding unfavourable transonic/subsonic flight behavior for some of the tested projectiles is important. This is a limiting factor for extended range shooting use, because the effects of limit cycle yaw are not easily predictable and potentially catastrophic for the best ballistic prediction models and software. Presented Cd data can not be simply used for every gun-ammunition combination, since it was measured for the barrels, rotational (spin) velocities and ammunition lots the Lapua testers used during their test firings. Variables like differences in rifling (number of grooves, depth, width and other dimensional properties), twist rates and/or muzzle velocities impart different rotational (spin) velocities and rifling marks on projectiles. Changes in such variables and projectile production lot variations can yield different downrange interaction with the air the projectile passes through that can result in (minor) changes in flight behavior. This particular field of external ballistics is currently (2009) not elaborately studied nor well understood. Predictions of several drag resistance modelling and measuring methods The method employed to model and predict external ballistic behavior can yield differing results with increasing range and time of flight. To illustrate this several external ballistic behavior prediction methods for the Lapua Scenar GB528 19.44 g (300 gr) 8.59 mm (0.338 in) calibre very-low-drag rifle bullet with a manufacturer stated G1 ballistic coefficient (BC) of 0.785 fired at 830 m/s (2723 ft/s) muzzle velocity under International Standard Atmosphere sea level conditions (air density ρ = 1.225 kg/m³), Mach 1 = 340.3 m/s, Mach 1.2 = 408.4 m/s), predicted this for the projectile velocity and time of flight from 0 to 3,000 m (0 to 3,281 yd): The table shows the Doppler radar test derived drag coefficients (Cd) prediction method and the 2017 Lapua Ballistics 6 DoF App predictions produce similar results. The 6 DoF modeling estimates bullet stability ((Sd) and (Sg)) that gravitates to over-stabilization for ranges over for this bullet. At the total drop predictions deviate 47.5 cm (19.7 in) or 0.20 mil (0.68 moa) at 50° latitude and up to the total drop predictions are within 0.30 mil (1 moa) at 50° latitude. The 2016 Lapua Ballistics 6 DoF App version predictions were even closer to the Doppler radar test predictions. The traditional Siacci/Mayevski G1 drag curve model prediction method generally yields more optimistic results compared to the modern Doppler radar test derived drag coefficients (Cd) prediction method. At range the differences will be hardly noticeable, but at and beyond the differences grow over 10 m/s (32.8 ft/s) projectile velocity and gradually become significant. At range the projectile velocity predictions deviate 25 m/s (82.0 ft/s), which equates to a predicted total drop difference of 125.6 cm (49.4 in) or 0.83 mil (2.87 moa) at 50° latitude. 
The Pejsa drag model closed-form solution prediction method, without slope constant factor fine tuning, yields very similar results in the supersonic flight regime compared to the Doppler radar test derived drag coefficients (Cd) prediction method. At range the projectile velocity predictions deviate 10 m/s (32.8 ft/s), which equates to a predicted total drop difference of 23.6 cm (9.3 in) or 0.16 mil (0.54 moa) at 50° latitude. The G7 drag curve model prediction method (recommended by some manufacturers for very-low-drag shaped rifle bullets) when using a G7 ballistic coefficient (BC) of 0.377 yields very similar results in the supersonic flight regime compared to the Doppler radar test derived drag coefficients (Cd) prediction method. At range the projectile velocity predictions have their maximum deviation of 10 m/s (32.8 ft/s). The predicted total drop difference at is 0.4 cm (0.16 in) at 50° latitude. The predicted total drop difference at is 45.0 cm (17.7 in), which equates to 0.25 mil (0.86 moa). Decent prediction models are expected to yield similar results in the supersonic flight regime. The five example models down to all predict supersonic Mach 1.2+ projectile velocities and total drop differences within a 51 cm (20.1 in) bandwidth. In the transonic flight regime at the models predict projectile velocities around Mach 1.0 to Mach 1.1 and total drop differences within a much larger 150 cm (59 in) bandwidth. External factors Wind Wind has a range of effects, the first being the effect of making the projectile deviate to the side (horizontal deflection). From a scientific perspective, the "wind pushing on the side of the projectile" is not what causes horizontal wind drift. What causes wind drift is drag. Drag makes the projectile turn into the wind, much like a weather vane, keeping the centre of air pressure on its nose. From the shooter’s perspective, this causes the nose of the projectile to turn into the wind and the tail to turn away from the wind. The result of this turning effect is that the drag pushes the projectile downwind in a nose-to-tail direction. Wind also causes aerodynamic jump which is the vertical component of cross wind deflection caused by lateral (wind) impulses activated during free flight of a projectile or at or very near the muzzle leading to dynamic imbalance. The amount of aerodynamic jump is dependent on cross wind speed, the gyroscopic stability of the bullet at the muzzle and if the barrel twist is clockwise or anti-clockwise. Like the wind direction reversing the twist direction will reverse the aerodynamic jump direction. A somewhat less obvious effect is caused by head or tailwinds. A headwind will slightly increase the relative velocity of the projectile, and increase drag and the corresponding drop. A tailwind will reduce the drag and the projectile/bullet drop. In the real world, pure head or tailwinds are rare, since wind is seldom constant in force and direction and normally interacts with the terrain it is blowing over. This often makes ultra long range shooting in head or tailwind conditions difficult. Vertical angles The vertical angle (or elevation) of a shot will also affect the trajectory of the shot. Ballistic tables for small calibre projectiles (fired from pistols or rifles) assume a horizontal line of sight between the shooter and target with gravity acting perpendicular to the earth. 
Therefore, if the shooter-to-target angle is up or down, (the direction of the gravity component does not change with slope direction), then the trajectory curving acceleration due to gravity will actually be less, in proportion to the cosine of the slant angle. As a result, a projectile fired upward or downward, on a so-called "slant range," will over-shoot the same target distance on flat ground. The effect is of sufficient magnitude that hunters must adjust their target hold off accordingly in mountainous terrain. A well known formula for slant range adjustment to horizontal range hold off is known as the Rifleman's rule. The Rifleman's rule and the slightly more complex and less well known Improved Rifleman's rule models produce sufficiently accurate predictions for many small arms applications. Simple prediction models however ignore minor gravity effects when shooting uphill or downhill. The only practical way to compensate for this is to use a ballistic computer program. Besides gravity at very steep angles over long distances, the effect of air density changes the projectile encounters during flight become problematic. The mathematical prediction models available for inclined fire scenarios, depending on the amount and direction (uphill or downhill) of the inclination angle and range, yield varying accuracy expectation levels. Less advanced ballistic computer programs predict the same trajectory for uphill and downhill shots at the same vertical angle and range. The more advanced programs factor in the small effect of gravity on uphill and on downhill shots resulting in slightly differing trajectories at the same vertical angle and range. No publicly available ballistic computer program currently (2017) accounts for the complicated phenomena of differing air densities the projectile encounters during flight. Ambient air density Air pressure, temperature, and humidity variations make up the ambient air density. Humidity has a counter intuitive impact. Since water vapor has a density of 0.8 grams per litre, while dry air averages about 1.225 grams per litre, higher humidity actually decreases the air density, and therefore decreases the drag. Precipitation Precipitation can cause significant yaw and accompanying deflection when a bullet collides with a raindrop. The further downrange such a coincidental collision occurs, the less the deflection on target will be. The weight of the raindrop and bullet also influences how much yaw is induced during such a collision. A big heavy raindrop and a light bullet will yield maximal yaw effect. A heavy bullet colliding with an equal raindrop will experience significant less yaw effect. Long range factors Gyroscopic drift (spin drift) Gyroscopic drift is an interaction of the bullet's mass and aerodynamics with the atmosphere that it is flying in. Even in completely calm air, with no sideways air movement at all, a spin-stabilized projectile will experience a spin-induced sideways component, due to a gyroscopic phenomenon known as "yaw of repose." For a right hand (clockwise) direction of rotation this component will always be to the right. For a left hand (counterclockwise) direction of rotation this component will always be to the left. This is because the projectile's longitudinal axis (its axis of rotation) and the direction of the velocity vector of the center of gravity (CG) deviate by a small angle, which is said to be the equilibrium yaw or the yaw of repose. The magnitude of the yaw of repose angle is typically less than 0.5 degree. 
Since rotating objects react with an angular velocity vector 90 degrees from the applied torque vector, the bullet's axis of symmetry moves with a component in the vertical plane and a component in the horizontal plane; for right-handed (clockwise) spinning bullets, the bullet's axis of symmetry deflects to the right and a little bit upward with respect to the direction of the velocity vector, as the projectile moves along its ballistic arc. As the result of this small inclination, there is a continuous air stream, which tends to deflect the bullet to the right. Thus the occurrence of the yaw of repose is the reason for the bullet drifting to the right (for right-handed spin) or to the left (for left-handed spin). This means that the bullet is "skidding" sideways at any given moment, and thus experiencing a sideways component. The following variables affect the magnitude of gyroscopic drift: Projectile or bullet length: longer projectiles experience more gyroscopic drift because they produce more lateral "lift" for a given yaw angle. Spin rate: faster spin rates will produce more gyroscopic drift because the nose ends up pointing farther to the side. Range, time of flight and trajectory height: gyroscopic drift increases with all of these variables. density of the atmosphere: denser air will increase gyroscopic drift. Doppler radar measurement results for the gyroscopic drift of several US military and other very-low-drag bullets at 1000 yards (914.4 m) look like this: The table shows that the gyroscopic drift cannot be predicted on weight and diameter alone. In order to make accurate predictions on gyroscopic drift several details about both the external and internal ballistics must be considered. Factors such as the twist rate of the barrel, the velocity of the projectile as it exits the muzzle, barrel harmonics, and atmospheric conditions, all contribute to the path of a projectile. Magnus effect Spin stabilized projectiles are affected by the Magnus effect, whereby the spin of the bullet creates a force acting either up or down, perpendicular to the sideways vector of the wind. In the simple case of horizontal wind, and a right hand (clockwise) direction of rotation, the Magnus effect induced pressure differences around the bullet cause a downward (wind from the right) or upward (wind from the left) force viewed from the point of firing to act on the projectile, affecting its point of impact. The vertical deflection value tends to be small in comparison with the horizontal wind induced deflection component, but it may nevertheless be significant in winds that exceed 4 m/s (14.4 km/h or 9 mph). Magnus effect and bullet stability The Magnus effect has a significant role in bullet stability because the Magnus force does not act upon the bullet's center of gravity, but the center of pressure affecting the yaw of the bullet. The Magnus effect will act as a destabilizing force on any bullet with a center of pressure located ahead of the center of gravity, while conversely acting as a stabilizing force on any bullet with the center of pressure located behind the center of gravity. The location of the center of pressure depends on the flow field structure, in other words, depending on whether the bullet is in supersonic, transonic or subsonic flight. What this means in practice depends on the shape and other attributes of the bullet, in any case the Magnus force greatly affects stability because it tries to "twist" the bullet along its flight path. 
Paradoxically, very-low-drag bullets, owing to their length, have a tendency to exhibit greater Magnus destabilizing errors because they have a greater surface area to present to the oncoming air they are travelling through, thereby reducing their aerodynamic efficiency. This subtle effect is one of the reasons why a calculated Cd or BC based on shape and sectional density is of limited use. Poisson effect Another minor cause of drift, which depends on the nose of the projectile being above the trajectory, is the Poisson Effect. This, if it occurs at all, acts in the same direction as the gyroscopic drift and is even less important than the Magnus effect. It supposes that the uptilted nose of the projectile causes an air cushion to build up underneath it. It further supposes that there is an increase of friction between this cushion and the projectile so that the latter, with its spin, will tend to roll off the cushion and move sideways. This simple explanation is quite popular. There is, however, no evidence to show that increased pressure means increased friction and unless this is so, there can be no effect. Even if it does exist it must be quite insignificant compared with the gyroscopic and Coriolis drifts. Both the Poisson and Magnus Effects will reverse their directions of drift if the nose falls below the trajectory. When the nose is off to one side, as in equilibrium yaw, these effects will make minute alterations in range. Coriolis drift The Coriolis effect causes Coriolis drift in a direction perpendicular to the Earth's axis; for most locations on Earth and firing directions, this deflection includes horizontal and vertical components. The deflection is to the right of the trajectory in the northern hemisphere, to the left in the southern hemisphere, upward for eastward shots, and downward for westward shots. The vertical Coriolis deflection is also known as the Eötvös effect. Coriolis drift is not an aerodynamic effect; it is a consequence of the rotation of the Earth. The magnitude of the Coriolis effect is small. For small arms, the magnitude of the Coriolis effect is generally insignificant (for high powered rifles in the order of about at ), but for ballistic projectiles with long flight times, such as extreme long-range rifle projectiles, artillery, and rockets like intercontinental ballistic missiles, it is a significant factor in calculating the trajectory. The magnitude of the drift depends on the firing and target location, azimuth of firing, projectile velocity and time of flight. Horizontal effect Viewed from a non-rotating reference frame (i.e. not one rotating with the Earth) and ignoring the forces of gravity and air resistance, a projectile moves in a straight line. When viewed from a reference frame fixed with respect to the Earth, that straight trajectory appears to curve sideways. The direction of this horizontal curvature is to the right in the northern hemisphere and to the left in the southern hemisphere, and does not depend on the azimuth of the shot. The horizontal curvature is largest at the poles and decreases to zero at the equator. Vertical (Eötvös) effect The Eötvös effect changes the perceived gravitational pull on a moving object based on the relationship between the direction and velocity of movement and the direction of the Earth's rotation. The Eötvös effect is largest at the equator and decreases to zero at the poles. It causes eastward-traveling projectiles to deflect upward, and westward-traveling projectiles to deflect downward. 
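To give a feel for the magnitudes involved, the sketch below evaluates the two components just described under the deliberately crude assumption of a constant average speed; a real solver integrates these terms along the drag-affected trajectory. The latitude, azimuth and flight data are hypothetical inputs.

    import math

    OMEGA = 7.292e-5  # Earth's rotation rate, rad/s

    def coriolis_horizontal(range_m, tof_s, lat_deg):
        """Rightward (northern hemisphere) or leftward (southern) drift in metres."""
        return OMEGA * math.sin(math.radians(lat_deg)) * range_m * tof_s

    def eotvos_vertical(range_m, tof_s, lat_deg, az_deg):
        """Approximate up (+) or down (-) deflection in metres from the Eotvos term."""
        avg_speed = range_m / tof_s
        a_up = 2.0 * OMEGA * avg_speed * math.cos(math.radians(lat_deg)) * math.sin(math.radians(az_deg))
        return 0.5 * a_up * tof_s ** 2

    # Hypothetical 1000 m shot, 1.5 s time of flight, latitude 45 degrees, fired due east:
    print(coriolis_horizontal(1000.0, 1.5, 45.0))      # about 0.08 m to the right
    print(eotvos_vertical(1000.0, 1.5, 45.0, 90.0))    # about 0.08 m high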
The effect is less pronounced for trajectories in other directions, and is zero for trajectories aimed due north or south. In the case of large changes of momentum, such as a spacecraft being launched into Earth orbit, the effect becomes significant. It contributes to the fastest and most fuel-efficient path to orbit: a launch from the equator that curves to a directly eastward heading. Equipment factors Though not forces acting on projectile trajectories there are some equipment related factors that influence trajectories. Since these factors can cause otherwise unexplainable external ballistic flight behavior they have to be briefly mentioned. Lateral jump Lateral jump is caused by a slight lateral and rotational movement of a gun barrel at the instant of firing. It has the effect of a small error in bearing. The effect is ignored, since it is small and varies from round to round. Lateral throw-off Lateral throw-off is caused by mass imbalance in applied spin stabilized projectiles or pressure imbalances during the transitional flight phase when a projectile leaves a gun barrel off axis leading to static imbalance. If present it causes dispersion. The effect is unpredictable, since it is generally small and varies from projectile to projectile, round to round and/or gun barrel to gun barrel. Maximum effective small arms range The maximum practical range of all small arms and especially high-powered sniper rifles depends mainly on the aerodynamic or ballistic efficiency of the spin stabilised projectiles used. Long-range shooters must also collect relevant information to calculate elevation and windage corrections to be able to achieve first shot strikes at point targets. The data to calculate these fire control corrections has a long list of variables including: ballistic coefficient or test derived drag coefficients (Cd)/behavior of the bullets used height of the sighting components above the rifle bore axis the zero range at which the sighting components and rifle combination were sighted in bullet mass actual muzzle velocity (powder temperature affects muzzle velocity, primer ignition is also temperature dependent) range to target supersonic range of the employed gun, cartridge and bullet combination inclination angle in case of uphill/downhill firing target speed and direction wind speed and direction (main cause for horizontal projectile deflection and generally the hardest ballistic variable to measure and judge correctly. Wind effects can also cause vertical deflection.) 
air pressure, temperature, altitude and humidity variations (these make up the ambient air density) Earth's gravity (changes slightly with latitude and altitude) gyroscopic drift (horizontal and vertical plane gyroscopic effect — often known as spin drift - induced by the barrel's twist direction and twist rate) Coriolis effect drift (latitude, direction of fire and northern or southern hemisphere data dictate this effect) Eötvös effect (interrelated with the Coriolis effect, latitude and direction of fire dictate this effect) aerodynamic jump (the vertical component of cross wind deflection caused by lateral (wind) impulses activated during free flight or at or very near the muzzle leading to dynamic imbalance) lateral throw-off (dispersion that is caused by mass imbalance in the applied projectile or it leaving the barrel off axis leading to static imbalance) the inherent potential accuracy and adjustment range of the sighting components the inherent potential accuracy of the rifle the inherent potential accuracy of the ammunition the inherent potential accuracy of the computer program and other firing control components used to calculate the trajectory The ambient air density is at its maximum at Arctic sea level conditions. Cold gunpowder also produces lower pressures and hence lower muzzle velocities than warm powder. This means that the maximum practical range of rifles will be at it shortest at Arctic sea level conditions. The ability to hit a point target at great range has a lot to do with the ability to tackle environmental and meteorological factors and a good understanding of exterior ballistics and the limitations of equipment. Without (computer) support and highly accurate laser rangefinders and meteorological measuring equipment as aids to determine ballistic solutions, long-range shooting beyond 1000 m (1100 yd) at unknown ranges becomes guesswork for even the most expert long-range marksmen. Interesting further reading: Marksmanship Wikibook Using ballistics data Here is an example of a ballistic table for a .30 calibre Speer 169 grain (11 g) pointed boat tail match bullet, with a BC of 0.480. It assumes sights 1.5 inches (38 mm) above the bore line, and sights adjusted to result in point of aim and point of impact matching 200 yards (183 m) and 300 yards (274 m) respectively. This table demonstrates that, even with a fairly aerodynamic bullet fired at high velocity, the "bullet drop" or change in the point of impact is significant. This change in point of impact has two important implications. Firstly, estimating the distance to the target is critical at longer ranges, because the difference in the point of impact between 400 and is 25–32 in (depending on zero), in other words if the shooter estimates that the target is 400 yd away when it is in fact 500 yd away the shot will impact 25–32 in (635–813 mm) below where it was aimed, possibly missing the target completely. Secondly, the rifle should be zeroed to a distance appropriate to the typical range of targets, because the shooter might have to aim so far above the target to compensate for a large bullet drop that he may lose sight of the target completely (for instance being outside the field of view of a telescopic sight). In the example of the rifle zeroed at , the shooter would have to aim 49 in or more than 4 ft (1.2 m) above the point of impact for a target at 500 yd. See also Internal ballistics - The behavior of the projectile and propellant before it leaves the barrel. 
Transitional ballistics - The behavior of the projectile from the time it leaves the muzzle until the pressure behind the projectile is equalized. Terminal ballistics - The behavior of the projectile upon impact with the target. Trajectory of a projectile - Basic external ballistics mathematic formulas. Rifleman's rule - Procedures or "rules" for a rifleman for aiming at targets at a distance either uphill or downhill. Franklin Ware Mann - Early scientific study of external ballistics. Table of handgun and rifle cartridges Sighting in - Calibrating the sights on a ranged weapon so that the point of aim intersects with the trajectory at a given distance, allowing the user to consistently hit the target being aimed at. Notes References External links General external ballistics (Simplified calculation of the motion of a projectile under a drag force proportional to the square of the velocity) - basketball ballistics. Small arms external ballistics Software for calculating ball ballistics How do bullets fly? by Ruprecht Nennstiel, Wiesbaden, Germany Exterior Ballistics.com articles A Short Course in External Ballistics Articles on long range shooting by Bryan Litz Probabilistic Weapon Employment Zone (WEZ) Analysis A Conceptual Overview by Bryan Litz Weite Schüsse - part 4, Basic explanation of the Pejsa model by Lutz Möller Patagonia Ballistics ballistics mathematical software engine JBM Small Arms Ballistics with online ballistics calculators Bison Ballistics Point Mass Online Ballistics Calculator Virtual Wind Tunnel Experiments for Small Caliber Ammunition Aerodynamic Characterization - Paul Weinacht US Army Research Laboratory Aberdeen Proving Ground, MD Artillery external ballistics British Artillery Fire Control - Ballistics & Data Field Artillery, Volume 6, Ballistics and Ammunition The Production of Firing Tables for Cannon Artillery, BRL report no. 1371 by Elizabeth R. Dickinson, U.S. Army Materiel Command Ballistic Research Laboratories, November 1967 NABK (NATO Armament Ballistic Kernel) Based Next Generation Ballistic Table Toolkit, 23rd International Symposium on Ballistics, Tarragona, Spain 16-20 April 2007 Trajectory Calculator in C++ that can deduce drag function from firing tables Freeware small arms external ballistics software Hawke X-ACT Pro FREE ballistics app. iOS, Android, OSX & Windows. ChairGun Pro free ballistics for rimfire and pellet guns. Ballistic_XLR (MS Excel spreadsheet) - A substantial enhancement & modification of the Pejsa spreadsheet (below). GNU Exterior Ballistics Computer (GEBC) - An open source 3DOF ballistics computer for Windows, Linux, and Mac - Supports the G1, G2, G5, G6, G7, and G8 drag models. Created and maintained by Derek Yates. 6mmbr.com ballistics section links to / hosts 4 freeware external ballistics computer programs. 2DOF & 3DOF R.L. McCoy - Gavre exterior ballistics (zip file) - Supports the G1, G2, G5, G6, G7, G8, GS, GL, GI, GB and RA4 drag models PointBlank Ballistics (zip file) - Siacci/Mayevski G1 drag model. Remington Shoot! A ballistic calculator for Remington factory ammunition (based on Pinsoft's Shoot! software). - Siacci/Mayevski G1 drag model. JBM's small-arms ballistics calculators Online trajectory calculators - Supports the G1, G2, G5, G6, G7 (for some projectiles experimentally measured G7 ballistic coefficients), G8, GI, GL and for some projectiles Doppler radar-test derived (Cd) drag models. Pejsa Ballistics (MS Excel spreadsheet) - Pejsa model. Sharpshooter Friend (Palm PDA software) - Pejsa model.
Quick Target Unlimited, Lapua Edition - A version of QuickTARGET Unlimited ballistic software (requires free registration to download) - Supports the G1, G2, G5, G6, G7, G8, GL, GS Spherical 9/16"SAAMI, GS Spherical Don Miller, RA4, Soviet 1943, British 1909 Hatcher's Notebook and for some Lapua projectiles Doppler radar-test derived (Cd) drag models. Lapua Ballistics Exterior ballistic software for Java or Android mobile phones. Based on Doppler radar-test derived (Cd) drag models for Lapua projectiles and cartridges. Lapua Ballistics App 6 DoF model limited to Lapua bullets for Android and iOS. BfX - Ballistics for Excel Set of MS Excel add-in functions - Supports the G1, G2, G5, G6, G7, G8, RA4 and Pejsa drag models as well as one for air rifle pellets. Able to handle user-supplied models, e.g. Lapua projectiles' Doppler radar-test derived (Cd) ones. GunSim "GunSim" free browser-based ballistics simulator program for Windows and Mac. BallisticSimulator "Ballistic Simulator" free ballistics simulator program for Windows. 5H0T Free online web-based ballistics calculator, with data export capability and charting. SAKO Ballistics Free online ballistic calculator by SAKO. The calculator is also available as an Android app (and possibly for iOS as well) under the "SAKO Ballistics" name. py-ballisticcalc LGPL Python library for point-mass ballistic calculations. Ballistics Projectiles Aerodynamics Articles containing video clips
External ballistics
[ "Physics", "Chemistry", "Engineering" ]
14,776
[ "Applied and interdisciplinary physics", "Aerodynamics", "Aerospace engineering", "Ballistics", "Fluid dynamics" ]
584,987
https://en.wikipedia.org/wiki/Harmonic%20Convergence
The Harmonic Convergence was the world's first synchronized global peace meditation, coinciding with an exceptional alignment of Solar System planets on August 16–17, 1987. The event was organized by spouses José Argüelles and Lloydine Burris Argüelles, via the Planet Art Network (PAN), a peace movement they founded in 1983. Timing of the Harmonic Convergence allegedly marked a significant celestial alignment of the Sun, Moon, and six planets as "part of the grand trine." Origins Though Arguelles eventually connected the timing of the Harmonic Convergence with his understanding of the significance of Maya calendrics, the dates themselves were derived not from Maya cosmology but from the reconstructed Aztec prophecies of Tony Shearer in his 1971 book Lord of the Dawn. According to Shearer's interpretation of the Aztec calendar, the selected date marked the end of twenty-two cycles of 52 years each, or 1,144 years in all. The twenty-two cycles were divided into thirteen "heaven" cycles, which began in AD 843 and ended in 1519, when the nine "hell" cycles began, ending 468 years later in 1987. The very beginning of the nine "hell" cycles was precisely the day that Hernán Cortés landed in Mexico, April 22, 1519 (coinciding with "1 Reed" on the Aztec/Mayan calendar, the day sacred to Mesoamerican cultural hero Quetzalcoatl). The 9 hell cycles of 52 years each ended precisely on August 16–17, 1987. Shearer introduced the dates and the prophecy to Arguelles, and he eventually co-opted them and coined Harmonic Convergence to promote the event. Astrological alignment According to the astrologer Neil Michelsen's "The American Ephemeris," on 24 August 1987 there was an exceptional alignment of planets in the Solar System. Eight planets were aligned in an unusual configuration called a grand trine. The Sun, Moon and six out of eight planets formed part of the grand trine, that is, they were aligned at the apexes of an equilateral triangle when viewed from the Earth. The Sun, Moon, Mars, and Venus were in exact alignment, astrologically called a conjunction at the first degree of Virgo in Tropical Astrology. Mercury was in the fourth degree of Virgo which most astrologers count as part of the same conjunction being within the "orb" of influence. Jupiter was in Aries, and Saturn and Uranus in Sagittarius completing the grand trine. However some believe that this is an Earth grand trine with Sun/Moon/Mars/Venus/Mercury in the initial degrees of Virgo, Neptune at 5 degrees of Capricorn, and Jupiter in the last degree of Aries (anaretic degree), on the cusp of Taurus. Uranus, and especially Saturn are on the edge of this trine. There is disagreement regarding this occurrence being a unique event. Grand trines, where planets share 120 degree positions forming an equilateral triangle, are not uncommon or particularly noteworthy. Traditional astrology does not consider trines to be action points, and would not regard such an occurrence to be significant. Astrological interpretations The convergence is purported to have "corresponded with a great shift in the earth’s energy from warlike to peaceful." Believers of this esoteric prophecy maintain that the Harmonic Convergence ushered in a five-year period of Earth's "cleansing", where many of the planet's "false structures of separation" would collapse. 
Adherents deemed the event as beginning a new age of universal peace, with signs indicating a "major energy shift" was about to occur, a turning point in Earth's collective karma and dharma, and that this energy was powerful enough to change the global perspective of man from one of conflict to one of co-operation. Actress and author Shirley MacLaine called it a "window of light," allowing access to higher realms of awareness. According to Argüelles, the Harmonic Convergence also began the final 25-year countdown to the end of the Mayan Long Count in 2012, which would be the so-called end of history and the beginning of a new 5,125-year cycle. Evils of the modern world (war, materialism, violence, abuses, injustice, oppression, etc.) would have ended with the birth of the 6th Sun and the 5th Earth on December 21, 2012. Power centers An important part of the Harmonic Convergence observances was the idea of congregating at "power centers." Power centers were places, such as Mount Shasta, California, Mount Fuji, and Mount Yamnuska where the spiritual energy was held to be particularly strong. The belief was that if 144,000 people assembled at these power centers and meditated for peace, that the arrival of the new era would be facilitated. See also Big Generator, 1987 album by the band Yes including the song "Holy Lamb (Song for Harmonic Convergence)" Harmonic Convergence (The Legend of Korra) Planetary alignment World Contact Day References Adapted in part from the Wikinfo article Harmonic Convergence, licensed under the GNU Free Documentation License. • The Legend of Korra season 2 episodes 11, 12, 13, and 14 mentioned the Harmonic Convergence External links Thirty Years Ago, People Tried to Save the World By Meditating (Smithsonian, August 17, 2017) Almanac: The Harmonic Convergence (CBS Sunday Morning, August 16, 2015) Simulations of planetary positions on August 16, 1987 and December 21, 2012 Astrology Fringe theories 1987 in the United States 2012 phenomenon
Harmonic Convergence
[ "Astronomy" ]
1,144
[ "Astrology", "History of astronomy" ]
584,997
https://en.wikipedia.org/wiki/In-band%20signaling
In telecommunications, in-band signaling is the sending of control information within the same band or channel used for data such as voice or video. This is in contrast to out-of-band signaling, which is sent over a different channel, or even over a separate network. In-band signals may often be heard by telephony participants, while out-of-band signals are inaccessible to the user. The term is also used more generally, for example of computer data files that include both literal data and metadata and/or instructions for how to process the literal data. Telephony When dialing from a land-line telephone, the telephone number is encoded and transmitted across the telephone line in the form of dual-tone multi-frequency signaling (DTMF). The tones control the telephone system by instructing the telephone switch where to route the call. These control tones are sent over the same channel (the copper wire) and in the same frequency range (300 Hz to 3.4 kHz) as the audio of the telephone call. In-band signaling is also used on older telephone carrier systems to provide inter-exchange information for routing calls. Examples of this kind of in-band signaling system are the Signaling System No. 5 (SS5) and its predecessors, and R2 signalling. Separating the control signals, also referred to as the control plane, from the data, if a bit-transparent connection is desired, is usually done by escaping the control instructions. Occasionally, however, networks are designed so that data is, to a varying degree, garbled by the signaling. Allowing data to become garbled is usually acceptable when transmitting sounds between humans, since the users rarely notice the slight degradation, but it leads to problems when sending data that has very low error tolerance, such as information transmitted using a modem. In-band signaling is insecure because it exposes control signals, protocols and management systems to end users, which may result in falsing. In the 1960s and 1970s, so-called phone phreaks used blue boxes for deliberate falsing, in which the appropriate tones for routing were intentionally generated, enabling the caller to abuse functions intended for testing and administrative use and to make free long-distance calls. Modems may also interfere with in-band signaling, in which case a guard tone may be employed to prevent this. Voice over IP In voice over IP (VoIP), DTMF signals can be transmitted in-band by two methods. When transmitted as audio tones in the voice stream, the voice encoding must use a codec that preserves the tone waveforms, such as μ-law or A-law pulse-code modulation, to keep the frequency components intact. Even so, this method often proved unreliable and was subject to interference from other audio sources. The standard method is to digitally remove the DTMF tones from the audio at the source, so that they do not appear in the Real-time Transport Protocol (RTP) voice stream, and to encode them separately as a digital information payload, often termed named telephone events (NTE), according to RFC 4733. Such DTMF frames are transmitted in-band with all other RTP packets on the same network path. In contrast to in-band transmission of DTMF, VoIP signaling protocols also implement an out-of-band method of DTMF transmission. For example, the Session Initiation Protocol (SIP) and the Media Gateway Control Protocol (MGCP) define special message types for the transmission of digits.
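As a small illustration of why DTMF is an in-band technique, the sketch below synthesises a key press as the sum of two audio-frequency sine waves that travel in the same channel as the speech. The frequency pairs are the standard DTMF keypad assignments; the sample rate, amplitude and duration are arbitrary choices for the example and nothing here is tied to any particular telephony or VoIP stack.

    import math

    DTMF = {
        "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
        "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
        "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
        "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
    }

    def dtmf_samples(key, duration_s=0.2, sample_rate=8000):
        """Return audio samples for one key press, ready to mix into the voice path."""
        low, high = DTMF[key]
        n = int(duration_s * sample_rate)
        return [0.5 * (math.sin(2 * math.pi * low * i / sample_rate) +
                       math.sin(2 * math.pi * high * i / sample_rate))
                for i in range(n)]

    tone = dtmf_samples("5")
    print(len(tone))  # 1600 samples at 8 kHz for a 0.2 s tone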
Other applications As a method of in-band signaling, DTMF tones were also used by cable television broadcasters to indicate the start and stop times of local insertion points during station breaks for the benefit of cable companies. Until better, out-of-band signaling equipment was developed in the 1990s, fast, unacknowledged, and loud DTMF tone sequences could be heard during the commercial breaks of cable channels in the United States and elsewhere. These DTMF sequences were sent by the originating cable network's equipment at the uplink satellite facility, and were decoded by equipment at local cable companies. A specific tone sequence indicated the exact time that the feeds should be switched to and away from the master control feed, to locally-broadcast commercials. The following is an example of such a sequence by a cable company that communicated the following to the cable company's broadcast equipment: SWITCH TO LOCAL NOW - SWITCH TO LOCAL NOW - PREPARE TO SWITCH BACK - PREPARE TO SWITCH BACK - SWITCH BACK TO NATIONAL NOW - SWITCH BACK TO NATIONAL NOW - "IF YOU HAVEN'T SWITCHED BACK TO NATIONAL NOW, DO SO IMMEDIATELY" DTMF signaling in the cable industry was discontinued because it was distracting to viewers, and was susceptible to interference when DTMF tones were sounded by characters in television shows. For example, a character dialing a Touch-Tone telephone in a television show could cause the cable company computers to switch away from a "hot feed" to dead air, and the cost of human-imperceptible signaling technologies decreased. In-band signaling applies only to channel-associated signaling (CAS). In common channel signaling (CCS) separate channels are used for control and data, as opposed to the shared channel in CAS, so all control is out-of-band by definition. In computer data, the term refers to embedding any kind of metadata directly within regular data. These uses have similar tradeoffs as in telecommunications, such as opening an attack surface vs. simplifying processing. A few of many examples: Embedding a magic number at the very start of files, to signal the format or language of the following data. Embedding a NULL character as in C strings, to signal the end of the string (as opposed to keeping that information outside the string). Embedding markup within text, whether to categorize parts of the text, provide processing or formatting instructions, or for other purposes. Reserving some characters in regular expressions, such as "*", to have special processing meanings, rather than representing literals. Embedding control codes in computer terminal input as a means of device control, allowing command-line users to issue single-character commands directly, e.g. issues a ^D code, causing command-line programs to expect no further input from the user, and therefore to quit. When out-of-band communication is unavailable, one of two techniques may be used to preserve network transparency. Encapsulation: The bundling of the control data in the packet's header and then removing the header (and/or footer) of the packet at the far end, restoring the data to be the same as the original. Bit stuffing: The insertion of non-information or escape characters to modify, synchronize and justify the data so it never looks like signaling information (and remove the stuffed bits and escape codes at the far end, restoring the data to be the same as the original). 
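A minimal sketch of the escaping idea described above. It borrows the HDLC-style byte values (a 0x7E framing flag and a 0x7D escape byte) purely as a familiar convention; the point is only that any payload byte that could be mistaken for signalling is transformed on the way in and restored on the way out, so the receiver can always tell data from control information.

    FLAG, ESC = 0x7E, 0x7D

    def stuff(payload: bytes) -> bytes:
        """Frame the payload so no data byte can be confused with the flag."""
        out = bytearray([FLAG])
        for b in payload:
            if b in (FLAG, ESC):
                out += bytes([ESC, b ^ 0x20])   # escape and transform the byte
            else:
                out.append(b)
        out.append(FLAG)
        return bytes(out)

    def unstuff(frame: bytes) -> bytes:
        """Strip the framing flags and undo the escaping."""
        out, escaped = bytearray(), False
        for b in frame[1:-1]:
            if escaped:
                out.append(b ^ 0x20)
                escaped = False
            elif b == ESC:
                escaped = True
            else:
                out.append(b)
        return bytes(out)

    data = bytes([0x01, 0x7E, 0x02, 0x7D, 0x03])
    assert unstuff(stuff(data)) == data   # the payload round-trips unchanged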
See also Control character Escape sequence In-band control Line signaling Out-of-band control Quindar tones +++ (modem) References Network management Telephony signals
In-band signaling
[ "Engineering" ]
1,451
[ "Computer networks engineering", "Network management" ]
585,102
https://en.wikipedia.org/wiki/Julius%20von%20Mayer
Julius Robert von Mayer (25 November 1814 – 20 March 1878) was a German physician, chemist, and physicist and one of the founders of thermodynamics. He is best known for enunciating in 1841 one of the original statements of the conservation of energy or what is now known as one of the first versions of the first law of thermodynamics, namely that "energy can be neither created nor destroyed". In 1842, Mayer described the vital chemical process now referred to as oxidation as the primary source of energy for any living creature. He also proposed that plants convert light into chemical energy. His achievements were overlooked and priority for the discovery in 1842 of the mechanical equivalent of heat was attributed to James Joule in the following year. Early life Mayer was born on 25 November 1814 in Heilbronn, Württemberg (Baden-Württemberg, modern day Germany), the son of a pharmacist. He grew up in Heilbronn. After completing his Abitur, he studied medicine at the University of Tübingen, where he was a member of the Corps Guestphalia, a German Student Corps. During 1838 he attained his doctorate as well as passing the Staatsexamen. After a stay in Paris (1839/40) he left as a ship's physician on a Dutch three-mast sailing ship for a journey to Jakarta. Although he had hardly been interested before this journey in physical phenomena, his observation that storm-whipped waves are warmer than the calm sea started him thinking about the physical laws, in particular about the physical phenomenon of warmth and the question whether the directly developed heat alone (the heat of combustion), or the sum of the quantities of heat developed in direct and indirect ways are to be accounted for in the burning process. After his return in February 1841 Mayer dedicated his efforts to solve this problem. In 1841 he settled in Heilbronn and married. Development of ideas Even as a young child, Mayer showed an intense interest with various mechanical mechanisms. He was a young man who performed various experiments of the physical and chemical variety. In fact, one of his favorite hobbies was creating various types of electrical devices and air pumps. It was obvious that he was intelligent. Hence, Mayer attended Eberhard-Karls University in May 1832. He studied medicine during his time there. In 1837, he and some of his friends were arrested for wearing the couleurs of a forbidden organization. The consequences for this arrest included a one year expulsion from the college and a brief period of incarceration. This diversion sent Mayer traveling to Switzerland, France, and the Dutch East Indies. Mayer drew some additional interest in mathematics and engineering from his friend Carl Baur through private tutoring. In 1841, Mayer returned to Heilbronn to practice medicine, but physics became his new passion. In June 1841 he completed his first scientific paper entitled "On the Quantitative and Qualitative Determination of Forces". It was largely ignored by other professionals in the area. Then, Mayer became interested in the area of heat and its motion. He presented a value in numerical terms for the mechanical equivalent of heat. He also was the first person to describe the vital chemical process now referred to as oxidation as the primary source of energy for any living creature. In 1848 he calculated that in the absence of a source of energy the Sun would cool down in only 5000 years, and he suggested that the impact of meteorites kept it hot. 
Since he was not taken seriously at the time, his achievements were overlooked and credit was given to James Joule. Mayer almost committed suicide after he discovered this fact. He spent some time in mental institutions to recover from this and the loss of some of his children. Several of his papers were published due to the advanced nature of the physics and chemistry. He was awarded an honorary doctorate in 1859 by the philosophical faculty at the University of Tübingen. His overlooked work was revived in 1862 by fellow physicist John Tyndall in a lecture at the London Royal Institution. In July 1867 Mayer published "Die Mechanik der Wärme." This publication dealt with the mechanics of heat and its motion. On 5 November 1867 Mayer was awarded personal nobility by the Kingdom of Württemberg (von Mayer), which is the German equivalent of a British knighthood. Von Mayer died in Germany in 1878. After Sadi Carnot stated it for caloric, Mayer was the first person to state the law of the conservation of energy, one of the most fundamental tenets of modern-day physics. The law of the conservation of energy states that the total mechanical energy of a system remains constant in any isolated system of objects that interact with each other only by way of forces that are conservative. Mayer's first attempt at stating the conservation of energy was a paper he sent to Johann Christian Poggendorff's Annalen der Physik, in which he postulated a conservation of force (Erhaltungssatz der Kraft). However, owing to Mayer's lack of advanced training in physics, it contained some fundamental mistakes and was not published. Mayer continued to pursue the idea steadfastly and argued with the Tübingen physics professor Johann Gottlieb Nörremberg, who rejected his hypothesis. Nörremberg did, however, give Mayer a number of valuable suggestions on how the idea could be examined experimentally; for example, if kinetic energy transforms into heat energy, water should be warmed by vibration. Mayer not only performed this demonstration, but also determined the quantitative factor of the transformation, calculating the mechanical equivalent of heat. The result of his investigations was published in 1842 in the May edition of Justus von Liebig's Annalen der Chemie und Pharmacie. It was translated as Remarks on the Forces of Inorganic Nature. In his booklet Die organische Bewegung im Zusammenhang mit dem Stoffwechsel (The Organic Movement in Connection with the Metabolism, 1845) he specified the numerical value of the mechanical equivalent of heat: at first as 365 kgf·m/kcal, later as 425 kgf·m/kcal; the modern values are 4.184 kJ/kcal (426.6 kgf·m/kcal) for the thermochemical calorie and 4.1868 kJ/kcal (426.9 kgf·m/kcal) for the international steam table calorie. This relation implies that, although work and heat are different forms of energy, they can be transformed into one another. This law is now called the first law of thermodynamics, and led to the formulation of the general principle of conservation of energy, definitively stated by Hermann von Helmholtz in 1847. Mayer's relation Mayer derived a relation between the specific heat at constant pressure and the specific heat at constant volume for an ideal gas. The relation is CP,m − CV,m = R, where CP,m is the molar specific heat at constant pressure, CV,m is the molar specific heat at constant volume and R is the gas constant.
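A compact sketch of where the relation comes from, using the standard modern textbook derivation from the first law and the ideal-gas law (an illustration, not Mayer's own 1842 reasoning):

    \begin{aligned}
    H &= U + pV = U + nRT
        && \text{(enthalpy, using the ideal-gas law } pV = nRT\text{)} \\
    C_P &= \left(\frac{\partial H}{\partial T}\right)_P
         = \left(\frac{\partial U}{\partial T}\right)_P + nR
         = C_V + nR
        && \text{(for an ideal gas, } U \text{ depends on } T \text{ only)} \\
    &\Rightarrow\quad C_{P,m} - C_{V,m} = R
        && \text{(dividing by the amount of substance } n\text{)}
    \end{aligned}

As a consistency check on the figures quoted above, dividing 4186.8 J/kcal by the standard gravitational acceleration 9.80665 m/s² gives about 426.9 kgf·m/kcal, matching the modern value for the international steam table calorie.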
Later life Mayer was aware of the importance of his discovery, but his inability to express himself scientifically led to degrading speculation and resistance from the scientific establishment. Contemporary physicists rejected his principle of conservation of energy, and even acclaimed physicists Hermann von Helmholtz and James Prescott Joule viewed his ideas with hostility. The former doubted Mayer's qualifications in physical questions, and a bitter dispute over priority developed with the latter. In 1848 two of his children died rapidly in succession, and Mayer's mental health deteriorated. He attempted suicide on 18 May 1850 and was committed to a mental institution. After he was released, he was a broken man and only timidly re-entered public life in 1860. However, in the meantime, his scientific fame had grown and he received a late appreciation of his achievement, although perhaps at a stage where he was no longer able to enjoy it. He continued to work vigorously as a physician until his death. Honors 1840 Mayer received the Knight Cross of the Order of the Crown (Württemberg). 1869 Mayer received the prix Poncelet. The Robert-Mayer-Gymnasium and the Robert-Mayer-Volks- und Schulsternwarte in Heilbronn bear his name. In chemistry, he invented Mayer's reagent which is used in detecting alkaloids. Works Ueber das Santonin : eine Inaugural-Dissertation, welche zur Erlangung der Doctorwürde in der Medicin & Chirurgie unter dem Praesidium von Wilhelm Rapp im July 1838 der öffentlichen Prüfung vorlegt Julius Robert Mayer . M. Müller, Heilbronn 1838 Digital edition by the University and State Library Düsseldorf References Further reading External links 1814 births 1878 deaths 19th-century German physicists Recipients of the Copley Medal People from Heilbronn Thermodynamicists
Julius von Mayer
[ "Physics", "Chemistry" ]
1,815
[ "Thermodynamics", "Thermodynamicists" ]
585,143
https://en.wikipedia.org/wiki/Closed-form%20expression
In mathematics, an expression or equation is in closed form if it is formed with constants, variables and a finite set of basic functions connected by arithmetic operations (+, −, ×, /, and integer powers) and function composition. Commonly, the allowed functions are nth root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context. The closed-form problem arises when new ways are introduced for specifying mathematical objects, such as limits, series and integrals: given an object specified with such tools, a natural problem is to find, if possible, a closed-form expression of this object, that is, an expression of this object in terms of previous ways of specifying it. Example: roots of polynomials The quadratic formula x = (−b ± √(b² − 4ac)) / (2a) is a closed form of the solutions to the general quadratic equation ax² + bx + c = 0 (with a ≠ 0). More generally, in the context of polynomial equations, a closed form of a solution is a solution in radicals; that is, a closed-form expression for which the allowed functions are only nth roots and the field operations +, −, ×, /. In fact, field theory allows showing that if a solution of a polynomial equation has a closed form involving exponentials, logarithms or trigonometric functions, then it also has a closed form that does not involve these functions. There are expressions in radicals for all solutions of cubic equations (degree 3) and quartic equations (degree 4). The size of these expressions increases significantly with the degree, limiting their usefulness. In higher degrees, the Abel–Ruffini theorem states that there are equations whose solutions cannot be expressed in radicals, and thus have no closed forms. A simple example is the equation x⁵ − x − 1 = 0. Galois theory provides an algorithmic method for deciding whether a particular polynomial equation can be solved in radicals. Symbolic integration Symbolic integration consists essentially of the search for closed forms for antiderivatives of functions that are specified by closed-form expressions. In this context, the basic functions used for defining closed forms are commonly logarithms, the exponential function and polynomial roots. Functions that have a closed form in terms of these basic functions are called elementary functions and include trigonometric functions, inverse trigonometric functions, hyperbolic functions, and inverse hyperbolic functions. The fundamental problem of symbolic integration is thus, given an elementary function specified by a closed-form expression, to decide whether its antiderivative is an elementary function, and, if it is, to find a closed-form expression for this antiderivative. For rational functions, that is, fractions of two polynomial functions, antiderivatives are not always rational fractions, but they are always elementary functions that may involve logarithms and polynomial roots. This is usually proved with partial fraction decomposition. The need for logarithms and polynomial roots is illustrated by the formula ∫ p(x)/q(x) dx = Σ_{α : q(α) = 0} (p(α)/q′(α)) ln(x − α), which is valid whenever p and q are coprime polynomials such that q is square-free and deg p < deg q. Alternative definitions Changing the definition of "well known" to include additional functions can change the set of equations with closed-form solutions. Many cumulative distribution functions cannot be expressed in closed form, unless one considers special functions such as the error function or gamma function to be well known. It is possible to solve the quintic equation if general hypergeometric functions are included, although the solution is far too complicated algebraically to be useful.
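The contrast between degree 2 and degree 5 can be seen directly in a computer algebra system. The sketch below assumes SymPy is available and simply asks it for closed forms; for the quintic x**5 - x - 1 = 0 used above it can only return implicit root objects and numerical approximations, because no expression in radicals exists.

    import sympy as sp

    x, a, b, c = sp.symbols("x a b c")

    # Degree 2: SymPy returns the familiar quadratic formula in radicals.
    print(sp.solve(a * x**2 + b * x + c, x))

    # Degree 5: no solution in radicals, only implicit CRootOf objects.
    roots = sp.solve(x**5 - x - 1, x)
    print(roots[0])           # CRootOf(x**5 - x - 1, 0)
    print(roots[0].evalf())   # about 1.1673, a numerical value rather than a closed form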
For many practical computer applications, it is entirely reasonable to assume that the gamma function and other special functions are well known, since numerical implementations are widely available. Analytic expression An analytic expression (also known as an expression in analytic form or an analytic formula) is a mathematical expression constructed using well-known operations that lend themselves readily to calculation. Similar to closed-form expressions, the set of well-known functions allowed can vary according to context but always includes the basic arithmetic operations (addition, subtraction, multiplication, and division), exponentiation to a real exponent (which includes extraction of the nth root), logarithms, and trigonometric functions. However, the class of expressions considered to be analytic expressions tends to be wider than that for closed-form expressions. In particular, special functions such as the Bessel functions and the gamma function are usually allowed, and often so are infinite series and continued fractions. On the other hand, limits in general, and integrals in particular, are typically excluded. If an analytic expression involves only the algebraic operations (addition, subtraction, multiplication, division, and exponentiation to a rational exponent) and rational constants, then it is more specifically referred to as an algebraic expression. Comparison of different classes of expressions Closed-form expressions are an important sub-class of analytic expressions, which contain a finite number of applications of well-known functions. Unlike the broader analytic expressions, closed-form expressions do not include infinite series or continued fractions; neither includes integrals or limits. Indeed, by the Stone–Weierstrass theorem, any continuous function on the unit interval can be expressed as a limit of polynomials, so any class of functions containing the polynomials and closed under limits will necessarily include all continuous functions. Similarly, an equation or system of equations is said to have a closed-form solution if, and only if, at least one solution can be expressed as a closed-form expression; and it is said to have an analytic solution if and only if at least one solution can be expressed as an analytic expression. There is a subtle distinction between a "closed-form function" and a "closed-form number" in the discussion of a "closed-form solution", discussed below. A closed-form or analytic solution is sometimes referred to as an explicit solution. Dealing with non-closed-form expressions Transformation into closed-form expressions The expression f(x) = Σ_{i=0}^∞ x/2^i is not in closed form because the summation entails an infinite number of elementary operations. However, by summing a geometric series this expression can be expressed in the closed form f(x) = 2x. Differential Galois theory The integral of a closed-form expression may or may not itself be expressible as a closed-form expression. This study is referred to as differential Galois theory, by analogy with algebraic Galois theory. The basic theorem of differential Galois theory is due to Joseph Liouville in the 1830s and 1840s and hence is referred to as Liouville's theorem.
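Referring back to the transformation example above, a computer algebra system can often carry out exactly this kind of step from an infinite sum to a closed form. A short SymPy sketch, assuming SymPy is available; the particular series is only an illustration and the printed form of the result may vary between versions.

    import sympy as sp

    x, i = sp.symbols("x i")

    series = sp.Sum(x / 2**i, (i, 0, sp.oo))   # the non-closed-form infinite sum
    print(series.doit())                       # expected to reduce to 2*x, the closed form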
A standard example of an elementary function whose antiderivative does not have a closed-form expression is e^(−x²), one antiderivative of which is (up to a multiplicative constant) the error function erf(x) = (2/√π) ∫₀ˣ e^(−t²) dt. Mathematical modelling and computer simulation Equations or systems too complex for closed-form or analytic solutions can often be analysed by mathematical modelling and computer simulation. Closed-form number Three subfields of the complex numbers have been suggested as encoding the notion of a "closed-form number"; in increasing order of generality, these are the Liouvillian numbers (not to be confused with Liouville numbers in the sense of rational approximation), EL numbers and elementary numbers. The Liouvillian numbers, denoted L, form the smallest algebraically closed subfield of the complex numbers closed under exponentiation and logarithm (formally, the intersection of all such subfields); that is, numbers which involve explicit exponentiation and logarithms, but allow explicit and implicit polynomials (roots of polynomials). The set L was originally referred to as the elementary numbers, but this term is now used more broadly to refer to numbers defined explicitly or implicitly in terms of algebraic operations, exponentials, and logarithms. A narrower definition, denoted E and referred to as the EL numbers, is the smallest subfield of the complex numbers closed under exponentiation and logarithm; this need not be algebraically closed, and corresponds to explicit algebraic, exponential, and logarithmic operations. "EL" stands both for "exponential–logarithmic" and as an abbreviation for "elementary". Whether a number is a closed-form number is related to whether a number is transcendental. Formally, Liouvillian numbers and elementary numbers contain the algebraic numbers, and they include some but not all transcendental numbers. In contrast, EL numbers do not contain all algebraic numbers, but do include some transcendental numbers. Closed-form numbers can be studied via transcendental number theory, in which a major result is the Gelfond–Schneider theorem, and a major open question is Schanuel's conjecture. Numerical computations For purposes of numeric computations, being in closed form is not in general necessary, as many limits and integrals can be efficiently computed. Some equations have no closed-form solution, such as those that represent the three-body problem or the Hodgkin–Huxley model. Therefore, the future states of these systems must be computed numerically. Conversion from numerical forms There is software that attempts to find closed-form expressions for numerical values, including RIES, comparable functions in Maple and SymPy, Plouffe's Inverter, and the Inverse Symbolic Calculator. See also Notes References Further reading External links Closed-form continuous-time neural networks Algebra Special functions
Closed-form expression
[ "Mathematics" ]
1,871
[ "Special functions", "Algebra", "Combinatorics" ]
585,182
https://en.wikipedia.org/wiki/F%C3%A9lix%20Gaillard
Félix Gaillard d'Aimé (; 5 November 1919 – 10 July 1970) was a French Radical politician who served as Prime Minister under the Fourth Republic from 1957 to 1958. He was the youngest head of a French government since Napoleon. Career A senior civil servant in the Inland Revenue Service, Gaillard joined the Resistance and served on its Finance committee. As a member of the Radical Party, he was elected deputy of Charente département in 1946. During the Fourth Republic, he held a number of governmental offices, notably as Minister of Economy and Finance in 1957. Prime minister He became Prime Minister in 1957, but, not unusually for the French Fourth Republic; his term of office lasted only a few months. Gaillard was defeated in a vote of no confidence by the French National Assembly, in March 1958, after the bombing of Sakiet-Sidi-Youssef, a Tunisian village. Later political career President of the Radical Party from 1958 to 1961, he advocated an alliance of the center-left and the center-right parties. He represented a generation of young politicians whose careers were blighted by the advent of the Fifth Republic. Death Gaillard was last seen alive on 9 July 1970, when he and three passengers boarded his yacht, the Marie Grillon and departed the island of Jersey to return to the French mainland after a brief stay. The next day, bits of the wreckage of the yacht were found at the Minquiers reefs, along with the bodies of the two passengers. Gaillard's body was found, along with that of another passenger, floating in the English Channel on 12 July. Gaillard's Ministry, 6 November 1957 – 14 May 1958 Félix Gaillard – President of the Council Christian Pineau – Minister of Foreign Affairs Jacques Chaban-Delmas – Minister of National Defense and Armed Forces Maurice Bourgès-Maunoury – Minister of the Interior Pierre Pflimlin – Minister of Finance, Economic Affairs, and Planning Paul Ribeyre – Minister of Commerce and Industry Paul Bacon – Minister of Labour and Social Security Robert Lecourt – Minister of Justice René Billères – Minister of National Education, Youth, and Sports Antoine Quinson – Minister of Veterans and War Victims Roland Boscary-Monsservin – Minister of Agriculture Gérard Jaquet – Minister of Overseas France Édouard Bonnefous – Minister of Public Works, Transport, and Tourism Félix Houphouët-Boigny – Minister of Public Health and Population Pierre Garet – Minister of Reconstruction and Housing Max Lejeune – Minister for the Sahara References 1919 births 1970 deaths Politicians from Paris Radical Party (France) politicians Prime ministers of France Finance ministers of France Deputies of the 1st National Assembly of the French Fourth Republic Deputies of the 2nd National Assembly of the French Fourth Republic Deputies of the 3rd National Assembly of the French Fourth Republic Deputies of the 1st National Assembly of the French Fifth Republic Deputies of the 2nd National Assembly of the French Fifth Republic Deputies of the 3rd National Assembly of the French Fifth Republic Deputies of the 4th National Assembly of the French Fifth Republic Members of Parliament for Charente Sciences Po alumni University of Paris alumni French Resistance members French people of the First Indochina War French people of the Algerian War Deaths due to shipwreck at sea Deaths from explosion
Félix Gaillard
[ "Chemistry" ]
655
[ "Deaths from explosion", "Explosions" ]
585,185
https://en.wikipedia.org/wiki/Chemical%20Abstracts%20Service
Chemical Abstracts Service (CAS) is a division of the American Chemical Society. It is a source of chemical information and is located in Columbus, Ohio, United States. Print periodicals Chemical Abstracts is a periodical index that provides numerous tools such as SciFinder as well as tagged keywords, summaries, indexes of disclosures, and structures of compounds in recently published scientific documents. Approximately 8,000 journals, technical reports, dissertations, conference proceedings, and new books, available in at least 50 different languages, are monitored yearly, as are patent specifications from 27 countries and two international organizations. Chemical Abstracts ceased print publication on January 1, 2010. Databases The two principal databases that support the different products are CAplus and Registry. CAS References CAS References consists of bibliographic information and abstracts for all articles in chemical journals worldwide, and chemistry-related articles from all scientific journals, patents, and other scientific publications. Registry The CAS Registry contains information on more than 200 million organic and inorganic substances, and about 70 million protein and nucleic acid sequences. The sequence information comes from CAS and GenBank, produced by the National Institutes of Health. The chemical information is produced by CAS, and is prepared by the CAS Registry System, which identifies each compound with a specific CAS registry number, index name, and graphic representation of its chemical structure. The assignment of chemical names is done according to the chemical nomenclature rules for CA index names, which differ slightly from the internationally standard IUPAC names assigned under the rules of IUPAC. Products CAS databases are available via two principal database systems, STN and SciFinder. STN STN (Scientific & Technical Information Network) International is operated jointly by CAS and FIZ Karlsruhe, and is intended primarily for information professionals, using a command language interface. In addition to CAS databases, STN also provides access to many other databases, similar to Dialog. SciFinder SciFinder is a database of chemical and bibliographic information. Originally it was available only as a client application (for both Windows and MacOS operating systems); a web version was released in 2008. By that time it had a graphical interface and was able to do graphical searches for chemical structures and reactions (the first database to allow such functions), as well as keyword searches for literature in chemistry and related disciplines. SciFinder Scholar was a very similar product developed for academic institutions; it was discontinued in 2023. In 2017 the ACS released SciFinder-n as a web-only product with the same data content and an improved user interface and search functions. SciFinder is considered the best source of chemical information worldwide, with a substantially larger number of relevant information sources than Web of Science or Scopus with Reaxys. However, due to its unique and unusual search functions, substantial training is needed in order to fully take advantage of SciFinder's capabilities. CASSI CASSI stands for Chemical Abstracts Service Source Index. Since 2009, this formerly print and CD-ROM compilation has been available as a free online resource to look up and confirm publication information.
The online CASSI Search Tool provides titles and abbreviations, CODEN, ISSN, publisher, and date of first issue (history) for a selected journal. Also included is its language of text and language of summaries. The range is from 1907 to the present, including both serial and non-serial scientific and technical publications. The database is updated quarterly. Beyond CASSI lists abbreviated journal titles from early chemical literature and other historical reference sources. History Chemical Abstracts (CA) began as a volunteer effort and developed from there. The use of volunteer abstractors was phased out in 1994. Chemical Abstracts has been associated with the American Chemical Society in one way or another since 1907. For many years, beginning in 1909, the offices of Chemical Abstracts were housed in various places on the Columbus, Ohio campus of Ohio State University, including McPherson Laboratory and Watts Hall. In 1965, CAS moved to a new site on the west bank of the Olentangy River, just north of The Ohio State campus. This campus became well known in the Columbus area and famous as the site of many Columbus Symphony Orchestra pop concerts. In 2009, the campus consisted of three buildings. In 1907, William A. Noyes had enlarged the Review of American Chemical Research, an abstracting publication begun by Arthur Noyes in 1895 that was the forerunner of Chemical Abstracts. When it became evident that a separate publication containing these abstracts was needed, Noyes became the first editor of the new publication, Chemical Abstracts. E. J. Crane became the first Director of Chemical Abstracts Service when it became an American Chemical Society division in 1956. Crane had been CA editor since 1915, and his dedication was a key factor in its long-term success. Dale B. Baker became the CAS Director upon Crane's retirement in 1958. According to CAS, his visionary view of CAS' potential "led to expansion, modernization, and the forging of international alliances with other information organizations." CAS was an early leader in the use of computer technology to organize and disseminate information. The CAS Chemical Registry System was introduced in 1965. CAS developed a unique registry number to identify chemical substances. Agencies such as the U.S. Environmental Protection Agency and local fire departments around the world now rely on these numbers for the definite identification of substances. According to the ACS, this is the largest chemical substance database in the world. In 1965, CAS left their offices at OSU for a new headquarters north of campus. Ground was broken in 1971 for an expansion to the building designed by architects Brubaker/Brandt to accommodate the review of 400,000 new research reports printed each year. The 5-story 142,000 square foot building opened in May 1973. In 2007, the ACS designated its Chemical Abstracts Service subdivision an ACS National Historic Chemical Landmark in recognition of its significance as a comprehensive repository of research in chemistry and related sciences. In 2021, CAS rebranded along with a change in logo. The organization updated their mission to be more focused on dynamic responsiveness due to ongoing changes within scientific industries and communities. In 2022, CAS announced the release of almost half a million CAS registry numbers under an open license in their Common Chemistry project. 
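CAS Registry Numbers carry a simple, widely documented check digit: the final digit equals the sum of the preceding digits, each multiplied by its position counted from the right, taken modulo 10. The Python sketch below is illustrative only and is not an official CAS tool.

    def cas_checksum_ok(cas_rn: str) -> bool:
        """Validate the check digit of a CAS Registry Number such as '7732-18-5'."""
        digits = cas_rn.replace("-", "")
        body, check = digits[:-1], int(digits[-1])
        total = sum(int(d) * pos for pos, d in enumerate(reversed(body), start=1))
        return total % 10 == check

    print(cas_checksum_ok("7732-18-5"))   # True  (water)
    print(cas_checksum_ok("64-17-5"))     # True  (ethanol)
    print(cas_checksum_ok("64-17-6"))     # False (corrupted check digit)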
See also Beilstein database Chemical database ChemInform ChemSpider SPRESI database FIZ Karlsruhe Google Scholar Inorganic Crystal Structure Database List of academic databases and search engines List of chemical databases List of open-access journals List of scientific journals PubChem References External links American Chemical Society Chemical databases Chemistry journals Bibliographic databases and indexes 1907 establishments in Ohio Companies based in the Columbus, Ohio metropolitan area
Chemical Abstracts Service
[ "Chemistry" ]
1,345
[ "American Chemical Society", "Chemical databases" ]
585,226
https://en.wikipedia.org/wiki/Cosmography
The term cosmography has two distinct meanings: traditionally it has been the protoscience of mapping the general features of the cosmos, heaven and Earth; more recently, it has been used to describe the ongoing effort to determine the large-scale features of the observable universe. Premodern cosmography largely followed the tradition of ancient Near Eastern cosmology, which was dominant in the Ancient Near East and in early Greece. Traditional usage The 14th-century work 'Aja'ib al-makhluqat wa-ghara'ib al-mawjudat by Persian physician Zakariya al-Qazwini is considered to be an early work of cosmography. Traditional Hindu, Buddhist and Jain cosmographies schematize a universe centered on Mount Meru surrounded by rivers, continents and seas. These cosmographies posit a universe being repeatedly created and destroyed over time cycles of immense lengths. In 1551, Martín Cortés de Albacar, from Zaragoza, Spain, published Breve compendio de la esfera y del arte de navegar. Translated into English and reprinted several times, the work was of great influence in Britain for many years. He proposed spherical charts and mentioned magnetic deviation and the existence of magnetic poles. Peter Heylin's 1652 book Cosmographie (enlarged from his Microcosmos of 1621) was one of the earliest attempts to describe the entire world in English, and contains the first known description of Australia and among the first of California. The book has four sections, examining the geography, politics, and cultures of Europe, Asia, Africa, and America, with an addendum on Terra Incognita, including Australia, and extending to Utopia, Fairyland, and the "Land of Chivalrie". In 1659, Thomas Porter published a smaller, but extensive Compendious Description of the Whole World, which also included a chronology of world events from Creation forward. These were all part of a major trend in the European Renaissance to explore (and perhaps comprehend) the known world. Modern usage In astrophysics, the term "cosmography" is beginning to be used to describe attempts to determine the large-scale matter distribution and kinematics of the observable universe assuming the Friedmann–Lemaître–Robertson–Walker metric, but independent of any model for how the temporal dependence of the scale factor follows from the matter/energy composition of the Universe. The word was also commonly used by Buckminster Fuller in his lectures. Using the Tully-Fisher relation on a catalog of 10,000 galaxies has allowed the construction of 3D images of the local structure of the cosmos. This led to the identification of a local supercluster named the Laniakea Supercluster. See also Johann Bayer Andreas Cellarius Cosmographia Julius Schiller Star cartography Chronology of the Universe Cosmogony Cosmology Timeline of cosmological theories Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure Large-scale structure of the cosmos Timeline of astronomical maps, catalogs, and surveys References Physical cosmology
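To sketch what such a kinematic, model-independent description looks like in practice, a standard low-redshift cosmographic expansion of the luminosity distance (shown here only to first order in redshift, assuming the FLRW metric as above) is d_L(z) ≈ (cz/H0) [1 + (1/2)(1 − q0) z + O(z²)], where H0 is the Hubble constant and q0 the deceleration parameter; fitting such expansions to distance data constrains the expansion kinematics without assuming any particular matter/energy content.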
Cosmography
[ "Physics", "Astronomy" ]
643
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
585,271
https://en.wikipedia.org/wiki/Perfect%20group
In mathematics, more specifically in group theory, a group is said to be perfect if it equals its own commutator subgroup, or equivalently, if the group has no non-trivial abelian quotients. Examples The smallest (non-trivial) perfect group is the alternating group A5. More generally, any non-abelian simple group is perfect since the commutator subgroup is a normal subgroup with abelian quotient. However, a perfect group need not be simple; for example, the special linear group over the field with 5 elements, SL(2,5) (or the binary icosahedral group, which is isomorphic to it) is perfect but not simple (it has a non-trivial center containing the scalar matrix −I). The direct product of any two simple non-abelian groups is perfect but not simple; the commutator of two elements is [(a,b),(c,d)] = ([a,c],[b,d]). Since commutators in each simple group form a generating set, pairs of commutators form a generating set of the direct product. The fundamental group of the Poincaré homology sphere is a perfect group of order 120. More generally, a quasisimple group (a perfect central extension of a simple group) that is a non-trivial extension (and therefore not a simple group itself) is perfect but not simple; this includes all the insoluble non-simple finite special linear groups SL(n,q) as extensions of the projective special linear group PSL(n,q) (SL(2,5) is an extension of PSL(2,5), which is isomorphic to A5). Similarly, the special linear group over the real and complex numbers is perfect, but the general linear group GL is never perfect (except when trivial or over the field with two elements, where it equals the special linear group), as the determinant gives a non-trivial abelianization and indeed the commutator subgroup is SL. A non-trivial perfect group, however, is necessarily not solvable, and 4 divides its order (if finite); moreover, if 8 does not divide the order, then 3 does. Every acyclic group is perfect, but the converse is not true: A5 is perfect but not acyclic (in fact, not even superperfect). In fact, for n ≥ 5 the alternating group An is perfect but not superperfect: its Schur multiplier H2(An, Z) is Z/2 for n = 5 and n ≥ 8 (and Z/6 for n = 6, 7), hence non-trivial. Any quotient of a perfect group is perfect. A non-trivial finite perfect group that is not simple must then be an extension of at least one smaller simple non-abelian group. But it can be the extension of more than one simple group. In fact, the direct product of perfect groups is also perfect. Every perfect group G determines another perfect group E (its universal central extension) together with a surjection f: E → G whose kernel is in the center of E, such that f is universal with this property. The kernel of f is called the Schur multiplier of G because it was first studied by Issai Schur in 1904; it is isomorphic to the homology group H2(G, Z). In the plus construction of algebraic K-theory, if we consider the group GL(A) (the direct limit of the groups GLn(A)) for a commutative ring A, then the subgroup E(A) of elementary matrices forms a perfect subgroup. Ore's conjecture As the commutator subgroup is generated by commutators, a perfect group may contain elements that are products of commutators but not themselves commutators. Øystein Ore proved in 1951 that the alternating groups on five or more elements contained only commutators, and conjectured that this was so for all the finite non-abelian simple groups. Ore's conjecture was finally proven in 2008. The proof relies on the classification theorem. Grün's lemma A basic fact about perfect groups is Grün's lemma, a proposition due to Otto Grün: the quotient of a perfect group by its center is centerless (has trivial center). 
Proof: If G is a perfect group, let Z1 and Z2 denote the first two terms of the upper central series of G (i.e., Z1 is the center of G, and Z2/Z1 is the center of G/Z1). If H and K are subgroups of G, denote the commutator of H and K by [H, K] and note that [Z1, G] = 1 and [Z2, G] ⊆ Z1, and consequently (the convention that [X, Y, Z] = [[X, Y], Z] is followed): [G, Z2, G] = [[G, Z2], G] ⊆ [Z1, G] = 1 and [Z2, G, G] = [[Z2, G], G] ⊆ [Z1, G] = 1. By the three subgroups lemma (or equivalently, by the Hall-Witt identity), it follows that [G, Z2] = [[G, G], Z2] = [G, G, Z2] = {1}. Therefore, Z2 ⊆ Z1 = Z(G), and the center of the quotient group G / Z(G) is the trivial group. As a consequence, all higher centers (that is, higher terms in the upper central series) of a perfect group equal the center. Group homology In terms of group homology, a perfect group is precisely one whose first homology group vanishes: H1(G, Z) = 0, as the first homology group of a group is exactly the abelianization of the group, and perfect means trivial abelianization. An advantage of this definition is that it admits strengthening: A superperfect group is one whose first two homology groups vanish: H1(G, Z) = H2(G, Z) = 0. An acyclic group is one all of whose (reduced) homology groups vanish. (This is equivalent to all homology groups other than H0 vanishing.) Quasi-perfect group Especially in the field of algebraic K-theory, a group is said to be quasi-perfect if its commutator subgroup is perfect; in symbols, a quasi-perfect group is one such that G(1) = G(2) (the commutator of the commutator subgroup is the commutator subgroup), while a perfect group is one such that G(1) = G (the commutator subgroup is the whole group). Notes References External links Properties of groups Lemmas
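To make the first example above concrete, here is a small brute-force check (a sketch, not part of the original exposition) that A5 equals its own commutator subgroup; the permutation encoding and helper names are just illustrative choices:
from itertools import permutations

def compose(p, q):
    # apply q first, then p: (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_even(p):
    # a permutation is even iff it has an even number of inversions
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2 == 0

A5 = [p for p in permutations(range(5)) if is_even(p)]  # the 60 even permutations of 5 points
commutators = {compose(compose(inverse(a), inverse(b)), compose(a, b)) for a in A5 for b in A5}

# close the set of commutators under composition to obtain the commutator subgroup
subgroup, frontier = set(commutators), set(commutators)
while frontier:
    frontier = {compose(x, y) for x in frontier for y in subgroup} - subgroup
    subgroup |= frontier

print(len(A5), len(subgroup))  # 60 60, so A5 coincides with its commutator subgroup and is perfect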
Perfect group
[ "Mathematics" ]
1,309
[ "Mathematical structures", "Properties of groups", "Algebraic structures", "Mathematical problems", "Mathematical theorems", "Lemmas" ]
585,300
https://en.wikipedia.org/wiki/Return%20merchandise%20authorization
A return merchandise authorization (RMA), return authorization (RA) or return goods authorization (RGA) is a part of the process of returning a product to receive a refund, replacement, or repair to which buyer and seller agree during the product's warranty period. Reverse logistics The issuance of an RMA/RGA is a key gatekeeping point in the reverse logistics cycle, providing the vendor with a final opportunity to diagnose and correct the customer's problem with the product. The reasons for a product return vary and include improper installation by the customer or inability to configure the product. RMA/RGA comes before the customer permanently relinquishes ownership of the product to the manufacturer, commonly referred to as a return. A return is costly for the vendor and inconvenient for the customer; any return that can be prevented benefits both parties. Returned merchandise requires management by the manufacturer after the return. The product has a second life cycle after the return. An important aspect of RMA management is learning from RMA trends to prevent further returns. Depending on what the rules are, the manufacturer may send the customer an advance replacement. RMAs may be minimized in a number of ways. Adding a customer survey capability may prevent RMAs by detecting problems in advance of returns. Returns are sometimes minimized by reducing transaction errors prior to the merchandise leaving the seller. Providing additional information to consumers also reduces returns. Return to vendor Return to vendor (RTV) is the process where goods are returned to the original vendor instead of the distributor. In many cases the RTV was originally returned to the seller by the end consumer. While RTV transactions usually occur between the seller and the vendor, in some instances the end consumer returns the product directly to the vendor, sidestepping the distributor. See also Product return References Contract law Computer law Product return
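As a rough sketch of how the gatekeeping role described above is sometimes modelled in a merchant's order system (the state names and transitions here are purely illustrative, not an industry standard):
from enum import Enum, auto

class RmaState(Enum):
    REQUESTED = auto()   # customer reports a problem
    AUTHORIZED = auto()  # vendor issues the RMA number (the gatekeeping step)
    RECEIVED = auto()    # returned goods arrive back at the vendor
    RESOLVED = auto()    # refund, replacement, or repair completed
    REJECTED = auto()    # problem solved without a return, or request declined

ALLOWED = {
    RmaState.REQUESTED: {RmaState.AUTHORIZED, RmaState.REJECTED},
    RmaState.AUTHORIZED: {RmaState.RECEIVED},
    RmaState.RECEIVED: {RmaState.RESOLVED},
}

def advance(current, nxt):
    if nxt not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current.name} -> {nxt.name}")
    return nxt

state = advance(RmaState.REQUESTED, RmaState.AUTHORIZED)  # an RMA number has been issued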
Return merchandise authorization
[ "Technology" ]
384
[ "Computer law", "Computing and society" ]
585,308
https://en.wikipedia.org/wiki/Chroot
chroot is an operation on Unix and Unix-like operating systems that changes the apparent root directory for the current running process and its children. A program that is run in such a modified environment cannot name (and therefore normally cannot access) files outside the designated directory tree. The term "chroot" may refer to the system call or the wrapper program. The modified environment is called a chroot jail. History The chroot system call was introduced during development of Version 7 Unix in 1979. One source suggests that Bill Joy added it on 18 March 1982 – 17 months before 4.2BSD was released – in order to test its installation and build system. All versions of BSD that had a kernel have chroot(2). An early use of the term "jail" as applied to chroot comes from Bill Cheswick creating a honeypot to monitor a hacker in 1991. The first article about a chroot jailbreak appeared in Carole Fennelly's security column for SunWorld Online; the August 1999 and January 1999 editions cover most of the chroot() topics. To make it useful for virtualization, FreeBSD expanded the concept and in its 4.0 release in 2000 introduced the jail command. By 2002, an article written by Nicolas Boiteux described how to create a jail on Linux. By 2003, the first Internet microservice providers were offering Linux jails as SaaS/PaaS services (shell containers, proxies, ircd, bots, ...), billed according to usage inside the jail. By 2005, Sun released Solaris Containers (also known as Solaris Zones), described as "chroot on steroids." By 2008, LXC (upon which Docker was later built) adopted the "container" terminology; it gained popularity in 2013 when user namespaces were included in Linux kernel 3.8. Uses A chroot environment can be used to create and host a separate virtualized copy of the software system. This can be useful for: Testing and development A test environment can be set up in the chroot for software that would otherwise be too risky to deploy on a production system. Dependency control Software can be developed, built and tested in a chroot populated only with its expected dependencies. This can prevent some kinds of linkage skew that can result from developers building projects with different sets of program libraries installed. Compatibility Legacy software or software using a different ABI must sometimes be run in a chroot because their supporting libraries or data files may otherwise clash in name or linkage with those of the host system. Recovery Should a system be rendered unbootable, a chroot can be used to move back into the damaged environment after bootstrapping from an alternate root file system (such as from installation media, or a Live CD). Privilege separation Programs are allowed to carry open file descriptors (for files, pipelines and network connections) into the chroot, which can simplify jail design by making it unnecessary to leave working files inside the chroot directory. This also simplifies the common arrangement of running the potentially vulnerable parts of a privileged program in a sandbox, in order to pre-emptively contain a security breach. Note that chroot is not necessarily enough to contain a process with root privileges. Limitations The chroot mechanism is not intended to defend against intentional tampering by privileged (root) users. A notable exception is NetBSD, on which chroot is considered a security mechanism and no escapes are known. 
On most systems, chroot contexts do not stack properly and chrooted programs with sufficient privileges may perform a second chroot to break out. To mitigate the risk of this security weakness, chrooted programs should relinquish root privileges as soon as practical after chrooting, or other mechanisms – such as FreeBSD jails – should be used instead. Note that some systems, such as FreeBSD, take precautions to prevent a second chroot attack. On systems that support device nodes on ordinary filesystems, a chrooted root user can still create device nodes and mount the file systems on them; thus, the chroot mechanism is not intended by itself to be used to block low-level access to system devices by privileged users. It is not intended to restrict the use of resources like I/O, bandwidth, disk space or CPU time. Most Unixes are not completely file system-oriented and leave potentially disruptive functionality like networking and process control available through the system call interface to a chrooted program. At startup, programs expect to find scratch space, configuration files, device nodes and shared libraries at certain preset locations. For a chrooted program to successfully start, the chroot directory must be populated with a minimum set of these files. This can make chroot difficult to use as a general sandboxing mechanism. Tools such as Jailkit can help to ease and automate this process. Only the root user can perform a chroot. This is intended to prevent users from putting a setuid program inside a specially crafted chroot jail (for example, with a fake /etc/passwd and /etc/shadow file) that would fool it into a privilege escalation. Some Unixes offer extensions of the chroot mechanism to address at least some of these limitations (see Implementations of operating system-level virtualization technology). Graphical applications on chroot It is possible to run graphical applications in a chrooted environment, using methods such as: Use xhost (or copy the secret from .Xauthority) Nested X servers like Xnest or the more modern Xephyr (or start a real X server from inside the jail) Accessing the chroot via SSH using the X11 forwarding (ssh -X) feature xchroot an extended version of chroot for users and Xorg/X11 forwarding (socat/mount) An X11 VNC server and connecting a VNC client outside the environment. Atoms is a Linux chroot management tool with a user-friendly GUI. Notable applications The Postfix mail transfer agent operates as a pipeline of individually chrooted helper programs. Like 4.2BSD before it, the Debian and Ubuntu internal package-building farms use chroots extensively to catch unintentional build dependencies between packages. SUSE uses a similar method with its build program. Fedora, Red Hat, and various other RPM-based distributions build all RPMs using a chroot tool such as mock. Many FTP servers for POSIX systems use the chroot mechanism to sandbox untrusted FTP clients. This may be done by forking a process to handle an incoming connection, then chrooting the child (to avoid having to populate the chroot with libraries required for program startup). If privilege separation is enabled, the OpenSSH daemon will chroot an unprivileged helper process into an empty directory to handle pre-authentication network traffic for each client. The daemon can also sandbox SFTP and shell sessions in a chroot (from version 4.9p1 onwards). ChromeOS can use a chroot to run a Linux instance using Crouton, providing an otherwise thin OS with access to hardware resources. 
The security implications related in this article apply here. Linux host kernel virtual file systems and configuration files To have a functional chroot environment in Linux, the kernel virtual file systems and configuration files also have to be mounted/copied from host to chroot.
# Mount Kernel Virtual File Systems
TARGETDIR="/mnt/chroot"
mount -t proc proc $TARGETDIR/proc
mount -t sysfs sysfs $TARGETDIR/sys
mount -t devtmpfs devtmpfs $TARGETDIR/dev
mount -t tmpfs tmpfs $TARGETDIR/dev/shm
mount -t devpts devpts $TARGETDIR/dev/pts
# Copy /etc/hosts
/bin/cp -f /etc/hosts $TARGETDIR/etc/
# Copy /etc/resolv.conf
/bin/cp -f /etc/resolv.conf $TARGETDIR/etc/resolv.conf
# Link /etc/mtab
chroot $TARGETDIR rm /etc/mtab 2> /dev/null
chroot $TARGETDIR ln -s /proc/mounts /etc/mtab
See also List of Unix commands Operating system-level virtualization Sandbox (computer security) sudo References External links Integrating GNU/Linux with Android using chroot Computer security procedures Free virtualization software Unix process- and task-management-related software Virtualization software Linux kernel features System calls
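The privilege-separation advice under Limitations above (chroot, then give up root as soon as practical so the process cannot chroot its way back out) can be sketched as follows; the jail path and the uid/gid are hypothetical, and the calls require root:
import os

def enter_jail(path="/var/empty", uid=65534, gid=65534):  # hypothetical path and "nobody" ids
    os.chdir(path)
    os.chroot(path)  # requires root privileges
    os.chdir("/")    # make sure the working directory lies inside the new root
    os.setgid(gid)   # drop the group id first...
    os.setuid(uid)   # ...then the user id, after which a second chroot is no longer possible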
Chroot
[ "Engineering" ]
1,848
[ "Cybersecurity engineering", "Computer security procedures" ]
585,358
https://en.wikipedia.org/wiki/Plastic%20model%20kit
A plastic model kit, (plamo in Eastern influenced parlance), is a consumer-grade plastic scale model manufactured as a kit, primarily assembled by hobbyists, and intended primarily for display. A plastic model kit depicts various subjects, ranging from real life military and civilian vehicles to characters and machinery from original kit lines and pop fiction, especially from eastern pop culture. A kit varies in difficulty, ranging from a "snap-together" model that assembles straight from the box, to a kit that requires special tools, paints, and plastic cements. Subjects The most popular subjects of plastic models by far are vehicles such as aircraft, ships, automobiles, and armored vehicles such as tanks. The majority of models throughout its early history depict military vehicles, due to the wider variety of form and historical context compared to civilian vehicles. Other subjects include science fiction vehicles and mecha, real spacecraft, buildings, animals, human(oid) dolls/action figures, and characters from pop culture. While military, ship, and aircraft modelers prize accuracy above all, modelers of automobiles and science-fiction themes may attempt to duplicate an existing subject, or may depict a completely imaginary subject. The creation of custom automobile models is related to the creation of actual custom cars and often an individual may have an interest in both, although the cost of customizing a real car is obviously enormously greater than that of customizing a model. Construction and techniques The first plastic models were injection molded in cellulose acetate (e.g. Frog Penguin and Varney Trains), but currently most plastic models are injection-molded in polystyrene, and the parts are bonded together, usually with a plastic solvent-based adhesive, although experienced modelers may also use epoxy, cyanoacrylate, and white glue where their particular properties would be advantageous. While often omitted by novice modellers, specially formulated paint is sold for application to plastic models. Complex markings such as aircraft insignia or automobile body decorative details and model identification badges are typically provided with kits as screen-printed water-slide decals. Recently, models requiring less skill, time, and/or effort have been marketed, targeted to younger or less skilled modelers as well as those who just wish to reduce the time and effort required to complete a model. One such trend has been to offer a fully detailed kit requiring normal assembly and gluing, but eliminate the often frustrating task of painting the kit by molding it out of colored plastic, or by supplying it pre-painted and with decals applied. Often these kits are identical to another kit supplied in normal white or gray plastic except for the colored plastic or the prepainting, thus eliminating the large expense of creating another set of molds. Another trend which has become very extensive is to produce kits where the parts snap together, with no glue needed; sometimes the majority of the parts snap together with a few requiring glue. Often there is some simplification of detail as well; for instance, automotive kits without opening hoods and no engine detail, or sometimes opaque windows with no interior detail. These are often supplied in colored plastic, although smaller details would still require painting. Decals are usually not supplied with these but sometimes vinyl stickers are provided for insignia and similar details. 
Resin casting and vacuum forming are also used to produce models, or particular parts where the scale of production is not such as to support the investment required for injection molding. Plastic ship model kits typically provide thread in several sizes and colors for the rigging. Automobile kits typically contain vinyl tires, although sometimes these are molded from polystyrene as well, particularly in very inexpensive kits. Thin metal details produced by photoetching have become popular relatively recently, both as detailing parts manufactured and sold by small businesses, and as parts of a complete kit. Detail parts of other materials are sometimes included in kits or sold separately, such as metal tubing to simulate exhaust systems, or vinyl tubing to simulate hoses or wiring. Scales Almost all plastic models are designed in a well-established scale. Each type of subject has one or more common scales, though they differ from one to the other. The general aim is to allow the finished model to be of a reasonable size, while maintaining consistency across models for collections. The following are the most common scales for popular subjects: Aircraft: 1/24, 1/32, 1/48, 1/72, 1/100, and 1/144. The most popular scales are 1/48 and 1/72. Military vehicles: 1/16, 1/24, 1/32, 1/35, 1/48, 1/72, and 1/76. Automobiles: 1/8, 1/12, 1/16, 1/18, 1/20, 1/24, 1/25, 1/32, 1/35, and 1/43. Ships: 1/72, 1/96, 1/144, 1/200, 1/350, 1/400 1/450, 1/600, and 1/700. Figures: 1/72, 1/48, 1/35, 1/24, 1/16, 1/13, 1/8, 1/6, and 1/4. The smaller scale figures are usually used in dioramas; the larger scales (1/8 and 1/6) are popular for stand-alone subjects. Figurine busts: 1/12, 1/10, 1/9 Railways: 1:43.5 (7 mm/1 ft : O scale), 1:76.2 (4 mm/1 ft : OO scale), 1:87 (3.5 mm/1 ft : HO scale) Mecha: 1/144, 1/100, 1/72, 1/60, and 1/35. In reality, models do not always conform to their nominal scale; there are 1/25 scale automobile models which are larger than some 1/24 scale models, for instance. For example, the engine in the recent reissue of the AMT Ala Kart show truck is significantly smaller than the engine in the original issue. AMT employees from the 1960s note that, at that time, all AMT kits were packaged into boxes of a standardized size, to simplify shipping; and the overriding requirement of designing any kit was that it had to fit into that precise size of box, no matter how large or small the original vehicle. This practice was common for other genres and manufacturers of models as well. In modern times this practice has become known as fit-the-box scale. In practice, this means that kits of the same subject in nominally identical scales may produce finished models which actually differ in size, and that hypothetically identical parts in such kits may not be easily swapped between them, even when the kits are both by the same manufacturer. The shape of the model may not entirely conform to the subject, as well; reviews of kits in modeling magazines often comment on how well the model depicts the original. History The first plastic models were manufactured at the end of 1936 by Frog in the UK, with a range of 1/72nd scale model kits called 'Penguin'. In the late 1940s several American companies such as Hawk, Varney, Empire, Renwal and Lindberg began to produce plastic models. Many manufacturers began production in the 1950s and gained ascendancy in the 1960s such as Aurora, Revell, AMT, and Monogram in America, Airfix in UK and Heller SA in France. 
Other manufacturers included; Matchbox (UK), Italeri, ESCI, (both Italian) Novo {ex-Frog moulds} (former Soviet Union), and Fujimi, Nichimo and Bandai (Japan). American model companies who had been producing assembled promotional scale models of new automobiles each year for automobile dealers found a lucrative side business selling the unassembled parts of these "promos" to hobbyists to assemble, thus finding a new revenue stream for the injection molds which were so expensive to update each year. These early models were typically lower in detail than currently standard, with non-opening hoods and no engines, and simplified or no detail on the chassis, which attached to the body with very visible screws. Within a short time, the kit business began to overshadow the production of promos, and the level of accuracy and detail was raised to satisfy the demands of the marketplace. In the 1960s, Tamiya manufactured aircraft kits in the peculiar (at the time) scale of 1/100. Although the range included famous aircraft such as the Boeing B-52 Stratofortress, McDonnell Douglas F-4 Phantom II, North American F-86 Sabre, Dassault Mirage III, Grumman A-6 Intruder and the LTV A-7 Corsair II, it never enjoyed the same success as 1/72 scale kits did. Soon, Tamiya stopped manufacturing 1/100 scale aircraft but re-issued a small selection of them in 2004. Since the 1970s, Japanese firms such as Hasegawa and Tamiya, and since the 1990s also Chinese firms such as DML, AFV Club and Trumpeter have dominated the field and represent the highest level of technology. Brands from Russia, Central Europe, and Korea have also become prominent recently with companies like Academy Plastic Model. Many smaller companies have also produced plastic models, both in the past and currently. Prior to the rise of plastic models, shaped wood models were offered to model builders. These wood model kits often required extensive work to create results easily obtained by the plastic models. With the development of new technologies, modeling hobby can also be practiced in the virtual world. The Model Builder game, produced by Moonlit studio, available on Steam (service), consists of cutting, assembling, and painting airplanes, helicopters, tanks, cars, and others and making dioramas with them. Transferring the hobby to the game world allows novice modelers and people who do not have space, time, or money to buy multiple models to pursue their interests. Another form of practicing in the virtual world is a 3D modeling with the use of such software like Blender, FreeCAD, Lego Digital Designer (superseded by BrickLink Studio) or LeoCAD, etc. Manufacture While injection-molding is the predominant manufacturing process for plastic models, the high costs of equipment and making molds make it unsuitable for lower-yield production. Thus, models of minor and obscure subjects are often manufactured using alternative processes. Vacuum forming is popular for aircraft models, though assembly is more difficult than for injection-molded kits. Early manufacturers of vacuum formed model kits included Airmodel (the former DDR), Contrail, Airframe (Canada), Formaplane, and Rareplanes (UK). Resin-casting, popular with smaller manufacturers, particularly aftermarket firms (but also producers of full kits), yields a greater degree of detail moulded in situ, but as the moulds used don't last as long, the price of such kits is considerably higher. 
In recent times, the latest releases from major manufacturers offer unprecedented detail that is a match for the finest resin kits, often including high-quality mixed-media (photo-etched brass, turned aluminum) parts. Variations Many modellers build dioramas as landscaped scenes built around one or more models. They are most common for military vehicles such as tanks, but airfield scenes and 2-3 ships in formation are also popular. Conversions use a kit as a starting point, and modify it to be something else. For instance, kits of the USS Constitution ("Old Ironsides") are readily available, but the Constitution was just one of six sister ships, and an ambitious modeller will modify the kit, by sawing, filing, adding pieces, and so forth, to make a model of one of the others. Scratch building is the creation of a model "from scratch" rather than a manufactured kit. True scratchbuilt models consist of parts made by hand and do not incorporate parts from other kits. These are rare. When parts from other kits are included, the art is technically called "Kit Bashing." Most pieces referred to as "scratchbuilt" are actually a combination of kit bashing and scratchbuilding. Thus, it has become common for either term to be used loosely to refer to these more common hybrid models. Kitbashing is a modelling technique where parts from multiple model kits are combined to create a novel model form. For example, the effects crews on the various Star Trek TV shows frequently kitbashed multiple starship models to quickly create new classes of ship for use in background scenes where details would not be particularly obvious. Issues The demographics of plastic modeling have changed in its half-century of existence, from young boys buying them as toys to older adults building them to assemble large collections. In the United States, as well as some other countries, many modelers are former members of the military who like to recreate the actual equipment they used in service. Technological advances have made model-building more and more sophisticated, and the proliferation of expensive detailing add-ons have raised the bar for competition within modeling clubs. As a result, a kit built "out of the box" on a weekend cannot compare with a kit built over months where a tiny add-on part such as an aircraft seat can cost more than the entire kit itself. Though plastic modeling is generally an uncontroversial hobby, it's not immune to social pressures: In the 1990s, various countries banned Formula One race cars from carrying advertising for tobacco sponsors. In response, manufacturers such as Tamiya removed tobacco logo decals from their race car kits, even those of cars which appeared before the tobacco ban. The Nazi swastika, which appears on World War 2 Luftwaffe aircraft, is illegal to display in Germany, and disappeared from almost all manufacturers' box illustrations in the 1990s. Some makers still include the emblem on the decal sheet, others have "broken" it into two elements which must be reassembled by the builder, while others have omitted it altogether. Aftermarket decal sheets exist that consist entirely of Luftwaffe swastikas. A long lasting legal conflict exists between aerospace corporations and the manufacturers of plastic models. Manufacturers of aircraft have sought royalties from model makers for using their designs and intellectual property in their kits. 
Hobbyists argue that model kits provide free advertising for the makers of the real vehicles and that any royalties collected would be insignificant compared to the profits made from aircraft construction contracts. They also argue that forcing manufacturers to pay royalties and licensing fees would financially ruin all but the largest model kit makers. Some proponents of the aerospace industry contend that the issue is not one of financial damages, but of intellectual property and brand image. In contrast, most of the world's commercial airlines allow their fleets to be modeled, as a form of publicity. Many cottage industry manufacturers, particularly of sci-fi subjects, avoid the issue by selling their products under generic untrademarked names (e.g. selling a figure that clearly depicts Batman as "Bat Hero Figure"). Similarly, automobile manufacturers occasionally make an effort to collect royalties from companies modeling their products. The UK's Ministry of Defence prohibits use of its logos and insignias on commercial products without permission, and according to its licensing policy, model and decal manufacturers are required to pay a license fee in order to use Royal Air Force insignia or the insignias and logos of any other military unit that was or is a sub-unit of the UK's MOD. See also Model aircraft Ship model Model military vehicle Armor Modeling and Preservation Society International Plastic Modellers' Society (IPMS) List of scale model kit manufacturers List of model aircraft manufacturers Gunpla Frame Arms Girl References Chris Ellis, How to Go Plastic Modelling, Patrick Stephens, 1968 (and subsequent editions). Gerald Scarborough, Plastic Modelling, Airfix Magazine Guide 1, Patrick Stephens, 1974. Robert Schleicher (Author) & Harold A. Edmonson (Editor), Building Plastic Models, Kalmbach, 1991. External links The International Plastic Modellers' Society, UK (IPMS UK) The International Plastic Modelers' Society, USA (IPMS USA) The International Plastic Modellers' Society, Canada (IPMS CANADA) Scale modeling model
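A quick arithmetic illustration of the scale ratios listed in the Scales section above (the wingspan figure is approximate and used only for illustration):
def model_size_mm(real_size_m, scale_denominator):
    # e.g. a fighter aircraft with a roughly 11 m wingspan
    return real_size_m * 1000 / scale_denominator

print(round(model_size_mm(11.0, 72)))  # about 153 mm in 1/72 scale
print(round(model_size_mm(11.0, 48)))  # about 229 mm in 1/48 scale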
Plastic model kit
[ "Physics" ]
3,343
[ "Scale modeling" ]
585,363
https://en.wikipedia.org/wiki/Sanyo
is a Japanese electronics manufacturer founded in 1947 by Toshio Iue, the brother-in-law of Kōnosuke Matsushita, the founder of Matsushita Electric Industrial, now known as Panasonic. Iue left Matsushita Electric to start his own business, acquiring some of its equipment to produce bicycle generator lamps. In 1950, the company was established. Sanyo began to diversify in the 1960s, having launched Japan's first spray-type washing machine in 1953. In the 2000s, it was known as one of the 3S along with Sony and Sharp. Sanyo also focused on solar cell and lithium battery businesses. In 1992, it developed the world's first hybrid solar cell, and in 2002, it had a 41% share of the global lithium-ion battery market. In its heyday in 2003, Sanyo had sales of about ¥2.5 trillion. However, it fell into a financial crisis as a result of its huge investment in the semiconductor business. In 2009, Sanyo was acquired by Panasonic, and in 2011, it was fully consolidated into Panasonic and its brand disappeared. The company still exists as a legal entity for the purpose of winding up its affairs. History Beginnings Sanyo was founded when Toshio Iue, the brother-in-law of Konosuke Matsushita and also a former Matsushita employee, was lent an unused Matsushita plant in 1947 and used it to make bicycle generator lamps. Sanyo was incorporated in 1949; it made Japan's first plastic radio in 1952 and Japan's first pulsator-type washing machine in 1954. The company's name means three oceans in Japanese, referring to the founder's ambition to sell their products worldwide, across the Atlantic, Pacific, and Indian oceans. Sanyo in America In 1969, Howard Ladd became the Executive Vice President and COO of Sanyo Corporation. Ladd introduced the Sanyo brand to the United States in 1970. The ambition to sell Sanyo products worldwide was realized in the mid-1970s after Sanyo introduced home audio equipment, car stereos and other consumer electronics to the North American market. The company embarked on a heavy television-based advertising campaign. Ladd negotiated a purchase of the Fisher Electronics audio equipment manufacturer by Sanyo in May 1975. Under Ladd's leadership, the Fisher Corporation under Sanyo grew to be a multi-million dollar leader in the consumer electronics industry. The new, profitable Fisher Corporation moved its headquarters from New York to Ladd's Los Angeles. Ladd was named president and CEO of the combined Sanyo / Fisher Corporation in 1977, serving until 1987. Ladd was instrumental at Sanyo in promoting Quadraphonic sound audio equipment for the American market, producing 4-channel audio equipment in both discrete and matrix formats. He said "we make all kinds of quadrasonic equipment because this is the business we're in... let the consumer buy the kind of software he prefers and we'll provide him the hardware to play it on". Sanyo realized tremendous growth during Ladd's tenure in the 1970s; annual sales grew from $71.4 million () in 1972 to $855 million () in 1978. After a fairly slow selling line in their own V-Cord video format, Sanyo adopted Sony's Betamax video cassette format around 1977 with initial success, including SuperBeta and Beta Hi-Fi models. From around 1984 onwards, production switched entirely to VHS. In 1976, Sanyo expanded their North American presence with the purchase of Whirlpool Corporation's television business, Warwick Electronics, which manufactured televisions for Sears. In 1986, Sanyo's U.S. affiliate merged with Fisher to become Sanyo Fisher (U.S.A.) 
Corporation (later renamed Sanyo Fisher Company). The mergers made the entire organization more efficient, but also resulted in the departure of certain key executives, including Ladd, who had first introduced the Sanyo name to the United States in the early 1970s. In 1982, Sanyo started selling the MBC-1000 series of CP/M computers. In 1983, it introduced the MBC-550 PC, the lowest-cost IBM PC compatible personal computer available at the time, but its lack of full compatibility drove Sanyo from the market and no follow-on models were released. 1990s corporate culture An article on "Sanyo Style" written in 1992 described that Sanyo utilizes an extensive socialization process for new employees, so that they will be acclimatized to Sanyo's corporate culture. New employees take a five-month course during which they eat together and sleep together in accommodation. They learn everything from basic job requirements to company expectations for personal grooming and the appropriate way in which to dress for their co-workers and superiors. Technologically, Sanyo has had good ties with Sony, supporting the Betamax video format from invention until the mid-1980s (the best selling video recorder in the UK in 1983 was the Sanyo VTC5000), while producing the VHS video format at the same time for the Fisher brand during the early 1980s, and later being an early adopter of the highly successful Video8 camcorder format. More recently, though, Sanyo decided against supporting Sony's format, the Blu-ray Disc, and instead gave its backing to Toshiba's HD DVD. This was ultimately unsuccessful, however, as Sony's Blu-ray triumphed. In North America, Sanyo manufactured CDMA cellular phones exclusively for Sprint's Sprint PCS brand in the United States and for Bell Mobility in Canada. Acquisition The 2004 Chūetsu earthquake severely damaged Sanyo's semiconductor plant and as a result Sanyo recorded a huge financial loss for that year. The 2005 fiscal year financial results saw a 205 billion yen net income loss. The same year the company announced a restructuring plan called the Sanyo Evolution Project, launching a new corporate vision to make the corporation into an environmental company, plowing investment into strong products like rechargeable batteries, solar photovoltaics, air conditioning, hybrid car batteries and key consumer electronics such as the Xacti camera, projectors and mobile phones. Sanyo posted signs of recovery after the announcement of positive operating income of 2.6 billion yen. Sanyo remains the world number one producer of rechargeable batteries. Recent product innovations in this area include the Eneloop Low self-discharge NiMH battery, a "hybrid" rechargeable NiMH (Nickel-metal hydride battery) which, unlike typical NiMH cells, can be used from-the-package without an initial recharge cycle and retain a charge significantly longer than batteries using standard NiMH battery design. The Eneloop line competes against similar products such as Rayovac's "Hybrid Rechargeable" line. On November 24, 2006, Sanyo announced heavy losses and job cuts. Tomoyo Nonaka, a former NHK anchorwoman who was appointed chairwoman of the company, stepped down in March 2007. The President, Toshimasa Iue, also stepped down in April of that year; Seiichiro Sano was appointed to head the company effective April 2007. 
In October 2007, Sanyo cancelled a 110 billion yen sale of its semiconductor business, blaming the global credit crisis for the decision and stating that after exploring its other options, it had decided to keep the business and develop it as part of its portfolio. In 2008, Sanyo's mobile phone division was acquired by Kyocera. On November 2, 2008, Sanyo and Panasonic announced that they had agreed on the main points of a proposed buyout that would make Sanyo a subsidiary of Panasonic. Sanyo became a subsidiary of Panasonic on December 21, 2009. In 2010, Sanyo sold its semiconductor operations to ON Semiconductor. On July 29, 2010, Panasonic reached an agreement to acquire the remaining shares of Panasonic Electric Works and Sanyo shares for $9.4 billion. By March 2012, parent company Panasonic planned to terminate the Sanyo brand, although it would remain on some products where the Sanyo brand still held value to consumers. In the same month, it was announced that Sanyo's Southeast Asian unit, responsible for manufacturing consumer electric appliances in the region, would be formally acquired by Haier. In August 2013, a 51% majority stake in Chinese company Hefei Royalstar Sanyo, a 2000 joint venture between Sanyo and Chinese government investment company Hefei, was purchased by American multinational manufacturer Whirlpool Corporation for $552 million. Energy Solar cells and plants The Sanyo HIT (Heterojunction with Intrinsic Thin layer) solar cell is composed of a mono thin crystalline silicon wafer surrounded by ultra-thin amorphous silicon layers. Sanyo Energy opened its solar module assembly plants in Hungary and in Mexico in 2004, and in 2006 it produced solar modules worth $213 million. In 2007, Sanyo completed a new unit at its solar module plant in Hungary that was to triple its annual capacity to 720,000 units in 2008. Plans to expand production were based on rising demand for Sanyo Hungary products, whose leading markets were Germany, Italy, Spain and Scandinavia. The plant at Dorog, outside Budapest, became Sanyo's largest solar module production facility in the world. In late September 2008, Sanyo announced its decision to build a manufacturing plant for solar ingots and wafers (the building blocks for silicon solar cells) in Inagi, Japan. The plant began operating in October 2009 and was to reach its full production capacity of 70 megawatts (MW) of solar wafers per year by April 2010. Sanyo and Nippon Oil decided to launch a joint company, known as Sanyo Eneos Solar Co., Ltd., for the production and sale of thin-film solar panels. The new joint company began production and sales at an initial scale of 80 MW, while gradually increasing its production capacity. For this joint project, Sanyo drew on its solar cell technologies, based on the technology acquired through the development of the HIT solar cell. Sanyo is also responsible for the construction of the Solar Ark. Rechargeable batteries Sanyo pioneered the production of nickel cadmium batteries in 1964, nickel metal hydride batteries (NiMH) in 1990, lithium-ion batteries in 1994, and lithium polymer batteries in 1999. In 2000, it acquired Toshiba's NiMH business, including the Takasaki factory. Since the acquisition of Sanyo by Panasonic, ownership of the Takasaki factory was transferred to the FDK Corporation. 
Electric vehicle batteries Sanyo supplies NiMh batteries to Honda, Ford, Volkswagen and PSA Peugeot Citroen. Sanyo is developing NiMH batteries for hybrid electric vehicles with the Volkswagen group, while their lithium-ion batteries for plug-in HEV will also be housed in Suzuki fleet vehicles. Sanyo planned to raise monthly production of NiMH batteries for hybrid vehicles from 1 million units to up to 2.5 million by the end of fiscal 2005. Sanyo India Televisions Panasonic reintroduced the Sanyo brand in India, with the launch of Sanyo LED TV range on August 8, 2016. On July 11, 2017, Sanyo launched its range of smart TVs on Amazon Prime Day. In August 2017, Sanyo unveiled its NXT range of LED televisions exclusively on Flipkart. In December 2017, Sanyo introduced its first 4K smart TV range in India. In September 2019, Sanyo introduced a range of Android TV sets known as the Sanyo Kaizen Series. Air conditioners Sanyo worked with Energy Efficiency Services Limited to develop a 1.5-ton inverter air conditioner (AC) with an Indian Seasonal Energy Efficiency Ratio (ISEER) of 5.2. Distribution of these air conditioners began in September 2017. On April 4, 2019, Sanyo launched a new AC range exclusively on Amazon. Sanyo TV USA Though founded in Japan, Sanyo has sold TVs in America for over 50 years; Sanyo TV USA was headquartered in San Diego, California with facilities located in Tijuana, Mexico. Many of Sanyo's television sets offer MHL compatibility along with Roku-ready branding via HDMI, meaning the TVs are compatible with Roku's MHL-specific streaming stick. Sometimes included with purchase, such as with the Sanyo FVF5044, this stick enables video streaming and other online functions as an affordable alternative to certain smart TVs; the TV's original remote is capable of browsing the service. Multiple models also have USB ports which allow for immediate photo sharing directly off the stick without any additional software/upgrades. Funai Era In October 2014, Panasonic announced its intent to transfer the Sanyo TV unit to Funai in the US market in return for annual royalty payments. Funai is a major Walmart supplier that also supplies Philips and Emerson TV sets to the retail chain. Consumer Reports commented in 2018 that Sanyo TVs "seem to turn up mostly in Walmart stores, almost as a private label for the retailer." Confusion with Sanyo Denki Co. Ltd. Sanyo Electric Co. Ltd. (三洋電機株式会社,) is not affiliated with Sanyo Denki Co. Ltd (山洋電気株式会社), which makes high speed, large airflow, high static pressure DC fans sold under the moniker "San Ace", a product line mainly geared towards the enterprise market. As of October 2020, Sanyo Denki holds the world record for both rotational speed and static pressure of various dimensions and models. Some notable records are: A 12V 31.2W fan released in May 2020, with a rotational speed of 38,000 RPM and a static pressure of . A 12V 37.2W contra-rotating fan released in August 2020, with a rotational speed of 36,200 (inlet) and 32,000 (outlet) RPM in opposite directions, creating a static pressure of . A 12V 57.6W fan able to spin at 18,300 RPM and provide a static pressure of . Sponsorship Sanyo was the primary sponsor of the Penrith Panthers in the National Rugby League in Australia from 2000 to 2012. In Formula One, the company sponsored Benetton from 1989 to 1995, Williams from 1995 to 1997 and Stewart Grand Prix from 1997 to 1999. 
In football, the company sponsored the Argentinian club River Plate from 1992 until 1995 and the Brazilian Coritiba from 1995 until 1999. References External links Sanyo information on Panasonic website Japanese companies established in 1949 2011 disestablishments in Japan Audio equipment manufacturers of Japan Battery manufacturers Companies based in Osaka Prefecture Companies formerly listed on the Tokyo Stock Exchange Consumer battery manufacturers Consumer electronics brands Defunct semiconductor companies Display technology companies Electronics companies established in 1949 Electric vehicle battery manufacturers Defunct defense companies of Japan Defunct manufacturing companies of Japan Heating, ventilation, and air conditioning companies Japanese brands Mobile phone manufacturers Panasonic Corporation brands Photography equipment manufacturers of Japan Photovoltaics manufacturers Portable audio player manufacturers Solar energy companies Solar energy companies of Japan Thin-film cell manufacturers Video equipment manufacturers Defunct mobile phone manufacturers Defunct computer systems companies 2011 mergers and acquisitions Semiconductor companies of Japan Home appliance brands Radio manufacturers Electronics companies of Japan
Sanyo
[ "Engineering" ]
3,198
[ "Radio electronics", "Photovoltaics manufacturers", "Radio manufacturers", "Engineering companies" ]
585,388
https://en.wikipedia.org/wiki/Homology%20sphere
In algebraic topology, a homology sphere is an n-manifold X having the homology groups of an n-sphere, for some integer n ≥ 1. That is, H0(X, Z) = Z = Hn(X, Z) and Hi(X, Z) = 0 for all other i. Therefore X is a connected space, with one non-zero higher Betti number, namely, bn = 1. It does not follow that X is simply connected, only that its fundamental group is perfect (see Hurewicz theorem). A rational homology sphere is defined similarly but using homology with rational coefficients. Poincaré homology sphere The Poincaré homology sphere (also known as Poincaré dodecahedral space) is a particular example of a homology sphere, first constructed by Henri Poincaré. Being a spherical 3-manifold, it is the only homology 3-sphere (besides the 3-sphere itself) with a finite fundamental group. Its fundamental group is known as the binary icosahedral group and has order 120. Since the fundamental group of the 3-sphere is trivial, this shows that there exist 3-manifolds with the same homology groups as the 3-sphere that are not homeomorphic to it. Construction A simple construction of this space begins with a dodecahedron. Each face of the dodecahedron is identified with its opposite face, using the minimal clockwise twist to line up the faces. Gluing each pair of opposite faces together using this identification yields a closed 3-manifold. (See Seifert–Weber space for a similar construction, using more "twist", that results in a hyperbolic 3-manifold.) Alternatively, the Poincaré homology sphere can be constructed as the quotient space SO(3)/I where I is the icosahedral group (i.e., the rotational symmetry group of the regular icosahedron and dodecahedron, isomorphic to the alternating group A5). More intuitively, this means that the Poincaré homology sphere is the space of all geometrically distinguishable positions of an icosahedron (with fixed center and diameter) in Euclidean 3-space. One can also pass instead to the universal cover of SO(3) which can be realized as the group of unit quaternions and is homeomorphic to the 3-sphere. In this case, the Poincaré homology sphere is isomorphic to S3/2I where 2I is the binary icosahedral group, the perfect double cover of I embedded in S3. Another approach is by Dehn surgery. The Poincaré homology sphere results from +1 surgery on the right-handed trefoil knot. Cosmology In 2003, lack of structure on the largest scales (above 60 degrees) in the cosmic microwave background as observed for one year by the WMAP spacecraft led to the suggestion, by Jean-Pierre Luminet of the Observatoire de Paris and colleagues, that the shape of the universe is a Poincaré sphere. In 2008, astronomers found the best orientation on the sky for the model and confirmed some of the predictions of the model, using three years of observations by the WMAP spacecraft. Data analysis from the Planck spacecraft suggests that there is no observable non-trivial topology to the universe. Constructions and examples Surgery on a knot in the 3-sphere S3 with framing +1 or −1 gives a homology sphere. More generally, surgery on a link gives a homology sphere whenever the matrix given by intersection numbers (off the diagonal) and framings (on the diagonal) has determinant +1 or −1. If p, q, and r are pairwise relatively prime positive integers then the link of the singularity x^p + y^q + z^r = 0 (in other words, the intersection of a small 3-sphere around 0 with this complex surface) is a Brieskorn manifold that is a homology 3-sphere, called a Brieskorn 3-sphere Σ(p, q, r). 
It is homeomorphic to the standard 3-sphere if one of p, q, and r is 1, and Σ(2, 3, 5) is the Poincaré sphere. The connected sum of two oriented homology 3-spheres is a homology 3-sphere. A homology 3-sphere that cannot be written as a connected sum of two homology 3-spheres is called irreducible or prime, and every homology 3-sphere can be written as a connected sum of prime homology 3-spheres in an essentially unique way. (See Prime decomposition (3-manifold).) Suppose that a1, ..., ar are integers all at least 2 such that any two are coprime. Then the Seifert fiber space {b, (o1, 0); (a1, b1), ..., (ar, br)} over the sphere with exceptional fibers of degrees a1, ..., ar is a homology sphere, where the b's are chosen so that b·a1···ar + b1·(a1···ar)/a1 + ... + br·(a1···ar)/ar = 1. (There is always a way to choose the b′s, and the homology sphere does not depend (up to isomorphism) on the choice of b′s.) If r is at most 2 this is just the usual 3-sphere; otherwise they are distinct non-trivial homology spheres. If the a′s are 2, 3, and 5 this gives the Poincaré sphere. If there are at least 3 a′s, not 2, 3, 5, then this is an aspherical homology 3-sphere with infinite fundamental group that has a Thurston geometry modeled on the universal cover of SL2(R). Invariants The Rokhlin invariant is a Z/2-valued invariant of homology 3-spheres. The Casson invariant is an integer valued invariant of homology 3-spheres, whose reduction mod 2 is the Rokhlin invariant. Applications If A is a homology 3-sphere not homeomorphic to the standard 3-sphere, then the suspension of A is an example of a 4-dimensional homology manifold that is not a topological manifold. The double suspension of A is homeomorphic to the standard 5-sphere, but its triangulation (induced by some triangulation of A) is not a PL manifold. In other words, this gives an example of a finite simplicial complex that is a topological manifold but not a PL manifold. (It is not a PL manifold because the link of a point is not always a 4-sphere.) Galewski and Stern showed that all compact topological manifolds (without boundary) of dimension at least 5 are homeomorphic to simplicial complexes if and only if there is a homology 3 sphere Σ with Rokhlin invariant 1 such that the connected sum Σ#Σ of Σ with itself bounds a smooth acyclic 4-manifold. Ciprian Manolescu showed that there is no such homology sphere with the given property, and therefore, there are 5-manifolds not homeomorphic to simplicial complexes. In particular, the example originally given by Galewski and Stern is not triangulable. See also Eilenberg–MacLane space Moore space (algebraic topology) References Selected reading Robion Kirby, Martin Scharlemann, Eight faces of the Poincaré homology 3-sphere. Geometric topology (Proc. Georgia Topology Conf., Athens, Ga., 1977), pp. 113–146, Academic Press, New York-London, 1979. Nikolai Saveliev, Invariants of Homology 3-Spheres, Encyclopaedia of Mathematical Sciences, vol 140. Low-Dimensional Topology, I. Springer-Verlag, Berlin, 2002. External links A 16-Vertex Triangulation of the Poincaré Homology 3-Sphere and Non-PL Spheres with Few Vertices by Anders Björner and Frank H. Lutz Lecture by David Gillman on The best picture of Poincare's homology sphere Topological spaces Homology theory 3-manifolds Spheres
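As a worked check of the Seifert condition above (using the normalization given there, which should be compared against a standard reference): for the Poincaré sphere one may take (a1, a2, a3) = (2, 3, 5) with, for example, b = −1 and b1 = b2 = b3 = 1, since (−1)·2·3·5 + 1·(3·5) + 1·(2·5) + 1·(2·3) = −30 + 15 + 10 + 6 = 1.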
Homology sphere
[ "Mathematics" ]
1,607
[ "Topological spaces", "Topology", "Mathematical structures", "Space (mathematics)" ]
326,481
https://en.wikipedia.org/wiki/Irritation
Irritation, in biology and physiology, is a state of inflammation or painful reaction to allergy or cell-lining damage. A stimulus or agent which induces the state of irritation is an irritant. Irritants are typically thought of as chemical agents (for example phenol and capsaicin) but mechanical, thermal (heat), and radiative stimuli (for example ultraviolet light or ionising radiations) can also be irritants. Irritation also has non-clinical usages referring to bothersome physical or psychological pain or discomfort. Irritation can also be induced by some allergic response due to exposure of some allergens for example contact dermatitis, irritation of mucosal membranes and pruritus. Mucosal membrane is the most common site of irritation because it contains secretory glands that release mucus which attracts the allergens due to its sticky nature. Chronic irritation is a medical term signifying that afflictive health conditions have been present for a while. There are many disorders that can cause chronic irritation, the majority involve the skin, vagina, eyes and lungs. Irritation in organisms In higher organisms, an allergic response may be the cause of irritation. An allergen is defined distinctly from an irritant, however, as allergy requires a specific interaction with the immune system and is thus dependent on the (possibly unique) sensitivity of the organism involved while an irritant, classically, acts in a non-specific manner. It is a form of stress, but conversely, if one is stressed by unrelated matters, mild imperfections can cause more irritation than usual: one is irritable; see also sensitivity (human). In more basic organisms, the status of pain is the perception of the being stimulated, which is not observable although it may be shared (see gate control theory of pain). It is not proven that oysters can feel pain, but it is known that they react to irritants. When an irritating object becomes trapped within an oyster's shell, it deposits layers of calcium carbonate (CaCO3), slowly increasing in size and producing a pearl. This is purely a defense mechanism, to trap a potentially threatening irritant such as a parasite inside its shell, or an attack from outside, injuring the mantle tissue. The oyster creates a pearl sac to seal off the irritation. It has also been observed that an amoeba avoids being prodded with a pin, but there is not enough evidence to suggest how much it feels this. Irritation is apparently the only universal sense shared by even single-celled creatures. It is postulated that most such beings also feel pain, but this is a projection – empathy. Some philosophers, notably René Descartes, denied it entirely, even for such higher mammals as dogs or primates like monkeys; Descartes considered intelligence a pre-requisite for the feeling of pain. Types Eye irritation Modern office work with use of office equipment has raised concerns about possible adverse health effects. Since the 1970s, reports have linked mucosal, skin, and general symptoms to work with self-copying paper. Emission of various particulate and volatile substances has been suggested as specific causes. These symptoms have been related to Sick Building Syndrome, which involves symptoms such as irritation to the eyes, skin, and upper airways, headache and fatigue. The eye is also a source of chronic irritation. Disorders like Sjögren's syndrome, where one does not make tears, can cause a dry eye sensation which feels very unpleasant. The condition is difficult to treat and is lifelong. 
Besides artificial tears, there is a drug called Restasis which may help. Blepharitis is dryness and itching on the upper eyelids. This condition is often seen in young people and can lead to reddish dry eye and scaly eyebrows. To relieve the itching sensation, one may need to apply warm compresses and use topical corticosteroid creams. Skin Eczema is another cause of chronic irritation and affects millions of individuals. Eczema simply means a dry skin which is itchy. The condition usually starts at an early age and continues throughout life. The major complaint of people with eczema is an itchy dry skin. Sometimes, the itching will be associated with a skin rash. The affected areas are always dry, scaly, reddish and may ooze sometimes. Eczema cannot be cured, but its symptoms can be controlled. One should use moisturizers, use cold compresses and avoid frequent hot showers. There are over the counter corticosteroids creams which can be applied. Sometimes, an anti histamine has to be used to prevent the chronic itching sensations. There are also many individuals who have allergies to a whole host of substances like nuts, hair, dander, plants and fabrics. For these individuals, even the minimal exposure can lead to a full blown skin rash, itching, wheezing and coughing. Unfortunately, other than avoidance, there is no other cure. There are allergy shots which can help desensitize against an allergen but often the results are poor and the treatments are expensive. Most of these individuals with chronic irritation from allergens usually need to take anti histamines or use a bronchodilator to relieve symptoms. Another common irritation disorder in females is intertrigo. This disorder is associated with chronic irritation under folds of skin. This is typically seen under large breasts, groins and folds of the abdomen in obese individuals. Candida quickly grows in warm moist areas of these folds and presents as a chronic itch. Over time, the skin becomes red and often oozes. Perspiration is also a chronic type of irritation which can be very annoying. Besides being socially unacceptable, sweat stain the clothes and can present with a foul odor. In some individuals, the warm moist areas often become easily infected. The best way to treat excess sweating is good hygiene, frequent change of clothes and use of deodorants/antiperspirants. Vaginal irritation One of the most common areas of the body associated with irritation is the vagina. Many women complain of an itch, dryness, or discharge in the perineum at some point in their lives. There are several causes of vaginal irritation including fungal vaginitis (like candida) or trichomoniasis. Often, herpes simplex infection of the mouth or genitalia can be recurrent and prove to be extremely irritating. Sometimes, the irritation can be of the chronic type and it can be so intense that it also causes painful intercourse. Aside from infections, chronic irritation of the vagina may be related to the use of contraceptives and condoms made from latex. The majority of contraceptives are made of synthetic chemicals which can induce allergies, rash and itching. Sometimes the lubricant used for intercourse may cause irritation. Another cause of irritation in women is post menopausal vaginitis. The decline in the female sex hormones leads to development of dryness and itching in the vagina. This is often accompanied by painful sexual intercourse. Cracks and tears often develop on outer aspects of the labia which becomes red from chronic scratching. 
Post menopausal vaginitis can be treated with short term use of vaginal estrogen pessary and use of a moisturizer. Lungs Individuals who smoke or are exposed to smog or other airborne pollutants can develop a condition known as COPD. In this disorder, there is constant irritation of the breathing tubes (trachea) and the small airways. The constant irritation results in excess production of mucus which makes breathing difficult. Frequently, these individuals wake up in the morning with copious amounts of foul smelling mucus and a cough which lasts all day. Wheeze and heavy phlegm are common findings. COPD is a lifelong disorder and there is no cure. Eventually most people develop recurrent pneumonia, lack any type of endurance, and are unable to work productively. One of the ways to avoid chronic bronchitis is to stop or not smoke. Stomach Gastritis or stomach upset is a common irritating disorder affecting millions of people. Gastritis is basically inflammation of the stomach wall lining and has many causes. Smoking, excess alcohol consumption and the use of non-steroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen, account for the majority of causes of gastritis. In some cases, gastritis may develop after surgery, a major burn, infection or emotional stress. The most common symptoms of gastritis include sharp abdominal pain which may radiate to the back. This may be associated with nausea, vomiting, abdominal bloating and a lack of appetite. When the condition is severe it may even result in loss of blood on the stools. The condition often comes and goes for years because most people continue to drink alcohol or use NSAIDs. Treatment includes the use of antacids or acid neutralizing drugs, antibiotics, and avoiding spicy food and alcohol. See also Allergy Irritability (psychology) Itch Stimulus (physiology) References Physiology Inflammations Occupational hazards
Irritation
[ "Biology" ]
1,887
[ "Physiology" ]
326,483
https://en.wikipedia.org/wiki/Brahmagupta%E2%80%93Fibonacci%20identity
In algebra, the Brahmagupta–Fibonacci identity expresses the product of two sums of two squares as a sum of two squares in two different ways. Hence the set of all sums of two squares is closed under multiplication. Specifically, the identity says For example, The identity is also known as the Diophantus identity, as it was first proved by Diophantus of Alexandria. It is a special case of Euler's four-square identity, and also of Lagrange's identity. Brahmagupta proved and used a more general Brahmagupta identity, stating This shows that, for any fixed A, the set of all numbers of the form x2 + Ay2 is closed under multiplication. These identities hold for all integers, as well as all rational numbers; more generally, they are true in any commutative ring. All four forms of the identity can be verified by expanding each side of the equation. Also, (2) can be obtained from (1), or (1) from (2), by changing b to −b, and likewise with (3) and (4). History The identity first appeared in Diophantus' Arithmetica (III, 19), of the third century A.D. It was rediscovered by Brahmagupta (598–668), an Indian mathematician and astronomer, who generalized it to Brahmagupta's identity, and used it in his study of what is now called Pell's equation. His Brahmasphutasiddhanta was translated from Sanskrit into Arabic by Mohammad al-Fazari, and was subsequently translated into Latin in 1126. The identity was introduced in western Europe in 1225 by Fibonacci, in The Book of Squares, and, therefore, the identity has been often attributed to him. Related identities Analogous identities are Euler's four-square related to quaternions, and Degen's eight-square derived from the octonions which has connections to Bott periodicity. There is also Pfister's sixteen-square identity, though it is no longer bilinear. These identities are strongly related with Hurwitz's classification of composition algebras. The Brahmagupta–Fibonacci identity is a special form of Lagrange's identity, which is itself a special form of Binet–Cauchy identity, in turn a special form of the Cauchy–Binet formula for matrix determinants. Multiplication of complex numbers If a, b, c, and d are real numbers, the Brahmagupta–Fibonacci identity is equivalent to the multiplicative property for absolute values of complex numbers: This can be seen as follows: expanding the right side and squaring both sides, the multiplication property is equivalent to and by the definition of absolute value this is in turn equivalent to An equivalent calculation in the case that the variables a, b, c, and d are rational numbers shows the identity may be interpreted as the statement that the norm in the field Q(i) is multiplicative: the norm is given by and the multiplicativity calculation is the same as the preceding one. Application to Pell's equation In its original context, Brahmagupta applied his discovery of this identity to the solution of Pell's equation x2 − Ay2 = 1. Using the identity in the more general form he was able to "compose" triples (x1, y1, k1) and (x2, y2, k2) that were solutions of x2 − Ay2 = k, to generate the new triple Not only did this give a way to generate infinitely many solutions to x2 − Ay2 = 1 starting with one solution, but also, by dividing such a composition by k1k2, integer or "nearly integer" solutions could often be obtained. The general method for solving the Pell equation given by Bhaskara II in 1150, namely the chakravala (cyclic) method, was also based on this identity. 
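Both the two-squares identity and Brahmagupta's composition of Pell triples are easy to check numerically. The sketch below is only illustrative: the function names are not from any standard library, and the composed triple is taken in the usual form (x1x2 + Ay1y2, x1y2 + x2y1, k1k2).

```python
def two_square_products(a, b, c, d):
    """Return the two representations of (a^2+b^2)(c^2+d^2) as sums of two squares."""
    lhs = (a * a + b * b) * (c * c + d * d)
    first = (a * c - b * d) ** 2 + (a * d + b * c) ** 2
    second = (a * c + b * d) ** 2 + (a * d - b * c) ** 2
    assert lhs == first == second
    return first, second

def compose(t1, t2, A):
    """Brahmagupta composition of triples (x, y, k) satisfying x^2 - A*y^2 = k."""
    x1, y1, k1 = t1
    x2, y2, k2 = t2
    return (x1 * x2 + A * y1 * y2, x1 * y2 + x2 * y1, k1 * k2)

# (1^2 + 2^2)(3^2 + 4^2) = 125 both ways.
print(two_square_products(1, 2, 3, 4))

# Pell's equation x^2 - 2y^2 = 1: composing (3, 2, 1) with itself gives (17, 12, 1).
A = 2
x, y, k = compose((3, 2, 1), (3, 2, 1), A)
assert x * x - A * y * y == k
print(x, y, k)
```

Composing the solution (3, 2, 1) of x2 − 2y2 = 1 with itself in this way yields (17, 12, 1), illustrating how a single solution generates infinitely many.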
Writing integers as a sum of two squares When used in conjunction with one of Fermat's theorems, the Brahmagupta–Fibonacci identity proves that the product of a square and any number of primes of the form 4n + 1 is a sum of two squares. See also Brahmagupta matrix Indian mathematics Brahmagupta polynomials List of Indian mathematicians List of Italian mathematicians Sum of two squares theorem Notes References External links Brahmagupta's identity at PlanetMath Brahmagupta Identity on MathWorld A Collection of Algebraic Identities Brahmagupta Algebraic identities Squares in number theory
Brahmagupta–Fibonacci identity
[ "Mathematics" ]
979
[ "Mathematical identities", "Squares in number theory", "Number theory", "Algebraic identities" ]
326,533
https://en.wikipedia.org/wiki/Ramsey%20cardinal
In mathematics, a Ramsey cardinal is a certain kind of large cardinal number named after Frank P. Ramsey, whose theorem, called Ramsey's theorem, establishes that ω enjoys a certain property that Ramsey cardinals generalize to the uncountable case. Let [κ]<ω denote the set of all finite subsets of κ. A cardinal number κ is called Ramsey if, for every function f: [κ]<ω → {0, 1}, there is a set A of cardinality κ that is homogeneous for f. That is, for every n, the function f is constant on the subsets of cardinality n from A. A cardinal κ is called ineffably Ramsey if A can be chosen to be a stationary subset of κ. A cardinal κ is called virtually Ramsey if for every function f: [κ]<ω → {0, 1} there is C, a closed and unbounded subset of κ, so that for every λ in C of uncountable cofinality, there is an unbounded subset of λ that is homogeneous for f; slightly weaker is the notion of almost Ramsey where homogeneous sets for f are required of order type λ, for every λ < κ. The existence of any of these kinds of Ramsey cardinal is sufficient to prove the existence of 0#, or indeed that every set with rank less than κ has a sharp. This in turn implies the falsity of the Axiom of Constructibility of Kurt Gödel. Every measurable cardinal is a Ramsey cardinal, and every Ramsey cardinal is a Rowbottom cardinal. A property intermediate in strength between Ramseyness and measurability is existence of a κ-complete normal non-principal ideal I on κ such that for every set A not in I and for every function f: [κ]<ω → {0, 1} there is a set B ⊂ A not in I that is homogeneous for f. This is strictly stronger than κ being ineffably Ramsey. Definition by κ-models A regular cardinal κ is Ramsey if and only if for any set A ⊂ κ, there is a transitive set M ⊨ ZFC− (i.e. ZFC without the axiom of powerset) of size κ with A ∈ M, and a nonprincipal ultrafilter U on the Boolean algebra P(κ) ∩ M such that: U is an M-ultrafilter: for any sequence ⟨Xβ : β < κ⟩ ∈ M of members of U, the diagonal intersection ΔXβ = {α < κ : ∀β < α(α ∈ Xβ)} ∈ U, U is weakly amenable: for any sequence ⟨Xβ : β < κ⟩ ∈ M of subsets of κ, the set {β < κ : Xβ ∈ U} ∈ M, and U is σ-complete: the intersection of any countable family of members of U is again in U. References Bibliography Large cardinals Ramsey theory
Ramsey cardinal
[ "Mathematics" ]
611
[ "Mathematical objects", "Infinity", "Combinatorics", "Large cardinals", "Ramsey theory" ]
326,540
https://en.wikipedia.org/wiki/Erd%C5%91s%20cardinal
In mathematics, an Erdős cardinal, also called a partition cardinal is a certain kind of large cardinal number introduced by . A cardinal is called -Erdős if for every function , there is a set of order type that is homogeneous for . In the notation of the partition calculus, is -Erdős if . The existence of zero sharp implies that the constructible universe satisfies "for every countable ordinal , there is an -Erdős cardinal". In fact, for every indiscernible , satisfies "for every ordinal , there is an -Erdős cardinal in " (the Lévy collapse to make countable). However, the existence of an -Erdős cardinal implies existence of zero sharp. If is the satisfaction relation for (using ordinal parameters), then the existence of zero sharp is equivalent to there being an -Erdős ordinal with respect to . Thus, the existence of an -Erdős cardinal implies that the axiom of constructibility is false. The least -Erdős cardinal is not weakly compact,p. 39. nor is the least -Erdős cardinal.p. 39 If is -Erdős, then it is -Erdős in every transitive model satisfying " is countable." See also List of large cardinal properties References Citations Large cardinals Cardinal
Erdős cardinal
[ "Mathematics" ]
272
[ "Large cardinals", "Mathematical objects", "Infinity" ]
326,545
https://en.wikipedia.org/wiki/Electricity%20delivery
Electricity delivery is the process that begins after electricity is generated in the power station and ends with its use by the consumer. The main processes in electricity delivery are, in order: Transmission Distribution Retailing See also Electrical grid Electricity supply References Electric power
Electricity delivery
[ "Physics", "Engineering" ]
50
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
326,550
https://en.wikipedia.org/wiki/Subtle%20cardinal
In mathematics, subtle cardinals and ethereal cardinals are closely related kinds of large cardinal number. A cardinal is called subtle if for every closed and unbounded and for every sequence of length such that for all (where is the th element), there exist , belonging to , with , such that . A cardinal is called ethereal if for every closed and unbounded and for every sequence of length such that and has the same cardinality as for arbitrary , there exist , belonging to , with , such that . Subtle cardinals were introduced by . Ethereal cardinals were introduced by . Any subtle cardinal is ethereal,p. 388 and any strongly inaccessible ethereal cardinal is subtle.p. 391 Characterizations Some equivalent properties to subtlety are known. Relationship to Vopěnka's Principle Subtle cardinals are equivalent to a weak form of Vopěnka cardinals. Namely, an inaccessible cardinal is subtle if and only if in , any logic has stationarily many weak compactness cardinals. Vopenka's principle itself may be stated as the existence of a strong compactness cardinal for each logic. Chains in transitive sets There is a subtle cardinal if and only if every transitive set of cardinality contains and such that is a proper subset of and and .Corollary 2.6 An infinite ordinal is subtle if and only if for every , every transitive set of cardinality includes a chain (under inclusion) of order type . Extensions A hypersubtle cardinal is a subtle cardinal which has a stationary set of subtle cardinals below it.p.1014 See also List of large cardinal properties References Citations Large cardinals
Subtle cardinal
[ "Mathematics" ]
332
[ "Large cardinals", "Mathematical objects", "Infinity" ]
326,566
https://en.wikipedia.org/wiki/Concertina%20wire
Concertina wire or Dannert wire is a type of barbed wire or razor wire that is formed in large coils which can be expanded like a concertina. In conjunction with plain barbed wire (and/or razor wire/tape) and steel pickets, it is most often used to form military-style wire obstacles. It is also used in non-military settings, such as when used in prison barriers, detention camps, riot control, or at international borders. During World War I, soldiers manufactured concertina wire themselves, using ordinary barbed wire. Today, it is factory made. Origins In World War I, barbed wire obstacles were made by stretching lengths of barbed wire between stakes of wood or iron. At its simplest, such a barrier would resemble a fence as might be used for agricultural purposes. The double apron fence comprised a line of pickets with wires running diagonally down to points on the ground either side of the fence. Horizontal wires were attached to these diagonals. More elaborate and formidable obstructions could be formed with multiple lines of stakes connected with wire running from side-to-side, back-to-front, and diagonally in many directions. Effective as these obstacles were, their construction took considerable time. Barbed wire obstacles were vulnerable to being pushed about by artillery shells; in World War I, this frequently resulted in a mass of randomly entangled wires that could be even more daunting than a carefully constructed obstacle. Learning this lesson, World War I soldiers would deploy barbed wire in so-called concertinas that were relatively loose. Barbed wire concertinas could be prepared in the trenches and then deployed in no-man's-land relatively quickly under cover of darkness. Concertina wire packs flat for ease of transport and can then be deployed as an obstacle much more quickly than ordinary barbed wire, since the flattened coil of wire can easily be stretched out, forming an instant obstacle that will at least slow enemy passage. Several such coils with a few stakes to secure them in place are just as effective as an ordinary barbed wire fence, which must be built by driving stakes and running multiple wires between them. A platoon of soldiers can deploy a single concertina fence at a rate of about a kilometre ( mile) per hour. Such an obstacle is not very effective by itself (although it will still hinder an enemy advance under the guns of the defenders), and concertinas are normally built up into more elaborate patterns as time permits. Today, concertina wire is factory made and is available in forms that can be deployed very rapidly from the back of a vehicle or trailer. Dannert wire Oil-tempered barbed wire was developed during World War I; it was much harder to cut than ordinary barbed wire. During the 1930s, German Horst Dannert developed concertina wire of this high-grade steel wire. The result was entirely self-supporting; it did not require any vertical posts. An individual Dannert wire concertina could be compressed into a compact coil that could be carried by one man and then stretched out along its axis to make a barrier long and each coil could be held in place with just three staples hammered into the ground. Dannert wire was imported into Britain from Germany before World War II. During the invasion crisis of 1940–1941, the demand for Dannert wire was so great that some was produced with low manganese steel wire which was easier to cut. This material was known as "Yellow Dannert" after the identifying yellow paint on the concertina handles. 
To compensate for the reduced effectiveness of Yellow Dannert, an extra supply of pickets were issued in lieu of screw pickets. Triple concertina wire A barrier known as a triple concertina wire fence consists of two parallel concertinas joined by twists of wire and topped by a third concertina similarly attached. The result is an extremely effective barrier with many of the desirable properties of a random entanglement. A triple concertina fence could be deployed very quickly: it is possible for a party of five men to deploy of triple concertina fence in just 15 minutes. Optionally, triple concertina fence could be strengthened with uprights, but this increases the construction time significantly. "Constantine" wire Concertina wire is sometimes mistakenly called "constantine" wire. Constantine probably came from a corruption/misunderstanding of concertina and led to confusion with the Roman Emperor Constantine. This, in turn, has led to some people trying to differentiate between concertina wire and constantine wire by assigning the term constantine wire to what is commonly known as razor wire. In contrast to the helical construction of concertina wire, razor wire consists of a single wire with teeth that project periodically along its length. See also Slinky References Citations Works cited Further reading External links Engineering barrages Fortification (obstacles) Area denial weapons Wire
Concertina wire
[ "Engineering" ]
974
[ "Area denial weapons", "Military engineering", "Engineering barrages" ]
326,634
https://en.wikipedia.org/wiki/Value%20engineering
Value engineering (VE) is a systematic analysis of the functions of various components and materials to lower the cost of goods, products and services with a tolerable loss of performance or functionality. Value, as defined, is the ratio of function to cost. Value can therefore be manipulated by either improving the function or reducing the cost. It is a primary tenet of value engineering that basic functions be preserved and not be reduced as a consequence of pursuing value improvements. The term "value management" is sometimes used as a synonym of "value engineering", and both promote the planning and delivery of projects with improved performance. The reasoning behind value engineering is as follows: if marketers expect a product to become practically or stylistically obsolete within a specific length of time, they can design it to only last for that specific lifetime. The products could be built with higher-grade components, but with value engineering they are not because this would impose an unnecessary cost on the manufacturer, and to a limited extent also an increased cost on the purchaser. Value engineering will reduce these costs. A company will typically use the least expensive components that satisfy the product's lifetime projections, even at some risk to product and company reputation. Due to the very short life spans, however, which is often a result of this "value engineering technique", planned obsolescence has become associated with product deterioration and inferior quality. Vance Packard once claimed this practice gave engineering as a whole a bad name, as it directed creative engineering energies toward short-term market ends. Philosophers such as Herbert Marcuse and Jacque Fresco have also criticized the economic and societal implications of this model. History Value engineering began at General Electric Co. during World War II. Because of the war, there were shortages of skilled labour, raw materials, and component parts. Lawrence Miles, Jerry Leftow, and Harry Erlicher at G.E. looked for acceptable substitutes. They noticed that these substitutions often reduced costs, improved the product, or both. What started out as an accident of necessity was turned into a systematic process. They called their technique "value analysis" or "value control". The U.S. Navy's Bureau of Ships established a formal program of value engineering, overseen by Miles and Raymond Fountain, also from G.E., in 1957. Since the 1970s, the US Government's General Accounting Office (GAO) has recognised the benefit of value engineering. A 1992 statement by L. Nye Stevens, Director of Government Business Operations Issues within the GAO, referred to "considerable work" done by the GAO on value engineering and the office's recommendation that VE should be adopted by "all federal construction agencies". Dr. Paul Collopy, UAH Professor, ISEEM Department, has recommended an improvement to value engineering known as Value-Driven Design. Description Value engineering is sometimes taught within the project management, industrial engineering or architecture body of knowledge as a technique in which the value of a system's outputs is optimized by crafting a mix of performance (function) and costs. It is based on an analysis that investigates systems, equipment, facilities, services, and supplies in order to provide the necessary functions at the lowest life cycle cost while meeting the required targets for performance, reliability, quality, and safety.
In most cases this practice identifies and removes unnecessary expenditures, thereby increasing the value for the manufacturer and/or their customers. What must not be cut, however, are expenditures that support necessary functions, such as equipment maintenance and the working relationships between employees, equipment, and materials: a machinist cannot meet their quota if the drill press is temporarily inoperable for lack of maintenance, or if the material handler is not keeping up the daily checklists, tallies, logs, invoices, and accounting of maintenance and materials that each machinist relies on to maintain the required productivity. VE follows a structured thought process that is based exclusively on "function", i.e. what something "does", not what it "is". For example, a screwdriver that is being used to stir a can of paint has a "function" of mixing the contents of a paint can and not the original connotation of securing a screw into a screw-hole. In value engineering "functions" are always described in a two-word abridgment consisting of an active verb and measurable noun (what is being done – the verb – and what it is being done to – the noun) and to do so in the most non-prescriptive way possible. In the screwdriver and can of paint example, the most basic function would be "blend liquid", which is less prescriptive than "stir paint", which can be seen to limit the action (by stirring) and to limit the application (only considers paint). Value engineering uses rational logic (a unique "how" – "why" questioning technique) and the analysis of function to identify relationships that increase value. It is considered a quantitative method similar to the scientific method, which focuses on hypothesis-conclusion approaches to test relationships, and operations research, which uses model building to identify predictive relationships. Legal terminology In the United States, value engineering is specifically mandated for federal agencies by section 4306 of the National Defense Authorization Act for Fiscal Year 1996, which amended the Office of Federal Procurement Policy Act (41 U.S.C. 401 et seq.): "Each executive agency shall establish and maintain cost-effective value engineering procedures and processes." "As used in this section, the term 'value engineering' means an analysis of the functions of a program, project, system, product, item of equipment, building, facility, service, or supply of an executive agency, performed by qualified agency or contractor personnel, directed at improving performance, reliability, quality, safety, and life cycle costs." An earlier bill, HR 281, the "Systematic Approach for Value Engineering Act", was proposed in 1990, which would have mandated the use of VE in major federally-sponsored construction, design or IT system contracts. This bill identified the objective of a value engineering review as "reducing all costs (including initial and long-term costs) and improving quality, performance, productivity, efficiency, promptness, reliability, maintainability, and aesthetics". Federal Acquisition Regulation (FAR) part 48 provides direction to federal agencies on the use of VE techniques.
The FAR provides for an incentive approach, under which a contractor's participation in VE is voluntary; under this approach a contractor may at its own expense develop and submit a value engineering change proposal (VECP) for agency consideration, or a mandatory program, where the agency directs and funds for a specific VE project. In the United Kingdom In the United Kingdom, the lawfulness of undertaking value engineering discussions with a supplier in advance of contract award is one of the issues which was highlighted during the inquiry into the Grenfell Tower fire of 2017. The inquiry report was highly sceptical of the whole endeavour of value engineering: Professional association The Society of American Value Engineers (SAVE) was established in 1959. Since 1996, it has been known as SAVE International. See also Benefits realisation management Cost Cost engineering Cost overrun ISO 15686 Muntzing Overengineering Value theory References Further reading Cooper, R. and Slagmulder, R. (1997): Target Costing and Value Engineering "Value Optimization for Project and Performance Management by Robert B. Stewart, CVS-Life, FSAVE, PMP" External links Lawrence D. Miles Value Foundation Society of Valuemanagers www.valuemanagers.org SAVE International – American Value engineering society wertanalyse.com – Many links regarding VE organizations and publications The Canadian Society of Value Analysis – Value Engineering in Canada Value Engineering's History in Construction- American Institute of Architects – AIA The Institute of Value Management, UK The APTE method Catalan Association of Value Analysis – ACAV Industrial engineering Design for X Cost engineering
Value engineering
[ "Engineering" ]
1,617
[ "Cost engineering", "Design", "Industrial engineering", "Design for X" ]
326,647
https://en.wikipedia.org/wiki/Differential%20amplifier
A differential amplifier is a type of electronic amplifier that amplifies the difference between two input voltages but suppresses any voltage common to the two inputs. It is an analog circuit with two inputs and and one output , in which the output is ideally proportional to the difference between the two voltages: where is the gain of the amplifier. Single amplifiers are usually implemented by either adding the appropriate feedback resistors to a standard op-amp, or with a dedicated integrated circuit containing internal feedback resistors. It is also a common sub-component of larger integrated circuits handling analog signals. Mathematics of the amplifier where and are the input voltages, and is the differential gain. In practice, however, the gain is not quite equal for the two inputs. This means, for instance, that if and are equal, the output will not be zero, as it would be in the ideal case. A more realistic expression for the output of a differential amplifier thus includes a second term: where is called the common-mode gain of the amplifier. As differential amplifiers are often used to null out noise or bias voltages that appear at both inputs, a low common-mode gain is usually desired. The common-mode rejection ratio (CMRR), usually defined as the ratio between differential-mode gain and common-mode gain, indicates the ability of the amplifier to accurately cancel voltages that are common to both inputs. The common-mode rejection ratio is defined as In a perfectly symmetric differential amplifier, is zero, and the CMRR is infinite. Note that a differential amplifier is a more general form of amplifier than one with a single input; by grounding one input of a differential amplifier, a single-ended amplifier results. Long-tailed pair Historical background Modern differential amplifiers are usually implemented with a basic two-transistor circuit called a “long-tailed” pair or differential pair. This circuit was originally implemented using a pair of vacuum tubes. The circuit works the same way for all three-terminal devices with current gain. The bias points of “long-tail” resistor circuit are largely determined by Ohm's law and less so by active-component characteristics. The long-tailed pair was developed from earlier knowledge of push–pull circuit techniques and measurement bridges. An early circuit which closely resembles a long-tailed pair was published by British neurophysiologist Bryan Matthews in 1934, and it seems likely that this was intended to be a true long-tailed pair but was published with a drawing error. The earliest definite long-tailed pair circuit appears in a patent submitted by Alan Blumlein in 1936. By the end of the 1930s the topology was well established and had been described by various authors, including Frank Offner (1937), Otto Schmitt (1937) and Jan Friedrich Toennies (1938), and it was particularly used for detection and measurement of physiological impulses. The long-tailed pair was very successfully used in early British computing, most notably the Pilot ACE model and descendants, Maurice Wilkes’ EDSAC, and probably others designed by people who worked with Blumlein or his peers. The long-tailed pair has many favorable attributes if used as a switch: largely immune to tube (transistor) variations (of great importance when machines contained 1,000 tubes or more), high gain, gain stability, high input impedance, medium/low output impedance, good clipper (with a not-too-long tail), non-inverting (EDSAC contained no inverters!) 
and large output voltage swings. One disadvantage is that the output voltage swing (typically ±10–20 V) was imposed upon a high DC voltage (200 V or so), requiring care in signal coupling, usually some form of wide-band DC coupling. Many computers of this time tried to avoid this problem by using only AC-coupled pulse logic, which made them very large and overly complex (ENIAC: 18,000 tubes for a 20-digit calculator) or unreliable. DC-coupled circuitry became the norm after the first generation of vacuum-tube computers. Configurations A differential (long-tailed, emitter-coupled) pair amplifier consists of two amplifying stages with common (emitter, source or cathode) degeneration. Differential output With two inputs and two outputs, this forms a differential amplifier stage (Figure 2). The two bases (or grids or gates) are inputs which are differentially amplified (subtracted and multiplied) by the transistor pair; they can be fed with a differential (balanced) input signal, or one input could be grounded to form a phase splitter circuit. An amplifier with differential output can drive a floating load or another stage with differential input. Single-ended output If the differential output is not desired, then only one output can be used (taken from just one of the collectors (or anodes or drains), disregarding the other output; this configuration is referred to as single-ended output. The gain is half that of the stage with differential output. To avoid sacrificing gain, a differential to single-ended converter can be utilized. This is often implemented as a current mirror (Figure 3, below). Single-ended input The differential pair can be used as an amplifier with a single-ended input if one of the inputs is grounded or fixed to a reference voltage (usually, the other collector is used as a single-ended output) This arrangement can be thought of as cascaded common-collector and common-base stages or as a buffered common-base stage. The emitter-coupled amplifier is compensated for temperature drifts, VBE is cancelled, and the Miller effect and transistor saturation are avoided. That is why it is used to form emitter-coupled amplifiers (avoiding Miller effect), phase splitter circuits (obtaining two inverse voltages), ECL gates and switches (avoiding transistor saturation), etc. Operation To explain the circuit operation, four particular modes are isolated below although, in practice, some of them act simultaneously and their effects are superimposed. Biasing In contrast with classic amplifying stages that are biased from the side of the base (and so they are highly β-dependent), the differential pair is directly biased from the side of the emitters by sinking/injecting the total quiescent current. The series negative feedback (the emitter degeneration) makes the transistors act as voltage stabilizers; it forces them to adjust their VBE voltages (base currents) to pass the quiescent current through their collector-emitter junctions. So, due to the negative feedback, the quiescent current depends only slightly on the transistor's β. The biasing base currents needed to evoke the quiescent collector currents usually come from the ground, pass through the input sources and enter the bases. So, the sources have to be galvanic (DC) to ensure paths for the biasing current and low resistive enough to not create significant voltage drops across them. Otherwise, additional DC elements should be connected between the bases and the ground (or the positive power supply). 
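As a rough illustration of this biasing arrangement, the following sketch estimates the quiescent operating point of an NPN pair whose common emitter node is returned to a negative supply through a plain tail resistor. The supply voltages, resistor values and the 0.7 V base-emitter drop are all assumed for illustration, and base currents are neglected.

```python
# Quiescent (DC) operating point of a resistively "long-tailed" NPN pair.
# First-order estimate: bases at 0 V, VBE ~ 0.7 V, base currents neglected.
VCC = 10.0      # positive supply, volts (assumed)
VEE = -10.0     # negative supply, volts (assumed)
R_TAIL = 9.3e3  # tail resistor from the common emitter node to VEE, ohms (assumed)
R_C = 5e3       # collector resistor on each side, ohms (assumed)
V_BE = 0.7      # nominal base-emitter drop, volts (assumed)

v_emitter = 0.0 - V_BE                  # common emitter node, with both bases grounded
i_tail = (v_emitter - VEE) / R_TAIL     # total quiescent ("tail") current, ~1 mA here
i_c = i_tail / 2                        # splits evenly between the two matched halves
v_collector = VCC - i_c * R_C           # quiescent collector voltage on each side

print(f"tail current  = {i_tail * 1e3:.2f} mA")
print(f"IC per device = {i_c * 1e3:.2f} mA")
print(f"VC per device = {v_collector:.2f} V")
```

Because the tail current is set by the resistor and supply rather than by the transistors, the operating point depends only weakly on the transistors' current gain, which is the point made above.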
Common mode In common mode (the two input voltages change in the same directions), the two voltage (emitter) followers cooperate with each other working together on the common high-resistive emitter load (the "long tail"). They all together increase or decrease the voltage of the common emitter point (figuratively speaking, they together "pull up" or "pull down" it so that it moves). In addition, the dynamic load "helps" them by changing its instant ohmic resistance in the same direction as the input voltages (it increases when the voltage increases and vice versa.) thus keeping up constant total resistance between the two supply rails. There is a full (100%) negative feedback; the two input base voltages and the emitter voltage change simultaneously while the collector currents and the total current do not change. As a result, the output collector voltages do not change as well. Differential mode Normal. In differential mode (the two input voltages change in opposite directions), the two voltage (emitter) followers oppose each other—while one of them tries to increase the voltage of the common emitter point, the other tries to decrease it (figuratively speaking, one of them "pulls up" the common point while the other "pulls down" it so that it stays immovable) and vice versa. So, the common point does not change its voltage; it behaves like a virtual ground with a magnitude determined by the common-mode input voltages. The high-resistance emitter element does not play any role—it is shunted by the other low-resistance emitter follower. There is no negative feedback, since the emitter voltage does not change at all when the input base voltages change. The common quiescent current vigorously steers between the two transistors and the output collector voltages vigorously change. The two transistors mutually ground their emitters; so, although they are common-collector stages, they actually act as common-emitter stages with maximum gain. Bias stability and independence from variations in device parameters can be improved by negative feedback introduced via cathode/emitter resistors with relatively small resistances. Overdriven. If the input differential voltage changes significantly (more than about a hundred millivolts), the transistor driven by the lower input voltage turns off and its collector voltage reaches the positive supply rail. At high overdrive the base-emitter junction gets reversed. The other transistor (driven by the higher input voltage) drives all the current. If the resistor at the collector is relatively large, the transistor will saturate. With relatively small collector resistor and moderate overdrive, the emitter can still follow the input signal without saturation. This mode is used in differential switches and ECL gates. Breakdown. If the input voltage continues increasing and exceeds the base-emitter breakdown voltage, the base-emitter junction of the transistor driven by the lower input voltage breaks down. If the input sources are low resistive, an unlimited current will flow directly through the "diode bridge" between the two input sources and will damage them. In common mode, the emitter voltage follows the input voltage variations; there is a full negative feedback and the gain is minimum. In differential mode, the emitter voltage is fixed (equal to the instant common input voltage); there is no negative feedback and the gain is maximum. 
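The current steering between the two transistors described above can be made concrete with the ideal exponential transistor model, in which the tail current divides between the collectors as a logistic function of the differential input: a few tens of millivolts is enough to send nearly all of the current to one side. The sketch below is an idealization only; it neglects base currents, the Early effect and device mismatch, and the component values are assumed for illustration.

```python
import math

VT = 0.026       # thermal voltage at room temperature, volts
I_TAIL = 1e-3    # tail bias current, amperes (illustrative)
R_C = 5e3        # collector load resistor, ohms (illustrative)

def collector_currents(v_id):
    """Split of the tail current for a differential input v_id, ideal BJT pair."""
    i_c1 = I_TAIL / (1.0 + math.exp(-v_id / VT))
    i_c2 = I_TAIL - i_c1
    return i_c1, i_c2

# Sweep the differential input: around 100 mV the pair is effectively a switch,
# which is the "overdriven" mode described above.
for v_id in (0.0, 0.01, 0.05, 0.1, 0.2):
    i_c1, i_c2 = collector_currents(v_id)
    v_out_diff = R_C * (i_c2 - i_c1)     # differential output voltage across the loads
    print(f"Vid={v_id * 1e3:6.1f} mV  IC1={i_c1 * 1e3:5.3f} mA  "
          f"IC2={i_c2 * 1e3:5.3f} mA  Vout(diff)={v_out_diff:7.3f} V")
```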
Differential amplifier improvements Collector current mirror The collector resistors can be replaced by a current mirror (the top blue section in Fig. 3), whose output part acts as an active load. Thus the differential collector current signal is converted to a single-ended voltage signal without the intrinsic 50% losses, so the gain is doubled. This is achieved by copying the input collector current from the left to the right side, where the magnitudes of the two input signals add. For this purpose, the input of the current mirror is connected to the left output, and the output of the current mirror is connected to the right output of the differential amplifier. The current mirror copies the left collector current and passes it through the right transistor that produces the right collector current. At this right output of the differential amplifier, the two signal currents (pos. and neg. current changes) are subtracted. In this case (differential input signal), they are equal and opposite. Thus, the difference is twice the individual signal currents (ΔI − (−ΔI) = 2ΔI), and the differential to single-ended conversion is completed without gain losses. Fig. 4 shows the transmission characteristic of this circuit. Emitter constant current source The quiescent current has to be constant to ensure constant collector voltages at common mode. This requirement is not so important in the case of a differential output, since although their two collector voltages will vary simultaneously their difference (the output voltage) will not vary. But in the case of a single-ended output, it is extremely important to keep a constant current since the output collector voltage will vary. Thus the higher the resistance of the current source in the original circuit of Fig. 2, the lower (better) is the common-mode gain . The constant current needed could be produced by connecting an element (resistor) with very high resistance between the shared emitter node and the supply rail (negative for NPN and positive for PNP transistors), but that requires a high supply voltage. So in more sophisticated designs, an element with high differential (dynamic) resistance approximating a constant current source/sink (the bottom of Fig. 3) is substituted for the “long tail”. It is usually implemented by a current mirror because of its high compliance voltage (small voltage drop across the output transistor). Interfacing considerations Floating input source It is possible to connect a floating source between the two bases, but it is necessary to ensure paths for the biasing base currents. In the case of galvanic source, only one resistor has to be connected between one of the bases and the ground. The biasing current will enter directly this base and indirectly (through the input source) the other one. If the source is capacitive, two resistors have to be connected between the two bases and the ground to ensure different paths for the base currents. Input/output impedance The input impedance of the differential pair highly depends on the input mode. At common mode, the two parts behave as common-collector stages with high emitter loads; so, the input impedances are extremely high. At differential mode, they behave as common-emitter stages with grounded emitters; so, the input impedances are low. The output impedance of the differential pair is high (especially for the improved differential pair with a current mirror as shown in Figure 3). 
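The benefit of a high-resistance tail can also be put in numbers. Using the usual first-order small-signal approximations for a resistively loaded pair with single-ended output (matched transistors, illustrative values), raising the tail resistance, or replacing the tail resistor with a current source, leaves the differential gain unchanged but shrinks the common-mode gain and therefore raises the CMRR.

```python
import math

# Rough small-signal estimates for a resistively loaded long-tailed pair,
# single-ended output. Standard first-order approximations; values are assumed.
VT = 0.026                     # thermal voltage, volts
I_TAIL = 1e-3                  # tail bias current, amperes
R_C = 5e3                      # collector resistor, ohms
gm = (I_TAIL / 2) / VT         # transconductance of each transistor at IC = I_TAIL/2

def gains(r_tail):
    """Approximate single-ended differential and common-mode gains."""
    a_diff = gm * R_C / 2                 # differential gain, single-ended output
    a_cm = -R_C / (2 * r_tail + 1 / gm)   # common-mode gain, set by the tail impedance
    return a_diff, a_cm

for r_tail in (10e3, 100e3, 1e6):         # plain resistor vs. current-source "tail"
    a_d, a_cm = gains(r_tail)
    cmrr_db = 20 * math.log10(abs(a_d / a_cm))
    print(f"R_tail={r_tail:10.0f} ohm  Ad={a_d:6.1f}  Acm={a_cm:9.4f}  CMRR={cmrr_db:5.1f} dB")
```

With these numbers the differential gain stays near 48 while the common-mode gain drops by two orders of magnitude as the tail impedance rises from 10 kΩ to 1 MΩ, which is exactly why the current-source tail of Figure 3 is preferred.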
Input/output range The common-mode input voltage can vary between the two supply rails but cannot closely reach them since some voltage drops (minimum 1 volt) have to remain across the output transistors of the two current mirrors. Operational amplifier as differential amplifier An operational amplifier, or op-amp, is a differential amplifier with very high differential-mode gain, very high input impedance, and low output impedance. An op-amp differential amplifier can be built with predictable and stable gain by applying negative feedback (Figure 5). Some kinds of differential amplifier usually include several simpler differential amplifiers. For example, a fully differential amplifier, an instrumentation amplifier, or an isolation amplifier are often built from a combination of several op-amps. Applications Differential amplifiers are found in many circuits that utilize series negative feedback (op-amp follower, non-inverting amplifier, etc.), where one input is used for the input signal, the other for the feedback signal (usually implemented by operational amplifiers). For comparison, the old-fashioned inverting single-ended op-amps from the early 1940s could realize only parallel negative feedback by connecting additional resistor networks (an op-amp inverting amplifier is the most popular example). A common application is for the control of motors or servos, as well as for signal amplification applications. In discrete electronics, a common arrangement for implementing a differential amplifier is the long-tailed pair, which is also usually found as the differential element in most op-amp integrated circuits. A long-tailed pair can be used as an analog multiplier with the differential voltage as one input and the biasing current as another. A differential amplifier is used as the input stage emitter coupled logic gates and as switch. When used as a switch, the "left" base/grid is used as signal input and the "right" base/grid is grounded; output is taken from the right collector/plate. When the input is zero or negative, the output is close to zero (but can be not saturated); when the input is positive, the output is most-positive, dynamic operation being the same as the amplifier use described above. The differential amplifier is used in the cathode follower oscillator. The advantages are high impedance of the differential amplifier input and output and small phase shift between input and output. This application uses only one input and one output of the differential amplifier. Symmetrical feedback network eliminates common-mode gain and common-mode bias In case the operational amplifier's (non-ideal) input bias current or differential input impedance are a significant effect, one can select a feedback network that improves the effect of common-mode input signal and bias. In Figure 6, current generators model the input bias current at each terminal; I+b and I−b represent the input bias current at terminals V+ and V− respectively. The Thévenin equivalent for the network driving the V+ terminal has a voltage V+' and impedance R+': while for the network driving the V− terminal: The output of the op-amp is just the open-loop gain Aol times the differential input current i times the differential input impedance 2Rd, therefore where R|| is the average of R+|| and R−||. These equations undergo a great simplification if resulting in the relation which implies that the closed-loop gain for the differential signal is V+in − V−in, but the common-mode gain is identically zero. 
It also implies that the common-mode input bias current has cancelled out, leaving only the input offset current IΔb = I+b − I−b still present, and with a coefficient of Ri. It is as if the input offset current is equivalent to an input offset voltage acting across an input resistance Ri, which is the source resistance of the feedback network into the input terminals. Finally, as long as the open-loop voltage gain Aol is much larger than unity, the closed-loop voltage gain is Rf/Ri, the value one would obtain through the rule-of-thumb analysis known as "virtual ground". Footnotes See also Gilbert cell Instrumentation amplifier Op-amp differential configuration Emitter-coupled logic References External links BJT Differential Amplifier – Circuit and explanation A testbench for differential circuits Application Note: Analog Devices – AN-0990 : Terminating a Differential Amplifier in Single-Ended Input Applications Electronic amplifiers fr:Amplificateur de mesure#Amplificateur différentiel
Differential amplifier
[ "Technology" ]
3,797
[ "Electronic amplifiers", "Amplifiers" ]
326,696
https://en.wikipedia.org/wiki/Bayer%20process
The Bayer process is the principal industrial means of refining bauxite to produce alumina (aluminium oxide) and was developed by Carl Josef Bayer. Bauxite, the most important ore of aluminium, contains only 30–60% aluminium oxide (Al2O3), the rest being a mixture of silica, various iron oxides, and titanium dioxide. The aluminium oxide must be further purified before it can be refined into aluminium. The Bayer process is also the main source of gallium as a byproduct, despite low extraction yields. Process Bauxite ore is a mixture of hydrated aluminium oxides and compounds of other elements such as iron. The aluminium compounds in the bauxite may be present as gibbsite (Al(OH)3), böhmite (γ-AlO(OH)) or diaspore (α-AlO(OH)); the different forms of the aluminium component and the impurities dictate the extraction conditions. Aluminium oxides and hydroxides are amphoteric, meaning that they are both acidic and basic. The solubility of Al(III) in water is very low but increases substantially at either high or low pH. In the Bayer process, bauxite ore is heated in a pressure vessel along with a sodium hydroxide solution (caustic soda) at a temperature of . At these temperatures, the aluminium is dissolved as sodium aluminate (primarily [Al(OH)4]−) in an extraction process. After separation of the residue by filtering, gibbsite is precipitated when the liquid is cooled and then seeded with fine-grained aluminium hydroxide crystals from previous extractions. The precipitation may take several days without addition of seed crystals. The extraction process (digestion) converts the aluminium oxide in the ore to soluble sodium aluminate, NaAlO2, according to the chemical equation: Al(OH)3 + NaOH → NaAlO2 + 2 H2O This treatment also dissolves silica, forming sodium silicate: 2 NaOH + SiO2 → Na2SiO3 + H2O The other components of bauxite, however, do not dissolve. Sometimes lime is added at this stage to precipitate the silica as calcium silicate. The solution is clarified by filtering off the solid impurities, commonly with a rotary sand trap and with the aid of a flocculant such as starch, to remove the fine particles. The undissolved waste left after the aluminium compounds are extracted, bauxite tailings, contains iron oxides, silica, calcia, titania and some unreacted alumina. In the original process, the alkaline solution was cooled and treated by bubbling carbon dioxide through it, a method by which aluminium hydroxide precipitates: 2 NaAlO2 + 3 H2O + CO2 → 2 Al(OH)3 + Na2CO3 But later, this gave way to seeding the supersaturated solution with high-purity aluminium hydroxide (Al(OH)3) crystal, which eliminated the need for cooling the liquid and was more economically feasible: 2 H2O + NaAlO2 → Al(OH)3 + NaOH Some of the aluminium hydroxide produced is used in the manufacture of water treatment chemicals such as aluminium sulfate, PAC (polyaluminium chloride) or sodium aluminate; a significant amount is also used as a filler in rubber and plastics as a fire retardant. Some 90% of the gibbsite produced is converted into aluminium oxide, Al2O3, by heating in rotary kilns or fluid flash calciners to a temperature of about . 2 Al(OH)3 → Al2O3 + 3 H2O The left-over, 'spent' sodium aluminate solution is then recycled. Apart from improving the economy of the process, recycling accumulates gallium and vanadium impurities in the liquors, so that they can be extracted profitably.
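Because the calcination of gibbsite is a fixed stoichiometric conversion, the theoretical mass yield of alumina follows directly from the molar masses. A minimal sketch, assuming pure gibbsite and ignoring all process losses:

```python
# Theoretical alumina yield from the calcination 2 Al(OH)3 -> Al2O3 + 3 H2O.
# Assumes pure gibbsite feed and no process losses.
M_AL, M_O, M_H = 26.98, 16.00, 1.008            # molar masses, g/mol

m_gibbsite = M_AL + 3 * (M_O + M_H)             # Al(OH)3, about 78.0 g/mol
m_alumina = 2 * M_AL + 3 * M_O                  # Al2O3, about 102.0 g/mol

yield_fraction = m_alumina / (2 * m_gibbsite)   # mass of Al2O3 per mass of Al(OH)3
water_fraction = 1 - yield_fraction             # mass driven off as water vapour

print(f"1 tonne of gibbsite -> {yield_fraction:.3f} t Al2O3 "
      f"and {water_fraction:.3f} t water driven off")
```

The result, roughly 0.65 tonnes of alumina per tonne of gibbsite, is an upper bound; real plants lose additional mass to impurities and to alumina left in the red mud.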
Organic impurities that accumulate during the precipitation of gibbsite may cause various problems, for example high levels of undesirable materials in the gibbsite, discoloration of the liquor and of the gibbsite, losses of the caustic material, and increased viscosity and density of the working fluid. For bauxites having more than 10% silica, the Bayer process becomes uneconomic because of the formation of insoluble sodium aluminium silicate, which reduces yield, so another process must be chosen. of bauxite (corresponding to about 90% of the alumina content of the bauxite) is required to produce of aluminium oxide. This is due to a majority of the aluminium in the ore being dissolved in the process. Energy consumption is between (depending on process), of which most is thermal energy. Over 90% (95-96%) of the aluminium oxide produced is used in the Hall–Héroult process to produce aluminium. Waste Red mud is the waste product that is produced in the digestion of bauxite with sodium hydroxide. It has high calcium and sodium hydroxide content with a complex chemical composition, and accordingly is very caustic and a potential source of pollution. The amount of red mud produced is considerable, and this has led scientists and refiners to seek uses for it. It has received attention as a possible source of vanadium. Due to the low extraction yield much of the gallium ends up in the aluminium oxide as an impurity and in the red mud. One use of red mud is in ceramic production. Red mud dries into a fine powder that contains iron, aluminium, calcium and sodium. It becomes a health risk when some plants use the waste to produce aluminium oxides. In the United States, the waste is disposed in large impoundments, a sort of reservoir created by a dam. The impoundments are typically lined with clay or synthetic liners. The US does not approve of the use of the waste due to the danger it poses to the environment. The EPA identified high levels of arsenic and chromium in some red mud samples. Ajka alumina plant accident On October 4, 2010, the Ajka alumina plant in Hungary had an incident where the western dam of its red mud reservoir collapsed. The reservoir was filled with of a mixture of red mud and water with a pH of 12. The mixture was released into the valley of Torna river and flooded parts of the city of Devecser and the villages of Kolontár and Somlóvásárhely. The incident resulted in 10 deaths, more than a hundred injuries, and contamination in lakes and rivers. History In 1859, Henri Étienne Sainte-Claire Deville in France developed a method for making alumina by heating bauxite in sodium carbonate, , at , leaching the sodium aluminate formed with water, then precipitating aluminium hydroxide by carbon dioxide, , which was then filtered and dried. This process is known as the Deville–Pechiney process. In 1886, the Hall–Héroult electrolytic aluminium process was invented, and the cyanidation process was invented in 1887. The Bayer process was invented in 1888 by Carl Josef Bayer. Working in Saint Petersburg, Russia to develop a method for supplying alumina to the textile industry (it was used as a mordant in dyeing cotton), Bayer discovered in 1887 that the aluminium hydroxide that precipitated from alkaline solution was crystalline and could be easily filtered and washed, while that precipitated from acid medium by neutralization was gelatinous and difficult to wash. 
The industrial success of this process caused it to replace the Deville–Pechiney process, marking the birth of the modern field of hydrometallurgy. The engineering aspects of the process were improved upon to decrease the cost starting in 1967 in Germany and Czechoslovakia. This was done by increasing the heat recovery and using large autoclaves and precipitation tanks. To more effectively use energy, heat exchangers and flash tanks were used and larger reactors decreased the amount of heat lost. Efficiency was increased by connecting the autoclaves to make operation more efficient. Today, the process produces nearly all the world's alumina supply as an intermediate step in aluminium production. See also Ajka alumina plant accident Deville process Hall–Héroult process History of aluminium References Chemical processes Aluminium industry Metallurgical processes
Bayer process
[ "Chemistry", "Materials_science" ]
1,743
[ "Metallurgical processes", "Metallurgy", "Chemical processes", "nan", "Chemical process engineering" ]
326,707
https://en.wikipedia.org/wiki/Protein%20kinase%20A
In cell biology, protein kinase A (PKA) is a family of serine-threonine kinase whose activity is dependent on cellular levels of cyclic AMP (cAMP). PKA is also known as cAMP-dependent protein kinase (). PKA has several functions in the cell, including regulation of glycogen, sugar, and lipid metabolism. It should not be confused with 5'-AMP-activated protein kinase (AMP-activated protein kinase). History Protein kinase A, more precisely known as adenosine 3',5'-monophosphate (cyclic AMP)-dependent protein kinase, abbreviated to PKA, was discovered by chemists Edmond H. Fischer and Edwin G. Krebs in 1968. They won the Nobel Prize in Physiology or Medicine in 1992 for their work on phosphorylation and dephosphorylation and how it relates to PKA activity. PKA is one of the most widely researched protein kinases, in part because of its uniqueness; out of 540 different protein kinase genes that make up the human kinome, only one other protein kinase, casein kinase 2, is known to exist in a physiological tetrameric complex, meaning it consists of four subunits. The diversity of mammalian PKA subunits was realized after Dr. Stan McKnight and others identified four possible catalytic subunit genes and four regulatory subunit genes. In 1991, Susan Taylor and colleagues crystallized the PKA Cα subunit, which revealed the bi-lobe structure of the protein kinase core for the very first time, providing a blueprint for all the other protein kinases in a genome (the kinome). Structure When inactive, the PKA apoenzyme exists as a tetramer which consists of two regulatory subunits and two catalytic subunits. The catalytic subunit contains the active site, a series of canonical residues found in protein kinases that bind and hydrolyse ATP, and a domain to bind the regulatory subunit. The regulatory subunit has domains to bind to cyclic AMP, a domain that interacts with catalytic subunit, and an auto inhibitory domain. There are two major forms of regulatory subunit; RI and RII. Mammalian cells have at least two types of PKAs: type I is mainly in the cytosol, whereas type II is bound via its regulatory subunits and special anchoring proteins, described in the anchorage section, to the plasma membrane, nuclear membrane, mitochondrial outer membrane, and microtubules. In both types, once the catalytic subunits are freed and active, they can migrate into the nucleus (where they can phosphorylate transcription regulatory proteins), while the regulatory subunits remain in the cytoplasm. The following human genes encode PKA subunits: catalytic subunit – PRKACA, PRKACB, PRKACG regulatory subunit type I - PRKAR1A, PRKAR1B regulatory subunit type II - PRKAR2A, PRKAR2B Mechanism Activation PKA is also commonly known as cAMP-dependent protein kinase, because it has traditionally been thought to be activated through release of the catalytic subunits when levels of the second messenger called cyclic adenosine monophosphate, or cAMP, rise in response to a variety of signals. However, recent studies evaluating the intact holoenzyme complexes, including regulatory AKAP-bound signalling complexes, have suggested that the local sub cellular activation of the catalytic activity of PKA might proceed without physical separation of the regulatory and catalytic components, especially at physiological concentrations of cAMP. 
In contrast, experimentally induced supra physiological concentrations of cAMP, meaning higher than normally observed in cells, are able to cause separation of the holoenzymes, and release of the catalytic subunits. Extracellular hormones, such as glucagon and epinephrine, begin an intracellular signalling cascade that triggers protein kinase A activation by first binding to a G protein–coupled receptor (GPCR) on the target cell. When a GPCR is activated by its extracellular ligand, a conformational change is induced in the receptor that is transmitted to an attached intracellular heterotrimeric G protein complex by protein domain dynamics. The Gs alpha subunit of the stimulated G protein complex exchanges GDP for GTP in a reaction catalyzed by the GPCR and is released from the complex. The activated Gs alpha subunit binds to and activates an enzyme called adenylyl cyclase, which, in turn, catalyzes the conversion of ATP into cAMP, directly increasing the cAMP level. Four cAMP molecules are able to bind to the two regulatory subunits. This is done by two cAMP molecules binding to each of the two cAMP binding sites (CNB-B and CNB-A) which induces a conformational change in the regulatory subunits of PKA, causing the subunits to detach and unleash the two, now activated, catalytic subunits. Once released from inhibitory regulatory subunit, the catalytic subunits can go on to phosphorylate a number of other proteins in the minimal substrate context Arg-Arg-X-Ser/Thr., although they are still subject to other layers of regulation, including modulation by the heat stable pseudosubstrate inhibitor of PKA, termed PKI. Below is a list of the steps involved in PKA activation: Cytosolic cAMP increases Two cAMP molecules bind to each PKA regulatory subunit The regulatory subunits move out of the active sites of the catalytic subunits and the R2C2 complex dissociates The free catalytic subunits interact with proteins to phosphorylate Ser or Thr residues. Catalysis The liberated catalytic subunits can then catalyze the transfer of ATP terminal phosphates to protein substrates at serine, or threonine residues. This phosphorylation usually results in a change in activity of the substrate. Since PKAs are present in a variety of cells and act on different substrates, PKA regulation and cAMP regulation are involved in many different pathways. The mechanisms of further effects may be divided into direct protein phosphorylation and protein synthesis: In direct protein phosphorylation, PKA directly either increases or decreases the activity of a protein. In protein synthesis, PKA first directly activates CREB, which binds the cAMP response element (CRE), altering the transcription and therefore the synthesis of the protein. In general, this mechanism takes more time (hours to days). Phosphorylation mechanism The Serine/Threonine residue of the substrate peptide is orientated in such a way that the hydroxyl group faces the gamma phosphate group of the bound ATP molecule. Both the substrate, ATP, and two Mg2+ ions form intensive contacts with the catalytic subunit of PKA. In the active conformation, the C helix packs against the N-terminal lobe and the Aspartate residue of the conserved DFG motif chelates the Mg2+ ions, assisting in positioning the ATP substrate. The triphosphate group of ATP points out of the adenosine pocket for the transfer of gamma-phosphate to the Serine/Threonine of the peptide substrate. 
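As a purely illustrative toy model (not taken from the article), a Hill-type dose–response curve is one simple way to sketch how the released, active fraction of PKA could depend on cAMP concentration, with cooperativity motivated by the four cAMP binding events described above. The constants k_half and n_hill below are hypothetical placeholders, not measured values.

```python
def pka_active_fraction(camp_um, k_half=0.3, n_hill=2.0):
    """Toy Hill-equation model of the fraction of catalytic subunits released
    as a function of cAMP concentration (in micromolar).

    k_half and n_hill are illustrative placeholders, not measured constants;
    n_hill > 1 stands in for the cooperative binding of the four cAMP molecules.
    """
    return camp_um**n_hill / (k_half**n_hill + camp_um**n_hill)

for c in (0.03, 0.1, 0.3, 1.0, 3.0):
    print(f"[cAMP] = {c:5.2f} uM -> active fraction ~ {pka_active_fraction(c):.2f}")
```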
There are several conserved residues, include Glutamate (E) 91 and Lysine (K) 72, that mediate the positioning of alpha- and beta-phosphate groups. The hydroxyl group of the peptide substrate's Serine/Threonine attacks the gamma phosphate group at the phosphorus via an SN2 nucleophilic reaction, which results in the transfer of the terminal phosphate to the peptide substrate and cleavage of the phosphodiester bond between the beta-phosphate and the gamma-phosphate groups. PKA acts as a model for understanding protein kinase biology, with the position of the conserved residues helping to distinguish the active protein kinase and inactive pseudokinase members of the human kinome. Inactivation Downregulation of protein kinase A occurs by a feedback mechanism and uses a number of cAMP hydrolyzing phosphodiesterase (PDE) enzymes, which belong to the substrates activated by PKA. Phosphodiesterase quickly converts cAMP to AMP, thus reducing the amount of cAMP that can activate protein kinase A. PKA is also regulated by a complex series of phosphorylation events, which can include modification by autophosphorylation and phosphorylation by regulatory kinases, such as PDK1. Thus, PKA is controlled, in part, by the levels of cAMP. Also, the catalytic subunit itself can be down-regulated by phosphorylation. Anchorage The regulatory subunit dimer of PKA is important for localizing the kinase inside the cell. The dimerization and docking (D/D) domain of the dimer binds to the A-kinase binding (AKB) domain of A-kinase anchor protein (AKAP). The AKAPs localize PKA to various locations (e.g., plasma membrane, mitochondria, etc.) within the cell. AKAPs bind many other signaling proteins, creating a very efficient signaling hub at a certain location within the cell. For example, an AKAP located near the nucleus of a heart muscle cell would bind both PKA and phosphodiesterase (hydrolyzes cAMP), which allows the cell to limit the productivity of PKA, since the catalytic subunit is activated once cAMP binds to the regulatory subunits. Function PKA phosphorylates proteins that have the motif Arginine-Arginine-X-Serine exposed, in turn (de)activating the proteins. Many possible substrates of PKA exist; a list of such substrates is available and maintained by the NIH. As protein expression varies from cell type to cell type, the proteins that are available for phosphorylation will depend upon the cell in which PKA is present. Thus, the effects of PKA activation vary with cell type: Overview table In adipocytes and hepatocytes Epinephrine and glucagon affect the activity of protein kinase A by changing the levels of cAMP in a cell via the G-protein mechanism, using adenylate cyclase. Protein kinase A acts to phosphorylate many enzymes important in metabolism. For example, protein kinase A phosphorylates acetyl-CoA carboxylase and pyruvate dehydrogenase. Such covalent modification has an inhibitory effect on these enzymes, thus inhibiting lipogenesis and promoting net gluconeogenesis. Insulin, on the other hand, decreases the level of phosphorylation of these enzymes, which instead promotes lipogenesis. Recall that gluconeogenesis does not occur in myocytes. In nucleus accumbens neurons PKA helps transfer/translate the dopamine signal into cells in the nucleus accumbens, which mediates reward, motivation, and task salience. The vast majority of reward perception involves neuronal activation in the nucleus accumbens, some examples of which include sex, recreational drugs, and food. 
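The Arg-Arg-X-Ser/Thr consensus mentioned above lends itself to a simple sequence scan. The following Python sketch (illustrative only; the demo peptide is invented) finds candidate PKA sites with a regular expression and reports the position of the putative phospho-acceptor residue.

```python
import re

# Minimal PKA consensus described in the text: Arg-Arg-X-Ser/Thr (R-R-x-[ST]).
PKA_SITE = re.compile(r"(?=(RR.[ST]))")   # lookahead so overlapping sites are also found

def find_pka_sites(sequence):
    """Return (position, motif) pairs for candidate PKA phosphorylation sites.
    Positions are 1-based indices of the phospho-acceptor Ser/Thr."""
    hits = []
    for m in PKA_SITE.finditer(sequence.upper()):
        motif = m.group(1)
        hits.append((m.start() + 4, motif))   # acceptor is the 4th residue of the motif
    return hits

# Hypothetical peptide, for illustration only.
demo = "MKRRASVDELNRRGTQPLLRRQSF"
print(find_pka_sites(demo))   # -> [(6, 'RRAS'), (15, 'RRGT'), (23, 'RRQS')]
```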
The protein kinase A signal transduction pathway helps modulate ethanol consumption and its sedative effects. A mouse study reports that mice with genetically reduced cAMP-PKA signalling consume less ethanol and are more sensitive to its sedative effects. In skeletal muscle PKA is directed to specific sub-cellular locations after tethering to AKAPs. The ryanodine receptor (RyR) co-localizes with the muscle AKAP, and RyR phosphorylation and efflux of Ca2+ are increased by the localization of PKA at RyR by AKAPs. In cardiac muscle In a cascade mediated by a GPCR known as the β1 adrenoceptor, activated by catecholamines (notably norepinephrine), PKA is activated and phosphorylates numerous targets, namely: L-type calcium channels, phospholamban, troponin I, myosin binding protein C, and potassium channels. This increases inotropy as well as lusitropy, increasing contraction force and enabling the muscle to relax faster. In memory formation PKA has long been considered important in memory formation. In the fruit fly, reduced expression of DCO (the gene encoding a PKA catalytic subunit) can cause severe impairment of learning, middle-term memory and short-term memory. Long-term memory depends on the CREB transcription factor, which is regulated by PKA. A study in Drosophila reported that an increase in PKA activity can affect short-term memory. However, a decrease in PKA activity of 24% inhibited learning ability, and a decrease of 16% affected both learning ability and memory retention. Formation of a normal memory is highly sensitive to PKA levels. See also Protein kinase Signal transduction G protein-coupled receptor Serine/threonine-specific protein kinase Myosin light-chain kinase cAMP-dependent pathway References External links Drosophila cAMP-dependent protein kinase 1 - The Interactive Fly cAMP-dependent protein kinase: PDB Molecule of the Month Notes Signal transduction Protein kinases EC 2.7.11
Protein kinase A
[ "Chemistry", "Biology" ]
2,722
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
326,762
https://en.wikipedia.org/wiki/Inline%20linking
Inline linking (also known as hotlinking, piggy-backing, direct linking, offsite image grabs, bandwidth theft, and leeching) is the use of a linked object, often an image, on one site by a web page belonging to a second site. One site is said to have an inline link to the other site where the object is located. Inline linking and HTTP The technology behind the World Wide Web, the Hypertext Transfer Protocol (HTTP), does not make any distinction of types of links—all links are functionally equal. Resources may be located on any server at any location. When a website is visited, the browser first downloads the textual content in the form of an HTML document. The downloaded HTML document may call for other HTML files, images, scripts and/or stylesheet files to be processed. These files may contain <img> tags which supply the URLs which allow images to display on the page. The HTML code generally does not specify a server, meaning that the web browser should use the same server as the parent code (<img src="picture.jpg" />). It also permits absolute URLs that refer to images hosted on other servers (<img src="http://www.example.com/picture.jpg" />). When a browser downloads an HTML page containing such an image, the browser will contact the remote server to request the image content. Common uses of linked content The ability to display content from one site within another is part of the original design of the Web's hypertext medium. Common uses include: It is copyright infringement to make copies of a work for which the person making copies has no license, but there is no infringement when the re-user provides a simple text link within an HTML document that points to the location of the original image or file (simply called a "link"). Web architects may deliberately segregate the images of a site on one server or a group of servers. Hosting images on separate servers allows the site to divide the bandwidth requirements between servers. As an example, the high-volume site Slashdot stores its "front page" at slashdot.org, individual stories on servers such as games.slashdot.org or it.slashdot.org, and serves images for each host from images.slashdot.org. An article on one site may choose to refer to copyrighted images or content on another site via inline linking, which may avoid rights and ownership issues that copying the original files could raise. However, this practice is generally discouraged due to resulting bandwidth loading of the source, and the source provider is often offended because the viewer is not seeing the whole original page, which provides the intended context of the image. Many web pages include banner ads. Banner ads are images hosted by a company that acts as middleman between the advertisers and the websites on which the ads appear. The <img> tag may specify a URL to a CGI script on the ad server, including a string uniquely identifying the site producing the traffic, and possibly other information about the person viewing the ad, previously collected and associated with a cookie. The CGI script determines which image to send in response to the request. Some websites hotlink from a faster server to increase client loading speed. Hit counters or Web counters show how many times a page has been loaded. Several companies provide hit counters that are maintained off site and displayed with an inline link. Controversial uses of inline linking The blurring of boundaries between sites can lead to other problems when the site violates users' expectations. 
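To make the relative-versus-absolute URL distinction concrete, the following Python sketch (an illustration, not part of the article) collects the src attribute of each <img> tag in a page and flags those whose host differs from the page's own host, i.e. images that would be inline linked from another server. The sample markup and host names are hypothetical.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse, urljoin

class ImgCollector(HTMLParser):
    """Collect the src attribute of every <img> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.srcs = []
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

def hotlinked_images(html, page_url):
    """Return image URLs whose host differs from the page's host,
    i.e. images that would be fetched from a third-party server."""
    page_host = urlparse(page_url).netloc
    parser = ImgCollector()
    parser.feed(html)
    external = []
    for src in parser.srcs:
        absolute = urljoin(page_url, src)        # relative srcs resolve to the page's own host
        if urlparse(absolute).netloc != page_host:
            external.append(absolute)
    return external

# Hypothetical page used purely for illustration.
sample = '<img src="picture.jpg" /><img src="http://www.example.com/picture.jpg" />'
print(hotlinked_images(sample, "http://mysite.test/page.html"))
# -> ['http://www.example.com/picture.jpg']
```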
Other times, inline linking can be done for malicious purposes. Content sites where the object is stored and from which it is retrieved may not like the new placement. Inline linking to an image stored on another site increases the bandwidth use of that site even though the site is not being viewed as intended. The complaint may be the loss of ad revenue or changing the perceived meaning through an unapproved context. Cross-site scripting and phishing attacks may include inline links to a legitimate site to gain the confidence of a victim. Pay-per-content services may attempt to restrict access to their content through complex scripting and inline linking techniques. Inline objects can be used to perform drive-by attacks on the client, exploiting faults in the code that interprets the objects. When an object is stored on an external server, the referring site has no control over if and when an originally beneficial object's content is replaced by malicious content. The requests for inline objects usually contain the referrer information. This leaks information about the browsed pages to the servers hosting the objects (see web visitor tracking). Prevention Client side Most web browsers will blindly follow the URL for inline links, even though it is a frequent security complaint. Embedded images may be used as a web bug to track users or to relay information to a third party. Many ad filtering browser tools will restrict this behavior to varying degrees. Server side Some servers are programmed to use the HTTP referer header to detect hotlinking and return a condemnatory message, commonly in the same format, in place of the expected image or media clip. Most servers can be configured to partially protect hosted media from inline linking, usually by not serving the media or by serving a different file. URL rewriting is often used (e.g., mod_rewrite with Apache HTTP Server) to reject or redirect attempted hotlinks to images and media to an alternative resource. Most types of electronic media can be redirected this way, including video files, music files, and animations (such as Flash). Other solutions usually combine URL rewriting with some custom complex server side scripting to allow hotlinking for a short time, or in more complex setups, to allow the hotlinking but return an alternative image with reduced quality and size and thus reduce the bandwidth load when requested from a remote server. All hotlink prevention measures risk deteriorating the user experience on the third-party website. Copyright issues raised by inline linking The most significant legal fact about inline linking, relative to copyright law considerations, is that the inline linker does not place a copy of the image file on its own Internet server. Rather, the inline linker places a pointer on its Internet server that points to the server on which the proprietor of the image has placed the image file. This pointer causes a user's browser to jump to the proprietor's server and fetch the image file to the user's computer. US courts have considered this a decisive fact in copyright analysis. Thus, in Perfect 10, Inc. v. Amazon.com, Inc., the United States Court of Appeals for the Ninth Circuit explained why inline linking did not violate US copyright law: Google does not...display a copy of full-size infringing photographic images for purposes of the Copyright Act when Google frames in-line linked images that appear on a user's computer screen. 
Because Google's computers do not store the photographic images, Google does not have a copy of the images for purposes of the Copyright Act. In other words, Google does not have any "material objects...in which a work is fixed...and from which the work can be perceived, reproduced, or otherwise communicated" and thus cannot communicate a copy. Instead of communicating a copy of the image, Google provides HTML instructions that direct a user's browser to a website publisher's computer that stores the full-size photographic image. Providing these HTML instructions is not equivalent to showing a copy. First, the HTML instructions are lines of text, not a photographic image. Second, HTML instructions do not themselves cause infringing images to appear on the user's computer screen. The HTML merely gives the address of the image to the user's browser. The browser then interacts with the computer that stores the infringing image. It is this interaction that causes an infringing image to appear on the user's computer screen. Google may facilitate the user's access to infringing images. However, such assistance raised only contributory liability issues and does not constitute direct infringement of the copyright owner's display rights. ...While in-line linking and framing may cause some computer users to believe they are viewing a single Google webpage, the Copyright Act...does not protect a copyright holder against [such] acts.... See also Copyright aspects of hyperlinking and framing Deep linking References Internet terminology File sharing Hypertext Internet ethics
Inline linking
[ "Technology" ]
1,751
[ "Computing terminology", "Internet ethics", "Internet terminology", "Ethics of science and technology" ]
326,776
https://en.wikipedia.org/wiki/Online%20service%20provider
An online service provider (OSP) can, for example, be an Internet service provider, an email provider, a news provider (press), an entertainment provider (music, movies), a search engine, an e-commerce site, an online banking site, a health site, an official government site, social media, a wiki, or a Usenet newsgroup. In its original more limited definition, it referred only to a commercial computer communication service in which paid members could dial via a computer modem the service's private computer network and access various services and information resources such as bulletin board systems, downloadable files and programs, news articles, chat rooms, and electronic mail services. The term "online service" was also used in references to these dial-up services. The traditional dial-up online service differed from the modern Internet service provider in that they provided a large degree of content that was only accessible by those who subscribed to the online service, while ISP mostly serves to provide access to the Internet and generally provides little if any exclusive content of its own. In the U.S., the Online Copyright Infringement Liability Limitation Act (OCILLA) portion of the U.S. Digital Millennium Copyright Act has expanded the legal definition of online service in two different ways for different portions of the law. It states in section 512(k)(1): (A) As used in subsection (a), the term "service provider" means an entity offering the transmission, routing, or providing of connections for digital online communications, between or among points specified by a user, of material of the user's choosing, without modification to the content of the material as sent or received. (B) As used in this section, other than subsection (a), the term "service provider" means a provider of online services or network access, or the operator of facilities therefore, and includes an entity described in subparagraph (A). These broad definitions make it possible for numerous web businesses to benefit from the OCILLA. History The first commercial online services went live in 1979. CompuServe (owned in the 1980s and 1990s by H&R Block) and The Source (for a time owned by The Reader's Digest) are considered the first major online services created to serve the market of personal computer users. Utilizing text-based interfaces and menus, these services allowed anyone with a modem and communications software to use email, chat, news, financial and stock information, bulletin boards, special interest groups (SIGs), forums and general information. Subscribers could exchange email only with other subscribers of the same service. (For a time a service called DASnet carried mail among several online services, and CompuServe, MCI Mail, and other services experimented with X.400 protocols to exchange email until the Internet rendered these outmoded.) Other text-based online services followed such as Delphi, GEnie and MCI Mail. The 1980s also saw the rise of independent Computer Bulletin Boards, or BBSes. (Online services are not BBSes. An online service may contain an electronic bulletin board, but the term "BBS" is reserved for independent dialup, microcomputer-based services that are usually single-user systems.) The commercial services used pre-existing packet-switched (X.25) data communications networks, or the services' own networks (as with CompuServe). In either case, users dialed into local access points and were connected to remote computer centers where information and services were located. 
As with telephone service, subscribers paid by the minute, with separate day-time and evening/weekend rates. As the use of computers that supported color and graphics, such the Atari 8-bit computers, Commodore 64, TI-99/4A, Apple II, and early IBM PC compatibles, increased, online services gradually developed framed or partially graphical information displays. Early services such as CompuServe added increasingly sophisticated graphics-based front end software to present their information, though they continued to offer text-based access for those who needed or preferred it. In 1985 Viewtron, which began as a Videotex service requiring a dedicated terminal, introduced software allowing home computer owners access. Beginning in the mid-1980s graphics based online services such as PlayNET, Prodigy, and Quantum Link (aka Q-Link) were developed. Quantum Link, which was based on Commodore-only Playnet software, later developed AppleLink Personal Edition, PC-Link (based on Tandy's DeskMate), and Promenade (for IBM), all of which (including Q-Link) were later combined as America Online. These online services presaged the web browser that would change global online life 10 years later. Before Quantum Link, Apple computer had developed its own service, called AppleLink, which was mostly a support network targeted at Apple dealers and developers. Later, Apple offered the short-lived eWorld, targeted at Mac consumers and based on the Mac version of the America Online software. Beginning in 1992, the Internet, which had previously been limited to government, academic, and corporate research settings, was opened to commercial entities. The first online service to offer Internet access was DELPHI, which had developed TCP/IP access much earlier, in connection with an environmental group that rated Internet access. The explosion of popularity of the World Wide Web in 1994 accelerated the development of the Internet as an information and communication resource for consumers and businesses. The sudden availability of low- to no-cost email and appearance of free independent web sites broke the business model that had supported the rise of the early online service industry. CompuServe, BIX, AOL, DELPHI, and Prodigy gradually added access to Internet e-mail, Usenet newsgroups, ftp, and to web sites. At the same time, they moved from usage-based billing to monthly subscriptions. Similarly, companies that paid to have AOL host their information or early online stores began to develop their own web sites, putting further stress on the economics of the online industry. Only the largest services like AOL (which later acquired CompuServe, just as CompuServe acquired The Source) were able to make the transition to the Internet-centric world. A new class of online service provider arose to provide access to the Internet, the internet service provider or ISP. Internet-only service providers like UUNET, The Pipeline, Panix, Netcom, the World, EarthLink, and MindSpring provided no content of their own, concentrating their efforts on making it easy for nontechnical users to install the various software required to "get online" before consumer operating systems came internet-enabled out of the box. In contrast to the online services' multitiered per-minute or per-hour rates, many ISPs offered flat-fee, unlimited access plans. Independent companies sprang up to offer access and packages to compete with the big networks (eg, the-wire.com, 1994 in Toronto and bway.net 1995 in New York). 
These providers first offered access through telephone and modem, just as did the early online services providers. By the early 2000s, these independent ISPs had largely been supplanted by high speed and broadband access through cable and phone companies, as well as wireless access. The importance of the online services industry was vital in "paving the road" for the information superhighway. When Mosaic and Netscape were released in 1994, they had a ready audience of more than 10 million people who were able to download their first web browser through an online service. Though ISPs quickly began offering software packages with setup to their customers, this brief period gave many users their first online experience. Two online services in particular, Prodigy and AOL, are often confused with the Internet, or the origins of the Internet. Prodigy's Chief Technical Officer said in 1999: "Eleven years ago, the Internet was just an intangible dream that Prodigy brought to life. Now it is a force to be reckoned with." Despite that statement, neither service provided the back bone for the Internet, nor did either start the Internet. Online service interfaces The first online service used a simple text-based interface in which content was largely text only and users made choices via a command prompt. This allowed just about any computer with a modem and terminal communications program the ability to access these text-based online services. CompuServe would later offer, with the advent of the Apple Macintosh and Microsoft Windows-based PCs, a GUI interface program for their service. This provided a very rudimentary GUI interface. CompuServe continued to offer text-only access for those needing it. Online services like Prodigy and AOL developed their online service around a GUI and thus unlike CompuServe's early GUI-based software, these online services provided a more robust GUI interface. Early GUI-based online service interfaces offered little in the way of detailed graphics such as photographs or pictures. Largely they were limited to simple icons and buttons and text. As modem speed increased it became more feasible to offer images and other more complicated graphics to users thus providing a nicer look to their services Common resources provided by online services Some of the resources and services online services have provided access to include message boards, chat services, electronic mail, file archives, current news and weather, online encyclopedias, airline reservations, and online games. Major online service providers like Compuserve also served as a way for software and hardware manufacturers to provide online support for their products via forums and file download areas within the online service provider's network. Prior to the advent of the web, such support had to be done either via an online service or a private bulletin board system run by the company and accessed over a direct phone line. Responsibility Depending on the jurisdiction there may be rules exempting an OSP from responsibility for content provided by users, but with a ' notice and take down (NTD) obligation to remove unacceptable content as soon as it is noticed. 
See also Videotex Online service provider law Terminal emulator :Category:Pre-World Wide Web online services Service provider NSFNet Shell account Connect Business Information Network References External links Online services history Computer-mediated communication Network access Providers
Online service provider
[ "Technology", "Engineering" ]
2,113
[ "Network access", "Information systems", "Electronic engineering", "Computing and society", "Computer-mediated communication" ]
326,789
https://en.wikipedia.org/wiki/Chondrostei
Chondrostei is a group of non-neopterygian ray-finned fish. While the term originally referred to the paraphyletic grouping of all non-neopterygian ray-finned fish, it was redefined by Patterson in 1982 to be a clade comprising the Acipenseriformes (which includes sturgeon and paddlefish) and their extinct relatives. Taxa commonly suggested to represent relatives of the Acipenseriformes include the Triassic marine fish Birgeria and the Saurichthyiformes, but their relationship with the Acipenseriformes has been strongly challenged on cladistic grounds. Coccolepididae, a group of small weakly ossified Jurassic and Cretaceous fish found in both marine and freshwater environments, have also been suggested to be close relatives of the Acipenseriformes. However, this has never been subject to cladistic analysis. Near & Thacker (2024) also recovered the ptycholepiform Boreosomus as a stem-acipenseriform. The following taxa are known: Subclass Chondrostei Genus †Eochondrosteus Order †Chondrosteiformes Family †Chondrosteidae Order Acipenseriformes Family †Peipiaosteidae Suborder Acipenseroidei Family Acipenseridae Family Polyodontidae Disputed taxa include: Genus †Birgeria Family †Coccolepididae Order †Saurichthyiformes Order †Ptycholepiformes References Extant Silurian first appearances Paraphyletic groups
Chondrostei
[ "Biology" ]
332
[ "Phylogenetics", "Paraphyletic groups" ]
326,821
https://en.wikipedia.org/wiki/Appliance%20plug
An appliance plug is a three-conductor power connector originally developed for kettles, toasters and similar small appliances. It was common in the United Kingdom, New Zealand, Australia, Germany, the Netherlands and Sweden. It has largely been made obsolete and replaced by IEC 60320 C15 and C16 connectors, or proprietary connectors to base plates for cordless kettles. It still occurs on some traditional ceramic electric jugs. It is also used for some laboratory water stills. On some models of the classical ceramic electric jug, the appliance plug prevents the lid from being raised while the connector is inserted. This is important because during operation of these jugs the water they contain is connected to the electric mains and poses an electric shock risk. Appliance plugs were also used to supply power to electric toasters, electric coffee percolators, electric frypans, and many other appliances. An appliance plug is to some degree heat resistant, but the maximum working temperature varied from manufacturer to manufacturer and even from batch to batch. The mains connectors of the appliance plug are two rounded sockets that accept two rounded pins from the appliance. They are unpolarised. The third connection, earth, is a large metal contact on each side of the plug body which makes contact with the sides of the plug receptacle, grounding the appliance body. Some appliances using these connectors incorporate a spring and plunger mechanism with a temperature-sensitive release system; if the temperature rises significantly above a preset limit - for example, if a kettle boils dry - the spring is released and (if all goes well) the plunger pushes the plug and socket apart. It must then be allowed to cool and then be reset manually by forcing the connector back into the appliance. A plug of the same design, but probably of different dimensions, was in use in the former USSR for powering electric kettles and electric samovars. References Mains power connectors Home appliances
Appliance plug
[ "Physics", "Technology" ]
411
[ "Physical systems", "Machines", "Home appliances" ]
326,971
https://en.wikipedia.org/wiki/Fluid%20ounce
A fluid ounce (abbreviated fl oz, fl. oz. or oz. fl., old forms ℥, fl ℥, f℥, ƒ ℥) is a unit of volume (also called capacity) typically used for measuring liquids. The British Imperial, the United States customary, and the United States food labeling fluid ounce are the three that are still in common use, although various definitions have been used throughout history. An imperial fluid ounce is of an imperial pint, of an imperial gallon or exactly 28.4130625 mL. A US customary fluid ounce is of a US liquid pint and of a US liquid gallon or exactly 29.5735295625 mL, making it about 4.08% larger than the imperial fluid ounce. A US food labeling fluid ounce is exactly 30 mL. Comparison to the ounce The fluid ounce is distinct from the (international avoirdupois) ounce as a unit of weight or mass, although it is sometimes referred to simply as an "ounce" where context makes the meaning clear (e.g., "ounces in a bottle"). A volume of pure water measuring one imperial fluid ounce has a mass of almost exactly one ounce. Definitions and equivalences Imperial fluid ounce {| |- |height=120%|1 imperial fluid ounce  |=  |align=right|||imperial gallon |- |||=  |align=right|||imperial quart |- |||=  |align=right|||imperial pint |- |||=  |align=right|||imperial cup |- |||=  |align=right|||imperial gill |- |||=  |align=right|8||imperial fluid drams |- |||=  |align=right|||millilitres |- |||≈  |align=right|||cubic inches |- |||≈  |align=right|||US fluid ounces |- |||≈  |align=right colspan=2|the volume of 1 avoirdupois ounce of water |} US customary fluid ounce {| |- |1 US fluid ounce ||=  |align=right|||US gallon |- |||=  |align=right|||US quart |- |||=  |align=right|||US pint |- |||=  |align=right|||US cup |- |||=  |align=right|||US gill |- |||=  |align=right|2||US tablespoons |- |||=  |align=right|6||US teaspoons |- |||=  |align=right|8||US fluid drams |- |||=  |align=right|||cubic inches |- |||≈  |align≈right|||millilitres |- |||≈  |align≈right|||imperial fluid ounces |} US food labeling fluid ounce For serving sizes on nutrition labels in the US, regulation 21 CFR §101.9(b) requires the use of "common household measures", and 21 CFR §101.9(b)(5)(viii) defines a "common household" fluid ounce as exactly 30 milliliters. This applies to the serving size but not the package size; package sizes use the US customary fluid ounce. {| |- |30 millilitres ||≈  |align=right|||imperial fluid ounces |- |||≈  |align=right|||US customary fluid ounces |- |||≈  |align=right|||cubic inches |} History The fluid ounce was originally the volume occupied by one ounce of some substance, for example wine (in England) or water (in Scotland). The ounce in question also varied depending on the system of fluid measure, such as that used for wine versus ale. Various ounces were used over the centuries, including the Tower ounce, troy ounce, avoirdupois ounce, and ounces used in international trade, such as Paris troy, a situation further complicated by the medieval practice of "allowances", whereby a unit of measure was not necessarily equal to the sum of its parts. For example, the had a for the weight of the sack and other packaging materials. In 1824, the British Parliament defined the imperial gallon as the volume of ten pounds of water at standard temperature. The gallon was divided into four quarts, the quart into two pints, the pint into four gills, and the gill into five ounces; thus, there were 160 imperial fluid ounces to the gallon. 
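A small Python sketch (illustrative, not part of the article) makes the three definitions and their ratios concrete; the 1/160-gallon and 231-cubic-inch figures echo the definitions above and the History section below, and the helper name convert is an arbitrary choice.

```python
# Exact millilitre definitions quoted above; variable names are illustrative.
IMPERIAL_FL_OZ_ML = 28.4130625        # 1/160 of the 4.54609 L imperial gallon
US_FL_OZ_ML       = 29.5735295625     # 1/128 US gallon = 231 in^3 / 128, with 1 in = 2.54 cm
US_LABEL_FL_OZ_ML = 30.0              # US food-labelling fluid ounce

def convert(value, from_ml, to_ml):
    """Convert a volume between two fluid-ounce definitions via millilitres."""
    return value * from_ml / to_ml

print(US_FL_OZ_ML / IMPERIAL_FL_OZ_ML)              # ~1.0408, i.e. about 4.08 % larger
print(convert(12, IMPERIAL_FL_OZ_ML, US_FL_OZ_ML))  # 12 imperial fl oz ~ 11.53 US fl oz
print(231 * 2.54**3 / 128)                          # 29.5735295625 -- reproduces the US value
```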
This made the mass of a fluid ounce of water one avoirdupois ounce (28.35 g), a relationship which remains approximately valid today despite the imperial gallon's definition being slightly revised to be 4.54609 litres (thus making the imperial fluid ounce exactly 28.4130625 mL). The US fluid ounce is based on the US gallon, which in turn is based on the wine gallon of 231 cubic inches that was used in the United Kingdom prior to 1824. With the adoption of the international inch, the US fluid ounce became gal × 231 in/gal × (2.54 cm/in) = 29.5735295625 mL exactly, or about 4% larger than the imperial unit. In the U.K., the use of the fluid ounce as a measurement in trade, public health, and public administration was circumscribed to a few specific uses (the labelling of beer, cider, water, lemonade and fruit juice in returnable containers) in 1995, and abolished entirely in 2000, by The Units of Measurement Regulations 1994. References and notes Ounce Customary units of measurement in the United States Imperial units Alcohol measurement Cooking weights and measures
Fluid ounce
[ "Mathematics" ]
1,221
[ "Units of volume", "Quantity", "Units of measurement" ]
327,003
https://en.wikipedia.org/wiki/Chebyshev%20filter
Chebyshev filters are analog or digital filters that have a steeper roll-off than Butterworth filters, and have either passband ripple (type I) or stopband ripple (type II). Chebyshev filters have the property that they minimize the error between the idealized and the actual filter characteristic over the operating frequency range of the filter, but they achieve this with ripples in the passband. This type of filter is named after Pafnuty Chebyshev because its mathematical characteristics are derived from Chebyshev polynomials. Type I Chebyshev filters are usually referred to as "Chebyshev filters", while type II filters are usually called "inverse Chebyshev filters". Because of the passband ripple inherent in Chebyshev filters, filters with a smoother response in the passband but a more irregular response in the stopband are preferred for certain applications. Type I Chebyshev filters (Chebyshev filters) Type I Chebyshev filters are the most common types of Chebyshev filters. The gain (or amplitude) response, , as a function of angular frequency of the th-order low-pass filter is equal to the absolute value of the transfer function evaluated at : where is the ripple factor, is the cutoff frequency and is a Chebyshev polynomial of the th order. The passband exhibits equiripple behavior, with the ripple determined by the ripple factor . In the passband, the Chebyshev polynomial alternates between -1 and 1 so the filter gain alternate between maxima at and minima at . The ripple factor ε is thus related to the passband ripple δ in decibels by: At the cutoff frequency the gain again has the value but continues to drop into the stopband as the frequency increases. This behavior is shown in the diagram on the right. The common practice of defining the cutoff frequency at −3 dB is usually not applied to Chebyshev filters; instead the cutoff is taken as the point at which the gain falls to the value of the ripple for the final time. The 3 dB frequency is related to by: The order of a Chebyshev filter is equal to the number of reactive components (for example, inductors) needed to realize the filter using analog electronics. An even steeper roll-off can be obtained if ripple is allowed in the stopband, by allowing zeros on the -axis in the complex plane. While this produces near-infinite suppression at and near these zeros (limited by the quality factor of the components, parasitics, and related factors), overall suppression in the stopband is reduced. The result is called an elliptic filter, also known as a Cauer filter. Poles and zeroes For simplicity, it is assumed that the cutoff frequency is equal to unity. The poles of the gain function of the Chebyshev filter are the zeroes of the denominator of the gain function. Using the complex frequency , these occur when: Defining and using the trigonometric definition of the Chebyshev polynomials yields: Solving for where the multiple values of the arc cosine function are made explicit using the integer index . The poles of the Chebyshev gain function are then: Using the properties of the trigonometric and hyperbolic functions, this may be written in explicitly complex form: where  and This may be viewed as an equation parametric in and it demonstrates that the poles lie on an ellipse in -space centered at with a real semi-axis of length and an imaginary semi-axis of length of The transfer function The above expression yields the poles of the gain . 
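The closed-form expressions above are straightforward to check numerically. The following Python sketch (an illustration assuming NumPy and SciPy are available, not part of the article) computes the ripple factor from the passband ripple, builds the poles on the ellipse, cross-checks them against SciPy's cheb1ap analog prototype, and confirms that the gain is ripple-deep at the cutoff and about 3 dB down at the frequency given by the cosh expression.

```python
import numpy as np
from scipy.signal import cheb1ap, freqs, zpk2tf

n, ripple_db = 5, 1.0                        # order and passband ripple (dB), chosen for illustration
eps = np.sqrt(10**(ripple_db / 10) - 1)      # ripple factor epsilon

# Pole locations on the ellipse (ripple cutoff normalized to 1 rad/s).
mu = np.arcsinh(1 / eps) / n
m = np.arange(1, n + 1)
theta = np.pi * (2 * m - 1) / (2 * n)
poles = -np.sinh(mu) * np.sin(theta) + 1j * np.cosh(mu) * np.cos(theta)

# Cross-check against SciPy's analog Chebyshev type I prototype.
z, p, k = cheb1ap(n, ripple_db)
print(np.allclose(sorted(poles, key=np.imag), sorted(p, key=np.imag)))   # True

# Gain at the cutoff equals the ripple depth; the -3 dB point lies slightly above it.
b, a = zpk2tf(z, p, k)
w, h = freqs(b, a, worN=[1.0, np.cosh(np.arccosh(1 / eps) / n)])
print(20 * np.log10(np.abs(h)))              # approximately [-1.0, -3.01] dB
```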
For each complex pole, there is another which is the complex conjugate, and for each conjugate pair there are two more that are the negatives of the pair. The transfer function must be stable, so that its poles are those of the gain that have negative real parts and therefore lie in the left half plane of complex frequency space. The transfer function is then given by where are only those poles of the gain with a negative sign in front of the real term, obtained from the above equation. The group delay The group delay is defined as the derivative of the phase with respect to angular frequency: The gain and the group delay for a 5th-order type I Chebyshev filter with ε=0.5 are plotted in the graph on the left. Its stop band has no ripples. But the ripples of group delay in its passband indicate that different frequency components have different delay, which along with the ripples of gain in its passband results in distortion of the waveform's shape. Even order modifications Even order Chebyshev filters implemented with passive elements, typically inductors, capacitors, and transmission lines, with terminations of equal value on each side cannot be implemented with the traditional Chebyshev transfer function without the use of coupled coils, which may not be desirable or feasible, particularly at the higher frequencies. This is due to the physical inability to accommodate the even order Chebyshev reflection zeros that result in a scattering matrix S12 values that exceed the S12 value at . If it is not feasible to design the filter with one of the terminations increased or decreased to accommodate the pass band S12, then the Chebyshev transfer function must be modified so as to move the lowest even order reflection zero to while maintaining the equi-ripple response of the pass band. The needed modification involves mapping each pole of the Chebyshev transfer function in a manner that maps the lowest frequency reflection zero to zero and the remaining poles as needed to maintain the equi-ripple pass band. The lowest frequency reflection zero may be found from the Chebyshev Nodes, . The complete Chebyshev pole mapping function is shown below. Where: n is the order of the filter (must be even) P is a traditional Chebyshev transfer function pole P' is the mapped pole for the modified even order transfer function. "Left Half Plane" indicates to use the square root containing a negative real value. When complete, a replacement equi-ripple transfer function is created with reflection zero scattering matrix values for S12 of one and S11 of zero when implemented with equally terminated passive networks. The illustration below shows an 8th order Chebyshev filter modified to support even order equally terminated passive networks by relocating the lowest frequency reflection zero from a finite frequency to 0 while maintaining an equi-ripple pass band frequency response. The LC element value formulas in the Cauer topology are not applicable to the even order modified Chebyshev transfer function, and cannot be used. It is therefore necessary to calculate the LC values from traditional continued fractions of the impedance function, which may be derived from the reflection coefficient, which in turn may be derived from the transfer function. Minimum order To design a Chebyshev filter using the minimum required number of elements, the minimum order of the Chebyshev filter may be calculated as follows. The equations account for standard low pass Chebyshev filters, only. 
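The minimum-order equation referred to here is the standard all-pole expression; the sketch below (illustrative, assuming SciPy is available) evaluates it and cross-checks the result against SciPy's cheb1ord for an example specification chosen purely for illustration.

```python
import numpy as np
from scipy.signal import cheb1ord

def chebyshev_min_order(w_pass, ripple_db, w_stop, atten_db):
    """Smallest order whose ripple_db passband (edge w_pass) still gives at
    least atten_db attenuation at w_stop -- the standard all-pole formula."""
    num = np.arccosh(np.sqrt((10**(atten_db / 10) - 1) /
                             (10**(ripple_db / 10) - 1)))
    return int(np.ceil(num / np.arccosh(w_stop / w_pass)))

# 1 dB ripple up to 1 rad/s, at least 40 dB attenuation beyond 2 rad/s:
print(chebyshev_min_order(1.0, 1.0, 2.0, 40.0))          # 5
print(cheb1ord(1.0, 2.0, 1.0, 40.0, analog=True)[0])     # SciPy agrees: 5
```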
Even order modifications and finite stop band transmission zeros will introduce error that the equations do not account for. where: and are the pass band ripple frequency and maximum ripple attenuation in dB and are the stop band frequency and attenuation at that frequency in dB is the minimum number of poles, the order of the filter. ceil[] is a round up to next integer function. Setting the cutoff attenuation Pass band cutoff attenuation for Chebyshev filters is usually the same as the pass band ripple attenuation, set by the computation above. However, many applications such as diplexers and triplexers, require a cutoff attenuation of -3.0103 dB in order to obtain the needed reflections. Other specialized applications may require other specific values for cutoff attenuation for various reasons. It is therefore useful to have a means available of setting the Chebyshev pass band cutoff attenuation independently of the pass band ripple attenuation, such as -1 dB, -10 dB, etc. The cutoff attenuation may be set by frequency scaling the poles of the transfer function. The scaling factor may be determined by direct algebraic manipulation of the defining Chebyshev filter function, , including and . The general definition of the Chebyshev function, is required, which may be derived from the Chebyshev Polynomials equations, and the inverse Chebyshev function, . To keep the numbers real for values of , complex hyperbolic identities may be used to rewrite the equations as, and . Using simple algebra on the above equations and references, the expression to scale each Chebyshev poles is: Where: is the relocated pole positioned to set the desired cutoff attenuation. is a ripple cutoff pole that lies on the oval. is the passband attenuation ripple in dB (.05 dB, 1 dB, etc.)). is the desired passband attenuation at the cutoff frequency in dB (1 dB, 3 dB, 10 dB, etc.) is the number of poles (the order of the filter). A quick sanity check on the above equation using passband ripple attenuation for the passband cutoff attenuation reveals that the pole adjustment will be 1.0 for this case, which is what is expected. Even order modified cutoff attenuation adjustment For Chebyshev filters being designed with modified for even order pass band ripple for passive equally terminated filters, the attenuation frequency computation needs to include the even order adjustment by performing the even order adjustment operation on the computed attenuation frequency. This makes the even order adjustment arithmetic slightly simpler, since frequency can be treated as a real variable, in this case . Where: is the relocated pole positioned to set the desired cutoff attenuation. is a ripple cutoff pole that has been modified for even order pass bands. is the passband attenuation ripple in dB (.05 dB, 1 dB, etc.)). is the desired passband attenuation at the cutoff frequency in dB (1 dB, 3 dB, 10 dB, etc.) is the number of poles (the order of the filter). is the smallest even order Chebyshev Node Type II Chebyshev filters (inverse Chebyshev filters) Also known as inverse Chebyshev filters, the Type II Chebyshev filter type is less common because it does not roll off as fast as Type I, and requires more components. It has no ripple in the passband, but does have equiripple in the stopband. The gain is: In the stopband, the Chebyshev polynomial oscillates between -1 and 1 so that the gain will oscillate between zero and and the smallest frequency at which this maximum is attained is the cutoff frequency . 
The parameter ε is thus related to the stopband attenuation γ in decibels by: For a stopband attenuation of 5 dB, ε = 0.6801; for an attenuation of 10 dB, ε = 0.3333. The frequency f0 = ω0/2π is the cutoff frequency. The 3 dB frequency fH is related to f0 by: Poles and zeroes Assuming that the cutoff frequency is equal to unity, the poles of the gain of the Chebyshev filter are the zeroes of the denominator of the gain: The poles of gain of the type II Chebyshev filter are the inverse of the poles of the type I filter: where . The zeroes of the type II Chebyshev filter are the zeroes of the numerator of the gain: The zeroes of the type II Chebyshev filter are therefore the inverse of the zeroes of the Chebyshev polynomial. for . The transfer function The transfer function is given by the poles in the left half plane of the gain function, and has the same zeroes but these zeroes are single rather than double zeroes. The group delay The gain and the group delay for a fifth-order type II Chebyshev filter with ε=0.1 are plotted in the graph on the left. It can be seen that there are ripples in the gain in the stopband but not in the pass band. Even order modifications Just like Chebyshev filter even order filters, the standard Chebyshev II even order filter cannot be implemented with equally terminated passive elements without the use of coupled coils, which may not be desirable or feasible. In the Chebyshev Ii case, this is due to finite attenuation of S12 in the stop band. However, even order Chebyshev II filters may be modified by translating the highest frequency finite transmission zero to infinity, while maintaining the equi-ripple functions of the Chebyshev II stop band. To do this translation, an even order modified Chebyshev function is used in place of the standard Chebyshev function to define the Chebyshev II poles needed to create the even order modified Chebyshev II transfer function. Zeros are created using the roots of the even order modified Chebyshev polynomial, which are the even order modified Chebyshev nodes. The illustration below shows an 8th order Inverse Chebyshev filter modified to support even order equally terminated passive networks by relocating the highest frequency transmission zero from a finite frequency to while maintaining an equi-ripple stop band frequency response. Minimum order To design an Inverse Chebyshev filter using the minimum required number of elements, the minimum order of the Inverse Chebyshev filter may be calculated as follows. The equations account for standard low pass Inverse Chebyshev filters, only. Even order modifications will introduce error that the equations do not account for. The equations is identical to that used for Chebyshev filter minimum order, with a slightly different variable definitions. where: and are the pass band frequency and attenuation at that frequency in dB and are the stop band frequency and minimum stop band attenuation in dB is the minimum number of poles, the order of the filter. ceil[] is a round up to next integer function. Setting the cutoff attenuation The standard cutoff attenuation as described is the same at the pass band ripple attenuation. However, just as in Chebyshev filters, it is useful to set the cutoff attenuation to a desired value, and for the same reasons. 
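A short numerical check (illustrative, assuming NumPy and SciPy are available; not part of the article): the first loop reproduces the ε values quoted above for 5 dB and 10 dB of stop-band attenuation, and the rest designs an analog fifth-order inverse Chebyshev filter with SciPy's cheby2 and verifies that the stop-band gain never rises above the specified attenuation.

```python
import numpy as np
from scipy.signal import cheby2, freqs_zpk

# Ripple factor from the required stop-band attenuation (values quoted in the text).
for atten_db in (5.0, 10.0):
    eps = 1.0 / np.sqrt(10**(atten_db / 10) - 1)
    print(f"{atten_db:4.1f} dB stop band -> epsilon = {eps:.4f}")   # 0.6801, 0.3333

# Analog 5th-order inverse Chebyshev, 40 dB stop band starting at 1 rad/s.
z, p, k = cheby2(5, 40, 1.0, analog=True, output='zpk')
w = np.linspace(1.0, 10.0, 2000)
_, h = freqs_zpk(z, p, k, worN=w)
print(f"max stop-band gain: {20 * np.log10(np.abs(h)).max():.1f} dB")   # ~ -40.0 dB
```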
Setting the Chebyshev II cutoff attenuation is the same as for Chebyshev cutoff attenuation, except the arithmetic attenuation and ripple entries are inverted in the equation and the poles and zeros are multiplied by the result, as opposed to divided by in the Chebyshev case.. Even order modified cutoff attenuation adjustment The same even order adjustment to the poles and zeros that was used for the Chebyshev even order modified cutoff attenuation may also be used for the Chebyshev II case, except the poles are multiplied by the result. Implementation Cauer topology A passive LC Chebyshev low-pass filter may be realized using a Cauer topology. The inductor or capacitor values of an th-order Chebyshev prototype filter may be calculated from the following equations: G1, Gk are the capacitor or inductor element values. fH, the 3 dB frequency is calculated with: The coefficients A, γ, β, Ak, and Bk may be calculated from the following equations: where is the passband ripple in decibels. The number is rounded from the exact value . The calculated Gk values may then be converted into shunt capacitors and series inductors as shown on the right, or they may be converted into series capacitors and shunt inductors. For example, C1 shunt = G1, L2 series = G2, ... or L1 shunt = G1, C1 series = G2, ... Note that when G1 is a shunt capacitor or series inductor, G0 corresponds to the input resistance or conductance, respectively. The same relationship holds for Gn+1 and Gn. The resulting circuit is a normalized low-pass filter. Using frequency transformations and impedance scaling, the normalized low-pass filter may be transformed into high-pass, band-pass, and band-stop filters of any desired cutoff frequency or bandwidth. Digital As with most analog filters, the Chebyshev may be converted to a digital (discrete-time) recursive form via the bilinear transform. However, as digital filters have a finite bandwidth, the response shape of the transformed Chebyshev is warped. Alternatively, the Matched Z-transform method may be used, which does not warp the response. Comparison with other linear filters The following illustration shows the Chebyshev filters next to other common filter types obtained with the same number of coefficients (fifth order): Chebyshev filters are sharper than the Butterworth filter; they are not as sharp as the elliptic one, but they show fewer ripples over the bandwidth. Advanced Topics in Chebyshev Filters Chebyshev filter design flexibility may be augmented by more advanced design methods documented in this section. Transmission zeros may be inserted into the stop band to neutralize specific undesired frequencies or increase the cut-off attenuation, or may be inserted off-axis to obtain a more desirable group delay. Asymmetric Chebyshev band pass filters may be created that contain differing number of poles on each side of the pass band to meet frequency asymmetric design requirements more efficiently. The equi-ripple pass bands and that Chebyshev filters are known for may be restricted to a percentage of the pass band to meet design requirements more efficiently that only call for a portion of the pass band to be equi-ripple. Chebyshev transmission zeros Chebyshev filters may be designed with arbitrarily placed finite transmission zeros in the stop band while retaining an equi-ripple pass band. Stop band zeros along the axis are generally used to eliminate unwanted frequencies. 
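The element-value recursion referred to in the Cauer topology section above is the standard published one (the constant 17.37 is approximately 40·log10(e)). The following Python sketch (illustrative; it presumes this standard recursion rather than reproducing the article's own equations, which are not shown here) computes the prototype g-values and reproduces a well-known table entry for a third-order, 0.5 dB design.

```python
import numpy as np

def chebyshev_prototype_g(n, ripple_db):
    """Element values g1..gn (plus the load g_{n+1}) for a 1-ohm, 1-rad/s
    Chebyshev low-pass ladder prototype, using the standard published recursion."""
    beta = np.log(1 / np.tanh(ripple_db / 17.37))
    gamma = np.sinh(beta / (2 * n))
    a = np.array([np.sin((2 * k - 1) * np.pi / (2 * n)) for k in range(1, n + 1)])
    b = np.array([gamma**2 + np.sin(k * np.pi / n)**2 for k in range(1, n + 1)])
    g = [2 * a[0] / gamma]
    for k in range(1, n):
        g.append(4 * a[k - 1] * a[k] / (b[k - 1] * g[k - 1]))
    load = 1.0 if n % 2 else 1 / np.tanh(beta / 4)**2   # unequal load for standard even orders
    return g, load

g, load = chebyshev_prototype_g(3, 0.5)
print(np.round(g, 4), round(load, 4))   # [1.5963 1.0967 1.5963] 1.0
```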
Stop band zeros along the real axis or quadruplet stop band zeros in the complex plane may be used to modify the group delay to a more desirable shape. The transmission zeros design utilizes characteristic polynomials, K(S), to place the transmission and reflection zeros, which in turn are used to create the transfer function. The calculation of K(S) relies upon the following observed equality. for all , imaginary conjugate pairs, quadruplet conjugate pairs, or real opposing signed pairs. Given the magnitude is always one in the pass band () the rational and irrational terms must vary between 0 and 1. Therefore, if only the rational term is used to create the characteristic function, an equi-ripple response is expected in the pass band, and characteristic poles (transmission zeros) are expected at all . The design process for K(S) using the above expression is below. Use the positive solution for real and imaginary pairs. Use the positive real and conjugate imaginary solution for quadruplet complex pairs. should be normalized such that , if needed. The phrase "rational terms only" indicates to keep the rational part of the product and to discard the irrational part. The rational term may be obtained by manually performing the polynomial arithmetic, or with the shortcut below, which is a solution derived from polynomial arithmetic and uses binomial coefficients. The algorithm is extremely efficient if the binomial coefficients are implemented from a look-up table of pre-calculated values. When all M values are set to one, K(S) will be the standard Chebyshev equation, which is expected since all the transmission zeros are then at infinity. Even order finite transmission zero Chebyshev filters have the same limitation as the all-pole case in that they cannot be constructed using equally terminated passive networks. The same even order modification may be made to the even order characteristic polynomials, , to make equally terminated passive network implementations possible. However, the even order modification will also move the finite transmission zeros slightly. This movement may be significantly mitigated by pre-positioning the transmission zeros with the inverse of the even order modification using the lowest Chebyshev node, . Simple transmission zeros example Design a 3 pole Chebyshev filter with a 1 dB pass band, a transmission zero at 2 rad/sec, and a transmission zero at : To find the transfer function, do the following. To obtain from the left half plane, factor the numerator and denominator to obtain the roots. Discard all roots from the right half plane of the denominator, half the repeated roots in the numerator, and rebuild with the remaining roots. Generally, normalize to 1 at . To confirm that the example is correct, the plot of along is shown below with a pass band ripple of 1 dB, a cut off frequency of 1 rad/sec, and a stop band zero at 2 rad/sec. Asymmetric band pass filter Chebyshev band pass filters may be designed with a geometrically asymmetric frequency response by placing the desired number of transmission zeros at zero and infinity with the use of the more generalized form of the Chebyshev transmission zeros equation above, and shown below. The equations below consider a frequency normalized pass band from 1 to . If the number of transmission zeros at 0 is not the same as the number of transmission zeros at , the filter will be geometrically asymmetric. 
The filter will also be asymmetric if finite transmission zeros are not placed symmetrically about the geometric center frequency, which in this case is . There is a restriction in that the filter must be net even order, that is, the sum of all the poles must be even, to make the asymmetric equation produce usable results. Real and complex quadruplet transmission zeros may also be created using this technique and are useful to modify the group delay response, just as in the low pass case. The derivation of the characteristic equation, , to create an asymmetric Chebyshev band pass filter is shown below. should be normalized such that , if needed. Simple asymmetric example Design an asymmetric Chebyshev filter with 1dB pass band ripple from 1 to 2 rad/sec, one transmission zero at , and three transmission zeros at 0. By applying the numerical values to the equations above, the characteristic polynomials, , may be calculated as follows. Discarding the irrational part and normalizing to 1 at s=j: Use the same process as in the low pass case to find from , using constant to scale the magnitude. When reconstructing the denominator from the left half plane poles, it will be necessary to set the magnitude such that the reflection zeros occur at 0dB. To do this, should be scaled such that = -1dB at the pass band corner frequencies, and . Once accomplished, the final transfer function for the designed asymmetric Chebyshev filter is shown below. Evaluating at s=j and at s=2j produces a value of -1dB in both cases, yielding an assurance that the example has been synthesized correctly. The frequency response is below, showing a Chebyshev 1dB equi-ripple pass band response for , cutoff attenuation of -1dB at the pass band edges, -60dB / decade attenuation toward , -20dB / decade attenuation toward , and Chebyshev style steepened slopes near the pass band edges. Constricting the pass band ripple Standard low pass Chebyshev filter design creates an equi-ripple pass band beginning from 0 rad/sec to a frequency normalized value of 1 rad/sec. However, some design requirements do not need an equi-ripple pass band at the low frequencies. A standard full-equi-ripple Chebyshev filter for this application would result in an over-designed filter. Constricting the equi-ripple to a defined percentage of the pass band creates a more efficient design, reducing the size of the filter and potentially eliminating one or two components, which is useful in maximizing board space efficiency and minimizing production costs for mass produced items. Constricted pass band ripple can be achieved by designing an asymmetric Chebyshev band pass filter using the techniques described above in this article with a 0 order asymmetric high pass side (no transmission zeros at 0) and a lower pass band corner set to the constricted ripple frequency. The order of the low pass side is N-1 for odd order filters, N-2 for even order modified filters, and N for standard even order filters. This results in a less than unity S12 at , which is typical of even order standard Chebyshev design, so for standard even order Chebyshev designs, the process is complete at this step. It will be necessary to insert a single reflection zero at for odd order designs, and two reflection zeros at for even order modified designs. Adding reflection zeros introduces a noticeable error in the pass band that is likely to be objectionable. This error may be removed quickly and accurately by repositioning the finite reflection zeros with the use of Newton's method for systems of equations. 
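The machinery involved is the standard multivariate (damped) Newton iteration. A minimal, generic sketch is given below; the residual function and Jacobian shown are placeholders only, not the actual Chebyshev characteristic function, which is developed in the next section.

```python
# Generic damped Newton iteration for a system F(x) = 0, as used conceptually for
# repositioning reflection zeros.  In the filter problem, the residual would be
# built from the characteristic function evaluated at the pass band minima and
# the Jacobian from the partial derivatives described in the following section.
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=100, max_step=0.1):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)                       # residual vector
        if np.sum(np.abs(r)) < tol:    # convergence criterion on the residual sum
            break
        dx = np.linalg.solve(J(x), r)  # solve J * dx = r
        dx = np.clip(dx, -max_step, max_step)  # limit step size for large problems
        x = x - dx                     # update the unknowns (e.g. reflection zeros)
    return x

# Toy usage with an arbitrary two-variable system (illustrative only)
F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**3])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -3 * x[1]**2]])
print(newton_system(F, J, [0.5, 0.5]))
```

The step-size clipping mirrors the practical advice given below for larger filters, where unrestricted early steps can swing the reflection zeros outside the constricted ripple range.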
Application of Newton's method Positioning the reflection zeros with Newton's method requires three pieces of information: The location of each pass band ripple minimum that exists at frequencies higher than the constricted ripple frequency. The value of the magnitude normalized , that is , at the constriction frequency and at each minimum above the constriction frequency. Future references to this function will be noted as or The Jacobian matrix of partial derivatives of with respect to each reflection zero, evaluated at the constriction frequency and at each minimum above the constriction frequency. Since the Chebyshev characteristic equations, , have all reflection zeros located on the axis, and all the transmission zeros either on the axis or symmetric about the axis (required for passive element implementation), the locations of the pass band ripple minima may be obtained by factoring the numerator of the derivative of , , with the use of a root finding algorithm. The roots of this polynomial will be the pass band minima frequencies. is obtainable from standard polynomial derivative definitions, and is . The partial derivatives may be calculated digitally with ; however, the continuous partial derivative generally provides greater accuracy and less convergence time, and is recommended. To obtain the continuous partial derivatives of with respect to the reflection zeros, a continuous expression for needs to be obtained that forces at all times. This may be achieved by expressing as a function of its conjugate root pairs, as shown below. Where includes finite reflection and transmission zeros only, and refer to the number of reflection and transmission zero conjugate pairs, and and are the reflection and transmission zero conjugate pairs. The odd term accounts for the single reflection zero at 0 that occurs in odd order Chebyshev filters. Note that if quadruplet transmission zeros are employed, the expression must be modified to accommodate quadruplet terms. It is seen by inspection that whenever in the above expression. Since only movement of the reflection zeros is needed to shape the Chebyshev pass band, the partial derivative expression only needs to be made on the terms, and the terms are treated as a constant. To aid in the determination of the partial derivative expression for each , the expression above may be rewritten, as shown below. Where designates a specific reflection zero conjugate pair. The derivative of this expression with respect to may be easily computed following standard derivative rules. The constant requires the dividing out of the terms to maintain the integrity of the function. The easiest way to do this is to multiply by the inverse of the terms that were moved to the front. The differentiable expression may be rewritten as follows. The partial derivative may then be determined by applying standard derivative procedures to and then simplifying. The result is below. Since the only frequencies of relevance are the frequencies at the constriction point and the roots of , the Jacobian matrix may be constructed as follows. Where is the constriction limit frequency, and are the magnitudes of the roots of the remaining pass band minima, , and are the reflection zeros. 
Assuming that the filter cut-off attenuation is the same as the ripple magnitude, the value of is 1 at all , so the solution vector entries are all 1, and the iterative equations to solve for Newton's method are Convergence is achieved when the sum of all and is sufficiently small for the application, typically between 1.e-05 and 1.e-16. For larger filters, it may be necessary to restrict the size of each to prevent excessive swings early in the convergence, and to restrict the size of each to keep their values inside the constricted ripple range during convergence. Constricted pass band example Design a 7 pole Chebyshev filter with a 1 dB equi-ripple pass band constricted to 55% of the pass band. Step 1: Design the characteristic polynomials for an asymmetric frequency response from .45 to 1 with 6 low pass poles at , and 0 high pass poles using the asymmetric synthesis process above (use corner frequency = 0.45). Step 2: Insert a single reflection zero into the from step 1. (two reflection zero additions would be required for even order modified filters) Step 3: Determine from the pass band zero derivative frequencies by computing the positive real or imaginary values of the roots of , and substitute the lowest root with the constriction frequency of 0.45 for . Step 4: Determine the value of at each constricted and derivative zero point. Step 5: Create the B vector for the linear equations by subtracting the target values at each frequency, which in this case are all 1 due to the cutoff attenuation being equal to the pass band ripple attenuation in this specific example at the cut-off frequency of . Step 6: Determine the Jacobian matrix of partial derivatives of for each with respect to each reflection zero. Step 7: Get the reflection zero movements by solving the linear set of equations using the B vector from step 5. Step 8: Compute new reflection zero locations by subtracting the calculated above from the past iteration of reflection zero positions. Repeat steps 3 through 8 until the application convergence criterion, , has been met, which for this example is chosen to be 1.e-12. When complete, the final may be constructed from the final reflection zero positions, ±j0.5278143, ±j0.80460874, ±j0.97721056, and 0. When amplitude normalized such that , the constructed is shown below. The synthesis process may be validated by doing a quick check of for each from step 3 to ensure a 1 dB attenuation at those frequencies, and that the cut-off attenuation at is also 1dB. The summary of the computation below validates the example synthesis process. The final magnitude frequency response of the forward transfer function, , is shown below. Chebyshev II stop band ripple constricting Standard low pass Inverse Chebyshev filter design creates an equi-ripple stop band beginning from a normalized value of 1 rad/sec to . However, some design requirements do not need an equi-ripple stop band at the high frequencies. A standard full-equi-ripple Inverse Chebyshev filter for this application would result in an over-designed filter. Constricting the equi-ripple to a defined percentage of the stop band creates a more efficient design, reducing the size of the filter and potentially eliminating one or two components, which is useful in maximizing board space efficiency and minimizing production costs for mass produced items. Inverse Chebyshev filters with constricted stop band ripple are synthesized in exactly the same process as a standard Inverse Chebyshev. 
A constricted ripple Chebyshev is designed with an inverted , where is the stop band attenuation in dB, the poles and zeros of the designed constricted ripple Chebyshev filter are inverted, and the cut-off attenuation is set. Since standard Chebyshev equations will not work with constricted ripple design, the cut-off attenuation must be set using the process described in the Elliptic Hourglass design. Below are the |S11| and |S12| scattering parameters for a 7 pole constricted ripple Inverse Chebyshev filter with 3dB cut-off attenuation. Non-standard cut-off attenuation and transmission zeros The constricted ripple example above is intentionally kept simple by keeping the cut-off attenuation equal to the pass band ripple attenuation, omitting optional transmission zeros, and using an odd order that does not potentially require even order modification. However, non-standard cutoff attenuations may be accommodated by calculating the target values in step 5 to be offset from the required 1 that exists at the cut-off frequency of , including a denominator as part of the derivative constant that includes transmission zeros, and inserting two reflection zeros instead of one into the original in step 2. When including stop band transmission zeros, it is important to remember that the roots of will include stop band maxima with . These roots should not be included in the pass band minima used in the computations. Since may be used to set the cut-off attenuation in , the step 5 target values may be made with respect to 1. The target values in step 5 may be calculated using the expression for obtainable from the equations above. Consider a filter design of %constriction = 55, order = 8, single transmission zero at 1.1, pass band ripple attenuation = 0.043648054 (equivalent of S12 = 20dB attenuation based on the relation for lossless networks), and pass band cut-off attenuation = 20dB. The target value in step 5 is .01010101, and the to compute is 99. When complete, the characteristic polynomials, , and forward transfer function, , are below. The validation consists of calculating scattering parameters ( and respectively) for the constriction frequency, the cutoff frequency, the remaining pass band minima frequencies in between, and the transmission zero frequency and as shown below. The final magnitude frequency response of is shown below. See also Bessel filter Butterworth filter Chebyshev nodes Chebyshev polynomial Comb filter Elliptic filter Filter design References External links Linear filters Network synthesis filters Electronic design
Chebyshev filter
[ "Engineering" ]
7,247
[ "Electronic design", "Electronic engineering", "Design" ]
327,054
https://en.wikipedia.org/wiki/Web%20syndication
Web syndication is making content available from one website to other sites. Most commonly, websites are made available to provide either summaries or full renditions of a website's recently added content. The term may also describe other kinds of content licensing for reuse. Motivation For the subscribing sites, syndication is an effective way of adding greater depth and immediacy of information to their pages, making them more attractive to users. For the provider site, syndication increases exposure. This generates new traffic for the provider site—making syndication an easy and relatively cheap, or even free, form of advertisement. Content syndication has become an effective strategy for link building, as search engine optimization has become an increasingly important topic among website owners and online marketers. Links embedded within the syndicated content are typically optimized around anchor terms that will point an optimized link back to the website that the content author is trying to promote. These links tell the algorithms of the search engines that the website being linked to is an authority for the keyword that is being used as the anchor text. However the rollout of Google Panda's algorithm may not reflect this authority in its SERP rankings based on quality scores generated by the sites linking to the authority. The prevalence of web syndication is also of note to online marketers, since web surfers are becoming increasingly wary of providing personal information for marketing materials (such as signing up for a newsletter) and expect the ability to subscribe to a feed instead. Although the format could be anything transported over HTTP, such as HTML or JavaScript, it is more commonly XML. Web syndication formats include RSS, Atom, and JSON Feed. History Syndication first arose in earlier media such as print, radio, and television, allowing content creators to reach a wider audience. In the case of radio, the United States Federal government proposed a syndicate in 1924 so that the country's executives could quickly and efficiently reach the entire population. In the case of television, it is often said that "Syndication is where the real money is." Additionally, syndication accounts for the bulk of TV programming. One predecessor of web syndication is the Meta Content Framework (MCF), developed in 1996 by Ramanathan V. Guha and others in Apple Computer's Advanced Technology Group. Today, millions of online publishers, including newspapers, commercial websites, and blogs, distribute their news headlines, product offers, and blog postings in the news feed. As a commercial model Conventional syndication businesses such as Reuters and Associated Press thrive on the internet by offering their content to media partners on a subscription basis, using business models established in earlier media forms. Commercial web syndication can be categorized in three ways: by business models by types of content by methods for selecting distribution partners Commercial web syndication involves partnerships between content producers and distribution outlets. There are different structures of partnership agreements. One such structure is licensing content, in which distribution partners pay a fee to the content creators for the right to publish the content. Another structure is ad-supported content, in which publishers share revenues derived from advertising on syndicated content with that content's producer. 
A third structure is free, or barter syndication, in which no currency changes hands between publishers and content producers. This requires the content producers to generate revenue from another source, such as embedded advertising or subscriptions. Alternatively, they could distribute content without remuneration. Typically, those who create and distribute content free are promotional entities, vanity publishers, or government entities. Types of content syndicated include RSS or Atom Feeds and full content. With RSS feeds, headlines, summaries, and sometimes a modified version of the original full content is displayed on users' feed readers. With full content, the entire content—which might be text, audio, video, applications/widgets, or user-generated content—appears unaltered on the publisher's site. There are two methods for selecting distribution partners. The content creator can hand-pick syndication partners based on specific criteria, such as the size or quality of their audiences. Alternatively, the content creator can allow publisher sites or users to opt into carrying the content through an automated system. Some of these automated "content marketplace" systems involve careful screening of potential publishers by the content creator to ensure that the material does not end up in an inappropriate environment. Just as syndication is a source of profit for TV producers and radio producers, it also functions to maximize profit for Internet content producers. As the Internet has increased in size it has become increasingly difficult for content producers to aggregate a sufficiently large audience to support the creation of high-quality content. Syndication enables content creators to amortize the cost of producing content by licensing it across multiple publishers or by maximizing the distribution of advertising-supported content. A potential drawback for content creators, however, is that they can lose control over the presentation of their content when they syndicate it to other parties. Distribution partners benefit by receiving content either at a discounted price, or free. One potential drawback for publishers, however, is that because the content is duplicated at other publisher sites, they cannot have an "exclusive" on the content. For users, the fact that syndication enables the production and maintenance of content allows them to find and consume content on the Internet. One potential drawback for them is that they may run into duplicate content, which could be an annoyance. E-commerce Web syndication has been used to distribute product content such as feature descriptions, images, and specifications. As manufacturers are regarded as authorities and most sales are not achieved on manufacturer websites, manufacturers allow retailers or dealers to publish the information on their sites. Through syndication, manufacturers may pass relevant information to channel partners. Such web syndication has been shown to increase sales. Web syndication has also been found effective as a search engine optimization technique. See also RSS Atom (web standard) Broadcast syndication Content delivery platform Feed icon hAtom List of comic strip syndicates List of streaming media systems Print syndication Protection of Broadcasts and Broadcasting Organizations Treaty Push technology Software as a service Usenet References External links Web development
Web syndication
[ "Engineering" ]
1,226
[ "Software engineering", "Web development" ]
327,121
https://en.wikipedia.org/wiki/Quicksilver%20%28novel%29
Quicksilver is a historical novel by Neal Stephenson, published in 2003. It is the first volume of The Baroque Cycle, his late Baroque historical fiction series, succeeded by The Confusion and The System of the World (both published in 2004). Quicksilver won the Arthur C. Clarke Award and was nominated for the Locus Award in 2004. Stephenson organized the structure of Quicksilver such that chapters have been incorporated into three internal books titled "Quicksilver", "The King of the Vagabonds", and "Odalisque". In 2006, each internal book was released in separate paperback editions, to make the 900 pages more approachable for readers. These internal books were originally independent novels within the greater cycle during composition. The novel Quicksilver is written in various narrative styles, such as theatrical staging and epistolary, and follows a large group of characters. Though mostly set in England, France, and the United Provinces in the period 1655 through 1673, the first book includes a frame story set in late 1713 Massachusetts. In order to write the novel, Stephenson researched the period extensively and integrates events and historical themes important to historical scholarship throughout the novel. However, Stephenson alters details such as the members of the Cabal ministry, the historical cabinet of Charles II of England, to facilitate the incorporation of his fictional characters. Within the historical context, Stephenson also deals with many themes which pervade his other works, including the exploration of knowledge, communication and cryptography. The plot of the first and third books focus on Daniel Waterhouse's exploits as a natural philosopher and friend to the young Isaac Newton and his later observations of English politics and religion, respectively. The second book introduces the vagabond Jack Shaftoe ("King of the Vagabonds") and Eliza (a former member of a Turkish harem) as they cross Europe, eventually landing in the Netherlands, where Eliza becomes entangled in commerce and politics. Quicksilver operates in the same fictional universe as Stephenson's earlier novel Cryptonomicon, in which descendants of Quicksilver characters Shaftoe and Waterhouse appear prominently. Background and development During the period in which he wrote Cryptonomicon, Stephenson read George Dyson's Darwin Amongst the Machines, which led him to Gottfried Leibniz's interest in a computing machine, the Leibniz–Newton feud, and Newton's work at the Royal Treasury. He considered this "striking when [he] was already working on a book about money and a book about computers," and became inspired to write about the period. Originally intended to be included in Cryptonomicon, Stephenson instead used the material as the foundation for Quicksilver, the first volume of the Baroque Cycle. The research for the sprawling historical novel created what Stephenson called "data management problems", and he resorted to a system of notebooks to record research, track characters, and find material during the writing process. Historicity In Quicksilver, Stephenson places the ancestors of the Cryptonomicons characters in Enlightenment Europe alongside a cast of historical individuals from Restoration England and the Enlightenment. Amongst the cast are some of the most prominent natural philosophers, mathematicians and scientists (Newton and Leibniz), and politicians (William of Orange and Nassau) of the age. 
In an interview, Stephenson explained he deliberately depicted both the historical and fictional characters as authentic representatives of historical classes of people, such as the Vagabonds as personified by Jack, and the Barbary slaves as personified by Eliza. In his research for the characters, he explored the major scholarship about the period. Stephenson did extensive research on the Age of Enlightenment, noting that it is accessible for English speaking researchers because of the many well documented figures such as Leibniz, Newton and Samuel Pepys. In the course of his research he noted historiographic inconsistencies regarding characters of the period which he had to reconcile. Especially prominent was the deification of Newton, Locke and Boyle and their scientific method by Enlightenment and Victorian scholars. He considers the scientific work done during the Baroque period as crucial to the Enlightenment. From his research he concluded that the Enlightenment in general "is and should be a controversial event because although it led to the flourishing of the sciences and political liberties and a lot of good stuff like that, one can also argue that it played a role in the French Revolution and some of the negative events of the time as well." The portrayal of a confusing and uncertain era develops throughout the book. Some reviewers commented that Stephenson seems to carry his understanding of the period a little too far at times, delving into too much detail. Nick Hasted of The Independent wrote that this research made "descriptions of Restoration London feel leaden, and intellectual discourses between Newton and his contemporaries textbook-dry." Despite the thorough examination of the period, however, Stephenson does take liberty in depicting the Enlightenment. Both main and secondary fictional characters become prominent members of society who advise the most important figures of the period and affect everything from politics to economics and science. For example, he repopulates the real Cabal Ministry with fictional characters. Style Quicksilver is a historical fiction novel that occasionally uses fantasy and science fiction techniques. The book is written in "an omniscient modern presence occasionally given to wisecracks, with extensive use of the continuous present". Mark Sanderson of The Daily Telegraph and Steven Poole of The Guardian both describe the novel as in the picaresque genre, a genre common to 17th- and 18th-century Europe. Humor permeates the text, both situational and in the language itself, which emulates the picaresque style. The narrative often presents protracted digressions. These digressions follow a multitude of events and subjects related to history, philosophy and scientific subjects. For example, USA Today commented on the length of discussion of Newton's interest in the nature of gravity. With these digressions, the narrative also rapidly changes between multiple perspectives, first and third person, as well as using multiple writing techniques, both those familiar to the modern reader and those popular during the Early Modern period. These techniques include letters, drama, cryptographic messaging, genealogies and "more interesting footnotes than found in many academic papers." Stephenson incorporates 17th-century sentence structure and orthography throughout Quicksilver, most apparent in his use of italicization and capitalization. 
He adapts a combination of period and anachronistic language throughout the books, mostly to good effect, while allowing diction from modern usage, such as "canal rage" an allusion to road rage. Stephenson chose not to adapt period language for the entire text; instead he allowed such language to enter his writing when it was appropriate, often turning to modern English and modern labels for ideas familiar to modern readers. Stephenson said "I never tried to entertain the illusion that I was going to write something that had no trace of the 20th or the 21st century in it." Plot Quicksilver The first book is a series of flashbacks from 1713 to the earlier life of Daniel Waterhouse. It begins as Enoch Root arrives in Boston in October 1713 to deliver a letter to Daniel containing a summons from Princess Caroline. She wants Daniel to return to England and attempt to repair the feud between Isaac Newton and Gottfried Leibniz. While following Daniel's decision to return to England and board a Dutch ship (the Minerva) to cross the Atlantic, the book flashes back to when Enoch and Daniel each first met Newton. During the flashbacks, the book refocuses on Daniel's life between 1661 and 1673. While attending school at Trinity College, Cambridge, Daniel becomes Newton's companion, ensuring that Newton does not harm his health and assisting in his experiments (including rebuffing a clumsy sexual advance from Newton, exactly as Daniel's descendent Lawrence will rebuff Alan Turing in Cryptonomicon). However, the plague of 1665 forces them apart: Newton returns to his family manor and Daniel to the outskirts of London. Daniel quickly tires of the radical Puritan rhetoric of his father, Drake Waterhouse, and decides to join Reverend John Wilkins and Robert Hooke at John Comstock's Epsom estate. There Daniel takes part in a number of experiments, including the exploration of the diminishing effects of gravity with changes in elevation, the transfusion of blood between dogs and Wilkins' attempts to create a philosophical language. Daniel soon becomes disgusted with some of the practices of the older natural philosophers (which include vivisection of animals) and visits Newton during his experiments with color and white light. They attempt to return to Cambridge, but again plague expels the students. Daniel returns to his father; however, his arrival on the outskirts of London coincides with the second day of the Fire of London. Drake, taken by religious fervour, dies atop his house as the King blows it up to create a fire break to prevent further spread of the fire. Soon after Drake's death, Newton and Daniel return to Cambridge and begin lecturing. A flashforward finds Daniel's ship under attack by the fleet of Edward Teach (Blackbeard) in 1713. Then the story returns to the past as Daniel and Newton return to London: Newton is under the patronage of Louis Anglesey, the Earl of Upnor, and Daniel becomes secretary of the Royal Society when Henry Oldenburg is detained by the King for his active foreign correspondence. During his stint in London, Daniel encounters a number of important people from the period. Daniel remains one of the more prominent people in the Royal Society, close to Royal Society members involved in court life and politics. By 1672 both Daniel and Newton become fellows at Trinity College where they build an extensive alchemical laboratory which attracts other significant alchemists including John Locke and Robert Boyle. 
Daniel convinces Newton to present his work on calculus to the Royal Society. In 1673, Daniel meets Leibniz in England and acts as his escort, leading him to meetings with important members of British society. Soon, Daniel gains the patronage of Roger Comstock as his architect. While under Roger's patronage, the actress Tess becomes Daniel's mistress both at court and in bed. Finally the book returns to 1713, where Daniel's ship fends off several of Teach's pirate ships. Soon they find out that Teach is after Daniel alone; however, with the application of trigonometry, the ship is able to escape the bay and the pirate band. The King of the Vagabonds The King of Vagabonds focuses on the travels of "Half-Cocked" Jack Shaftoe. It begins by recounting Jack's childhood in the slums outside London where he pursued many disreputable jobs, including hanging from the legs of hanged men to speed their demise. The book then jumps to 1683, when Jack travels to the Battle of Vienna to participate in the European expulsion of the Turks. While attacking the camp, Jack encounters Eliza, a European slave in the sultan's harem, about to be killed by janissaries. He kills the janissaries and loots the area, taking ostrich feathers and acquiring a Turkish warhorse which he calls Turk. The two depart from the camp of the victorious European army and travel through Bohemia into the Palatinate. To sell the ostrich feathers at a high price, they decide to wait until the spring fair in Leipzig. Jack and Eliza spend the winter near a cave warmed by a hot water spring. In the springtime, they travel to the fair dressed as a noblewoman and her bodyguard where they meet Doctor Leibniz. They quickly sell their goods with the help of Leibniz, and agree to accompany him to his silver mine in the Harz Mountains. Once they arrive at the mine, Jack wanders into the local town where he has a brief encounter with Enoch Root in an apothecary's shop. Jack leaves town but gets lost in the woods, encountering pagan worshippers and witch hunters. He successfully escapes them by finding safe passage through a mine connecting to Leibniz's. Eliza and Jack move on to Amsterdam, where Eliza quickly becomes embroiled in the trade of commodities. Jack goes to Paris to sell the ostrich feathers and Turk, leaving Eliza behind. When he arrives in Paris, he meets and befriends St. George, a professional rat-killer and tamer, who helps him find lodging. While there, he becomes a messenger for bankers between Paris and Marseilles. However, during an attempt to sell Turk Jack is captured by nobles. Luckily, the presence of Jack's former employer, John Churchill, ensures that he is not immediately killed. With Churchill's help, Jack escapes from the barn where he has been held prisoner. During the escape, he rides Turk into a masquerade at the Hotel d'Arcachon in a costume similar to that of King Louis. With the aid of St. George's rats he escapes without injury but destroys the ballroom and removes the hand of Etienne d'Arcachon. Meanwhile, Eliza becomes heavily involved in the politics of Amsterdam, helping Knott Bolstrood and the Duke of Monmouth manipulate the trade of VOC stock. This causes a panic from which they profit. Afterwards, the French Ambassador in Amsterdam persuades Eliza to go to Versailles and supply him information about the French court. Eliza agrees after a brief encounter and falling-out with Jack. 
William of Orange learns of Eliza's mission and intercepts her, forcing her to become a double agent for his benefit and to give him oral sex. Meanwhile, Jack, with an injury caused by Eliza, departs on the slaving trip. The ship is captured by Barbary pirates, and the end of the book has Jack as a captured galley-slave. Odalisque This book returns to Daniel Waterhouse, who in 1685, has become a courtier to Charles II because of his role as Secretary of the Royal Society. He warns James II, still Duke of York, of his brother Charles' impending death, following which, Daniel quickly becomes an advisor to James II. He continues to be deeply involved with the English court, ensuring the passage of several bills which reduce restrictions on non-conformists despite his detraction from the Francophile court. Meanwhile, Eliza becomes the governess of a widowers' two children in Versailles. She catches the eye of the king and becomes the broker of the French nobility. With her help, the French court, supported by King Louis, creates several market trends from which they profit extensively. Her active involvement in the French court gains her a title of nobility: Countess of Zeur. Daniel and Eliza finally meet during a visit to the Netherlands where Daniel acts as an intermediary between William of Orange and the detracting English nobility. Daniel realizes Eliza's importance during a meeting at the house of Christiaan Huygens. Eliza woos Daniel and uses this connection to gain entrance into the English court and the Royal Society. Daniel also meets Nicholas Fatio while in Amsterdam. Soon after this meeting, Fatio and Eliza prevent the attempted kidnapping of William of Orange by an ambitious French courtier. Upon his return, Daniel is arrested by the notorious judge George Jeffreys, and later imprisoned in the Tower of London. Daniel escapes with the help of Jack Shaftoe's brother Bob, whose infantry unit is stationed there. After a brief return to Versailles, Eliza joins Elizabeth Charlotte of the Palatinate at her estate before the invasion of the Palatinate in her name. Eliza informs William of Orange of the troop movements caused by the French invasion which frees his forces along the border of the Spanish Netherlands, a region of stalemate between France and the Dutch Republic. During her flight from the Electorate of the Palatinate, Eliza becomes pregnant by Louis's cryptographer, though popular knowledge suggested it was the French nobleman Etienne D'Arcachon's child. Meanwhile, William takes the free troops from the border on the Spanish Netherlands to England, precipitating the Glorious Revolution, including the expulsion of James II. James flees London and Daniel Waterhouse soon encounters him in a bar. Convinced that the Stuart monarchy has collapsed, Daniel returns to London and takes revenge on Jeffreys by inciting a crowd to capture him for trial and later execution. Though he plans to depart for Massachusetts, Daniel's case of bladder stones increasingly worsens during this period. The Royal Society and other family friends are very aware of this and force Daniel to get the stone removed by Robert Hooke at Bedlam. Major themes A 2003 interview in Newsweek quotes Stephenson's belief that "science fiction... is fiction in which ideas play an important part." Central to Quicksilver is the importance of the Enlightenment. By placing the reader among a world of ideas that change the course of science, Stephenson explores the development of the scientific method. 
One theme Stephenson explores in Quicksilver is the advancement of mathematical sciences which in turn led to important applications: Leibniz's theory of binary mathematics became the foundation upon which to develop computers. As he did in Cryptonomicon, Stephenson highlights the importance of networks and codes, which in Quicksilver occur against a "backdrop of staggering diversity and detail", writes Mark Sanderson in his review of the book for the Daily Telegraph. Also, returning to his cyberpunk roots, Stephenson emphasizes the manner in which information and ideas are dispersed in complex societies. Quicksilver uses the "interactions of philosophy, court intrigue, economics, wars, plagues and natural disasters" of the late 17th and early 18th century to create a historical backdrop. From one perspective, the characters are most useful in their roles as "carriers of information". Although the characters use various techniques to disseminate information, the most prominent is cryptography. Elizabeth Weisse writes in USA Today that the use of cryptography is "Stephenson's literary calling card", as she compares Quicksilver to Cryptonomicon. In Quicksilver Stephenson presents the importance of freedom of thought, the diversity required for new ideas to develop, and the manner in which new ideas are expressed. To explore or accept an idea such as the theory of gravity often resulted in dire consequences or even "grotesque punishment" in the early 17th century. Stephenson also points out that research, particularly as conducted at the Royal Society, resulted in a changing of views in some cases: If you read the records of the Royal Society and what they were doing in the 1660s, it's clear that at a certain point, some of these people – and I think Hooke was one of them – became a little bit disgusted with themselves and began excusing themselves when one of these vivisections was going to happen. I certainly don't think they turned into hardcore animal rights campaigners, or anything close to that, but I think after a while, they got a little bit sick of it and started to feel conflicted about what they were doing. So I've tried to show that ambivalence and complication in the book. How to exist during a "time of dualities" is another important theme in Quicksilver, especially in their effects on Daniel Waterhouse, who is torn between "reason versus faith, freedom versus destiny, matter versus math." Frequent mention of alchemy indicates the shift from an earlier age to a newer transformative age. Newton was an alchemist, and one character compares finance to alchemy: "all goods—silk, coins, shares in mines—lose their hard dull gross forms and liquefy, and give up their true nature, as ores in an alchemist's furnace sweat mercury". The book focuses on a period of social and scientific transmutations, expanding upon the symbolism of the book's title, Quicksilver, because it is a period in which the "principles governing transformation" are investigated and established. A commerce of different goods rapidly changing from one into another is a recurrent theme throughout the book. Also, the title Quicksilver connects the book to the method alchemists used to distill quicksilver, "the pure living essence of God's power and presence in the world", from, as one character put it, "the base, dark, cold, essentially fecal matter of which the world was made." 
Characters Main characters In order of appearance: Enoch Root – an elusive and mysterious alchemist who first appears at the beginning of the book and recurs throughout often in the company of Alchemists such as Newton and Locke. Daniel Waterhouse – son of prominent Puritan Drake Waterhouse, roommate of Isaac Newton, friend of Gottfried Leibniz, and prominent member of the Royal Society. Waterhouse is both a savant and a strict Puritan. As Quicksilver progresses he becomes more and more involved in the inner workings of British politics. "Half-Cocked" Jack Shaftoe – an English vagabond, known as "The King of the Vagabonds", who rescues Eliza and becomes the enemy of the Duke d'Arcachon. Eliza – a former harem slave who becomes a French countess, investor, and spy for William of Orange and Gottfried Leibniz. She originally became a slave when she and her mother were kidnapped from their homeland of Qwghlm by a European pirate with breath that smelled of rotten fish. Historical characters Robert Boyle, Irish natural philosopher Caroline of Ansbach, an inquisitive child who loses her mother to smallpox John Churchill, former employer of Jack and a prominent British politician William Curtius, German Fellow of the Royal Society, and diplomat for the House of Stuart. Nicolas Fatio de Duillier Judge Jeffreys, Lord Chancellor of England Robert Hooke, English natural philosopher and biologist Christiaan Huygens, continental natural philosopher Gottfried Leibniz Louis XIV, King of France Isaac Newton Henry Oldenburg, founding member and secretary of the Royal Society Bonaventure Rossignol, a French cryptologist James Scott, 1st Duke of Monmouth James Stuart, as the Duke of York and as James II, King of England Edward Teach, aka Blackbeard John Wilkins, Bishop of Chester, founding member of the Royal Society, and advocate of religious tolerance in Britain William III of England, as William, Prince of Orange Benjamin Franklin Samuel Pepys Critical reception The reception to Quicksilver was generally positive. Some reviewers found the length cumbersome; however, others found the length impressive in its quality and entertainment value. Paul Boutin at Slate Magazine comments that Quicksilver offers an insight into how advanced and complicated science was during the age of "alchemists and microscope-makers"; and that the scientists of the period were "the forerunners of the biotech and nanotech researchers who are today's IT Geeks". Entertainment Weekly rates Quicksilver an A−, stating that the book "makes you ponder concepts and theories you initially thought you'll never understand". The critic finds a parallel between Stephenson's approach and a passage from the book describing an effort to put "all human knowledge ... in a vast Encyclopedia that will be a sort of machine, not only for finding old knowledge but for making new". The Independent places emphasis on the comparisons between the story that evolves in Quicksilver and Stephenson's earlier novel Cryptonomicon, with the former "shaping up to be a far more impressive literary endeavour than most so-called 'serious' fiction. And it ends on a hell of a cliffhanger. No scholarly, and intellectually provocative, historical novel has been this much fun since The Name of the Rose". Patrick Ness considers Quicksilver to be "entertaining over an impossible distance. This isn't a book; it's a place to move into and raise a family." His review focuses on the scope of the material and humour inherent in Quicksilver. 
Mark Sanderson calls the novel an "astonishing achievement", and compares Quicksilver to "Thomas Pynchon's Mason & Dixon and Lawrence Norfolk's Lempriere's Dictionary." Although full of historical description and incredibly lengthy, Quicksilver is noticeably full of what Sanderson called "more sex and violence ... than any Tarantino movie". Stephenson balances his desire to respect the period with a need to develop a novel which entertains modern readers. In The Guardian, Steven Poole commented that 'Quicksilver was: "" Polly Shulman of The New York Times finds Quicksilver hard to follow and amazingly complex but a good read. However she notes that the complicated and clunky dialogue between the characters is a distraction. She thinks a full appreciation of the work is only possible within the context of the remaining novels of The Baroque Cycle, and compares the novel to works by Dorothy Dunnett, William Gibson and Bruce Sterling, calling it "history-of-science fiction". In the post-publication review for The New York Times, Edward Rothstein remarks that the scope of the novel is at times detrimental: "Unfortunately, in this novelistic cauldron it can sometimes seem as if mercury's vapors had overtaken the author himself, as if every detail he had learned had to be anxiously crammed into his text, while still leaving the boundaries between fact and invention ambiguous". He considers the novel to be an "experiment in progress", although the historical background is compelling. Deborah Friedell disliked Quicksilver. She mentions Stephenson's poor writing and his lack of knowledge of the literary tradition, which she considers to be because "the greatest influences upon Stephenson's work have been comic books and cartoons". She dislikes his use of anachronism, his failure to be literary and his general approach to historical fiction. She writes of Stephenson and the reviewers who reviewed the work in a positive manner: Stephenson is decidedly not a prodigy; but his babe-in-the-woods routine has proved irresistible for some, who are hailing his seemingly innate ability to meld the products of exhaustive historical research with what they see as a brilliant, idiosyncratic sense of humor and adventure. Times critic has declared that Stephenson has a "once-in-a-generation gift", and that Quicksilver "will defy any category, genre, precedent or label—except for genius". This is promotional copy disguised as literary criticism. There is nothing category-defying about this ridiculous book. From the foreign press, the review in the Frankfurter Allgemeine points out the historical period of Quicksilver is one of the birth of science which corresponds with a period of language shift as English became the language of science. Moreover, the review focuses on Leibniz's principles of mathematics which Stephenson claims established the framework for modern computing. Publication history Based on the success of Cryptonomicon, a New York Times bestseller with sales of about 300,000 copies, the initial print-run for Quicksilver was 250,000 copies. Five months before the release date, a web campaign was initiated to advertise the work. The novel was originally published in a single volume; in 2006 HarperCollins republished the books in three separate paperback volumes. 
Editions September 23, 2003, US, William Morrow (), hardback (first edition), 944 pages October 2, 2003, UK, William Heinemann (), hardback 2003, UK, William Heinemann (), paperback June 2004, US, William Morrow (), hardback (Special Edition), 968 pages September 21, 2004, US, HarperCollins Perennial (), trade edition, 927 pages October 2004, US, HarperCollins (), CD, abridged audiobook, 22 hours 1 minute, narrated by Simon Prebble and Stina Nielson November 2004, US, HarperCollins (), MP3 release of the abridged audio CD Split into 3 volumes in 2006 Quicksilver, January 2006, US, HarperCollins (), mass market, 480 pages The King of the Vagabonds, February 2006, US, HarperCollins (), mass market paperback, 400 pages Odalisque, March 2006, US, HarperCollins (), mass market paperback, 464 pages See also The Age of Unreason cycle by Gregory Keyes has a similar approach to the period. References External links The Metaweb was once an extensive Quicksilver wiki, including many pages written by Stephenson, about the historical and fictional persons and events of this book. The old data is mothballed; the website is now the corporate site for a startup spun out of Applied Minds. However, archived copies are still viewable via the Internet Archive's Wayback Machine. Quicksilver at Complete Review; contains an archive of links to all major newspaper reviews of the book. 2003 American novels 2003 science fiction novels Novels about alchemy Fiction set in 1713 Novels set in the 1710s HarperCollins books Historical novels Fiction about mining Novels about cryptography Novels set in Early Modern England Novels set in Early Modern France Novels set in the 1650s Novels set in the 1660s Novels set in the 1670s Novels set in the Netherlands The Baroque Cycle Cultural depictions of James II of England Cultural depictions of Blackbeard Great Plague of London
Quicksilver (novel)
[ "Astronomy" ]
6,007
[ "Cultural depictions of Isaac Newton", "Cultural depictions of astronomers" ]
327,393
https://en.wikipedia.org/wiki/Decagon
In geometry, a decagon (from the Greek δέκα déka and γωνία gonía, "ten angles") is a ten-sided polygon or 10-gon. The total sum of the interior angles of a simple decagon is 1440°. Regular decagon A regular decagon has all sides of equal length and each internal angle will always be equal to 144°. Its Schläfli symbol is {10} and can also be constructed as a truncated pentagon, t{5}, a quasiregular decagon alternating two types of edges. Side length The picture shows a regular decagon with side length and radius of the circumscribed circle. The triangle has two equally long legs with length and a base with length The circle around with radius intersects in a point (not designated in the picture). Now the triangle is an isosceles triangle with vertex and with base angles . Therefore . So and hence is also an isosceles triangle with vertex . The length of its legs is , so the length of is . The isosceles triangles and have equal angles of 36° at the vertex, and so they are similar, hence: Multiplication with the denominators leads to the quadratic equation: This equation for the side length has one positive solution: So the regular decagon can be constructed with ruler and compass. Further conclusions and the base height of (i.e. the length of ) is and the triangle has the area: . Area The area of a regular decagon of side length a is given by: In terms of the apothem r (see also inscribed figure), the area is: In terms of the circumradius R, the area is: An alternative formula is where d is the distance between parallel sides, or the height when the decagon stands on one side as base, or the diameter of the decagon's inscribed circle. By simple trigonometry, and it can be written algebraically as Construction As 10 = 2 × 5, a power of two times a Fermat prime, it follows that a regular decagon is constructible using compass and straightedge, or by an edge-bisection of a regular pentagon. An alternative (but similar) method is as follows: Construct a pentagon in a circle by one of the methods shown in constructing a pentagon. Extend a line from each vertex of the pentagon through the center of the circle to the opposite side of that same circle. Where each line cuts the circle is a vertex of the decagon.  In other words,  the image of a regular pentagon under a point reflection with respect of its center is a concentric congruent pentagon,  and the two pentagons have in total the vertices of a concentric regular decagon. The five corners of the pentagon constitute alternate corners of the decagon. Join these points to the adjacent new points to form the decagon. The golden ratio in decagon Both in the construction with given circumcircle as well as with given side length is the golden ratio dividing a line segment by exterior division the determining construction element. In the construction with given circumcircle the circular arc around G with radius produces the segment , whose division corresponds to the golden ratio. In the construction with given side length the circular arc around D with radius produces the segment , whose division corresponds to the golden ratio. Symmetry The regular decagon has Dih10 symmetry, order 20. There are 3 subgroup dihedral symmetries: Dih5, Dih2, and Dih1, and 4 cyclic group symmetries: Z10, Z5, Z2, and Z1. These 8 symmetries can be seen in 10 distinct symmetries on the decagon, a larger number because the lines of reflections can either pass through vertices or edges. John Conway labels these by a letter and group order. 
Full symmetry of the regular form is r20 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines path through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders. Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g10 subgroup has no degrees of freedom but can be seen as directed edges. The highest symmetry irregular decagons are d10, an isogonal decagon constructed by five mirrors which can alternate long and short edges, and p10, an isotoxal decagon, constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular decagon. Dissection Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m-1)/2 parallelograms. In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the regular decagon, m=5, and it can be divided into 10 rhombs, with examples shown below. This decomposition can be seen as 10 of 80 faces in a Petrie polygon projection plane of the 5-cube. A dissection is based on 10 of 30 faces of the rhombic triacontahedron. The list defines the number of solutions as 62, with 2 orientations for the first symmetric form, and 10 orientations for the other 6. Skew decagon A skew decagon is a skew polygon with 10 vertices and edges but not existing on the same plane. The interior of such a decagon is not generally defined. A skew zig-zag decagon has vertices alternating between two parallel planes. A regular skew decagon is vertex-transitive with equal edge lengths. In 3-dimensions it will be a zig-zag skew decagon and can be seen in the vertices and side edges of a pentagonal antiprism, pentagrammic antiprism, and pentagrammic crossed-antiprism with the same D5d, [2+,10] symmetry, order 20. These can also be seen in these four convex polyhedra with icosahedral symmetry. The polygons on the perimeter of these projections are regular skew decagons. Petrie polygons The regular skew decagon is the Petrie polygon for many higher-dimensional polytopes, shown in these orthogonal projections in various Coxeter planes: The number of sides in the Petrie polygon is equal to the Coxeter number, h, for each symmetry family. See also Decagonal number and centered decagonal number, figurate numbers modeled on the decagon Decagram, a star polygon with the same vertex positions as the regular decagon References External links Definition and properties of a decagon With interactive animation 10 (number) Constructible polygons Polygons by the number of sides Elementary shapes
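As a supplementary restatement of the closed-form relations referred to in the Side length and Area sections above, the standard results for a regular decagon can be summarized as follows. The symbols here (a for the side length, R for the circumradius, r for the apothem, φ for the golden ratio) are chosen for this summary and are not taken from the original figures.

```latex
% Standard relations for a regular decagon (supplementary summary).
% a = side length, R = circumradius, r = apothem, \varphi = (1+\sqrt{5})/2.
\begin{align*}
  a &= 2R\sin\frac{\pi}{10} \;=\; \frac{\sqrt{5}-1}{2}\,R \;=\; \frac{R}{\varphi},\\
  A &= \frac{5}{2}\,a^{2}\cot\frac{\pi}{10}
      \;=\; \frac{5}{2}\,a^{2}\sqrt{5+2\sqrt{5}} \;\approx\; 7.694\,a^{2},\\
  A &= 5R^{2}\sin\frac{\pi}{5} \;\approx\; 2.939\,R^{2},
      \qquad
      A \;=\; 10\,r^{2}\tan\frac{\pi}{10} \;\approx\; 3.249\,r^{2}.
\end{align*}
```

The appearance of 1/φ in the side length relation is the same golden-ratio division described in the construction sections above.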
Decagon
[ "Mathematics" ]
1,505
[ "Constructible polygons", "Planes (geometry)", "Euclidean plane geometry" ]
327,443
https://en.wikipedia.org/wiki/Compassion
Compassion is a social feeling that motivates people to go out of their way to relieve the physical, mental, or emotional pains of others and themselves. Compassion is sensitivity to the emotional aspects of the suffering of others. When based on notions such as fairness, justice, and interdependence, it may be considered partially rational in nature. Compassion involves "feeling for another" and is a precursor to empathy, the "feeling as another" capacity (as opposed to sympathy, the "feeling towards another"). In common parlance, active compassion is the desire to alleviate another's suffering. Compassion involves allowing ourselves to be moved by suffering to help alleviate and prevent it. An act of compassion is one that is intended to be helpful. Other virtues that harmonize with compassion include patience, wisdom, kindness, perseverance, warmth, and resolve. It is often, though not inevitably, the key component in altruism. The difference between sympathy and compassion is that the former responds to others' suffering with sorrow and concern whereas the latter responds with warmth and care. An article in Clinical Psychology Review suggests that "compassion consists of three facets: noticing, feeling, and responding". Etymology The English noun compassion, literally "suffering together with", comes from Latin. Its prefix com- comes directly from com, an archaic version of the Latin preposition and affix cum (= with); the -passion segment is derived from passus, past participle of the deponent verb patior (to suffer). Compassion is thus related in origin, form and meaning to the English noun patient (= one who suffers), from patiens, present participle of the same patior, and is akin to the Greek verb πάσχειν (páschein, to suffer) and to its cognate noun πάθος (páthos = suffering). Ranked a great virtue in numerous philosophies, compassion is considered in almost all the major religious traditions as among the greatest of virtues. Theories on conceptualizing compassion Theoretical perspectives show contrasts in their approaches to compassion. On one view, compassion is simply a variation of love or sadness, not a distinct emotion. From the perspective of evolutionary psychology, by contrast, compassion can be viewed as a distinct emotional state, which can be differentiated from distress, sadness, and love. A further perspective treats compassion as a synonym of empathic distress, which is characterized by the feeling of distress in connection with another person's suffering; this perspective is based on the finding that people sometimes emulate and feel the emotions of people around them. According to Thupten Jinpa, compassion is a sense of concern that arises in us in the face of someone who is in need or someone who is in pain. It is accompanied by a kind of wishing (i.e. desire) to see the relief or end of that situation, along with wanting (i.e. motivation) to do something about it. Compassion, however, is not pity, nor an attachment, nor the same as empathetic feeling, nor simply wishful thinking. Compassion is basically a variation of love. To further this variation of love, Skalski and Aanstoos, in their article The Phenomenology of Change Beyond Tolerating, describe compassion with the definition of alleviate in mind. The definition of alleviate makes no mention of taking away, stopping, or fixing someone's suffering; it is simply trying to make it less severe. This has a connotation of desperation of sorts: desiring so little from such a dire situation can be described as inspiring the feeling of helping with another's suffering in any way.
Emma Seppala distinguishes compassion from empathy and altruism as follows: "... The definition of compassion is often confused with that of empathy. Empathy, as defined by researchers, is the visceral or emotional experience of another person's feelings. It is, in a sense, an automatic mirroring of another's emotion, like tearing up at a friend's sadness. Altruism is an action that benefits someone else. It may or may not be accompanied by empathy or compassion, for example, in the case of making a donation for tax purposes. Although these terms are related to compassion, they are not identical. Compassion often involves an empathic response and altruistic behavior; however, compassion is defined as the emotional response when perceiving suffering which involves an authentic desire to help." In addition, the more a person knows about the human condition and human experiences, the more vivid the route to identification with suffering becomes. Identifying with another person is an essential process for human beings, something that is even illustrated by infants who begin to mirror the facial expressions and body movements of their mother as early as the first days of their lives. Compassion is recognized through identifying with other people (i.e. perspective-taking), the knowledge of human behavior, the perception of suffering, the transfer of feelings, and the knowledge of goal and purpose-changes in sufferers which leads to the decline of their suffering. Personality psychology agrees that human suffering is always individual and unique. Suffering can result from psychological, social, and physical trauma which happens in acute and chronic forms. Suffering has been defined as the perception of a person's impending destruction or loss of integrity, which continues until the threat is vanquished or the person's integrity can be restored. Compassion therefore has three major requirements: The compassionate person must feel that the troubles that evoke their feelings are serious; the belief that the sufferers' troubles are not self-inflicted; and the ability to picture oneself with the same problems in a non-blaming, non-shaming manner. Because the compassion process is highly related to identifying with another person and is possible among people from other countries, cultures, locations, etc., compassion is characteristic of democratic societies. The role of compassion as a factor contributing to individual or societal behavior has been the topic of continuous debate. In contrast to the process of identifying with other people, a complete absence of compassion may require ignoring or disapproving identification with other people or groups. Earlier studies established the links between interpersonal violence and cruelty which leads to indifference. Compassion may induce feelings of kindness and forgiveness, which could give people the ability to stop situations that have the potential to be distressing and occasionally lead to violence. This concept has been illustrated throughout history: The Holocaust, genocide, European colonization of the Americas, etc. The seemingly essential step in these atrocities could be the definition of the victims as "not human" or "not us". 
The atrocities committed throughout human history are thus claimed to have only been relieved, minimized, or overcome in their damaging effects through the presence of compassion, although recently, drawing on empirical research in evolutionary theory, developmental psychology, social neuroscience, and psychopathy, it has been counterargued that compassion or empathy and morality are neither systematically opposed to one another, nor inevitably complementary, since over the course of history, mankind has created social structures for upholding universal moral principles, such as Human Rights and the International Criminal Court. On one hand, Thomas Nagel, for instance, critiques Joshua Greene by suggesting that he is too quick to conclude utilitarianism specifically from the general goal of constructing an impartial morality; for example, he says, Immanuel Kant and John Rawls offer other impartial approaches to ethical questions. In his defense against the possible destructive nature of passions, Plato compared the human soul to a chariot: the intellect is the driver and the emotions are the horses, and life is a continual struggle to keep the emotions under control. In his defense of a solid universal morality, Immanuel Kant saw compassion as a weak and misguided sentiment. "Such benevolence is called soft-heartedness and should not occur at all among human beings", he said of it. Psychology Compassion has become associated with and researched in the fields of positive psychology and social psychology. Compassion is a process of connecting by identifying with another person. This identification with others through compassion can lead to increased motivation to do something in an effort to relieve the suffering of others. Compassion is an evolved function from the harmony of a three grid internal system: contentment-and-peace system, goals-and-drives system, and threat-and-safety system. Paul Gilbert defines these collectively as necessary regulated systems for compassion. Paul Ekman describes a "taxonomy of compassion" including: emotional recognition (knowing how another person feels), emotional resonance (feeling emotions another person feels), familial connection (care-giver-offspring), global compassion (extending compassion to everyone in the world), sentient compassion (extended compassion to other species), and heroic compassion (compassion that comes with a risk). Ekman also distinguishes proximal (i.e. in the moment) from distal compassion (i.e. predicting the future; affective forecasting): "...it has implications in terms of how we go about encouraging compassion. We are all familiar with proximal compassion: Someone falls down in the street, and we help him get up. That's proximal compassion: where we see someone in need, and we help them. But, when I used to tell my kids, 'Wear a helmet,' that's distal compassion: trying to prevent harm before it occurs. And that requires a different set of skills: It requires social forecasting, anticipating harm before it occurs, and trying to prevent it. Distal compassion is much more amenable to educational influences, I think, and it's our real hope." Distal compassion also requires perspective-taking. Compassion is associated with psychological outcomes including increases in mindfulness and emotion regulation. 
While empathy plays an important role in motivating caring for others and in guiding moral behavior, Jean Decety's research demonstrates that this is far from being systematic or irrespective to the social identity of the targets, interpersonal relationships, and social context. He proposes that empathic concern (compassion) has evolved to favor kin and members of one own social group, can bias social decision-making by valuing one single individual over a group of others, and this can frontally conflict with principles of fairness and justice. Compassion fatigue People with a higher capacity or responsibility to empathize with others may be at risk for "compassion fatigue", also called "secondary traumatic stress". Examples of people at risk for compassion fatigue are those who spend significant time responding to information related to suffering. However, newer research by Singer and Ricard suggests that it is lack of suitable distress tolerance that gets people fatigued from compassion activities. Individuals at risk for compassion fatigue usually display these four key attributes: diminished endurance and/or energy, declined empathic ability, helplessness and/or hopelessness, and emotional exhaustion. Negative coping skills can also increase the risk of developing compassion fatigue. People can alleviate sorrow and distress by doing self-care activities on a regular basis. helps to guide people to recognize the impact and circumstances of past events. After people , they are able to find the causes of compassion fatigue in their daily life. Practice of nonjudgmental compassion can prevent fatigue and burnout. Some methods that can help people to heal compassion fatigue include physical activity, eating healthy food with every meal, good relations with others, enjoying interacting with others in the community, writing a journal frequently, and sleeping enough every day. The practice of mindfulness and self-awareness also helps with compassion fatigue. Conditions that influence compassion Psychologist Paul Gilbert provides factors that can reduce the likelihood of someone being willing to be compassionate to another. These include (less): likability, competence, deservedness, empathic-capacity; (more) self-focused competitiveness, anxiety-depression, overwhelmed; and inhibitors in social structures and systems. Compassion fade Compassion fade is the tendency of people to experience a decrease in empathy as the number of people in need of aid increases. The term was coined by psychologist Paul Slovic. It is a type of cognitive bias that people use to justify their decision to help or not to help, and to ignore certain information. To turn compassion into compassionate behavior requires . In an examination of the motivated regulation of compassion in the context of large-scale crises, such as natural disasters and genocides, research established that people tend to feel more compassion for single identifiable victims than single anonymous victims or large masses of victims (the Identifiable victim effect). People only show less compassion for many victims than for single victims of disasters when they expect to incur a financial cost upon helping. This collapse of compassion depends on having the motivation and ability to regulate emotions. People are more apt to offer help to a certain number of needy people if that number is closer to the whole number of people in need. 
People feel more compassionate towards members of another species the more recently our species and theirs had a common ancestor. In laboratory research, psychologists are exploring how concerns about becoming emotionally exhausted may motivate people to curb their compassion for—and dehumanize—members of stigmatized social groups, such as homeless individuals and drug addicts. Neurobiology Olga Klimecki (et al.), found differential (non-overlapping) fMRI brain activation areas in respect to compassion and empathy: compassion was associated with the mOFC, pregenual ACC, and ventral striatum. Empathy, in contrast, was associated with the anterior insula and the anterior midcingulate cortex (aMCC). In one study conducted by James Rilling and Gregory Berns, neuroscientists at Emory University, subjects' brain activities were recorded while they helped someone in need. It was found that while the subjects were performing compassionate acts, the caudate nucleus and anterior cingulate regions of the brain were activated, the same areas of the brain associated with pleasure and reward. One brain region, the subgenual anterior cingulate cortex/basal forebrain, contributes to learning altruistic behavior, especially in those with trait empathy. The same study showed a connection between giving to charity and the promotion of social bonding and personal reputation. True compassion, if it exists at all, is thus inherently motivated (at least to some degree) by self-interest. In a 2009 small fMRI experiment, researchers at the Brain and Creativity Institute studied strong feelings of compassion for and physical pain in others. Both feelings involved an expected change in activity in the anterior insula, anterior cingulate, hypothalamus, and midbrain, but they also found a previously undescribed pattern of cortical activity on the posterior medial surface of each brain hemisphere, a region involved in the default mode of brain function, and implicated in . Compassion for social pain in others was associated with strong activation in the interoceptive, inferior/posterior portion of this region, while compassion for physical pain in others involved heightened activity in the exteroceptive, superior/anterior portion. Compassion for social pain activated this superior/anterior section, to a lesser extent. Activity in the anterior insula related to compassion for social pain peaked later and endured longer than that associated with compassion for physical pain. Compassionate emotions toward others affect the prefrontal cortex, inferior frontal cortex, and the midbrain. Feelings and acts of compassion stimulate areas known to regulate homeostasis, such as the anterior insula, the anterior cingulate, the mesencephalon, the insular cortex and the hypothalamus, supporting the hypothesis that social emotions use some of the same basic devices involved in other, primary emotions. Compassion in practice Medicine Compassion is one of the most important attributes for physicians practicing medical services. Compassion brings about the desire to do something to help the sufferer. That desire to be helpful is not compassion, but it does suggest that compassion is similar to other emotions in that it motivates behaviors to reduce the tension brought on by the emotion. Physicians generally identify their central duties as the responsibility to put the patient's interests first, including the duty not to harm, to deliver proper care, and to maintain confidentiality. 
Compassion is seen in each of those duties because of its direct relation to the recognition and treatment of suffering. Physicians who use compassion understand the effects of sickness and suffering on human behavior. Compassion may be closely related to love and the emotions evoked in sickness and suffering. This is illustrated by the relationship between patients and physicians in medical institutions. The relationship between suffering patients and their caregivers provides evidence that compassion is a social emotion that is the closeness and cooperation between individuals. Psychotherapy Compassion-focused therapy, created by clinical psychologist Professor Paul Gilbert, focuses on the evolutionary psychology behind compassion: balancing of affect regulation systems (e.g. using affiliative emotions from the care-and-contentment system to soothe and reduce painful emotions from the threat-detection system). Self-compassion Self-compassion is being kind to oneself and accepting suffering as a quality of being human. It has positive effects on subjective happiness, optimism, wisdom, curiosity, agreeableness, and extroversion. Kristin Neff and Christopher Germer identified three levels of activities that thwart self-compassion: self-criticism, self-isolation, and self-absorption; they equate this to fight, flight, and freeze responses. Parenting practices contribute to the development of self-compassion in children. Maternal support, secure attachment, and harmonious family functioning all create an environment where self-compassion can develop. On the other hand, certain developmental factors (i.e., personal fable) can hinder the development of self-compassion in children. Authentic leadership centered on humanism and on nourishing quality interconnectedness increase compassion in the workplace to self and others. Judith Jordan's concept of self-empathy is similar to self-compassion, it implies the capacity to notice, care, and respond towards one's own felt needs. Strategies of self-care involve valuing oneself, thinking about one's compassionately, and connecting with others in order renewal, support, and validation. Research indicates that self-compassionate individuals experience greater psychological health than those who lack self-compassion. Religion and philosophy Abrahamic religions Christianity The Christian Bible's Second Epistle to the Corinthians is but one place where God is spoken of as the "Father of mercies" (or "compassion") and the "God of all comfort." Jesus embodies the essence of compassion and relational care. Christ challenges Christians to forsake their own desires and to act compassionately towards others, particularly those in need or distress. One of his most well-known teachings about compassion is the Parable of the Good Samaritan (), in which a Samaritan traveler "was moved with compassion" at the sight of a man who was beaten. Jesus also demonstrated compassion to those his society had condemned—tax collectors, prostitutes, and criminals—by saying "just because you received a loaf of bread, does not mean you were more conscientious about it, or more caring about your fellow man". An interpretation of the incarnation and crucifixion of Jesus is that it was undertaken from a compassionate desire to feel the suffering of and effect the salvation of mankind; this was also a compassionate sacrifice by God of his own son ("For God so loved the world, that he gave his only begotten Son..."). 
A 2012 study of the historical Jesus claimed that he sought to elevate Judaic compassion as the supreme human virtue, capable of reducing suffering and fulfilling our God-ordained purpose of transforming the world into something more worthy of its creator. Islam In the Muslim tradition, foremost among God's attributes are mercy and compassion, or, in the canonical language of Arabic, and . Each of the 114 chapters of the Quran, with one exception, begins with the verse, "In the name of Allah the Compassionate, the Merciful." The Arabic word for compassion is . Its roots abound in the Quran. A good Muslim is to commence each day, each prayer, and each significant action by invoking Allah the Merciful and Compassionate, i.e., by reciting . The womb and family ties are characterized by compassion and named after the exalted attribute of Allah "" (The Compassionate). Judaism In the Jewish tradition, God is the Compassionate and is invoked as the Father of Compassion: hence or Compassionate becomes the usual designation for His revealed word. (Compare, above, the frequent use of in the Quran). Sorrow and pity for one in distress, creating a desire to relieve it, is a feeling ascribed alike to man and God: in Biblical Hebrew, (, from , the mother, womb), "to pity" or "to show mercy" in view of the sufferer's helplessness, hence also "to forgive" (), "to forbear" (; ; ). The Rabbis speak of the "thirteen attributes of compassion". The Biblical conception of compassion is the feeling of the parent for the child. Hence the prophet's appeal in confirmation of his trust in God invokes the feeling of a mother for her offspring (). A classic articulation of the Golden Rule came from the first century Rabbi Hillel the Elder. Renowned in the Jewish tradition as a sage and a scholar, he is associated with the development of the Mishnah and the Talmud and, as such, is one of the most important figures in Jewish history. Asked for a summary of the Jewish religion "while standing on one leg" (meaning in the most concise terms) Hillel stated: "That which is hateful to you, do not do to your fellow. That is the whole Torah. The rest is the explanation; go and learn." Post 9/11, the words of Rabbi Hillel are frequently quoted in public lectures and interviews around the world by the prominent writer on comparative religion Karen Armstrong. Many Jewish sources speak of the importance of compassion for and prohibitions on causing needless pain to animals. Significant rabbis who have done so include Rabbi Samson Raphael Hirsch Rabbi Simhah Zissel Ziv, and Rabbi Moshe Cordovero. Ancient Greek philosophy In ancient Greek philosophy motivations based on (feeling, passion) were typically distrusted. Reason was generally considered to be the proper guide to conduct. Compassion was considered ; hence, is depicted as blindfolded, because her virtue is dispassion — not compassion. Aristotle compared compassion with indignation and thought they were both worthy feelings: Compassion means being pained by another person's unearned misfortune; indignation means being pained by another's unearned good fortune. Both are an unhappy awareness of an unjust imbalance. Stoicism had a doctrine of rational compassion known as . In Roman society, compassion was often seen as a vice when it was expressed as pity rather than mercy. In other words, showing empathy toward someone who was seen as deserving was considered virtuous, whereas showing empathy to someone deemed unworthy was considered immoral and weak. 
Confucianism Mencius maintained that everyone possesses the germ or root of compassion, illustrating his case with the famous example of the child at an open well: "Suppose a man were, all of a sudden, to see a young child on the verge of falling into a well. He would certainly be moved to compassion, not because he wanted to get into the good graces of the parents, nor because he wished to win the praise of his fellow-villagers or friends, nor yet because he disliked the cry of the child". Mencius saw the task of moral cultivation as that of developing the initial impulse of compassion into an enduring quality of benevolence. Indian religions Buddhism The first of the Four Noble Truths is the truth of suffering or (unsatisfactoriness or stress). is one of the three distinguishing characteristics of all conditioned existence. It arises as a consequence of not understanding the nature of impermanence (the second characteristic) as well as a lack of understanding that all phenomena are empty of self (the third characteristic). When one has an understanding of suffering and its origins and understands that liberation from suffering is possible, renunciation arises. Renunciation then lays the foundation for the development of compassion for others who also suffer. This is developed in stages: Ordinary compassion The compassion we have for those close to us such as friends and family and a wish to free them from the 'suffering of suffering' Immeasurable compassion This is the compassion that wishes to benefit all beings without exception. It is associated with both the Hinayana and Mahayana paths. Great Compassion This is practiced exclusively in the Mahayana tradition and is associated with the development of Bodhicitta. The Bodhisattva Vow begins (in one version): "Suffering beings are numberless, I vow to liberate them all." The 14th Dalai Lama has said, "If you want others to be happy, practice compassion. If you want to be happy, practice compassion." But he also warned that compassion is difficult to develop: Hinduism In classical literature of Hinduism, compassion is a virtue with many shades, each shade explained by different terms. Three most common terms are (), (), and (). Other words related to compassion in Hinduism include , , and . Some of these words are used interchangeably among the schools of Hinduism to explain the concept of compassion, its sources, its consequences, and its nature. The virtue of compassion to all living beings, claims Gandhi and others, is a central concept in Hindu philosophy. is defined by Padma Purana as the virtuous desire to mitigate the sorrow and difficulties of others by putting forth whatever effort necessary. Matsya Purana describes as the value that treats all living beings (including human beings) as one's own self, wanting the welfare and good of the other living being. Such compassion, claims Matsya Purana, is one of necessary paths to being happy. Ekadashi Tattvam explains is treating a stranger, a relative, a friend, and a foe as one's own self; and argues that compassion is that state when one sees all living beings as part of one's own self, and when everyone's suffering is seen as one's own suffering. Compassion to all living beings, including to those who are strangers and those who are foes, is seen as a noble virtue. , another word for compassion in Hindu philosophy, means placing one's mind in other's favor, thereby seeking to understand the best way to help alleviate their suffering through an act of (compassion). 
, yet another word for compassion, refers to one's state after one has observed and understood the pain and suffering in others. In Mahabharata, Indra praises Yudhishthira for his – compassion, sympathy – for all creatures. Tulsidas contrasts (compassion) with (arrogance, contempt of others), claiming compassion is a source of dharmic life, while arrogance a source of sin. (compassion) is not (pity) in Hinduism, or feeling sorry for the sufferer, because that is marred with condescension; compassion is recognizing one's own and another's suffering in order to actively alleviate that suffering. Compassion is the basis for , a core virtue in Hindu philosophy and an article of everyday faith and practice. , or non-injury, is compassion-in-action that helps actively prevent suffering in all living things as well as helping beings overcome suffering and move closer to liberation. Compassion in Hinduism is discussed as an absolute and a relative concept. There are two forms of compassion: one for those who suffer even though they have done nothing wrong and one for those who suffer because they did something wrong. Absolute compassion applies to both, while relative compassion addresses the difference between the former and the latter. An example of the latter include those who plead guilty or are convicted of a crime such as murder; in these cases, the virtue of compassion must be balanced with the virtue of justice. The classical literature of Hinduism exists in many Indian languages. For example, Tirukkuṛaḷ, written between and , and sometimes called the Tamil Veda, is a cherished classic on Hinduism written in a South Indian language. It dedicates Chapter 25 of Book 1 to compassion, further dedicating separate chapters each for the resulting values of compassion, chiefly, vegetarianism or veganism (Chapter 26), doing no harm (Chapter 32), non-killing (Chapter 33), possession of kindness (Chapter 8), dreading evil deeds (Chapter 21), benignity (Chapter 58), the right scepter (Chapter 55), and absence of terrorism (Chapter 57), to name a few. Jainism Compassion for all life, human and non-human, is central to the Jain tradition. Though all life is considered sacred, human life is deemed the highest form of earthly existence. To kill any person, no matter their crime, is considered unimaginably abhorrent. It is the only substantial religious tradition that requires both monks and laity to be vegetarian. It is suggested that certain strains of the Hindu tradition became vegetarian due to strong Jain influences. The Jain tradition's stance on nonviolence, however, goes far beyond vegetarianism. Jains refuse food obtained with unnecessary cruelty. Many practice veganism. Jains run animal shelters all over India. The Lal Mandir, a prominent Jain temple in Delhi, is known for the Jain Birds Hospital in a second building behind the main temple. See also References External links Skalski, J. E., & Aanstoos, C. (2023). The Phenomenology of change beyond tolerating. Journal of Humanistic Psychology, 63(5), 660–681. Mirrored emotion Jean Decety, University of Chicago Daniel Goleman, psychologist & author of Emotional Intelligence, video lecture on compassion Concepts in ethics Emotions Giving Kindness Moral psychology Relational ethics Religious ethics Social emotions Suffering Virtue Love
Compassion
[ "Biology" ]
6,132
[ "Behavior", "Human behavior", "Kindness" ]
327,511
https://en.wikipedia.org/wiki/Electrical%20efficiency
The efficiency of a system in electronics and electrical engineering is defined as useful power output divided by the total electrical power consumed (a fractional expression), typically denoted by the Greek small letter eta (η – ήτα). If energy output and input are expressed in the same units, efficiency is a dimensionless number. Where it is not customary or convenient to represent input and output energy in the same units, efficiency-like quantities have units associated with them. For example, the heat rate of a fossil fuel power plant may be expressed in BTU per kilowatt-hour. Luminous efficacy of a light source expresses the amount of visible light for a certain amount of power transfer and has the units of lumens per watt. Efficiency of typical electrical devices Efficiency should not be confused with effectiveness: a system that wastes most of its input power but produces exactly what it is meant to is effective but not efficient. The term "efficiency" makes sense only in reference to the wanted effect. A light bulb, for example, might have 2% efficiency at emitting light yet still be 98% efficient at heating a room (in practice it is nearly 100% efficient at heating a room because the light energy will also be converted to heat eventually, apart from the small fraction that leaves through the windows). An electronic amplifier that delivers 10 watts of power to its load (e.g., a loudspeaker) while drawing 20 watts of power from a power source is 50% efficient (10/20 × 100 = 50%). Electric kettle: more than 90% (comparatively little heat energy is lost during the 2 to 3 minutes a kettle takes to boil water). A premium efficiency electric motor: more than 90% (see premium efficiency). A large power transformer used in the electrical grid may have efficiency of more than 99%. Early transformers were much less efficient, wasting up to a third of the energy passing through them. A steam power plant used to generate electricity may have 30-40% efficiency. Efficiency of devices at point of maximum power transfer As a result of the maximum power theorem, devices transfer maximum power to a load when running at 50% electrical efficiency. This occurs when the load resistance (of the device in question) is equal to the internal Thevenin equivalent resistance of the power source. This is valid only for non-reactive source and load impedances. Efficiency of light bulbs An incandescent lamp converts only a few percent of its electrical input into visible light, while fluorescent and LED lamps reach substantially higher luminous efficiencies; see luminous efficacy for figures expressed in lumens per watt. Discussion High efficiency is particularly relevant in systems that can operate from batteries. Inefficiency may require weighing the cost either of the wasted energy, or of the required power supply, against the cost of attaining greater efficiency. Efficiency can usually be improved by choosing different components or by redesigning the system. Inefficiency typically produces extra heat within the system, which must be removed if it is to remain within its operating temperature range. In a climate-controlled environment, like a home or office, heat generated by appliances may reduce heating costs or increase air conditioning costs. Impedance bridging connections have a load impedance much larger than the source, which helps transfer voltage signals at high electrical efficiency.
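As a rough illustration of the definitions above, the following Python sketch (the function names and chosen values are illustrative assumptions, not from the article) computes efficiency for the amplifier example and shows how, for a purely resistive source and load, maximum power transfer occurs at 50% efficiency while larger load resistances give higher efficiency but less delivered power.

```python
def efficiency(p_out_watts, p_in_watts):
    """Electrical efficiency as a fraction: useful output power / total input power."""
    return p_out_watts / p_in_watts

# Amplifier example from the text: 10 W delivered to the load, 20 W drawn from the supply.
print(efficiency(10, 20))   # 0.5, i.e. 50%

# Maximum power transfer vs. efficiency for a source with internal (Thevenin) resistance r,
# driving a resistive load R_load from an ideal EMF V.
def load_power_and_efficiency(V, r, R_load):
    current = V / (r + R_load)
    p_load = current**2 * R_load
    eta = R_load / (r + R_load)           # fraction of total power dissipated in the load
    return p_load, eta

for R_load in (0.5, 1.0, 2.0, 10.0):      # internal resistance r = 1 ohm below
    p, eta = load_power_and_efficiency(12.0, 1.0, R_load)
    print(f"R_load = {R_load:>4} ohm: P_load = {p:6.2f} W, efficiency = {eta:.0%}")
# Delivered power peaks at R_load = r (here 36 W at 1 ohm), where efficiency is exactly 50%.
```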
See also Antenna efficiency Efficient energy use Energy conversion efficiency Index of electronics articles Maximum power transfer theorem Mechanical efficiency Performance per watt Thermal efficiency References External links Conversion: Energy efficiency in percent of passive loudspeakers to sensitivity in dB per watt and meter 4E - International Energy Agency Implementing Agreement to promote energy efficiency and standards for electrical products worldwide Load Power Sources for Peak Efficiency, EDN 1979 October 5 System for the Peak Efficiency detection, IEEE TPEL Electrical engineering Energy economics de:Wirkungsgrad ru:Коэффициент полезного действия
Electrical efficiency
[ "Engineering", "Environmental_science" ]
767
[ "Electrical engineering", "Energy economics", "Environmental social science" ]
327,567
https://en.wikipedia.org/wiki/Versatile%20Real-Time%20Executive
Versatile Real-Time Executive (VRTX) is a real-time operating system (RTOS) developed and marketed by the company Mentor Graphics. VRTX is suitable for both traditional board-based embedded systems and system on a chip (SoC) architectures. It has been superseded by the Nucleus RTOS. History The VRTX operating system began as a product of Hunter & Ready, a company founded by James Ready and Colin Hunter in 1980 which later became Ready Systems. This firm later merged with Microtec Research in 1993, and went public in 1994. This firm was then acquired by Mentor Graphics in 1995 and VRTX became a Mentor product. The VRTX operating system was released in September 1981. Since the 1980s, the chief rival to VRTX has been VxWorks, a Wind River Systems product. VxWorks had its start in the mid 1980s as compiler and assembly language tools to supplement VRTX, named VRTX works, or VxWorks. Later, Wind River created their own real-time kernel offering similar to VRTX. VRTX VRTX comes in several flavors: VRTX: 16-bit VRTX, for Z8000, 8086, etc. VRTX-32: 32-bit VRTX, for M68K, AMD29K, etc. MPV: Multiprocessor VRTX for distributed applications, such as distributed across VME backplanes. VRTX-mc: Micro-Controller VRTX, for small systems needing minimal memory use. VRTX-oc: On-chip VRTX, freeware community source code for personal and academic use, license required for commercial use. VRTX-sa: Scalable Architecture VRTX for full operating system features. Loosely based on Carnegie Mellon University's Mach microkernel principles. SPECTRA: Virtual machine (VM) implementation for running a VRTX VM on Unix-like hosts. Also includes an open integrated development environment allowing third-party tools open access to cross-development resources. Most companies developing software with VRTX use reduced instruction set computer (RISC) microprocessors including ARM, MIPS, PowerPC, or others. Implementations VRTX runs the Hubble Space Telescope. VRTX runs the Wide Area Augmentation System. VRTX was the first operating system ported to the AMD Am29000. VRTX is used as a core for the Motorola proprietary operating system, which runs on most company devices since the Motorola V60 and T280i, up to the Motorola RAZR2 V9x. It runs on several hardware platforms including LTE (Motorola V300, V500, V600, E398, RAZR V3 and others featuring the ARM7 processor), LTE2 (Motorola L7 and upcoming devices with 176x220 screen resolution), Rainbow POG (3G phones featuring an MCORE processor from Motorola E1000 to RAZR V3x), Argon (all new 3G phones with 532 MHz ARM11 processor since Motorola RAZR MAXX V6, and V3xx), and others. See also List of telescope parts and construction Xenomai is a real-time development software framework cooperating with the Linux kernel. It could be used to port the VRTX based system to Linux although not all features are supported. VRTX has reached end-of-life, an automated porting tool named OS Changer is available to reuse the code on a modern OS. References External links , Mentor Graphics ARM operating systems Embedded operating systems Proprietary operating systems Real-time operating systems Microkernel-based operating systems Microkernels X86 operating systems MIPS operating systems
Versatile Real-Time Executive
[ "Technology" ]
784
[ "Real-time computing", "Real-time operating systems" ]
327,601
https://en.wikipedia.org/wiki/Sybian
A Sybian (), or Sybian saddle, is a type of masturbation device, consisting of a hollow saddle-like seat containing two electric motors, motor speed controller boards, gearing, pulleys, and a platform on cranked axles such that a ridge on the top of the unit can be made to vibrate through a range of speeds as set using a wired external hand controller, and an upward-pointing shaft set on an angle through the ridge can be made to rotate at speeds of up to several hundred revolutions per minute, again by use of the wired remote control. Flexible molded attachments are usually supplied, fitting over the vibrating ridge and shaft which mostly have integrated dildos on their top. In use, the rider inserts the dildo into a body orifice for internal stimulation while applying pressure on the vibrating ridge with their external erogenous parts. Development According to its inventor, Dave Lampert, the Sybian was first conceived in the 1970s and developed in 1983. A prototype was built in 1985 from sheet metal mounted on a wooden frame with a vibrator projecting through an opening inside the housing; a second prototype became the basis for current production models. Lampert and his team initially called the device Master Better, shortened to "MB", for about four years before selecting a new name for it. The prefix syb in Sybian was derived from Sybaris, an Ancient Greek city in southern Italy which was associated with luxurious living. It is currently manufactured by Abco Research Associates in Monticello, Illinois. From research and experience, Lampert theorized that the woman on top position during intercourse works best for female orgasm and that stimulation is enhanced when the penis remains fully inserted and the female partner rocks her pelvis forward and backward, making contact with the sensitive G-spot located on the front wall of the vagina. In 2016, Lampert was honored with an AVN lifetime achievement award for his invention. He died in July 2021 at the age of 90. Specifications The Sybian typically weighs around , and usually measures wide, long, and tall excluding the rubber attachment with an power cord. The unit is designed to work with or without a penetration attachment. When in operation, the Sybian distributes vibrations externally along the user's pelvic floor, including clitoral glans, the introitus to their vagina and cavity of the anus. The optional penetration attachments rotate using a 20:1 ratio gear motor providing that can vary from 0 to 120 rpm and vibration is produced using a electric motor that may be controlled from 0 to 6500 rpm. Power and vibration/rotation are controlled with a remote control. The Sybian can hold over of weight. It has a padded naugahyde cover. The casing has built-in finger grips for carrying. Generally, the user of the Sybian straddles the machine and administers the dildo attachment, inserting it into the vagina or the anus. The vibration and rotation can be controlled by separate on/off switches and two rotary controls. The Sybian is typically sold with multiple attachments of varying sizes made from synthetic rubber in various shapes. Publicity After its release in 1987, the Sybian was featured on the cover of that December's issue of Penthouse Forum. It made its first pornographic video appearance in Orgasmatic (1998), which received an AVN award nomination for "most outrageous sex scene" for Ruby the Original's performance with the device. 
Although the Sybian was featured in many pornographic video productions beginning in the early 2000s, primarily on the Internet, the device came to prominence on The Howard Stern Show after that show's switch to Sirius Satellite Radio. References External links Official website American inventions Articles containing video clips Female sex toys Machine sex Products introduced in 1987 Vibrators
Sybian
[ "Physics", "Technology" ]
779
[ "Physical systems", "Machines", "Machine sex" ]
327,612
https://en.wikipedia.org/wiki/RadioShack
RadioShack (formerly written as Radio Shack) is an American electronics retailer that was established in 1921 as an amateur radio mail-order business. Its original parent company, Radio Shack Corporation, was purchased by Tandy Corporation in 1962, shifting its focus from radio equipment to hobbyist electronic components sold in retail stores. At its peak in 1999, Tandy operated over 8,000 RadioShack stores in the United States, Mexico, and Canada, and under the Tandy name in The Netherlands, Belgium, Germany, France, the United Kingdom, and Australia. The 21st century proved to be a period of gradual decline. In February 2015, after years of management crises, poor worker relations, diminished revenue, and 11 consecutive quarterly losses, RadioShack was delisted from the New York Stock Exchange and subsequently filed for Chapter 11 bankruptcy. In May 2015, the company's assets, including the RadioShack brand name and related intellectual property, were purchased by General Wireless, a subsidiary of Standard General, for US$26.2 million. In March 2017, General Wireless and subsidiaries filed for bankruptcy, claiming that a store-within-a-store partnership with Sprint was not as profitable as expected. As a result, RadioShack shuttered several company-owned stores and announced plans to shift its business primarily online. RadioShack was acquired by Retail Ecommerce Ventures, a holding company owned by Alex Mehr and self-help influencer Tai Lopez, in November 2020. RadioShack operated primarily as an e-commerce website with a network of independently owned and franchised RadioShack stores, as well as a supplier of parts for HobbyTown USA. In May 2023, Unicomer Group acquired control of the worldwide RadioShack franchise. Unicomer is based in El Salvador and is one of the largest franchisors of RadioShack, with stores in Central America, South America, and the Caribbean. It had purchased its first RadioShack franchise (in El Salvador) in January 1998. History The first 40 years The company was started as Radio Shack in 1921 by two brothers, Theodore and Milton Deutschmann, who wanted to provide equipment for the new field of amateur radio (also known as ham radio). The brothers opened a one-store retail and mail-order operation in the heart of downtown Boston at 46 Brattle Street. They chose the name "Radio Shack", which was the term for a small, wooden structure that housed a ship's radio equipment. The Deutschmanns thought the name was appropriate for a store that would supply the needs of radio officers aboard ships, as well as hams (amateur radio operators). The idea for the name came from an employee, Bill Halligan, who went on to form the Hallicrafters company. The term was already in use — and is to this day — by hams when referring to the location of their stations. The company issued its first catalog in 1939 as it entered the high-fidelity music market. In 1954, Radio Shack began selling its own private-label products under the brand name Realist, changing the brand name to Realistic after being sued by Stereo Realist. During the period the chain was based in Boston, it was commonly referred to disparagingly by its customers as "Nagasaki Hardware", as much of the merchandise was sourced from Japan, then perceived as a source of low-quality, inexpensive parts. In 1959, the store moved its headquarters to 730 Commonwealth Avenue in Boston (across the street from Boston University's Marsh Chapel), with ambitious plans for further expansion. 
After expanding to nine stores plus an extensive mail-order business, the company fell on hard times in the early 1960s. Tandy Corporation Tandy Corporation, a leather goods corporation, was looking for other hobbyist-related businesses into which it could expand. Charles D. Tandy saw the potential of Radio Shack and retail consumer electronics, purchasing the company in 1962 for US$300,000. At the time of the Tandy Radio Shack & Leather 1962 acquisition, the Radio Shack chain was nearly bankrupt. Tandy's strategy was to appeal to hobbyists. It created small stores that were staffed by people who knew electronics, and sold mainly private brands. Tandy closed Radio Shack's unprofitable mail-order business, ended credit purchases and eliminated many top management positions, keeping the salespeople, merchandisers and advertisers. The number of items carried was cut from 40,000 to 2,500, as Tandy sought to "identify the 20% that represents 80% of the sales" and replace Radio Shack's handful of large stores with many "little holes in the wall", large numbers of rented locations which were easier to close and re-open elsewhere if one location didn't work out. Private-label brands from lower-cost manufacturers displaced name brands to raise Radio Shack profit margins; non-electronic lines from go-carts to musical instruments were abandoned entirely. Customer data from the former RadioShack mail-order business determined where Tandy would locate new stores. As an incentive for them to work long hours and remain profitable, store managers were required to take an ownership stake in their stores. In markets too small to support a company-owned Radio Shack store, the chain relied on independent dealers who carried the products as a sideline. Charles D. Tandy said "We’re not looking for the guy who wants to spend his entire paycheck on a sound system", instead seeking customers "looking to save money by buying cheaper goods and improving them through modifications and accessorizing", making it common among "nerds" and "kids aiming to excel at their science fairs". Charles D. Tandy, who had guided the firm through a period of growth in the 1960s and 1970s, died of a heart attack at age 60 in November 1978. In 1982, the breakup of the Bell System encouraged subscribers to own their own telephones instead of renting them from local phone companies; Radio Shack offered twenty models of home phones. Much of the Radio Shack line was manufactured in the company's own factories. By 1990/1991, Tandy was the world's biggest manufacturer of personal computers; its OEM manufacturing capacity was building hardware for Digital Equipment Corporation, GRiD, Olivetti, AST Computer, Panasonic, and others. The company manufactured everything from store fixtures to computer software to wire and cable, TV antennas, audio and videotape. At one point, Radio Shack was the world's largest electronics chain. In June 1991, Tandy closed or restructured its 200 Radio Shack Computer Centers, acquired Computer City, and attempted to shift its emphasis away from components and cables, toward mainstream consumer electronics. Tandy sold its computer manufacturing to AST Research in 1993, including the laptop computer Grid Systems Corporation which it had purchased in 1988. It sold the Memorex consumer recording trademarks to a Hong Kong firm, and divested most of its manufacturing divisions. 
House-brand products, which Radio Shack had long marked up heavily, were replaced with third-party brands already readily available from competitors. This reduced profit margins. In 1992, Tandy attempted to launch big-box electronics retailer Incredible Universe; most of the seventeen stores never turned a profit. Its six profitable stores were sold to Fry's Electronics in 1996; the others were closed. Other rebranding attempts included the launch or acquisition of chains including McDuff, Video Concepts and the Edge in Electronics; these were larger stores which carried TVs, appliances and other lines. Tandy closed the McDuff stores and abandoned Incredible Universe in 1996, but continued to add new RadioShack stores. By 1996, industrial parts suppliers were deploying e-commerce to sell a wide range of components online; it would be another decade before RadioShack would sell parts from its website, with a selection so limited that it was no rival to established industrial vendors with million-item specialised, centralised inventories. In 1994, the company introduced a service known as "The Repair Shop at Radio Shack", through which it provided inexpensive out-of-warranty repairs for more than 45 different brands of electronic equipment. The company already had over one million parts in its extensive parts warehouses and 128 service centers throughout the US and Canada; it hoped to leverage these to build customer relationships and increase store traffic. Len Roberts, president of the Radio Shack division since 1993, estimated that the new repair business could generate $500 million per year by 1999. "America's technology store" was abandoned for the "you've got questions, we've got answers" slogan in 1994. In early summer 1995, the company changed its logo; "Radio Shack" was spelled in camel case as "RadioShack". In 1996, RadioShack successfully petitioned the US Federal Communications Commission to allocate frequencies for the Family Radio Service, a short-range walkie-talkie system that proved popular. Battery of the Month From the 1960s until the early 1990s, Radio Shack promoted a "battery of the month" club; a free wallet-sized cardboard card offered one free Enercell per month in-store. Like the free vacuum tube testing offered in-store in the early 1970s, this small loss leader drew foot traffic. The cards also served as generic business cards for the salespeople. Allied Radio In 1970, Tandy Corporation bought Allied Radio Corporation (both retail and industrial divisions), merging the brands into Allied Radio Shack and closing duplicate locations. After a 1973 federal government review, the company sold off the few remaining Allied retail stores and resumed using the Radio Shack name. Allied Electronics, the firm's industrial component operation, continued as a Tandy division until it was sold to Spartan Manufacturing in 1981. Flavoradio The longest-running product for Radio Shack was the AM-only Realistic Flavoradio, sold from 1972 to 2000, 28 years in three designs. This also made the Flavoradio the longest production run in radio history. It was originally released in five colors in the 1972 catalog: vanilla, chocolate, strawberry, avocado, and plum. For 1973, vanilla and chocolate were dropped (and thus are rare today) and replaced by lemon and orange. At some point two-tone models with white backs were offered but never appeared in catalogs; these are extremely rare today. The original design had five transistors (model 166). A sixth was added in 1980 (model 166a). 
The case was redesigned for 1987, making it taller and thinner, and it came in red, blue, and black. The final model, 201a, came in 1996 and was designed around an integrated circuit. They were first made in Korea then Hong Kong and finally the Philippines. The Flavoradio carried the Realistic name until about 1996, when it switched to "Radio Shack", then finally "Optimus". When the Flavoradio was dropped from the catalog in 2001, it was the last AM-only radio on the market. CB radio The chain profited from the mass popularity of citizens band radio in the mid-1970s which, at its peak, represented nearly 30% of the chain's revenue. Home computers In 1977, two years after the MITS Altair 8800, Radio Shack introduced the TRS-80, one of the first mass-produced personal computers. This was a complete pre-assembled system at a time when many microcomputers were built from kits, backed by a nationwide retail chain when computer stores were in their infancy. Sales of the initial, primitive US$600 (equal to $ today) TRS-80 exceeded all expectations despite its limited capabilities and high price. This was followed by the TRS-80 Color Computer in 1980, designed to attach to a television. Tandy also inspired the Tandy Computer Whiz Kids (1982–1991), a comic-book duo of teen calculator enthusiasts who teamed up with the likes of Archie and Superman. Radio Shack's computer stores offered lessons to pre-teens as "Radio Shack Computer Camp" in the early 1980s. By September 1982, the company had more than 4,300 stores, and more than 2,000 independent franchises in towns not large enough for a company-owned store. The latter also sold third-party hardware and software for Tandy computers, but company-owned stores did not sell or even acknowledge the existence of non-Tandy products. In the mid-1980s, Radio Shack began a transition from its proprietary 8-bit computers to its proprietary IBM PC compatible Tandy computers, removing the "Radio Shack" name from the product in an attempt to shake off the long-running nicknames "Radio Scrap" and "Trash 80" to make the product appeal to business users. Poor compatibility, shrinking margins and a lack of economies of scale led Radio Shack to exit the computer-manufacturing market in the 1990s after losing much of the desktop PC market to newer, price-competitive rivals like Dell. Tandy acquired the Computer City chain in 1991, and sold the stores to CompUSA in 1998. In 1994, RadioShack began selling IBM's Aptiva line of home computers. This partnership would last until 1998, when RadioShack partnered with Compaq and created 'The Creative Learning Center' as a store-within-a-store to promote desktop PCs. Similar promotions were tried with 'The Sprint Store at RadioShack' (mobile telephones), 'RCA Digital Entertainment Center' (home audio and video products), and 'PowerZone' (RadioShack's line of battery products, power supplies, and surge protectors). RadioShack Corporation In the mid-1990s, the company attempted to move out of small components and into more mainstream consumer markets, focusing on marketing wireless phones. This placed the chain, long accustomed to charging wide margins on specialized products not readily available from other local retailers, into direct competition against vendors such as Best Buy and Walmart. In May 2000, the company dropped the Tandy name altogether, becoming RadioShack Corporation. The leather operating assets were sold to The Leather Factory on November 30, 2000; that business remains profitable. 
House brands Realistic and Optimus were discontinued. In 1999, the company agreed to carry RCA products in a five-year agreement for a "RCA Digital Entertainment Center" store-within-a-store. When the RCA contract ended, RadioShack introduced its own Presidian and Accurian brands, reviving the Optimus brand in 2005 for some low-end products. Enercell, a house brand for dry cell batteries, remained in use until approximately 2014. Most of the RadioShack house brands had been dropped when Tandy divested its manufacturing facilities in the early 1990s; the original list included: Realistic (stereo, hi-fi and radio), Archer (antenna rotors and boosters), Micronta (test equipment), Tandy (computers), TRS-80 (proprietary computer), ScienceFair (kits), DuoFone (landline telephony), Concertmate (music synthesizer), Enercell (cells and batteries), Road Patrol (radar detectors, bicycle radios), Patrolman (Realistic radio scanner), Deskmate (software), KitchenMate, Stereo Shack, Supertape (recording tape), Mach One, Optimus (speakers and turntables), Flavoradio (pocket AM radios in various colours), Weatheradio, Portavision (small televisions) and Minimus (speakers). In 2000, RadioShack was one of multiple backers of the CueCat barcode reader, which soon turned out to be a marketing failure. The company had invested US$35 million in the concept, including printing the barcodes throughout its catalogs, and distributing CueCat devices to customers at no charge. The last annual RadioShack printed catalogs were distributed to the public in 2003. Until 2004, RadioShack routinely asked for the name and address of purchasers so they could be added to mailing lists. Name and mailing address were requested for special orders (RadioShack Unlimited parts and accessories, Direc2U items not stocked locally), returns, check payments, RadioShack Answers Plus credit card applications, service plan purchases and carrier activations of cellular telephones. On December 20, 2005, RadioShack announced the sale of its newly built riverfront Fort Worth, Texas headquarters building to German-based KanAm Grund; the property was leased back to RadioShack for 20 years. In 2008, RadioShack assigned this lease to the Tarrant County College District (TCC), remaining in of the space as its headquarters. In 2005, RadioShack parted with Verizon for a 10-year agreement with Cingular (later AT&T) and renegotiated its 11-year agreement with Sprint. In July 2011, RadioShack ended its wireless partnership with T-Mobile, replacing it with the "Verizon Wireless Store" within a store. 2005 under the leadership of Jim Hamilton, marked a banner year for wireless. RadioShack sold more mobile phones than Walmart, Circuit City and Best Buy combined. RadioShack had not made products under the Realistic name since the early 1990s. Support for many of Radio Shack's traditional product lines, including amateur radio, had ended by 2006. A handful of small-town franchise dealers used their ability to carry non-RadioShack merchandise to bring in parts from outside sources, but these represented a minority. PointMobl and "The Shack" In mid-December 2008, RadioShack opened three concept stores under the name "PointMobl" to sell wireless phones and service, netbooks, iPod and GPS navigation devices. The three Texas stores (Dallas, Highland Village and Allen) were furnished with white fixtures like those in the remodelled wireless departments of individual RadioShack stores, but there was no communicated relationship to RadioShack itself. 
Had the test proved successful, RadioShack could have moved to convert existing RadioShack locations into PointMobl stores in certain markets. While some PointMobl products, such as car power adapters and phone cases, were carried as store-brand products in RadioShack stores, the stand-alone PointMobl stores were closed and the concept abandoned in March 2011. In August 2009, RadioShack rebranded itself as "The Shack". The campaign increased sales of mobile products, but at the expense of its core components business. RadioShack aggressively promoted Dish Network subscriptions. In November 2012, RadioShack introduced Amazon Locker parcel pick-up services at its stores, only to dump the program in September 2013. In 2013, the chain made token attempts to regain the do it yourself market, including a new "Do It Together" slogan. Long-time staff observed a slow and gradual shift away from electronic parts and customer service and toward promotion of wireless sales and add-ons; the pressure to sell gradually increased, while the focus on training and product knowledge decreased. Morale was abysmal; longtime employees who were paid bonus and retirement in stock options saw the value of these instruments fade away. Financial decline In 1998, RadioShack called itself the single largest seller of consumer telecommunications products in the world; its stock reached its peak a year later. InterTAN, a former Tandy subsidiary, sold the Tandy UK stores in 1999 and the Australian stores in 2001. InterTAN was sold (with its Canadian stores) to rival Circuit City in 2004. The RadioShack brand remained in use in the United States, but the 21st century proved a period of long decline for the chain, which was slow to respond to key trends— such as e-commerce, the entry of competitors like Best Buy and Amazon.com, and the growth of the maker movement. By 2011, smartphone sales, rather than general electronics, accounted for half of the chain's revenue. The traditional RadioShack clientele of do-it-yourself tinkerers were increasingly sidelined. Electronic parts formerly stocked in stores were now mostly only available through on-line special order. Store employees concentrated efforts selling profitable mobile contracts, while other customers seeking assistance were neglected and left the stores in frustration. Demand for consumer electronics was also increasingly being weakened by consumers buying the items online. 2004: "Fix 1500" initiative In early 2004, RadioShack introduced Fix 1500, a sweeping program to "correct" inventory and profitability issues company-wide. The program put the 1,500 lowest-graded store managers, of over 5,000, on notice of the need to improve. Managers were graded not on tangible store and personnel data but on one-on-one interviews with district management. Typically, a 90-day period was given for the manager to improve (thus causing another manager to then be selected for Fix 1500). A total of 1,734 store managers were reassigned as sales associates or terminated in a 6-month period. Also, during this period, RadioShack cancelled the employee stock purchase plan. By the first quarter of 2005, the metrics of skill assessment used during Fix 1500 had already been discarded, and the corporate officer who created the program had resigned. In 2004, RadioShack was the target of a class-action lawsuit in which more than 3,300 current or former RadioShack managers alleged the company required them to work long hours without overtime pay. 
In an attempt to suppress the news, the company launched a successful strategic lawsuit against public participation against Bradley D. Jones, the webmaster of RadioShackSucks.com and a former RadioShack dealer for 17 years. 2006: Management problems On February 20, 2006, CEO David Edmondson admitted to "misstatements" on his curriculum vitae and resigned after the Fort Worth Star-Telegram debunked his claim to degrees in theology and psychology from Heartland Baptist Bible College. Chief operating officer Claire Babrowski briefly took over as CEO and president. A 31-year veteran of McDonald's Corporation, where she had been vice president and Chief Restaurant Operations Officer, Babrowski had joined RadioShack several months prior. She left the company in August 2006, later becoming CEO and Executive Vice President of Toys "R" Us. RadioShack's board of directors appointed Julian C. Day as chairman and chief executive officer on July 7, 2006. Day had financial experience and had played a key role in revitalizing such companies as Safeway, Sears and Kmart but lacked any practical front-line sales experience needed to run a retail company. The Consumerist named him one of the "10 Crappiest CEOs" of 2009 (among consumer-facing companies, according to their own employees). He resigned in May 2011. RadioShack Chief Financial Officer James Gooch succeeded Day as CEO in 2011, but "agreed to step down" 16 months later following a 73% plunge in the price of the stock. On February 11, 2013, RadioShack Corp. hired Joseph C. Magnacca from Walgreens, because he had experience in retail. 2006: Corporate layoffs and new strategy In the spring of 2006, RadioShack announced a strategy to increase average unit volume, lower overhead costs, and grow profitable square footage. In early to mid-2006, RadioShack closed nearly 500 locations. It was determined that some stores were too close to each other, causing them to compete with one another for the same customers. Most of the stores closed in 2006 brought in less than US$350,000 in revenue each year. Despite these actions, stock prices plummeted within what was otherwise a booming market. On August 10, 2006, RadioShack announced plans to eliminate a fifth of its company headquarters workforce to reduce overhead expense, improving its long-term competitive position while supporting a significantly smaller number of stores. On Tuesday, August 29, the affected workers received an e-mail: "The work force reduction notification is currently in progress. Unfortunately your position is one that has been eliminated." Four hundred and three workers were given 30 minutes to collect their personal effects, say their goodbyes to co-workers and then attend a meeting with their senior supervisors. Instead of issuing severance payments immediately, the company withheld them to ensure that company-issued BlackBerrys, laptops and cellphones were returned. This move drew immediate widespread public criticism for its lack of sensitivity. 2009: Customer relations problems RadioShack and the Better Business Bureau of Fort Worth, Texas, met on April 23, 2009, to discuss unanswered and unresolved complaints. The company implemented a plan of action to address existing and future customer service issues. Stores were directed to post a sign with the district manager's name, the question "How Are We Doing?" and a direct toll-free number to the individual district office for their area. 
RadioShackHelp.com was created as another portal for customers to resolve their issues through the Internet. The BBB subsequently upgraded RadioShack from an "F" to an "A" rating; this was changed to "no rating" after the 2015 bankruptcy filing. According to an experience ratings report published by Temkin Group, an independent research firm, RadioShack was ranked as the retailer with the worst overall customer experience; it maintained this position for six consecutive years. 2012–2014: Financial distress From 2000 to 2011, RadioShack spent US$2.6 billion repurchasing its own stock in an attempt to prop up a share price which fell from US$24.33 to US$2.53; the buyback and the stock dividend were suspended in 2012 to conserve cash and reduce debt as the company continued to lose money. Company stock had declined 81 percent since 2010 and was trading well below book value. The stock reached an all-time low on April 14, 2012. In September 2012, RadioShack's head office laid off 130 workers after a US$21 million quarterly loss. Layoffs continued in August 2013; headquarters employment dropped from more than 2,000 before the 2006 layoffs to slightly fewer than 1,000 in late 2013. At the end of 2013, the chain owned 4,297 US stores. The company had received a cash infusion in 2013 from Salus Capital Partners and Cerberus Capital Management. This debt carried onerous conditions, preventing RadioShack from gaining control over costs by limiting store closures to 200 per year and restricting the company's refinancing efforts. With too many underperforming stores remaining open, the chain continued to spiral toward bankruptcy. On March 4, 2014, the company announced a net trading loss for 2013 of US$400.2 million, well above the 2012 loss of US$139.4 million, and proposed a restructuring which would close 1,100 lower-performing stores, almost 20% of its US locations. On May 9, 2014, the company reported that creditors had prevented it from carrying out those closures, with one lender presuming fewer stores would mean fewer assets to secure the loan and reduce any recovery it would get in a bankruptcy reorganization. On June 10, 2014, RadioShack said that it had enough cash to last 12 months, but that lasting a year depended on sales growing. Sales had fallen for nine straight quarters, and by year's end the company realized a loss in "each of its 10 latest quarters". On June 20, 2014, RadioShack's stock price fell below US$1, triggering a July 25 warning from the New York Stock Exchange that it could be delisted for failure to maintain a stock price above $1. On July 28, 2014, Mergermarket's Debtwire reported RadioShack was discussing Chapter 11 bankruptcy protection as an option. On September 11, 2014, RadioShack admitted it might have to file for bankruptcy, and would be unable to finance its operations "beyond the very near term" unless the company was sold, restructured, or received a major cash infusion. On September 15, 2014, RadioShack replaced its CFO with a bankruptcy specialist. On October 3, RadioShack announced an out-of-court restructuring, a 4:1 dilution of shares, and a rights issue priced at 40 cents a share. RadioShack's stock was halted on the New York exchange for the entire day. Despite the debt restructuring proposal, in December Salus and Cerberus informed RadioShack that it was in default of the loan they had provided as a cash infusion in 2013. At the end of October 2014, quarterly figures indicated RadioShack was losing US$1.1 million per day. 
A November 2014 attempt to keep the stores open from 8AM to midnight on Thanksgiving Day drew a sharp backlash from employees and a few resignations; comparable store sales for the three days (Thursday-Saturday) were 1% lower than the prior year, when the stores were open for two of the days. The company's problems maintaining inventories of big-ticket items, such as Apple's iPhone 6, further cut into sales. By December 2014, RadioShack was being sued by former employees for having encouraged them to invest 401(k) retirement savings in company stock, alleging a breach of fiduciary duties to "prudently" handle the retirement fund which caused "devastating losses" in the retirement plans as the stock dropped from US$13 in 2011 to 38 cents at the end of 2014. These claims were dismissed by the Fifth U.S. Circuit Court of Appeals in 2018. 2015 bankruptcy On January 15, 2015, The Wall Street Journal reported RadioShack had delayed rent payments to some commercial landlords and was preparing a bankruptcy filing that could come as early as February. Officials of the company declined to comment on the report. A separate report by Bloomberg claimed the company might sell leases to as many as half its stores to Sprint. On February 2, 2015, the company was delisted from the New York Stock Exchange after its average market capitalization remained below US$50 million for longer than thirty consecutive days. That same day, Bloomberg News reported RadioShack was in talks to sell half of its stores to Sprint and close the rest, which would effectively render RadioShack no longer a stand-alone retailer. Amazon.com and Brookstone were also mentioned to be potential bidders, the former having at the time been wanting to establish a brick and mortar presence. On February 3, RadioShack defaulted on its loan from Salus Capital. On the days following these reports, some employees were instructed to reduce prices and transfer inventory out of stores designated for closing to those that would remain open during the presumed upcoming bankruptcy proceedings, while the rest remained "in the dark" as to the company's future. Many stores had already closed abruptly on Sunday, February 1, 2015, the first day of the company's fiscal year, with employees only given a few hours advance notice. Some had been open with a skeleton crew, little inventory and reduced hours only because the Salus Capital loan terms limited the chain to 200 store closures a year. A creditor group alleged the chain had remained on life support instead of shutting down earlier and cutting its losses merely so that Standard General could avoid paying on credit default swaps which expired on December 20, 2014. On February 5, 2015, RadioShack announced that it had filed for Chapter 11 bankruptcy protection. Using bankruptcy to end contractual restrictions that had required it keep unprofitable stores open, the company immediately published a list of 1784 stores which it intended to close, a process it wished to complete by the month's end to avoid an estimated US$7 million in March rent. Customers had initially been given until March 6, 2015, to return merchandise or redeem unused gift cards. However, after legal pressure from the Attorneys General of several states, RadioShack ultimately agreed to reimburse customers for the value of unused gift cards. 
RadioShack was criticized for including the personally identifying information of 67 million of its customers as part of its assets for sale during the proceedings, despite its long-standing policy and a promise to customers that data would never be sold for any reason at any time. The Federal Trade Commission and the Attorneys General of 38 states fought against this proposal. The sale of this data was ultimately approved, albeit greatly reduced from what was initially proposed. General Wireless Operations, Inc. On March 31, 2015, the bankruptcy court approved a US$160 million offer by the Standard General affiliate General Wireless Operations, Inc., gaining ownership of 1,743 RadioShack locations. As part of the deal, the company entered into a partnership with Sprint, in which the company would become a co-tenant at 1,435 RadioShack locations and establish store within a store areas devoted to selling its wireless brands, including Sprint, Boost Mobile and Virgin Mobile. The stores would collect commissions on the sale of Sprint products, and Sprint would assist in promotion. Sprint stated that this arrangement would increase the company's retail footprint by more than double; the company previously had around 1,100 company-owned retail outlets, in comparison to the over 2,000 run by AT&T Mobility. Although they would be treated as a co-tenant, a mockup showed Sprint branding being more prominent in promotion and exterior signage than that of RadioShack. The acquisition did not include rights to RadioShack's intellectual property (such as its trademarks), rights to RadioShack's franchised locations, and customer records, which were to be sold separately. Re-branded stores soft launched on April 10, 2015, with a preliminary conversion of the stores' existing wireless departments to exclusively house Sprint brands, with all stores eventually to be renovated in waves to allocate larger spaces for Sprint. In May 2015, the acquisition of the "RadioShack" name and its assets by General Wireless for US$26.2 million was finalized. Chief marketing officer Michael Tatelman emphasized that the company that emerged from the 2015 proceedings is an entirely new company, and went on to affirm that the old RadioShack did not re-emerge from bankruptcy, calling it "defunct". Less than one year after the bankruptcy events of 2015, Ron Garriques and Marty Amschler stepped down from their respective chief executive officer and chief financial officer positions; Garriques had held his position for nine months. 2017 bankruptcy It was speculated on March 2, 2017, that General Wireless was preparing to take RadioShack through its second bankruptcy in two years. This was evidenced when dozens of corporate office employees were laid off and two hundred stores were planned to be shuttered, and further evidenced when the RadioShack website began displaying "all sales final" banners for in-store purchases at all locations. RadioShack's Chapter 11 bankruptcy was formally filed on March 8, 2017. Of the then 1,300 remaining stores, several hundred were converted into Sprint-only locations. Despite declaring Chapter 11 bankruptcy (typically reserved for reorganization of debt) instead of Chapter 7 (liquidation), the company engaged in liquidation of all inventory, supplies, and store fixtures, as well as auctioning off old memorabilia. On May 26, RadioShack announced plans to close all but 70 corporate stores and shift its business primarily to online. These stores closed after Memorial Day Weekend of 2017. 
Of the remaining stores, 50 more closed by the end of June 2017. One particular store closing in April 2017 garnered widespread media attention when a Facebook account, calling itself "RadioShack - Reynoldsburg, Ohio", began posting aggressive messages alluding to the bankruptcy, such as "We closed. Fuck you all." RadioShack addressed these posts on their official Facebook page denying any involvement. On June 29, 2017, RadioShack's creditors sued Sprint, claiming that it sabotaged its co-branded locations with newly built Sprint retail stores—which were constructed near well-performing RadioShack locations as determined by confidential sales information. The suit argued that Sprint's actions "destroyed nearly 6,000 RadioShack jobs". General Wireless announced plans on June 12, 2017, to auction off the RadioShack name and IP, with bidding to begin on July 18. Bidding concluded on July 19, 2017, when one of RadioShack's creditors, Kensington Capital Holdings, obtained the RadioShack brand and other intellectual properties for US$15 million. Kensington was the sole bidder. In October 2017, General Wireless officially exited bankruptcy and was allowed to retain the company's warehouse, e-commerce site, dealer network operations, and up to 28 stores. Post-bankruptcy RadioShack began shrinking its U.S. headquarters operation in 2017. By September of that year, it had a staff of 50 and moved to RadioShack's distribution center on Terminal Road just north of the Fort Worth Stockyards. In late July 2018, RadioShack partnered up with HobbyTown USA to open up around 100 RadioShack "Express" stores. HobbyTown owners select which RadioShack products to carry. RadioShack dealerships had re-opened around 500 stores by October 2018. By November 2018, it had signed 77 of HobbyTown's 137 franchise stores. Retail Ecommerce Ventures (REV) In November 2020, RadioShack's intellectual property and its remaining operations—about 400 independent authorized dealers, about 80 Hobbytown USA affiliate stores, and its online sales operation—were purchased by Retail Ecommerce Ventures (REV), a Florida-based company that had previously purchased defunct retailers Pier 1 Imports, Dress Barn, Modell's Sporting Goods, and Linens 'n Things, along with The Franklin Mint. In December 2021, REV announced they would use part of the brand name on a cryptocurrency platform called RadioShack DeFi (an abbreviation of decentralized finance). The platform would allow customers to exchange and freely swap existing cryptocurrency tokens for a token called $RADIO through the new platform. The Twitter account for RadioShack gained notoriety in June 2022 when it began posting tweets with not safe for work content in an effort to attract attention towards its cryptocurrency platform, then renamed RadioShack Swap. The strategy, directed by chief marketing officer Ábel Czupor, received a mixed reaction among dealers; HobbyTown USA subsequently terminated its relationship with RadioShack in response to customer confusion surrounding the posts. Corporate headquarters In the 1970s RadioShack had a new headquarters "Tandy Towers" built in downtown Fort Worth on Throckmorton Street. In 2001, RadioShack bought the former Ripley Arnold public housing complex in Downtown Fort Worth along the Trinity River for US$20 million. The company razed the complex and had a corporate headquarters campus built, after the City of Fort Worth approved a 30-year economic agreement to ensure that the company stayed in Fort Worth. 
RadioShack moved into the campus in 2005. In 2009, with two years left on a rent-free lease of the building, the Fort Worth Star-Telegram reported that the company was considering a new site for its headquarters. The Tampa Bay Business Journal reported rumors among Tampa Bay Area real estate brokers and developers that RadioShack might select Tampa as the site of its headquarters. In 2010, however, RadioShack announced efforts to remain at its current site. The headquarters was ultimately reduced to a small group after the second bankruptcy filing. In September 2017, what was left of RadioShack (about 50 people) left the downtown location, moving to a warehouse on Terminal Road just north of "The Stockyards". Non-US operations InterTAN Inc. In 1986, Tandy Corp. announced it would create a spinoff of its international retail operations, called InterTAN Inc. The new company would take over operations of over 2,000 international company-owned and franchised stores, while Tandy retained its 7,253 domestic outlets and 30 of its manufacturing facilities. InterTAN had two main units: Tandy Electronics Ltd., which operated in Canada, the UK, France, Belgium, West Germany, and the Netherlands; and Tandy Australia Ltd., which operated in Australia. At the end of 1989, there were 1,417 stores operated by InterTAN under the Tandy or Radio Shack names. InterTAN operated Tandy or Radio Shack stores in the UK until 1999 and Australia until 2001. RadioShack branded merchandise accounted for 9.5% of InterTAN's inventory purchases in its 2002–2003 fiscal year, the last complete year before the Circuit City acquisition, and later disappeared from stores entirely. Canada Following the creation of InterTAN, Tandy Electronics operated 873 stores in Canada, and owned the rights to the RadioShack name. In 2004, Circuit City purchased InterTAN, which held the rights to use the RadioShack name in Canada until 2010. Radio Shack Corp., which operated Radio Shack stores in the US, sued InterTAN in an attempt to end the contract for the company name early. On March 24, 2005, a US district court judge ruled in favour of RadioShack, requiring InterTAN to stop using the brand name in products, packaging or advertising by June 30, 2005. The Canadian stores were rebranded under the name "The Source by Circuit City". Radio Shack briefly re-entered the Canadian market, but eventually closed all stores to refocus attention on its core US business. The Source was acquired by BCE Inc. in 2009. In January 2024, Bell announced a brand licensing agreement with its competitor Best Buy, which will see its locations rebranded as Best Buy Express and integrated into Best Buy's retail network, but remain under the ownership of BCE. Asia In March 2012, the Malaysian company Berjaya Retail Berhad entered into a franchising agreement with Radio Shack. Later that year, the company announced a second franchising deal with the Chinese company Cybermart. Berjaya had six stores in Malaysia before it quietly ceased operations in 2017. Mexico In 1986, Grupo Gigante signed a deal with Tandy Corporation to operate Radio Shack branded stores in Mexico. After growing its electronics chain within Mexico to 24 stores, Grupo Gigante signed a new deal with Tandy in 1992 to form a new joint venture called Radio Shack de México in which both companies had an equal share. As part of the deal, Grupo Gigante transferred its electronics stores to Radio Shack de México. 
In 2008, Grupo Gigante separated from Radio Shack (by then renamed Radio Shack Corporation) and sold its share of the joint venture to Radio Shack Corp. for $42.3 million. In June 2015, Grupo Gigante repurchased 100 percent of RadioShack de Mexico, including stores, warehouses, and all related brand names and intellectual properties for use within Mexico, from the US Bankruptcy Court in Delaware for US$31.5 million. The chain had 247 stores in Mexico at the time of the sale. Following the sale, all Radio Shack stores, warehouses, brands, assets, and related trademarks in Mexico are currently owned by RadioShack de México S.A. de C.V., a subsidiary of Grupo Gigante. A major Mexican news magazine had reported in March 2015 that Grupo Gigante actually purchased 100% of the stock in RadioShack de México from RadioShack Corporation for US$31.8 million, two months prior to the bankruptcy filing, but only had to hand over US$11.8 million to RadioShack Corp. because it also assumed approximately US$20 million in debt liabilities. While Radio Shack was facing a second bankruptcy in the United States, Grupo Gigante announced in October 2017 that it planned to expand the Radio Shack brand within Mexico by opening eight more stores. Latin America & the Caribbean When Radio Shack Corporation filed for bankruptcy the first time in 2015, the Unicomer Group (Grupo Unicomer) purchased the Radio Shack brand from the bankruptcy court for its exclusive use in Latin America and the Caribbean, except Mexico. Unicomer, through its corporate parent Regal Forest Holding Co. Ltd., paid $5 million for the brand. The company's relationship with Radio Shack dated back to 1998, when Unicomer opened its first Radio Shack franchise store in El Salvador. It later expanded into Honduras, Guatemala, and Nicaragua. By January 2015, Unicomer had 57 Radio Shack stores distributed throughout four countries within Central America. In April 2015, Unicomer began receiving franchise payments from franchises in several countries in which Unicomer had not previously had a business presence. It expanded into Trinidad in 2016, and into Jamaica, Barbados, and Guyana in 2017. By the end of 2017, Unicomer had company-owned stores located in Barbados, El Salvador, Guatemala, Guyana, Honduras, Jamaica, Nicaragua, and Trinidad while receiving franchise payments from independent franchised stores located in Antigua, Aruba, Costa Rica, Paraguay and Peru, countries in which Unicomer did not have a business presence. Since 2014, the independent company Coolbox has been an authorized dealer for RadioShack products in Peru. In April 2018, the RadioShack brand returned to Bolivia when franchisee Cosworld Trading opened two franchised stores for Unicomer in the capital city of La Paz. The previous RadioShack stores had closed in 2015 as a result of RadioShack's first bankruptcy filing. Middle East When Radio Shack filed for bankruptcy the first time in 2015, the Egypt-based Delta RS for Trading purchased the Radio Shack brand from the bankruptcy court for its exclusive use in the Middle East and North Africa for US$5 million. Delta RS for Trading, as Radio Shack Egypt, had opened its first Radio Shack franchised store in 1998 in Nasr City. By March 2003, Radio Shack Egypt had 65 company-operated stores plus 15 sub-franchised stores. In 2017, the Egyptian government accused Radio Shack Egypt and its parent Delta RS of aiding the Muslim Brotherhood. 
Other operations Corporate citizenship In 2006, RadioShack supported the National Center for Missing & Exploited Children by providing store presence for the StreetSentz program, a child identification and educational kit offered to families without charge. RadioShack supported United Way of America Charities to assist their Oklahoma and Texas relief efforts after the 2013 Moore tornado. RadioShack's green initiative promotes the Rechargeable Battery Recycling Corporation, which accepts end-of-life rechargeable batteries and wireless phones dropped off in-store to be safely recycled. Other retailer partnerships In August 2001, RadioShack opened kiosk-style stores inside Blockbuster outlets, only to abandon the project in February 2002; CEO Len Roberts announced that the stores did not meet expectations. RadioShack operated wireless kiosks within 417 Sam's Club discount warehouses from 2004 to 2011. The kiosk operations, purchased from Arizona-based Wireless Retail Inc, operated as a subsidiary, SC Kiosks Inc., with employees contracted through RadioShack Corporation. No RadioShack-branded merchandise was sold. The kiosks closed in 2011, costing RadioShack an estimated US$10–15 million in 2011 operating income. RadioShack then attempted a joint venture with Target to deploy mobile telephone kiosks in 1,490 Target stores by April 2011. In April 2013, RadioShack's partnership with Target ended and the Target Mobile in-store kiosks were turned over to a new partnership with Brightstar and MarketSource. No-contract wireless On September 5, 2012, RadioShack in a partnership with Cricket Wireless, began offering its own branded no-contract wireless services using Cricket and Sprint's nationwide networks. The service was discontinued on August 7, 2014; clients who had already purchased the service from RadioShack continue to receive service from Cricket Wireless. Cycling team sponsorship In 2009, the company became the main sponsor of a new cycling team, Team RadioShack, with Lance Armstrong and Johan Bruyneel. RadioShack featured Armstrong in a number of television commercials and advertising campaigns. RadioShack came under fire for having Armstrong as a spokesperson in 2011, when allegations that the cyclist had used performance-enhancing drugs surfaced. Lawsuits In September 1999, AutoZone, Inc., sued Tandy Corp., then the owner of RadioShack, in a federal district court in Tennessee for infringing the AutoZone trademark by using the name "PowerZone" for a section in RadioShack's retail stores. In November 2001, the district court granted Tandy's motion for summary judgment to dismiss the case, finding that AutoZone failed to prove that the use of "PowerZone" infringed the "AutoZone" trademark. AutoZone appealed that decision. In June 2004, the federal court of appeals affirmed the district court's dismissal of the case. In June 2011, a customer sued Sprint and RadioShack after finding pornography on their newly purchased cell phones. In 2012, a Denver jury awarded $674,938 to David Nelson, a 25-year RadioShack employee who had been fired in retaliation for complaining about age discrimination. In 2013, a federal jury awarded over $1 million in an age discrimination suit to a longtime RadioShack store manager who was fired in 2010 from the San Francisco store he had managed since 1998. In July 2014, in Verderame v. RadioShack Corp., the U.S. 
District Court for the Eastern District of Pennsylvania found that RadioShack owed its store managers in Pennsylvania a possible US$5.8 million for unpaid overtime. In popular culture In the 1980 film Used Cars, an electronics engineer needs equipment to do some last-minute repairs to a bootleg microwave transmitter, and says to his partner, "RadioShack closes in half an hour." A "Radio Shock" store (owned by the "Tandy Corporation") appeared in the original 1991 release of Space Quest IV, displaced by "Hz. So Good" in later editions because of threats of legal action by Tandy. RadioShack is featured prominently in Short Circuit 2, which serves as a "clinic" for Johnny 5 while he repairs himself after being assaulted by thieves. RadioShack is mentioned and briefly featured on the pilot episode of Young Sheldon. Visits to RadioShack are a frequent plot point in the Young Sheldon series, building off allusions to childhood visits made by the character Sheldon Cooper in its parent series, The Big Bang Theory. The family returns to the RadioShack store in a later episode, where his mother purchases him a Tandy 1000. RadioShack appears in the second season of the Netflix series Stranger Things as the workplace of Bob Newby. In one scene, an Armatron (a product actually sold at RadioShack during that period) can be seen on a shelf above his head. In the 2001 re-make of the 1960 movie Ocean's Eleven, after Livingston asks an FBI agent to not touch his equipment by asking, "Do you see me grabbing the gun out of your holster and waving it around?", the agent retorts with "Hey 'RadioShack', relax". American sportswriter and YouTuber Jon Bois worked at RadioShack sometime in the early to mid 2000s, later publishing multiple articles detailing his personal experiences as an employee. References Notes Further reading Hayden, Andrew, "Radio Shack: A Humble Beginning for an Electronics Giant", antiqueradio.com, February 2007 External links Radio Shack Records in Fort Worth Library Archives Radioshackcatalogs.com, an 80-year archive of RadioShack catalogs, plus other corporate publications and historic photos 1921 establishments in Massachusetts Companies based in Fort Worth, Texas Companies formerly listed on the New York Stock Exchange Companies that filed for Chapter 11 bankruptcy in 2015 Companies that filed for Chapter 11 bankruptcy in 2017 Consumer electronics retailers of the United States Electronic kit manufacturers Home computer hardware companies Loudspeaker manufacturers American companies established in 1921 Retail companies established in 1921 2015 mergers and acquisitions Radio manufacturers
RadioShack
[ "Engineering" ]
10,629
[ "Radio electronics", "Radio manufacturers" ]
327,744
https://en.wikipedia.org/wiki/MSX%20BASIC
MSX BASIC is a dialect of the BASIC programming language. It is an extended version of Microsoft's MBASIC Version 4.5, adding support for graphics, music, and various peripherals attached to MSX microcomputers. Generally, MSX BASIC is designed to follow GW-BASIC, released the same year for IBM PCs and clones. During the creation of MSX BASIC, effort was made to make the system flexible and expandable. Distribution MSX BASIC came bundled in the ROM of all MSX computers. At system start-up, MSX BASIC is invoked, causing its command prompt to be displayed, unless other software placed in ROM takes control (which is the typical case of game cartridges and disk interfaces, the latter causing the MSX-DOS prompt to be shown if there is a disk present which contains the DOS system files). When MSX BASIC is invoked, the ROM code for the BIOS and the BASIC interpreter itself are visible on the lower 32K of the Z80 addressing space. The upper 32K are set to RAM, of which about 23K to 28K are available for BASIC code and data (the exact amount depends on the presence of a disk controller and on the MSX-DOS kernel version). Development Environment The MSX BASIC development environment is very similar to other versions of Microsoft BASIC. It has a command-line-based Integrated Development Environment (IDE) system; all program lines must be numbered, and all non-numbered lines are considered to be commands in direct mode (i.e., to be executed immediately). The user interface is entirely command-line-based. Versions of MSX BASIC Every new version of the MSX computer was bundled with an updated version of MSX BASIC. All versions are backward compatible and provide new capabilities to fully exploit the new and extended hardware found on the newer MSX computers. MSX BASIC 1.0 Bundled with MSX1 computers 16 KB in size No native support for floppy disks, requiring the Disk BASIC cartridge extension (4 KB overhead) Support for all available screen modes: Screen 0 (text mode 40 x 24 characters) Screen 1 (mixed text mode 32 x 24 characters, sprites and colored custom characters) Screen 2 (high resolution graphic mode 256 x 192 pixels, 16 colors) Screen 3 (low resolution graphic mode 64×48 - 4×4 pixel blocks over the screen 2 resolution) Full support for hardware sprites and interrupt-driven automatic collision detection Full support for the General Instruments AY-3-8910 Programmable Sound Generator (PSG) Note that the Brazilian MSX "clones" by Sharp and Gradiente show other versions of MSX BASIC (on the Sharps even called HOT-BASIC), but they are essentially unlicensed MSX BASIC 1.0. MSX BASIC 2.0 / 2.1 Bundled with MSX2 computers 32 KB in size (first 16 KB directly available; the second 16 KB is in another slot and has to be paged in/out for use) Added support for new available screen modes, including graphic modes with 212 progressive or 424 interlaced lines: Updated Screen 0 (text mode 80 x 24) Screen 5 (graphic mode 256 x 212/424 pixels, 16 colors out of 512) Screen 6 (graphic mode 512 x 212/424 pixels, 4 colors out of 512) Screen 7 (graphic mode 512 x 212/424 pixels, 16 colors out of 512) Screen 8 (graphic mode 256 x 212/424 pixels, 256 colors, no palette) Added support for multicolored sprites (16 colors) Added support for hardware accelerated graphics functions (copy, fill, blitting, etc.) Added support for using the lower 32K RAM of the computer (not directly visible because the BIOS and BASIC interpreter ROMs take over the addressing space) as a limited RAM disk (only certain types of files could be saved). 
MSX BASIC 2.1 supports using the memory mapper (if available on the machine) to expand this RAM disk to almost 90 KB. MSX BASIC 2.1 exists on computers like the Philips MSX2 machines (except for the VG 8230), the Yamaha YIS-805 and Sanyo MPC-2300. MSX BASIC 3.0 Bundled with MSX2+ computers 32 KB in size (first 16 KB directly available; the second 16 KB is in another slot and has to be paged in/out for use) Added command SET SCROLL for smooth, hardware-based scrolling in BASIC Added support for new available screen modes: Screen 10 (graphic mode 256 x 212/424 pixels, 12499 YJK at once + 16 colors out of 512 RGB in ML) Screen 11 (graphic mode 256 x 212/424 pixels, 12499 YJK at once + 16 colors out of 512 RGB) Screen 12 (graphic mode 256 x 212/424 pixels, 19268 YJK at once) MSX BASIC 4.0 Bundled with the Panasonic FS-A1ST MSX turbo R model Added _PAUSE command to make delays in BASIC independent of the current CPU and clock Added extra commands for the PCM device (_PCMPLAY, _PCMREC) MSX BASIC 4.1 Bundled with the Panasonic FS-A1GT MSX turbo R model Added MIDI extensions Sample Extensions of MSX BASIC Since MSX BASIC was meant to be expandable from inception, it was possible to write add-on modules quite easily. Support for specific hardware was commonly added by means of expansion cartridges, which also served as the interface to the hardware in question. MSX Disk-BASIC is an example: bundled in the cartridge that provides the hardware interface to the disk drives, it adds commands to access the floppy disk drives. References External links MSX.bas - A Portuguese website focusing completely on development in MSX-BASIC. MSX2 Technical Handbook, Chapter 2: BASIC - Official documentation of MSX-BASIC 2.0, generated manually from a printed copy of MSX2 Technical Handbook. Discontinued Microsoft BASICs BASIC, MSX BASIC interpreters BASIC programming language family MSX-DOS Microsoft programming languages
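To make the development-environment description above more concrete, the following is a minimal, hypothetical sketch (written in Python rather than BASIC) of how a classic line-numbered environment of this kind distinguishes stored program lines from direct-mode commands. It is not an MSX BASIC implementation: only a handful of statements are modelled, there is no GOTO or expression handling, and all names and behaviour are simplifications chosen for illustration.

```python
# Minimal illustration (not MSX BASIC itself) of the numbered-line model described
# above: input that starts with a number is stored (or deleted) as a program line,
# while anything else is treated as a direct-mode command and executed immediately.
# Only LIST, RUN and a bare-bones PRINT are modelled.

program = {}  # line number -> statement text, like the interpreter's line store

def execute(statement: str) -> None:
    """Execute a single (greatly simplified) statement."""
    if statement.upper().startswith("PRINT"):
        print(statement[5:].strip().strip('"'))
    elif statement.upper() == "LIST":
        for number in sorted(program):
            print(number, program[number])
    elif statement.upper() == "RUN":
        for number in sorted(program):       # no GOTO handling in this sketch
            execute(program[number])
    else:
        print("Syntax error")

def handle_input(line: str) -> None:
    """Dispatch one line of user input, as a line-numbered BASIC prompt would."""
    line = line.strip()
    if not line:
        return
    head = line.split(maxsplit=1)
    if head[0].isdigit():                    # numbered: store (or delete) a program line
        number = int(head[0])
        if len(head) == 1:
            program.pop(number, None)        # a bare line number deletes that line
        else:
            program[number] = head[1]
    else:                                    # unnumbered: direct mode, run immediately
        execute(line)

if __name__ == "__main__":
    for entry in ['10 PRINT "HELLO"', '20 PRINT "WORLD"', 'LIST', 'RUN']:
        handle_input(entry)
```

Typing the two numbered lines stores them without running anything; the unnumbered LIST and RUN are executed immediately, which is the essential behaviour of the command prompt described above.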
MSX BASIC
[ "Technology" ]
1,259
[ "Computing platforms", "MSX-DOS" ]
327,799
https://en.wikipedia.org/wiki/Content-addressable%20memory
Content-addressable memory (CAM) is a special type of computer memory used in certain very-high-speed searching applications. It is also known as associative memory or associative storage; it compares input search data against a table of stored data and returns the address of matching data. CAM is frequently used in networking devices where it speeds up forwarding information base and routing table operations. This kind of associative memory is also used in cache memory. In associative cache memory, both address and content are stored side by side. When the address matches, the corresponding content is fetched from cache memory. History Dudley Allen Buck invented the concept of content-addressable memory in 1955. Buck is credited with the idea of the recognition unit. Hardware associative array Unlike standard computer memory, random-access memory (RAM), in which the user supplies a memory address and the RAM returns the data word stored at that address, a CAM is designed such that the user supplies a data word and the CAM searches its entire memory to see if that data word is stored anywhere in it. If the data word is found, the CAM returns a list of one or more storage addresses where the word was found. Thus, a CAM is the hardware embodiment of what in software terms would be called an associative array. A similar concept can be found in the data word recognition unit, as proposed by Dudley Allen Buck in 1955. Standards A major interface definition for CAMs and other network search engines was specified in an interoperability agreement called the Look-Aside Interface (LA-1 and LA-1B) developed by the Network Processing Forum. Numerous devices conforming to the interoperability agreement have been produced by Integrated Device Technology, Cypress Semiconductor, IBM, Broadcom and others. On December 11, 2007, the OIF published the serial look-aside (SLA) interface agreement. Semiconductor implementations CAM is much faster than RAM in data search applications. There are cost disadvantages to CAM, however. Unlike a RAM chip, which has simple storage cells, each individual memory bit in a fully parallel CAM must have its own associated comparison circuit to detect a match between the stored bit and the input bit. Additionally, match outputs from each cell in the data word must be combined to yield a complete data word match signal. The additional circuitry increases the physical size and manufacturing cost of the CAM chip. The extra circuitry also increases power dissipation since every comparison circuit is active on every clock cycle. Consequently, CAM is used only in specialized applications where the required search speed cannot be achieved using a less costly method. One successful early implementation was a General Purpose Associative Processor IC and System. In the early 2000s, several semiconductor companies including Cypress, IDT, Netlogic, Sibercore, and MOSAID introduced CAM products targeting networking applications. These products were labelled Network Search Engines (NSE), Network Search Accelerators (NSA), and Knowledge-based Processors (KBP) but were essentially CAM with specialized interfaces and features optimized for networking. Currently, Broadcom offers several families of KBPs. Alternative implementations To achieve a different balance between speed, memory size and cost, some implementations emulate the function of CAM by using standard tree search or hashing designs in hardware, using hardware tricks like replication or pipelining to speed up effective performance. 
These designs are often used in routers. The Luleå algorithm is an efficient implementation for longest prefix match searches as required in internet routing tables. Ternary CAMs Binary CAM is the simplest type of CAM and uses data search words consisting entirely of 1s and 0s. Ternary CAM (TCAM) allows a third matching state of X or don't care for one or more bits in the stored word, thus adding flexibility to the search. For example, a stored word of 10XX0 in a ternary CAM will match any of the four search words 10000, 10010, 10100, or 10110. The added search flexibility comes at an additional cost over binary CAM as the internal memory cell must now encode three possible states instead of the two for the binary CAM. This additional state is typically implemented by adding a mask bit (care or don't care bit) to every memory cell. In 2013, IBM fabricated a nonvolatile TCAM using 2-transistor/2-resistive-storage (2T-2R) cells. A design of TCAM using hybrid Ferroelectric FeFET was recently published by a group of International scientists. Example applications Content-addressable memory is often used in computer networking devices. For example, when a network switch receives a data frame from one of its ports, it updates an internal table with the frame's source MAC address and the port it was received on. It then looks up the destination MAC address in the table to determine what port the frame needs to be forwarded to, and sends it out on that port. The MAC address table is usually implemented with a binary CAM so the destination port can be found very quickly, reducing the switch's latency. Ternary CAMs are often used in network routers, where each address has two parts: the network prefix, which can vary in size depending on the subnet configuration, and the host address, which occupies the remaining bits. Each subnet has a network mask that specifies which bits of the address are the network prefix and which bits are the host address. Routing is done by consulting a routing table maintained by the router which contains each known destination network prefix, the associated network mask, and the information needed to route packets to that destination. In a simple software implementation, the router compares the destination address of the packet to be routed with each entry in the routing table, performing a bitwise AND with the network mask and comparing it with the network prefix. If they are equal, the corresponding routing information is used to forward the packet. Using a ternary CAM for the routing table makes the lookup process very efficient. The addresses are stored using don't care for the host part of the address, so looking up the destination address in the CAM immediately retrieves the correct routing entry; both the masking and comparison are done by the CAM hardware. This works if (a) the entries are stored in order of decreasing network mask length, and (b) the hardware returns only the first matching entry; thus, the match with the longest network mask (longest prefix match) is used. Other CAM applications include: Fully associative cache controllers and translation lookaside buffers Database engines Data compression hardware Artificial neural networks Intrusion prevention systems Network processors Several custom computers, like the Goodyear STARAN, were built to implement CAM. 
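The ternary matching and longest-prefix-match behaviour described above can be illustrated with a short software sketch. The following Python code is only a functional simulation under simplifying assumptions: a real TCAM compares the key against every stored entry simultaneously in hardware and uses a priority encoder, rather than a sorted list, to select among multiple hits. All class names, addresses and next-hop payloads here are invented for illustration.

```python
# Software simulation of a ternary CAM used as a routing table. Each entry stores a
# value plus a mask of "care" bits; "don't care" (X) bits are simply unmasked.
# Entries are kept in order of decreasing prefix length so the first hit is the
# longest-prefix match, mimicking rule (a)/(b) described in the text above.

from dataclasses import dataclass

@dataclass
class TcamEntry:
    value: int      # stored word (e.g., a network prefix) with don't-care bits zeroed
    care_mask: int  # 1 bits are compared, 0 bits are "don't care" (X)
    data: str       # payload returned on a hit (e.g., next-hop information)

class Tcam:
    def __init__(self) -> None:
        self.entries: list[TcamEntry] = []

    def add(self, value: int, care_mask: int, data: str) -> None:
        """Insert an entry; keep entries sorted by decreasing number of care bits."""
        self.entries.append(TcamEntry(value & care_mask, care_mask, data))
        self.entries.sort(key=lambda e: bin(e.care_mask).count("1"), reverse=True)

    def lookup(self, key: int) -> str | None:
        """Return the payload of the first (longest-prefix) matching entry."""
        for entry in self.entries:           # hardware performs these comparisons in parallel
            if (key & entry.care_mask) == entry.value:
                return entry.data
        return None

def prefix(addr: str, length: int) -> tuple[int, int]:
    """Convert dotted-quad prefix notation into a (value, care_mask) pair."""
    a, b, c, d = (int(x) for x in addr.split("."))
    value = (a << 24) | (b << 16) | (c << 8) | d
    mask = ((1 << length) - 1) << (32 - length) if length else 0
    return value & mask, mask

if __name__ == "__main__":
    tcam = Tcam()
    tcam.add(*prefix("10.0.0.0", 8), data="next hop A")
    tcam.add(*prefix("10.1.0.0", 16), data="next hop B")
    tcam.add(*prefix("0.0.0.0", 0), data="default route")

    dest, _ = prefix("10.1.2.3", 32)
    print(tcam.lookup(dest))   # "next hop B": the /16 wins over the /8 and the default route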
See also Content-addressable network Content-addressable parallel processor Content-addressable storage, or file system Sparse distributed memory Tuple space References Bibliography Anargyros Krikelis, Charles C. Weems (editors) (1997). Associative Processing and Processors, IEEE Computer Science Press. Stormon, C.D.; Troullinos, N.B.; Saleh, E.M.; Chavan, A.V.; Brule, M.R.; Oldfield, J.V.; A general-purpose CMOS associative processor IC and system, Coherent Research Inc., East Syracuse, NY, USA, IEEE Micro, Dec. 1992, Volume: 12 Issue:6. External links CAM Primer Arithmetic Processing using Associative memory Associative arrays Computer memory Computer networking
Content-addressable memory
[ "Technology", "Engineering" ]
1,550
[ "Computer networking", "Computer science", "Computer engineering" ]
327,817
https://en.wikipedia.org/wiki/Kaseya%20Center
Kaseya Center (Pat Riley Court at Kaseya Center) is a multi-purpose arena on Biscayne Bay in Miami, Florida. The arena is home to the Miami Heat of the National Basketball Association. The arena was previously named American Airlines Arena from its opening in 1999 until 2021, FTX Arena from 2021 until 2023, when the naming agreement was terminated following the bankruptcy of FTX, and Miami-Dade Arena during an interim period in 2023. Since April 2023, the naming rights to the arena have been owned by Kaseya under a 17-year, $117.4 million agreement. The arena has capacity for 19,500 people, including 2,105 club seats, 80 luxury suites, and 76 private boxes. Additionally, for more intimate performances, The Waterfront Theater, the largest indoor theater in Florida, is within the arena complex, seating between 3,000 and 5,800 patrons. The theater can be configured for concerts, worship events, family events, musical theatre shows and other stage productions. American Airlines, which has a hub at Miami International Airport, maintains a travel center at the venue. The arena is directly served by the Miami Metrorail at Government Center station via free transfers to the Metromover Omni Loop, providing direct service to the Freedom Tower and Park West stations, both within walking distance. It is also within walking distance from the Historic Overtown/Lyric Theatre station. The arena has 939 parking spaces, with those spaces reserved for premium seat and Dewar's 12 Clubhouse ticket holders during Heat games. ParkJockey manages the arena's on-site parking. History In 1997, the owners of the Miami Heat of the National Basketball Association, which then played in the eight-year-old, publicly financed Miami Arena, threatened to move to Broward County unless they were given the $38 million parcel of land for the new arena by Alex Penelas, then-mayor of Miami-Dade County. The agreement provided that the county receive 40% of the arena's annual profits above $14 million. Construction began on February 6, 1998. The arena was designed by Arquitectonica and 360 Architecture. Kaseya Center opened as the American Airlines Arena on December 31, 1999, and its construction cost was $213 million. Architectural design team members included George Heinlein, Cristian Petschen, Reinaldo Borges, and Lance Simon. The arena was inaugurated with a concert by Gloria Estefan. Two days later, on January 2, 2000, the Miami Heat played its first game in the new arena, defeating the Orlando Magic 111–103. As part of its sponsorship arrangement, American Airlines had a giant aircraft painted atop the arena's roof, with an American Airlines logo in the center. The design was visible from airplanes taking off and landing at Miami International Airport, where American has a hub. The arena also has luxury skyboxes called "Flagship Lounges", a trademark originally used for American's premium-class lounges at certain airports. Until it was renamed in 2020–2021, the arena used the 1967–2013 logo of American Airlines. The arena was sometimes referred to as "Triple-A" or "A3" (A cubed). The arena is known for its unusual scoreboard, designed by artist Christopher Janney and installed in 1998 as part of the original construction. Drawing on the forms of underwater anemones, the scoreboard also changes colors depending on the atmosphere. For concerts in an arena configuration, end stage capacity is 12,202 for 180° shows, 15,402 for 270° shows, and 18,309 for 360° shows. For center-stage concerts, the arena can seat 19,146. 
WTVJ, the city's NBC-owned and operated station in Miami, had their Downtown Miami Studios in the back of the arena from 2001 until 2011. In 2013, the Miami Heat paid rent on the arena for the first time pursuant to the percentage rent agreement with the county; the payment was $3.32 million. On September 10, 2019, American Airlines said that it would not renew its naming rights upon expiration at the end of 2019. The American Airlines Arena court decals were removed from the Heat's floor before the 2020–21 season and replaced temporarily with the logo of team/league vehicle sponsor Kia Motors. In March 2021, FTX acquired the naming rights to the arena in a $135 million, 19-year agreement. The NBA approved the deal in early April, and the arena was renamed FTX Arena in June 2021, just after the Miami Heat were swept by the Milwaukee Bucks in the first round of the 2021 NBA playoffs. As part of the bankruptcy of FTX, the naming rights agreement was terminated effective January 2023. After three months under the temporary name of Miami-Dade Arena, a 17-year naming rights agreement was reached with Miami-based software company Kaseya to name the arena Kaseya Center beginning April 2023. Under the terms of the contract, the county receives the majority of the naming rights revenue while the Heat receives $2 million annually. In October 2024, it was announced that the court would be dedicated to longtime coach and executive Pat Riley, who led the Heat to three championships and helped the team acquire LeBron James and Chris Bosh in 2010. Notable events Circus In January 2017, the closing of the Ringling Bros. and Barnum & Bailey Circus was announced after shows at the arena. Basketball The then-named American Airlines Arena, along with the American Airlines Center in Dallas, hosted the 2006 NBA Finals and the 2011 NBA Finals as the Miami Heat played the Dallas Mavericks. The Heat won the championship in 2006 in Dallas and the Mavericks won in the 2011 rematch in Miami. These series were the first and second appearances in the NBA Finals for both franchises. As the airline held naming rights to both venues, people nicknamed the matchups as the "American Airlines series". The arena hosted the 2012, 2013 and 2014 NBA Finals along with the Chesapeake Energy Arena in Oklahoma City in 2012, and the AT&T Center in San Antonio in 2013 and 2014. In 2012, the Heat defeated the Oklahoma City Thunder in five games, winning the championship at home. In 2013, the Heat played the San Antonio Spurs. The Heat faced a 3–2 series deficit returning to Miami but won games 6 and 7 to defend their championship. In 2014, the Spurs defeated the Heat in five games in San Antonio and won the championship and the rematch. The arena hosted the 2023 NBA Finals under its current name of Kaseya Center, along with the Ball Arena in Denver as the Heat played the Denver Nuggets. The Nuggets defeated the Heat in five games to win their first championship. Since 2015, the arena has hosted the annual Hoophall Miami Invitational, an NCAA Division I college basketball showcase event. Professional wrestling The arena hosted Uncensored (2000), the World Championship Wrestling WCW Uncensored pay-per-view. Four major WWE pay-per-view events have been held at the arena: the Royal Rumble in 2006, Survivor Series in 2007 and 2010, and WWE Hell in a Cell in 2013. It has also hosted various episodes of WWE Raw and WWE SmackDown. 
Mixed martial arts On April 25, 2003, the arena hosted the first Ultimate Fighting Championship event in Florida, UFC 42: Sudden Impact. The UFC returned to the arena after twenty years on April 8, 2023, for UFC 287: Pereira vs. Adesanya 2. The promotion returned again on March 9, 2024, for UFC 299: O'Malley vs. Vera 2. Other sports The arena features a regulation NHL ice rink, though it has never hosted the sport, as the Florida Panthers have played in Sunrise at the Amerant Bank Arena since October 1998. The rink, lined with a smaller wall, instead accommodates ice shows such as Disney on Ice. The Waterfront Theatre at the arena hosted the 2020 NFL Honors on February 1, 2020, which was broadcast by Fox Broadcasting Company. Music Notable musicians to perform at the arena include Olivia Rodrigo, Doja Cat, Gloria Estefan, Phish, Shakira, Dua Lipa, Kylie Minogue, Mariah Carey, Cher, Kelly Clarkson, Clay Aiken, Britney Spears, U2, Soda Stereo, Kanye West, Tina Turner, Celine Dion, Justin Bieber, Lady Gaga, Coldplay, Jennifer Lopez, SZA, Madonna, Miley Cyrus, Hillsong United, Justin Timberlake, One Direction, Katy Perry, Demi Lovato, Ariana Grande, Chris Brown, Janet Jackson, Taylor Swift, The Weeknd, Rihanna, Selena Gomez, Maroon 5, Adele, Carrie Underwood, Jimmie Allen, Ricardo Arjona, RBD, and Tini. The 2004 and 2005 MTV Video Music Awards, Sensation, and the For Darfur benefit concert were also held at the arena. Awards ceremonies The arena has hosted the annual Premio Lo Nuestro Latin music awards since 2001. The awards are held on a Thursday night in late February. The Kaseya Center hosted the Latin Grammy Awards in 2003, in 2020, and again on November 14, 2024. Gallery See also List of indoor arenas by capacity References External links Satellite view from Google Maps 1999 establishments in Florida American Airlines Arquitectonica buildings Basketball venues in Florida Boxing venues in the United States Leadership in Energy and Environmental Design certified buildings Miami Heat Miami Sol Mixed martial arts venues in Florida Music venues completed in 1999 Music venues in Florida NBA venues Sports venues completed in 1999 Sports venues in Miami Tourist attractions in Miami Women's National Basketball Association venues
Kaseya Center
[ "Engineering" ]
1,961
[ "Building engineering", "Leadership in Energy and Environmental Design certified buildings" ]
327,893
https://en.wikipedia.org/wiki/Isoflurane
Isoflurane, sold under the brand name Forane among others, is a general anesthetic. It can be used to start or maintain anesthesia; however, other medications are often used to start anesthesia due to airway irritation with isoflurane. Isoflurane is given via inhalation. Side effects of isoflurane include a decreased ability to breathe (respiratory depression), low blood pressure, and an irregular heartbeat. Serious side effects can include malignant hyperthermia or high blood potassium. It should not be used in patients with a personal or family history of malignant hyperthermia. It is unknown if its use during pregnancy is safe for the fetus, but use during a cesarean section appears to be safe. Isoflurane is a halogenated ether. Isoflurane was approved for medical use in the United States in 1979. It is on the World Health Organization's List of Essential Medicines. Medical uses Isoflurane is always administered in conjunction with air or pure oxygen. Often, nitrous oxide is also used. Although its physical properties imply that anaesthesia can be induced more rapidly than with halothane, its pungency can irritate the respiratory system, negating any possible advantage conferred by its physical properties. Thus, it is mostly used in general anesthesia as a maintenance agent after induction of general anesthesia with an intravenous agent such as thiopentone or propofol. Mechanism of action As with many general anesthetics, the exact mechanism of its action has not been clearly delineated. Isoflurane reduces pain sensitivity (analgesia) and relaxes muscles. Isoflurane likely binds to GABA, glutamate and glycine receptors, but has different effects on each receptor. Isoflurane acts as a positive allosteric modulator of the GABAA receptor in electrophysiology studies of neurons and recombinant receptors. It potentiates glycine receptor activity, which decreases motor function. It inhibits receptor activity in the NMDA glutamate receptor subtypes. Isoflurane inhibits conduction in activated potassium channels. Isoflurane also affects intracellular molecules. It inhibits plasma membrane calcium ATPases (PMCAs), which affects membrane fluidity by hindering the flow of Ca2+ (calcium ions) out across the membrane; this in turn affects neuron depolarization. It binds to the D subunit of ATP synthase and NADH dehydrogenase. General anaesthesia with isoflurane reduces plasma endocannabinoid AEA concentrations, and this could be a consequence of stress reduction after loss of consciousness. Adverse effects Isoflurane can cause a sudden decrease in blood pressure due to dose-dependent peripheral vasodilation. This may be especially marked in hypovolemic patients. Animal studies have raised safety concerns about certain general anesthetics, in particular ketamine and isoflurane, in young children. The risk of neurodegeneration was increased when these agents were combined with nitrous oxide and benzodiazepines such as midazolam. Whether these concerns occur in humans is unclear. Elderly Biophysical studies using NMR spectroscopy have provided molecular details of how inhaled anesthetics interact with three amino acid residues (G29, A30 and I31) of amyloid beta peptide and induce aggregation. This area is important as "some of the commonly used inhaled anesthetics may cause brain damage that accelerates the onset of Alzheimer's disease". Physical properties It is administered as a racemic mixture of (R)- and (S)-optical isomers. 
It is non-combustible but can give off irritating and toxic fumes when exposed to flame.

History
Together with enflurane and halothane, isoflurane began to replace the flammable ethers used in the pioneer days of surgery, a shift that had begun in the 1940s and 1950s. Its name comes from its being a structural isomer of enflurane; hence the two have the same empirical formula.

Environment
The average lifetime of isoflurane in the atmosphere is 3.2 years, its global warming potential is 510, and yearly emissions add up to about 880 tons.

Veterinary use
Isoflurane is frequently used for veterinary anaesthesia.

References

External links
- 1-chloro-2,2,2-trifluoroethyl difluoromethyl ether
- 1-chloro-2,2,2-trifluoroethyl difluoromethyl ether as an anesthetic agent

5-HT3 agonists
Ethers
GABAA receptor positive allosteric modulators
General anesthetics
Glycine receptor agonists
Nicotinic antagonists
NMDA receptor antagonists
Organochlorides
Organofluorides
World Health Organization essential medicines
Fluranes
Wikipedia medicine articles ready to translate
Trifluoromethyl compounds
Difluoromethoxy compounds
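The climate figures in the Environment section above (atmospheric lifetime, global warming potential, and annual emissions) can be combined into a rough carbon dioxide equivalent using the standard relation CO2-equivalent = mass emitted × GWP. The short Python sketch below is an illustrative addition rather than part of the original article; the input values are the ones quoted above, the GWP figure is taken at face value without a stated time horizon, and the result is only an order-of-magnitude estimate.

```python
# Illustrative back-of-the-envelope calculation (not from the original article):
# CO2-equivalent emissions = mass emitted x global warming potential (GWP).
# Input values are the isoflurane figures quoted in the Environment section above.

annual_emissions_tonnes = 880   # estimated yearly isoflurane emissions, in tonnes
gwp = 510                       # global warming potential quoted above (time horizon unspecified)

co2_equivalent_tonnes = annual_emissions_tonnes * gwp
print(f"~{co2_equivalent_tonnes:,} tonnes CO2-equivalent per year")
# -> ~448,800 tonnes CO2-equivalent per year
```

The same arithmetic applies to any inhaled anaesthetic for which annual emissions and a GWP value are known, so it can also be used with the sevoflurane GWP(20) figure quoted later in this document.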
Isoflurane
[ "Chemistry" ]
1,067
[ "Organic compounds", "Functional groups", "Ethers" ]
327,896
https://en.wikipedia.org/wiki/Sevoflurane
Sevoflurane, sold under the brand name Sevorane among others, is a sweet-smelling, nonflammable, highly fluorinated methyl isopropyl ether used as an inhalational anaesthetic for induction and maintenance of general anesthesia. After desflurane, it is the volatile anesthetic with the fastest onset. While its offset may be faster than that of agents other than desflurane in a few circumstances, its offset is more often similar to that of the much older agent isoflurane. While sevoflurane is only half as soluble as isoflurane in blood, the tissue–blood partition coefficients of isoflurane and sevoflurane are quite similar. For example, in the muscle group: isoflurane 2.62 vs. sevoflurane 2.57. In the fat group: isoflurane 52 vs. sevoflurane 50. As a result, the longer the case, the more similar will be the emergence times for sevoflurane and isoflurane. It is on the World Health Organization's List of Essential Medicines.

Medical uses
It is one of the most commonly used volatile anesthetic agents, particularly for outpatient anesthesia, across all ages, but especially in pediatric anesthesia, as well as in veterinary medicine. Together with desflurane, sevoflurane is replacing isoflurane and halothane in modern anesthesia practice. It is often administered in a mixture of nitrous oxide and oxygen.

Physiological effects
Sevoflurane is a potent vasodilator. As such, it induces a dose-dependent reduction in blood pressure and cardiac output. It is a bronchodilator; however, in patients with pre-existing lung pathology, it may precipitate coughing and laryngospasm. It reduces the ventilatory response to hypoxia and hypercapnia, and impedes hypoxic pulmonary vasoconstriction. Sevoflurane's vasodilatory properties also cause it to increase intracranial pressure and cerebral blood flow. However, it reduces cerebral metabolic rate.

Adverse effects
Sevoflurane has an excellent safety record, but is under review for potential hepatotoxicity, and may accelerate Alzheimer's disease. There were rare reports involving adults with symptoms similar to halothane hepatotoxicity. Sevoflurane is the preferred agent for mask induction due to its lesser irritation to mucous membranes. Sevoflurane is an inhaled anesthetic that is often used to induce and maintain anesthesia in children for surgery. During the process of awakening from the medication, it has been associated with a high incidence (>30%) of agitation and delirium in preschool children undergoing minor noninvasive surgery. It is not clear if this can be prevented. Studies examining a current significant health concern, anesthetic-induced neurotoxicity (including with sevoflurane, and especially with children and infants), are "fraught with confounders, and many are underpowered statistically", and so are argued to need "further data... to either support or refute the potential connection". Concern regarding the safety of anaesthesia is especially acute with regard to children and infants, where preclinical evidence from relevant animal models suggests that common clinically important agents, including sevoflurane, may be neurotoxic to the developing brain, and so cause neurobehavioural abnormalities in the long term; two large-scale clinical studies (PANDA and GAS) were ongoing as of 2010, in the hope of supplying "significant [further] information" on neurodevelopmental effects of general anaesthesia in infants and young children, including where sevoflurane is used.
In 2021, researchers at Massachusetts General Hospital published research in Communications Biology suggesting that sevoflurane may accelerate existing Alzheimer's disease or cause existing tau protein to spread: "These data demonstrate anesthesia-associated tau spreading and its consequences. [...] This tau spreading could be prevented by inhibitors of tau phosphorylation or extracellular vesicle generation." According to Neuroscience News, "Their previous work showed that sevoflurane can cause a change (specifically, phosphorylation, or the addition of phosphate) to tau that leads to cognitive impairment in mice. Other researchers have also found that sevoflurane and certain other anesthetics may affect cognitive function." Additionally, there has been some investigation into a potential correlation between sevoflurane use and renal damage (nephrotoxicity). However, this should be subject to further investigation, as a recent study shows no correlation between sevoflurane use and renal damage as compared to other control anesthetic agents. There is also evidence that renal damage may be caused by compound A, a product of the degradation of sevoflurane.

Pharmacology
The exact mechanism of action of general anaesthetics has not been delineated. Sevoflurane acts as a positive allosteric modulator of the GABAA receptor in electrophysiology studies of neurons and recombinant receptors. However, it also acts as an NMDA receptor antagonist, potentiates glycine receptor currents, and inhibits nAChR and 5-HT3 receptor currents.

History
Sevoflurane was discovered by Ross Terrell alongside Louise Speers. Sevoflurane was concurrently synthesized by Richard Wallen. The worldwide rights for sevoflurane were held by AbbVie. It is available as a generic drug.

Global-warming potential
Sevoflurane is a greenhouse gas. The twenty-year global-warming potential, GWP(20), for sevoflurane is 349, though this is significantly lower than that of isoflurane or desflurane.

Degradation
Sevoflurane will degrade into what is most commonly referred to as compound A (fluoromethyl 2,2-difluoro-1-(trifluoromethyl)vinyl ether) when in contact with CO2 absorbents, and this degradation tends to increase with decreased fresh gas flow rates, increased temperatures, and increased sevoflurane concentration. Compound A may be correlated with renal damage.

References

Further reading

Drugs developed by AbbVie
Ethers
Drugs developed by GSK plc
GABAA receptor positive allosteric modulators
General anesthetics
Glycine receptor agonists
Greenhouse gases
Nicotinic antagonists
NMDA receptor antagonists
5-HT3 antagonists
Fluranes
Organofluorides
Trifluoromethyl compounds
Products introduced in 1990
World Health Organization essential medicines
Veterinary drugs
Sevoflurane
[ "Chemistry", "Environmental_science" ]
1,423
[ "Environmental chemistry", "Functional groups", "Organic compounds", "Ethers", "Greenhouse gases" ]
327,940
https://en.wikipedia.org/wiki/Paleoethnobotany
Paleoethnobotany (also spelled palaeoethnobotany), or archaeobotany, is the study of past human-plant interactions through the recovery and analysis of ancient plant remains. Both terms are synonymous, though paleoethnobotany (from the Greek words palaios [παλαιός] meaning ancient, ethnos [έθνος] meaning race or ethnicity, and votano [βότανο] meaning plants) is generally used in North America and acknowledges the contribution that ethnographic studies have made towards our current understanding of ancient plant exploitation practices, while the term archaeobotany (from the Greek words archaios [αρχαίος] meaning ancient and votano) is preferred in Europe and emphasizes the discipline's role within archaeology. As a field of study, paleoethnobotany is a subfield of environmental archaeology. It involves the investigation of both ancient environments and human activities related to those environments, as well as an understanding of how the two co-evolved. Plant remains recovered from ancient sediments within the landscape or at archaeological sites serve as the primary evidence for various research avenues within paleoethnobotany, such as the origins of plant domestication, the development of agriculture, paleoenvironmental reconstructions, subsistence strategies, paleodiets, economic structures, and more. Paleoethnobotanical studies are divided into two categories: those concerning the Old World (Eurasia and Africa) and those that pertain to the New World (the Americas). While this division has an inherent geographical distinction to it, it also reflects the differences in the flora of the two separate areas. For example, maize only occurs in the New World, while olives only occur in the Old World. Within this broad division, paleoethnobotanists tend to further focus their studies on specific regions, such as the Near East or the Mediterranean, since regional differences in the types of recovered plant remains also exist.

Macrobotanical vs. microbotanical remains
Plant remains recovered from ancient sediments or archaeological sites are generally referred to as either ‘macrobotanicals’ or ‘microbotanicals.’ Macrobotanical remains are vegetative parts of plants, such as seeds, leaves, stems and chaff, as well as wood and charcoal, that can either be observed with the naked eye or with the use of a low-powered microscope. Microbotanical remains consist of microscopic parts or components of plants, such as pollen grains, phytoliths and starch granules, that require the use of a high-powered microscope in order to see them. The study of seeds, wood/charcoal, pollen, phytoliths and starches each requires separate training, as slightly different techniques are employed for their processing and analysis. Paleoethnobotanists generally specialize in the study of a single type of macrobotanical or microbotanical remain, though they are familiar with the study of other types and can sometimes even specialize in more than one.

History
The state of paleoethnobotany as a discipline today stems from a long history of development that spans more than two hundred years. Its current form is the product of steady progression by all aspects of the field, including methodology, analysis and research.

Initial work
The study of ancient plant remains began in the 19th century as a result of chance encounters with desiccated and waterlogged material at archaeological sites. In Europe, the first analyses of plant macrofossils were conducted by the botanist C. Kunth (1826) on desiccated remains from Egyptian tombs and O. Heer (1866) on waterlogged specimens from lakeside villages in Switzerland, after which point archaeological plant remains became of interest and continued to be periodically studied in different European countries until the mid-20th century. In North America, the first analysis of plant remains occurred slightly later and did not generate the same interest in this type of archaeological evidence until the 1930s, when Gilmore (1931) and Jones (1936) analysed desiccated material from rock shelters in the American Southwest. All these early studies, in both Europe and North America, largely focused on the simple identification of the plant remains in order to produce a list of the recovered taxa.

Establishment of the field
During the 1950s and 1960s, paleoethnobotany gained significant recognition as a field of archaeological research following two notable events: the publication of the Star Carr excavations in the UK and the recovery of plant material from archaeological sites in the Near East. Both convinced the archaeological community of the importance of studying plant remains by demonstrating their potential contribution to the discipline; the former produced a detailed paleoenvironmental reconstruction that was integral to the archaeological interpretation of the site, and the latter yielded the first evidence for plant domestication, which allowed for a fuller understanding of the archaeological record. Thereafter, the recovery and analysis of plant remains received greater attention as a part of archaeological investigations. In 1968, the International Work Group for Palaeoethnobotany (IWGP) was founded.

Expansion and growth
With the rise of Processual archaeology, the field of paleoethnobotany began to grow significantly. The implementation in the 1970s of a new recovery method, called flotation, allowed archaeologists to begin systematically searching for plant macrofossils at every type of archaeological site. As a result, there was a sudden influx of material for archaeobotanical study, as carbonized and mineralized plant remains were becoming readily recovered from archaeological contexts. Increased emphasis on scientific analyses also renewed interest in the study of plant microbotanicals, such as phytoliths (1970s) and starches (1980s), while later advances in computational technology during the 1990s facilitated the application of software programs as tools for quantitative analysis. The 1980s and 1990s also saw the publication of several seminal volumes about paleoethnobotany that demonstrated the sound theoretical framework in which the discipline operates. And finally, the popularization of Post-Processual archaeology in the 1990s helped broaden the range of research topics addressed by paleoethnobotanists, for example 'food-related gender roles'.

Current state of the field
Paleoethnobotany is a discipline that is ever evolving, even up to the present day. Since the 1990s, the field has continued to gain a better understanding of the processes responsible for creating plant assemblages in the archaeological record and to refine its analytical and methodological approaches accordingly. For example, current studies have become much more interdisciplinary, utilizing various lines of investigation in order to gain a fuller picture of past plant economies.
Research avenues also continue to explore new topics pertaining to ancient human-plant interactions, such as the potential use of plant remains in relation to their mnemonic or sensory properties. Interest in plant remains surged in the 2000s alongside the improvement of stable isotope analysis and its application to archaeology, including the potential to illuminate the intensity of agricultural labor, resilience, and long-term social and economic changes. Archaeobotany had not been used extensively in Australia until recently. In 2018 a study of the Karnatukul site in the Little Sandy Desert of Western Australia showed evidence of continuous human habitation for around 50,000 years, by analysing wattle and other plant items.

Modes of preservation
As organic matter, plant remains generally decay over time due to microbial activity. In order to be recovered in the archaeological record, therefore, plant material must be subject to specific environmental conditions or cultural contexts that prevent its natural degradation. Plant macrofossils recovered as paleoenvironmental or archaeological specimens result from four main modes of preservation:

Carbonized (Charred): Plant remains can survive in the archaeological record when they have been converted into charcoal through exposure to fire under low-oxygen conditions. Charred organic material is more resistant to deterioration, since it is only susceptible to chemical breakdown, which takes a long time (Weiner 2010). Due to the essential use of fire for many anthropogenic activities, carbonized remains constitute the most common type of plant macrofossil recovered from archaeological sites. This mode of preservation, however, tends to be biased towards plant remains that come into direct contact with fire for cooking or fuel purposes, as well as those that are more robust, such as cereal grains and nut shells.

Waterlogged: Preservation of plant material can also occur when it is deposited in permanently wet, anoxic conditions, because the absence of oxygen prohibits microbial activity. This mode of preservation can occur in deep archaeological features, such as wells, and in lakebed or riverbed sediments adjacent to settlements. A wide range of plant remains are usually preserved as waterlogged material, including seeds, fruit stones, nutshells, leaves, straw and other vegetative matter.

Desiccated: Another mode by which plant material can be preserved is desiccation, which only occurs in very arid environments, such as deserts, where the absence of water limits decomposition of organic matter. Desiccated plant remains are a rarer recovery, but an incredibly important source of archaeological information, since all types of plant remains can survive, even very delicate vegetative attributes, such as onion skins and crocus stigmas (saffron), as well as woven textiles, bunches of flowers and entire fruits.

Mineralized: Plant material can also be preserved in the archaeological record when its soft organic tissues are completely replaced by inorganic minerals. There are two types of mineralization processes. The first, 'biomineralization,' occurs when certain plant remains, such as the fruits of Celtis sp. (hackberry) or nutlets of the Boraginaceae family, naturally produce increased amounts of calcium carbonate or silica throughout their growth, resulting in calcified or silicified specimens.
The second, 'replacement mineralization,' occurs when plant remains absorb precipitating minerals present in the sediment or organic matter in which they are buried. This mode of preservation by mineralization only occurs under specific depositional conditions, usually involving a high presence of phosphate. Mineralized plant remains, therefore, are most commonly recovered from middens and latrine pits – contexts which often yield plant remains that have passed through the digestive tract, such as spices, grape pips and fig seeds. The mineralization of plant material can also occur when remains are deposited alongside metal artefacts, especially those made of bronze or iron. In this circumstance, the soft organic tissues are replaced by the leaching of corrosion products that form over time on the metal objects.

In addition to the above-mentioned modes of preservation, plant remains can also occasionally be preserved in a frozen state or as impressions. The former occurs quite rarely, but a famous example comes from Ötzi, the 5,500-year-old mummy found frozen in the Ötztal Alps, whose stomach contents revealed the plant and meat components of his last meal. The latter occurs more regularly, though plant impressions do not actually preserve the macrobotanical remains themselves, but rather their negative imprints in pliable materials like clay, mudbrick or plaster. Impressions often result from the deliberate employment of plant material for decorative or technological purposes (such as the use of leaves to create patterning on ceramics or the use of chaff as temper in the construction of mudbricks); however, they can also derive from accidental inclusions. Identification of plant impressions is achieved by creating a silicone cast of the imprints and studying them under the microscope.

Recovery methods
In order to study ancient plant macrobotanical material, paleoethnobotanists employ a variety of recovery strategies that involve different sampling and processing techniques, depending on the kind of research questions they are addressing, the type of plant macrofossils they are expecting to recover and the location from which they are taking samples.

Sampling
In general, there are four different types of sampling methods that can be used for the recovery of plant macrofossils from an archaeological site:

Full Coverage sampling: involves taking at least one sample from all contexts and features
Judgement sampling: entails the sampling of only areas and features most likely to yield ancient plant remains, such as a hearth
Random sampling: consists of taking random samples either arbitrarily or via a grid system
Systematic sampling: involves taking samples at set intervals during excavation

Each sampling method has its own pros and cons, and for this reason paleoethnobotanists sometimes implement more than one sampling method at a single site. In general, Systematic or Full Coverage sampling is recommended whenever possible. The practicalities of excavation, however, and/or the type of archaeological site under investigation sometimes limit their use, and Judgement sampling tends to occur more often than not. Aside from sampling methods, there are also different types of samples that can be collected, for which the standard, recommended sample size is ~20L for dry sites and 1-5L for waterlogged sites.
Point/Spot samples: consist of sediment collected only from a particular location
Pinch samples: consist of small amounts of sediment that are collected from across the whole context and combined in one bag
Column samples: consist of sediment collected from the different stratigraphic layers of a column of sediment that was deliberately left unexcavated

These different types of samples again serve different research aims. For example, Point/Spot samples can reveal the spatial differentiation of food-related activities, Pinch samples are representative of all activities associated with a specific context, and Column samples can show change or variation over time. The sampling methods and types of samples used for the recovery of microbotanical remains (namely, pollen, phytoliths, and starches) follow virtually the same practices as outlined above, with only some minor differences. First, the required sample size is much smaller: ~50g (a couple of tablespoons) of sediment for each type of microfossil analysis. Secondly, artefacts, such as stone tools and ceramics, can also be sampled for microbotanicals. And third, control samples from unexcavated areas in and around the site should always be collected for analytical purposes.

Processing
There are several different techniques for the processing of sediment samples. The technique a paleoethnobotanist chooses depends entirely upon the type of plant macrobotanical remains they expect to recover.

Dry Screening involves pouring sediment samples through a nest of sieves, usually ranging from 5 mm down to 0.5 mm. This processing technique is often employed as a means of recovering desiccated plant remains, since the use of water can weaken or damage this type of macrofossil and even accelerate its decomposition.

Wet Screening is most often used for waterlogged contexts. It follows the same basic principle as dry screening, except that water is gently sprayed onto the sediment once it has been poured into the nest of sieves, in order to help it break up and pass down through the various mesh sizes.

The Wash-Over technique was developed in the UK as an effective way of processing waterlogged samples. The sediment is poured into a bucket with water and gently agitated by hand. When the sediment has effectively broken up and the organic matter is suspended, all the contents of the bucket, except for the heavy inorganic matter at the bottom, are carefully poured out onto a 300 μm mesh. The bucket is then emptied and the organic matter carefully rinsed from the mesh back into the bucket. More water is added before the contents are again poured out through a nest of sieves.

Flotation is the most common processing technique employed for the recovery of carbonized plant remains. It uses water as a mechanism for separating charred and organic material from the sediment matrix, by capitalizing on their buoyancy properties. When a sediment sample is slowly added to agitated water, the stones, sand, shells and other heavy material within the sediment sink to the bottom (heavy fraction or heavy residue), while the charred and organic material, which is less dense, floats to the surface (light fraction or flot). This floating material can either be scooped off or spilled over into a fine-mesh sieve (usually ~300 μm). Both the heavy and light fractions are then left to dry before being examined for archaeological remains.
Plant macrofossils are mostly contained within the light fraction, though some denser specimens, such as pulses or mineralized grape endosperms, are also sometimes found in the heavy fraction. Thus, each fraction must be sorted to extract all plant material. A microscope is used to aid the sorting of the light fractions, while heavy fractions are sorted with the naked eye. Flotation can be undertaken manually with buckets or with machine assistance, which circulates the water through a series of tanks by means of a pump. Small-scale, manual flotation can also be used in the laboratory on waterlogged samples. Microbotanical remains (namely, pollen, phytoliths and starches) require completely different processing procedures in order to extract specimens from the sediment matrix. These procedures can be quite expensive, as they involve various chemical solutions, and are always carried out in the laboratory.

Analysis
Analysis is the key step in paleoethnobotanical studies that makes the interpretation of ancient plant remains possible. The quality of identifications and the use of different quantification methods are essential factors that influence the depth and breadth of interpretative results.

Identification
Plant macrofossils are analyzed under a low-powered stereomicroscope. The morphological features of different specimens, such as size, shape and surface decoration, are compared with images of modern plant material in identification literature, such as seed atlases, as well as with real examples of modern plant material from reference collections, in order to make identifications. Based on the type of macrofossils and their level of preservation, identifications are made to various taxonomic levels, mostly family, genus and species. These taxonomic levels reflect varying degrees of identification specificity: families comprise large groups of broadly similar plants; genera make up smaller groups of more closely related plants within each family; and species consist of the different individual plants within each genus. Poor preservation, however, may require the creation of broader identification categories, such as ‘nutshell’ or ‘cereal grain’, while extremely good preservation and/or the application of analytical technology, such as Scanning Electron Microscopy (SEM) or Morphometric Analysis, may allow even more precise identification down to subspecies or variety level.

Desiccated and waterlogged macrofossils often have a very similar appearance to modern plant material, since their modes of preservation do not directly affect the remains. As a result, fragile seed features, such as anthers or wings, and occasionally even color, can be preserved, allowing for very precise identifications of this material. The high temperatures involved in the carbonization of plant remains, however, can sometimes cause damage to or loss of plant macrofossil features. The analysis of charred plant material, therefore, often includes several family- or genus-level identifications, as well as some specimen categories. Mineralized plant macrofossils can range in preservation from detailed copies to rough casts, depending on depositional conditions and the kind of replacing mineral. This type of macrofossil can easily be mistaken for stones by the untrained eye. Microbotanical remains follow the same identification principles, but require a high-powered (greater magnification) microscope with transmitted or polarized lighting.
Starch and phytolith identifications are also subject to limitations in terms of taxonomical specificity, based on the state of current reference material for comparison and considerable overlap in specimen morphologies.

Quantification
After identification, paleoethnobotanists provide absolute counts for all plant macrofossils recovered in each individual sample. These counts constitute the raw analytical data and serve as the basis for any further quantitative methods that may be applied. Initially, paleoethnobotanical studies mostly involved a qualitative assessment of the plant remains at an archaeological site (presence and absence), but the application of simple statistical methods (non-multivariate) followed shortly thereafter. The use of more complex statistics (multivariate), however, is a more recent development. In general, simple statistics allow for observations concerning specimen values across space and over time, while more complex statistics facilitate the recognition of patterning within an assemblage, as well as the presentation of large datasets. The application of different statistical techniques depends on the quantity of material available. Complex statistics require the recovery of a large number of specimens (usually around 150 from each sample involved in this type of quantitative analysis), whereas simple statistics can be applied regardless of the number of recovered specimens – though obviously, the more specimens, the more effective the results. The quantification of microbotanical remains differs slightly from that of macrobotanical remains, mostly due to the high numbers of microbotanical specimens usually present in samples. As a result, relative/percentage occurrence sums are usually employed in the quantification of microbotanical remains instead of absolute taxa counts.

Research results
The work done in paleoethnobotany is constantly furthering our understanding of ancient plant exploitation practices. The results are disseminated in digital archives, archaeological excavation reports and at academic conferences, as well as in books and journals related to archaeology, anthropology, plant history, paleoecology, and the social sciences. In addition to the use of plants as food (paleodiet, subsistence strategies and agriculture), paleoethnobotany has illuminated many other ancient uses for plants (some examples are provided below, though there are many more):

Production of bread/pastry in the widest sense
Production of beverages
Extraction of oils and dyes
Agricultural regimes (irrigation, manuring, and sowing)
Economic practices (production, storage, and trade)
Building materials
Fuel
Symbolic use in ritual activities

See also

References

Bibliography
Twiss, K.C. 2019. The Archaeology of Food. Cambridge: Cambridge University Press. ISBN 9781108670159.
Gremillion, K.J. (ed.). 1997. People, Plants, and Landscapes: Studies in Paleoethnobotany. Tuscaloosa: University of Alabama Press.
Miksicek, C.H. 1987. "Formation Processes of the Archaeobotanical Record." In M.B. Schiffer (ed.), Advances in Archaeological Method and Theory 10. New York: Academic Press, 211–247.
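As a rough illustration of the quantification step described in the Quantification section above, the following Python sketch turns absolute macrofossil counts per sample into relative percentages and a simple presence/absence (ubiquity) summary. This is an editorial example rather than part of the original article; the taxa names and counts are invented for the illustration and do not come from any real assemblage.

```python
# Hypothetical example of simple (non-multivariate) quantification of
# macrobotanical counts, as described in the Quantification section above.
# Taxa names and counts are invented for illustration only.

samples = {
    "sample_1": {"Hordeum (barley)": 42, "Triticum (wheat)": 17, "Lens (lentil)": 3},
    "sample_2": {"Hordeum (barley)": 5,  "Triticum (wheat)": 30},
    "sample_3": {"Triticum (wheat)": 12, "Lens (lentil)": 8},
}

# Relative (percentage) occurrence of each taxon within each sample.
for name, counts in samples.items():
    total = sum(counts.values())
    percentages = {taxon: 100 * n / total for taxon, n in counts.items()}
    print(name, {taxon: f"{p:.1f}%" for taxon, p in percentages.items()})

# Presence/absence summary: the number of samples in which each taxon occurs.
taxa = {taxon for counts in samples.values() for taxon in counts}
for taxon in sorted(taxa):
    presence = sum(taxon in counts for counts in samples.values())
    print(f"{taxon}: present in {presence}/{len(samples)} samples")
```

Real analyses would of course work from much larger datasets and, as noted above, may add multivariate statistics on top of these simple summaries.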
External links

International Associations
Association of Environmental Archaeology (AEA)
International Work Group for Palaeoethnobotany (IWGP)

Journals
Vegetation History and Archaeobotany, exclusively publishing archaeobotanical/palaeoethnobotanical research, official publishing organ of the IWGP
Archaeological and Anthropological Sciences
Environmental Archaeology
Interdisciplinaria Archaeologica (IANSA)

Various knowledge resources
ArchBotLit, Kiel University
Digital Plant Atlas, Groningen University
Integrated Archaeobotanical Research Project (IAR), originally hosted at the University of Sheffield
Terry B. Ball, "Phytolith Literature Review"
Steve Archer, "About Phytoliths"
Alwynne B. Beaudoin, "The Dung File"

Anthropology
Archaeological sub-disciplines
Branches of botany
Ethnobotany
Paleoethnobotany
[ "Biology" ]
4,791
[ "Branches of botany" ]