Dataset columns:
id: int64, values 580 to 79M
url: string, lengths 31 to 175
text: string, lengths 9 to 245k
source: string, lengths 1 to 109
categories: string, 160 classes
token_count: int64, values 3 to 51.8k
44,116,266
https://en.wikipedia.org/wiki/Effector-triggered%20immunity
Effector-triggered immunity (ETI) is one of the pathways, along with the pattern-triggered immunity (PTI) pathway, by which the innate immune system recognises pathogenic organisms and elicits a protective immune response. ETI is elicited when an effector protein secreted by a pathogen into the host cell is successfully recognised by the host. Alternatively, effector-triggered susceptibility (ETS) can occur if an effector protein can block the immune response triggered by pattern recognition receptors (PRR) and evade immunity, allowing the pathogen to propagate in the host. ETI was first identified in plants but has also been identified in animal cells. The basis of the ETI model lies in the gene-for-gene resistance hypothesis proposed by Harold Henry Flor in 1942. Flor proposed that plants may express resistance (R) proteins that recognise avirulence (Avr) proteins from pathogens, thus making them resistant to pathogen invasion. His hypothesis has since been confirmed by identifying multiple Avr-R gene pairs. Some Avr proteins are direct ligands for receptors encoded by the R genes, such as the Leu-rich repeat receptors (LRRs). Other Avr proteins, called effectors, act to modify host proteins and those modifications are sensed by R proteins on the host plant side to initiate effector-triggered immunity. References Immune system process
Effector-triggered immunity
Biology
290
54,960,916
https://en.wikipedia.org/wiki/Tardos%20function
In graph theory and circuit complexity, the Tardos function is a graph invariant introduced by Éva Tardos in 1988 that has the following properties: Like the Lovász number of the complement of a graph, the Tardos function is sandwiched between the clique number and the chromatic number of the graph; these two numbers are both NP-hard to compute. The Tardos function is monotone, in the sense that adding edges to a graph can only cause its Tardos function to increase or stay the same, but never decrease. The Tardos function can be computed in polynomial time. Any monotone circuit for computing the Tardos function requires exponential size. To define her function, Tardos uses a polynomial-time approximation scheme for the Lovász number, based on the ellipsoid method of Grötschel, Lovász, and Schrijver. Approximating the Lovász number of the complement and then rounding the approximation to an integer would not necessarily produce a monotone function, however. To make the result monotone, Tardos approximates the Lovász number of the complement to within a small additive error, adds a correction term that depends on the number of edges and the number of vertices of the given graph, and then rounds the result to the nearest integer; the correction is chosen so that the increase caused by adding an edge outweighs the approximation error, which makes the rounded result monotone. Tardos used her function to prove an exponential separation between the capabilities of monotone Boolean logic circuits and arbitrary circuits. A result of Alexander Razborov, previously used to show that computing the clique number requires exponentially large monotone circuits, also shows that the Tardos function requires exponentially large monotone circuits despite being computable by a non-monotone circuit of polynomial size. Later, the same function was used to provide a counterexample to a purported proof of P ≠ NP by Norbert Blum. References Graph invariants Circuit complexity
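In symbols, the sandwich property described above can be restated as follows; this uses the standard notation for the clique number, the Lovász number, and the chromatic number, with f(G) standing in for the Tardos function, none of which notation is taken from the article itself.

```latex
% Sandwich property: for every graph G, with \bar{G} the complement of G,
% the Lovász number of the complement, and likewise the Tardos function f(G),
% lies between the clique number and the chromatic number.
\omega(G) \;\le\; \vartheta(\bar{G}) \;\le\; \chi(G),
\qquad
\omega(G) \;\le\; f(G) \;\le\; \chi(G).
```

Both outer quantities are NP-hard to compute, which is what makes a polynomial-time computable function wedged between them noteworthy.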
Tardos function
Mathematics
370
64,439,717
https://en.wikipedia.org/wiki/80%20Million%20Tiny%20Images
80 Million Tiny Images is a dataset, constructed by Antonio Torralba, Rob Fergus, and William T. Freeman in a collaboration between MIT and New York University, intended for training machine learning systems. It was published in 2008. The dataset has size 760 GB. It contains 79,302,017 32×32 pixel color images, scaled down from images scraped from the World Wide Web over 8 months. The images are classified into 75,062 classes. Each class is a non-abstract noun in WordNet. Images may appear in more than one class. The dataset was motivated by non-parametric models of neural activations in the visual cortex upon seeing images. The CIFAR-10 dataset uses a subset of the images in this dataset, but with independently generated labels, as the original labels were not reliable. The CIFAR-10 set has 6000 examples of each of 10 classes and the CIFAR-100 set has 600 examples of each of 100 non-overlapping classes. Construction It was first reported in a technical report in April 2007, during the middle of the construction process, when there were only 73 million images. The full dataset was published in 2008. They began with all 75,846 non-abstract nouns in WordNet, and then for each of these nouns, they scraped seven image search engines: Altavista, Ask.com, Flickr, Cydral, Google, Picsearch and Webshots. After 8 months of scraping, they obtained 97,245,098 images. Since they did not have enough storage, they downsized the images to 32×32 as they were scraped. After gathering, they removed images with zero variance and intra-word duplicate images, resulting in the final dataset. Out of the 75,846 nouns, only 75,062 classes had any results, so the other nouns did not appear in the final dataset. The number of images per noun follows a Zipf-like distribution, with 1,056 images per noun on average. To prevent a few nouns from taking up too many images, they imposed an upper bound of at most 3,000 images per noun. Retirement The 80 Million Tiny Images dataset was retired from use by its creators in 2020, after a paper by researchers Abeba Birhane and Vinay Prabhu found that some of the labeling of several publicly available image datasets, including 80 Million Tiny Images, contained racist and misogynistic slurs, which caused models trained on them to exhibit racial and sexual bias. The dataset also contained offensive images. Following the release of the paper, the dataset's creators removed the dataset from distribution and requested that other researchers not use it for further research and delete their copies of the dataset. See also List of datasets in computer vision and image processing References Machine learning Datasets in computer vision External links
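The construction steps described above (scrape per WordNet noun, downsize on arrival, cap the per-noun count) can be sketched in a few lines of code. This is a minimal illustrative sketch under assumptions, not the authors' actual pipeline: the function scrape_urls_for_noun and the noun list are hypothetical placeholders.

```python
from collections import defaultdict
from io import BytesIO

import requests
from PIL import Image

MAX_PER_NOUN = 3000          # upper bound used to keep any one noun from dominating
THUMB_SIZE = (32, 32)        # images stored at 32x32 to fit in available storage

def scrape_urls_for_noun(noun):
    """Hypothetical placeholder: return a list of image URLs for a WordNet noun."""
    raise NotImplementedError

def build_tiny_images(nouns):
    dataset = defaultdict(list)   # noun -> list of 32x32 RGB thumbnails
    for noun in nouns:
        for url in scrape_urls_for_noun(noun):
            if len(dataset[noun]) >= MAX_PER_NOUN:
                break             # cap reached for this noun
            try:
                raw = requests.get(url, timeout=10).content
                img = Image.open(BytesIO(raw)).convert("RGB")
            except Exception:
                continue          # skip unreachable or corrupt images
            dataset[noun].append(img.resize(THUMB_SIZE))
    return dataset
```

Deduplication and removal of zero-variance images, mentioned above, would follow as a separate pass over the stored thumbnails.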
80 Million Tiny Images
Engineering
592
18,866,777
https://en.wikipedia.org/wiki/Steiner%20chain
In geometry, a Steiner chain is a set of n circles, all of which are tangent to two given non-intersecting circles (blue and red in Figure 1), where n is finite and each circle in the chain is tangent to the previous and next circles in the chain. In the usual closed Steiner chains, the first and last (n-th) circles are also tangent to each other; by contrast, in open Steiner chains, they need not be. The given circles α and β do not intersect, but otherwise are unconstrained; the smaller circle may lie completely inside or outside of the larger circle. In these cases, the centers of Steiner-chain circles lie on an ellipse or a hyperbola, respectively. Steiner chains are named after Jakob Steiner, who defined them in the 19th century and discovered many of their properties. A fundamental result is Steiner's porism, which states: If at least one closed Steiner chain of n circles exists for two given circles α and β, then there is an infinite number of closed Steiner chains of n circles; and any circle tangent to α and β in the same way is a member of such a chain. The method of circle inversion is helpful in treating Steiner chains. Since it preserves tangencies, angles and circles, inversion transforms one Steiner chain into another of the same number of circles. One particular choice of inversion transforms the given circles α and β into concentric circles; in this case, all the circles of the Steiner chain have the same size and can "roll" around in the annulus between the circles, similar to ball bearings. This standard configuration allows several properties of Steiner chains to be derived, e.g., the points of tangency always lie on a circle. Several generalizations of Steiner chains exist, most notably Soddy's hexlet and Pappus chains. Definitions and types of tangency The two given circles α and β cannot intersect; hence, the smaller given circle must lie inside or outside the larger. The circles are usually shown as an annulus, i.e., with the smaller given circle inside the larger one. In this configuration, the Steiner-chain circles are externally tangent to the inner given circle and internally tangent to the outer circle. However, the smaller circle may also lie completely outside the larger one (Figure 2). The black circles of Figure 2 satisfy the conditions for a closed Steiner chain: they are all tangent to the two given circles and each is tangent to its neighbors in the chain. In this configuration, the Steiner-chain circles have the same type of tangency to both given circles, either externally or internally tangent to both. If the two given circles are tangent at a point, the Steiner chain becomes an infinite Pappus chain, which is often discussed in the context of the arbelos (shoemaker's knife), a geometric figure made from three circles. There is no general name for a sequence of circles tangent to two given circles that intersect at two points. Closed, open and multi-cyclic The two given circles α and β touch the n circles of the Steiner chain, but each circle Ck of a Steiner chain touches only four circles: α, β, and its two neighbors, Ck−1 and Ck+1. By default, Steiner chains are assumed to be closed, i.e., the first and last circles are tangent to one another. By contrast, an open Steiner chain is one in which the first and last circles, C1 and Cn, are not tangent to one another; these circles are tangent only to three circles. Multicyclic Steiner chains wrap around the inner circle more than once before closing, i.e., before being tangent to the initial circle.
Closed Steiner chains are the systems of circles obtained as the circle packing theorem representation of a bipyramid. Annular case and feasibility criterion The simplest type of Steiner chain is a closed chain of n circles of equal size surrounding an inscribed circle of radius r; the chain of circles is itself surrounded by a circumscribed circle of radius R. The inscribed and circumscribed given circles are concentric, and the Steiner-chain circles lie in the annulus between them. By symmetry, the angle 2θ between the centers of the Steiner-chain circles is 360°/n. Because Steiner chain circles are tangent to one another, the distance between their centers equals the sum of their radii, here twice their radius ρ. The bisector (green in Figure) creates two right triangles, with a central angle of θ = 180°/n. The sine of this angle can be written as the length of its opposite segment, divided by the hypotenuse of the right triangle: sin θ = ρ / (r + ρ). Since θ is known from n, this provides an equation for the unknown radius ρ of the Steiner-chain circles: ρ = r sin θ / (1 − sin θ). The tangent points of a Steiner chain circle with the inner and outer given circles lie on a line that passes through their common center; hence, the outer radius R = r + 2ρ. These equations provide a criterion for the feasibility of a Steiner chain for two given concentric circles. A closed Steiner chain of n circles requires that the ratio of radii R/r of the given circles equal exactly R/r = (1 + sin θ) / (1 − sin θ) with θ = 180°/n. As shown below, this ratio-of-radii criterion for concentric given circles can be extended to all types of given circles by the inversive distance δ of the two given circles. For concentric circles, this distance is defined as a logarithm of their ratio of radii, δ = ln (R/r). Using the solution for concentric circles, the general criterion for a Steiner chain of n circles can be written sin (180°/n) = tanh (δ/2). If a multicyclic annular Steiner chain has n total circles and wraps around m times before closing, the angle between Steiner-chain circles equals 2θ = m·360°/n. In other respects, the feasibility criterion is unchanged. Properties under inversion Circle inversion transforms one Steiner chain into another with the same number of circles. In the transformed chain, the tangent points between adjacent circles of the Steiner chain all lie on a circle, namely the concentric circle midway between the two fixed concentric circles. Since tangencies and circles are preserved under inversion, this property of all tangencies lying on a circle is also true in the original chain. This property is also shared with the Pappus chain of circles, which can be construed as a special limiting case of the Steiner chain. In the transformed chain, the tangent lines from O to the Steiner chain circles are separated by equal angles. In the original chain, this corresponds to equal angles between the tangent circles that pass through the center of inversion used to transform the original circles into a concentric pair. In the transformed chain, the n lines connecting the pairs of tangent points of the Steiner circles with the concentric circles all pass through O, the common center. Similarly, the n lines tangent to each pair of adjacent circles in the Steiner chain also pass through O. Since lines through the center of inversion are invariant under inversion, and since tangency and concurrence are preserved under inversion, the 2n lines connecting the corresponding points in the original chain also pass through a single point, O.
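As a numeric check of the annular feasibility criterion derived above, the following short sketch (the function name is illustrative only) computes the chain-circle radius ρ, the outer radius R, and the required ratio R/r for a given inner radius r and number of circles n.

```python
import math

def annular_steiner_chain(r, n):
    """Radii for a closed Steiner chain of n equal circles between concentric circles.

    theta is half the angle between adjacent chain-circle centres (180°/n);
    sin(theta) = rho / (r + rho), so rho = r*sin(theta)/(1 - sin(theta)) and R = r + 2*rho.
    """
    theta = math.pi / n
    s = math.sin(theta)
    rho = r * s / (1.0 - s)      # radius of each chain circle
    R = r + 2.0 * rho            # radius of the circumscribing circle
    return rho, R, R / r

rho, R, ratio = annular_steiner_chain(r=1.0, n=6)
print(rho, R, ratio)   # 1.0 3.0 3.0
```

For n = 6 the criterion gives R/r = (1 + sin 30°)/(1 − sin 30°) = 3, which is what the sketch prints.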
Infinite family A Steiner chain between two non-intersecting circles can always be transformed into another Steiner chain of equally sized circles sandwiched between two concentric circles. Therefore, any such Steiner chain belongs to an infinite family of Steiner chains related by rotation of the transformed chain about O, the common center of the transformed bounding circles. Elliptical/hyperbolic locus of centers The centers of the circles of a Steiner chain lie on a conic section. For example, if the smaller given circle lies within the larger, the centers lie on an ellipse. This is true for any set of circles that are internally tangent to one given circle and externally tangent to the other; such systems of circles appear in the Pappus chain, the problem of Apollonius, and the three-dimensional Soddy's hexlet. Similarly, if some circles of the Steiner chain are externally tangent to both given circles, their centers must lie on a hyperbola, whereas those that are internally tangent to both lie on a different hyperbola. The circles of the Steiner chain are tangent to two fixed circles, denoted here as α and β, where β is enclosed by α. Let the radii of these two circles be denoted as rα and rβ, respectively, and let their respective centers be the points A and B. Let the radius, diameter and center point of the kth circle of the Steiner chain be denoted as rk, dk and Pk, respectively. All the centers of the circles in the Steiner chain are located on a common ellipse, for the following reason. The sum of the distances from the center point of the kth circle of the Steiner chain to the two centers A and B of the fixed circles equals a constant: (rα − rk) + (rβ + rk) = rα + rβ. Thus, for all the centers of the circles of the Steiner chain, the sum of distances to A and B equals the same constant, rα + rβ. This defines an ellipse, whose two foci are the points A and B, the centers of the circles, α and β, that sandwich the Steiner chain of circles. The sum of distances to the foci equals twice the semi-major axis a of an ellipse; hence, 2a = rα + rβ. Let p equal the distance between the foci, A and B. Then, the eccentricity e is defined by 2 ae = p, or e = p / (rα + rβ). From these parameters, the semi-minor axis b and the semi-latus rectum L can be determined: b^2 = a^2 − (p/2)^2 and L = b^2 / a. Therefore, the ellipse can be described by an equation in terms of its distance d to one focus, d = L / (1 + e cos θ), where θ is the angle with the line joining the two foci. Conjugate chains If a Steiner chain has an even number of circles, then any two diametrically opposite circles in the chain can be taken as the two given circles of a new Steiner chain to which the original circles belong. If the original Steiner chain has n circles in m wraps, and the new chain has p circles in q wraps, then a fixed equation (Steiner's equation) relating the four numbers holds. A simple example occurs for Steiner chains of four circles (n = 4) and one wrap (m = 1). In this case, the given circles and the Steiner-chain circles are equivalent in that both types of circles are tangent to four others; more generally, Steiner-chain circles are tangent to four circles, but the two given circles are tangent to n circles. In this case, any pair of opposite members of the Steiner chain may be selected as the given circles of another Steiner chain that involves the original given circles. Since m = p = 1 and n = q = 4, Steiner's equation is satisfied: 1/4 + 1/4 = 1/2. Generalizations The simplest generalization of a Steiner chain is to allow the given circles to touch or intersect one another. In the former case, this corresponds to a Pappus chain, which has an infinite number of circles.
Soddy's hexlet is a three-dimensional generalization of a Steiner chain of six circles. The centers of the six spheres (the hexlet) travel along the same ellipse as do the centers of the corresponding Steiner chain. The envelope of the hexlet spheres is a Dupin cyclide, the inversion of a torus. The six spheres are not only tangent to the inner and outer sphere, but also to two other spheres, centered above and below the plane of the hexlet centers. Multiple rings of Steiner chains are another generalization. An ordinary Steiner chain is obtained by inverting an annular chain of tangent circles bounded by two concentric circles. This may be generalized to inverting three or more concentric circles that sandwich annular chains of tangent circles. Hierarchical Steiner chains are yet another generalization. If the two given circles of an ordinary Steiner chain are nested, i.e., if one lies entirely within the other, then the larger given circle circumscribes the Steiner-chain circles. In a hierarchical Steiner chain, each circle of a Steiner chain is itself the circumscribing given circle of another Steiner chain within it; this process may be repeated indefinitely, forming a fractal. See also Poncelet porism Ford circles Apollonian gasket Notes References Bibliography Further reading External links Interactive animation of a Steiner chain, CodePen Interactive Applet by Michael Borcherds showing an animation of Steiner's Chain with a variable number of circles made with GeoGebra. Circles Inversive geometry Circle packing
Steiner chain
Mathematics
2,534
19,474,448
https://en.wikipedia.org/wiki/Phosphoamino%20acid%20analysis
Phosphoamino acid analysis, or PAA, is an experimental technique used in molecular biology to determine which amino acid or acids are phosphorylated in a protein. Technique A protein is first phosphorylated using 32P-labeled ATP, usually via an in vitro kinase assay. The peptide bonds of the protein are then hydrolyzed, releasing most of its amino acids, usually by the use of a strong acid such as hydrochloric acid. These amino acids are then separated using 2-dimensional thin layer chromatography, along with amino acid standards for the three amino acids that are phosphorylated in eukaryotes: serine, threonine, and tyrosine. These amino acid standards can be visualized on the TLC substrate by exposure to ninhydrin, which colors the amino acids a visible purple when heated at ~100 °C. The radioactive amino acids can be detected via autoradiography, and an overlay of the two images will show which amino acids are phosphorylated. References Rothberg, P.G., Harris, T.J.R., Nomoto, A., and Wimmer, E. (1978) O4-(5'-uridylyl)tyrosine is the bond between the genome-linked protein and the RNA of poliovirus. Proc. Natl. Acad. Sci. USA 75, 4868-4872. Eckhart, W., Hutchinson, M.A., and Hunter, T. (1979) An activity phosphorylating tyrosine in polyoma T antigen immunoprecipitates. Cell 18, 925-933. Protein methods
Phosphoamino acid analysis
Chemistry,Biology
360
953,448
https://en.wikipedia.org/wiki/Top7
Top7 is an artificial protein, classified as a de novo protein. This means that the protein itself was designed to have a specific structure and functional properties. Background Top7 was designed by Brian Kuhlman and Gautam Dantas in David Baker's laboratory at the University of Washington. Top7's design was built through the use of a general computational method that iterated between sequence design and structure prediction. The end goal was to develop a 93-residue α/β protein with a new sequence and arrangement of its structure, or topology. These computational design methods were used together with protein structure prediction algorithms. The resulting sequence of residues is: Structure Due to the de novo design, Top7 possesses a unique three-dimensional structure. The protein is described as a 93-residue α/β protein, which indicates that Top7 contains both alpha helices (α) and beta sheets (β) in its secondary structure. Overall, the structure consists of two alpha helices packed on a five-stranded anti-parallel beta sheet. The combination of alpha helices and beta sheets is seen commonly in protein structures; this contributes to the overall stability and functionality of the protein. In order to achieve a target structure, researchers first developed a two-dimensional diagram and utilized it to determine the constraints that allowed them to construct the three-dimensional model of Top7. Determination of the high-resolution X-ray structure of the experimentally expressed and purified protein revealed that the structure (PDB: 1QYS) was indeed very similar (1.2 Å RMSD) to the computer-designed model. Characterization Researchers used a variety of biophysical methods in order to characterize the Top7 protein. These methods established several defining characteristics of the protein. Gel filtration chromatography was used to determine that Top7 is monomeric and highly soluble. It was also found that the protein unfolds cooperatively with increasing temperature and displays cold denaturation. Crystallization trials and nuclear magnetic resonance measurements showed negligible differences from the design model, indicating that the designed structure is very similar to the true structure. Structure-based models were used to further study the folding characteristics of Top7. Through these analyses, it was determined that the Top7 protein is extremely stable. Folding kinetics Top7 exhibits non-cooperative folding behavior. Many naturally occurring proteins display cooperative folding, indicating that the whole structure folds in a coordinated process. In contrast, the folding of Top7 does not follow a smooth, single-phase process. Its non-cooperative character may be linked to its designed sequence, which promotes the formation of an independently folded C-terminal intermediate structure. Studies of mutations in the C-terminal and N-terminal regions of a base model suggest that there is a probable sequence of Top7 that allows cooperative folding. Implications The creation of the de novo protein Top7 showcases the capability of computational methods in creating proteins with specific three-dimensional structures. This has broad implications for advancing the field of computational protein design and provides a platform for the creation of novel biomolecules with desired properties. The stability and folding characteristics of Top7 provide insights into the relationship between sequence, structure, and folding cooperativity.
Understanding these principles can contribute to the development of more stable and functional proteins not derived from natural evolution. Top7 was featured as the RCSB Protein Data Bank's 'Molecule of the Month' in October 2005, and a superposition of the respective cores (residues 60-79) of its predicted and X-ray crystal structures is featured in the Rosetta@home logo. References Engineered proteins Protein structure
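The 1.2 Å RMSD quoted above is the usual way of comparing a designed model with a solved structure. Below is a minimal sketch of that comparison for coordinates that have already been superposed; in practice a Kabsch superposition or similar alignment is performed first, which this sketch assumes has been done, and the array names are purely illustrative.

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two already superposed N x 3 coordinate arrays."""
    a = np.asarray(coords_a, dtype=float)
    b = np.asarray(coords_b, dtype=float)
    assert a.shape == b.shape, "both structures must have the same number of atoms"
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Toy example with three atoms each displaced by 1 Å along x:
model = np.zeros((3, 3))
xray = model + np.array([1.0, 0.0, 0.0])
print(rmsd(model, xray))   # 1.0
```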
Top7
Chemistry
745
24,972,641
https://en.wikipedia.org/wiki/Environmental%20analysis
Environmental analysis is the use of examination and statistical methods to study the chemical and biological factors that determine the quality of an environment. Its purpose is commonly to monitor and study levels of pollutants in the atmosphere, rivers and other specific settings, and to monitor amounts of natural and chemical components. Other environmental analysis techniques include biological surveys or biosurveys, soil analysis or soil tests, vegetation surveys, tree identification, and remote sensing, which uses satellite imagery to assess the environment on different spatial scales. Analysis techniques Chemical analysis typically involves sampling some part of the environment and using laboratory equipment to determine how much of a target compound is present. Chemical analysis may be used to assess pollution levels for remediation, or to make sure groundwater is safe for drinking. Biological surveys typically include a measurement of the abundance of a certain species within a given area in order to characterize the ecosystem. Analysis like this could be used in efforts to understand species abundance, or to look at how external pressures from the environment are affecting an ecosystem. Soil tests may involve chemical analysis, but most often they involve removing a section of soil to determine what each layer is composed of. Soil samples might be needed to determine whether a site is suitable for building, to produce a model of an area, or to estimate possible crop production from nutrient levels. Vegetation surveys are quite similar to biosurveys: they measure the abundance of plant species and trees within a specific area to learn more about the ecosystem. Sometimes these are done to understand ecological effects from outside factors, or simply to assess overall ecosystem health. Remote sensing can be used for environmental analysis by using imagery shot by satellites in multiple wavelengths to assess areas of different scales for a certain objective. Remote sensing can be used to identify land use, to determine damage from forest fires, to study weather systems and meteorology, and to measure atmospheric composition. Recent advances in the remote sensing field have also led to the development of autonomous devices that analyze physical and chemical parameters of the environment using sensors. References Analytical chemistry Environmental science
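As one concrete, commonly used example of the remote-sensing analyses described above, vegetation condition is often summarized with the normalized difference vegetation index (NDVI), computed per pixel from the red and near-infrared bands of satellite imagery. The sketch below assumes the two bands are already loaded as arrays; the names are illustrative and no particular satellite product is implied.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red), in [-1, 1]."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    denom = nir + red
    denom[denom == 0] = np.nan          # avoid division by zero over no-data pixels
    return (nir - red) / denom

# Healthy vegetation reflects strongly in the near infrared, so its NDVI is close to 1:
print(ndvi([0.05], [0.60]))   # about [0.846]
```

Values near 1 indicate dense, healthy vegetation, values near 0 indicate bare ground, and negative values usually indicate water, which is why NDVI maps are a common first step in the land-use and fire-damage assessments mentioned above.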
Environmental analysis
Chemistry,Environmental_science
439
72,155,758
https://en.wikipedia.org/wiki/Australian%20Academy%20of%20Art
The Australian Academy of Art was a conservative Australian government-authorised art organisation which operated for ten years between 1937 and 1946 and staged annual exhibitions. Its demise resulted from opposition by Modernist artists, especially those associated with the Contemporary Art Society, though the influence of the Academy continued into the 1960s. History Precedents Efforts to form an art academy in Australia were initially limited to individual States: The Academy of Arts, Australia, under the presidentship of P. Fletcher Watson was founded in Sydney in 1891, with its first exhibition held in 1892, but survived only four years. The Society of Artists, founded in Sydney in 1897, and the Australian Artists’ Association, of Melbourne, both had members from various States, but held their regular exhibitions only in their home states. Formation Aspiring to the principles of the long-established, but independent, privately funded, and also by then conservative, British Royal Academy of Arts (founded in 1768), Attorney-General Robert Menzies envisaged an overarching, Federal organisation promoting art that would be "understood by" the ordinary Australian amongst the middle class who were his prime supporters in his later prime-ministerships. In The Argus of 3 May 1937 in an article headed "Does Not Like the "cross-eyed drawing" of Modern Art," he was reported to take issue with the idea that this might be de facto censorship of "those whose conception of art is not his," as had been suggested by Mr. Norman Macgeorge in a letter published the previous Saturday. MacGeorge, Menzies responded, was "misinformed about the object of the proposed Australian Academy;" It is true, however, as Mr. Macgeorge claims, that I find nothing but absurdity in much so-called modern art, with its evasion of real problems and its cross-eyed drawing. It is equally true that I think that in art beauty is the condition of immortality - a conclusion strengthened by an examination of the works of the great European masters and that the language of beauty ought to be capable of being understood by reasonably cultivated people who are not themselves artists. I realise that an academy should find room in its membership for all schools of artistic thought provided they are based on competent craftsmanship. So much do I realise this truth, which I take to be the basis of Mr. Macgeorge's letter, that at the outset, when mentioning the academy idea to a committee of artists, I stipulated that I would take no steps to further it unless this principle were adhered to. The published list of those invited to join the proposed academy is the best proof that the principle has been followed. The list was selected by artists of the highest standing. My only function has been, and is, as an uninstructed lover of fine painting and drawing, to do as much as I can to help obtain for Australia the benefits of an artistic organisation which has been invaluable in England." Subsequently, at a meeting of ten state delegates in the smokeroom of the Canberra Hotel, Menzies formed the Australian Academy of Art, on 19 June 1937 and was its inaugural chair. Where long-established European art academies were teaching institutions, the Australian Academy was not, and served to present annual salons by invitation to established artists. Its other role was to advise government on art administration as "a body which will be recognised as a standard reference on art." 
It was to be the second such academy in the British dominions, following Canada's, which had been established in 1880 with a Royal charter; a Royal charter was also sought for the Australian academy. The Academy was to continue in an anti-Modernist stance, with one member, Norman St Clair Carter, describing 'contemporary art' as a 'fungoid growth.' While tolerating some Australian post-impressionism, its exhibitions showed traditional figurative and realist paintings by Hans Heysen, William Dargie, John Longstaff, Elioth Gruner and Charles Meere as examples of conventional academic values of draughtsmanship and technical prowess; the Modernists' innovation and originality meant they were excluded. Its first catalogue announced its nationalist, doctrinaire intent: the new body "...marks a definite move towards the co-ordination of the artistic activities in a true Federal spirit. Hitherto there has not existed an institution which has adequately represented the whole continent. Nor has there been a body of artists who could speak with one authoritative voice on the many questions that concern the right development of the Fine Arts of this country. It is hoped, then, as the Academy proceeds with its work, the Federal and State Governments, as well as the general public, will realize the value that such an institution can be to the community, not only as a group of artists representing various points of view in their work but also as an advisory body which works in the interests of government and people alike." Influence and demise The organisation failed to obtain a royal charter when opposed by the Contemporary Art Society and other modernist groups, so its last annual exhibition was in 1947, although its influence remained through former members who were assembling national collections, writing art criticism and teaching art, in particular through those who were instructors or administrators at Melbourne's National Gallery School, who held roles as curators, or who were critics for newspapers and magazines. William Nicholas Rowell was appointed drawing master at the National Gallery in 1941 and was acting head of its art school briefly in 1946. William Beckwith (Billy) McInnes was acting-director at the National Gallery of Victoria (1935) and an instructor in its art school, while The Age critic James Stuart (Jimmy) MacDonald supported Menzies and reviled George Bell, and Lionel Lindsay used his art criticism in the Melbourne Herald to spruik the organisation. Foundation members By June 1937 it was announced that forty-seven artists had accepted invitations to be foundation members. The initiators appear in a group photograph taken on the day of the Academy's founding, representing five states of the Commonwealth, but not Western Australia: New South Wales, Victoria, Queensland, South Australia (including Hans Heysen) and Tasmania (including John Eldershaw). In addition to the foundation members, others who showed in the annual exhibitions hosted by the Academy were William Wallace Anderson (exhibited in the 1939 and 1943 shows), Archibald Bertram Webb (1938), Frank Charles Medworth (1939), Joshua Smith (1938), Lyndon Raymond Dadswell (1938), Amalie Sara Colquhoun (1938), L. J. Harvey (1938), Isabel Mackenzie (1938), and Elma Roach (1938) among others. Max Meldrum joined Menzies' organisation but resigned before the Academy held its first exhibition, though he kept showing in early annual exhibitions. Frederick William (Fred) Leist was a foundation member but soon resigned. Rayner Hoff had died before the inaugural exhibition, as had Paul Montford.
Opposition For the Victorian Artists Society autumn exhibition, opened at its quarters in East Melbourne on 27 April 1937 by Menzies, the Society's new president (and foundation member of the Academy) James Quinn had included modernists whose works he had seen on his visits to their studios. When Menzies had finished his speech condemning modernity in painting as "doing all that great artists wouldn't have done," like making "a face look exactly like a cabbage, or a cabbage resemble a face," Quinn indignantly attacked Menzies, pointing out that Rembrandt himself was a rebel; "Instead of painting for buyers he painted to please himself as an artist and, accordingly, 'went broke'," he countered. The confrontation prompted letters from readers. When the Academy's exclusion of modernist art from its officially sanctioned exhibitions became clear, opposition to the Academy was led by George Bell, a spokesman for 'modern art'. His argument with Menzies was very public, pursued through the newspapers, and in The Australian Quarterly. The avant-garde Angry Penguins' first three issues, published in Adelaide, also reflected these bitter tensions in what C.P. Snow regarded as "the last flowering of a 'national' modernism that a completely internationalised world of the arts was likely to see". In July 1938 Bell issued a leaflet, To Art Lovers, which led to the formation of the Contemporary Art Society, of which he became founding president, with painter and writer Adrian Lawlor as secretary; Lawlor produced a book, Arquebus (1937), and a pamphlet, Eliminations (1939), detailing their opposition. Others who declared themselves against a conservative, outmoded 'Academy' were Isabel May Tweddle and Norman Macgeorge, while Rupert Bunny, Sydney Long and William Lister Lister publicly refused Menzies' invitation to join, and James Quinn was in conflict with Menzies over his open support for modern art. In contrast to the Academy's venue for its first show, in Sydney's Education Department gallery, the first CAS exhibition was held at the National Gallery of Victoria in 1939, where it presented young artists including Sidney Nolan, Albert Tucker, Joy Hester, Russell Drysdale, William Dobell, James Gleeson, Eric Thake, Peter Purves Smith, Noel Counihan and new arrivals from Europe, Yosl Bergner and Danila Vassilieff. William Frater switched allegiances after the first Academy exhibition and showed with the CAS. Exhibitions of the Australian Academy of Art First exhibition By the time of its first exhibition, held 8–29 April 1938 at the Education Department's Art Gallery, Loftus Street, Sydney, the catalogue lists more names: Robert Henderson (Bob) Croll (Academy general secretary), William Frater, and John Rowell. The catalogue also names as Patrons: Rt. Hon. R. G. Menzies, P.C., M.P., Alexander Melrose, LL.B., G. R. Nicholas, J. R. McGregor, Charles Lloyd Jones, Hon. John Lane Mullins, Howard Hinton, O.B.E.; and its officers: the President Sir John Longstaff (who held the office until 1941); Vice-President Sydney Ure Smith, O.B.E.; Exhibition Manager C. Parker; Secretary and Treasurer R. H. Croll; Assistant Secretary and Treasurer Vera Carruthers. For this first exhibition, a Selection Committee was formed comprising Sir John Longstaff, W. B. McInnes, Harold Herbert, Lionel Lindsay, Sydney Ure Smith, Norman Carter, William Rowell, Thea Proctor, Margaret Preston, and Douglas Dundas. Its Council had two 'divisions': Northern, whose members were Norman Carter, Lionel Lindsay, Elioth Gruner, Thea
Proctor and Sydney Ure Smith; and Southern, whose officers were Harold Herbert, W. B. McInnes, Hans Heysen, Sir John Longstaff and William Rowell. Second exhibition The second annual Academy exhibition was held 5 April-3 May 1939 at the National Gallery of Victoria in Swanston Street, Melbourne. The exhibitors, several of whom were not Academy members, were from all states except Western Australia. New South Wales artists represented by 4 works each were: Sydney Ure Smith O.B.E., Lloyd Rees, Adelaide E. Perry. With 3 works: Norman Carter, Grace Cossington-Smith, Elioth Gruner, Margaret Preston, Douglas Dundas, Adrian Feint. With 2 works: James R. Jackson, Frank Medworth, Enid Cambridge, E. A. Harvey, Ralph D. Shelley, Maud Sherwood, Lionel Lindsay, Thea Proctor, Lyndon R. Dadswell. And with 1 work: Hector Gilliland, Sydney Long A.R.E., Freda Robertshaw, Will Ashton R.O.I., Nora Heysen, Gordon Esling, Norman Carter, Harold Abbott, Eileen Vaughan, Unk White, G. T. Williamson, Dorothy Thornhill. Victorians with 4 works: H. Septimus Power, William Rowell, A. D. Colquhoun. With 3 works: Violet M. McInnes, John Rowell, James Quinn R.O.I., R.P., Harley Griffiths Jr., Harry B. Harrison, Harold B. Herbert, Dora L. Wilson. With 2 works: Dorothy Whitehead, W. Beckwith McInnes, W. D. Knox, Wm. A. Dargie, A. E. Newbury, Polly Hurry, Amalie Colquhoun, Arnold Shore, Norah Gurdon, William Spence, John S. Loxton, Alfred Coleman, John W. Elischer, Orlando Dutton, Raymond Ewers, Stanley J. Hammond, W. Leslie Bowles, Geo. H. Allen, Ernest Buckmaster, Aileen Dent. And with 1 work: Alexander Colquhoun, Edward Heffernan, William Frater, John Farmer, Norman B. Cathcart, Ethel Wardle, Max Meldrum, Lance J. Sullivan, Charles Hills, W. Prater, Geo. H. Allen, Wallace Anderson. South Australians with 3 works: Hans Heysen. With 2 works: Ivor Hele, F. Millward Grey. And with 1 work: George Whinnen, Max Ragless, T. H. Bone, John C. Goodchild, Gwen Barringer. Queenslanders with 4 works: Vida Lahey. With 3 works: Kenneth Macqueen. With 2 works: Noel Wood. And with 1 work: L. J. Harvey. Tasmanians with 3 works: John R. Eldershaw. And with 1 work each: Joseph Connor, Ethel M. Nicholls. Third exhibition The Academy's third exhibition was held, again at the Education Department gallery in Sydney, March–April 1940 during World War II. Arthur Murch, foundation member of Menzies' organisation, in his review in The Home, which included an illustration of Roy de Maistre's 1938 quasi-cubist Football Match, reported that the "Exhibition demonstrates the changing face of Australian Art," and that there was evidence of a French influence, and picked out as "names to remember: Eric Wilson, Jean Bellette, Frank Medworth, Muriel Medworth, M. B. Paxton, Desiderius Orban, Alison Rehfisch, George Duncan, Arthur Fleischmann, Nora Heysen, Paul Haefliger, Alice Danciger," and the sculptures of Orlando Dutton and Lyndon Dadswell, asking of the latter "You would not like to live with his 'Decorative Head'? No, nor vice-versa! but it could stand the competition of architectural surroundings or the irregularity of tree forms in open air. Does he see things like that? Certainly not. He has consciously produced a work in a decorative baroque manner." The Bulletin declared that "The most original thing in the show is William Dobell's Red Lady, a fantastic and not at all beautiful composition. Examples of the 'modern' style by Arnold Shore and essays in esoteric expressionism by Grace Cossington Smith, Roi de Maistre and M. B.
Paxton demonstrate the Academy's beautiful tolerance." Writing in the magazine Pertinent, Frank Rhodes Farmer found that the Academy show 'depressed' him, while he was 'transported' by the photography of the Miniature Camera Group at Blaxland Gallery, in which "appeared that same enthusiasm for life, for the new, fresh angle, as in Giotto, Chaucer, Shakespeare," asking: "Why then does the Australian Academy of Art lack this freshness, this new approach to life, this enthusiasm?" Fourth exhibition The Melbourne Athenaeum theatre was the venue for the fourth of the Academy's annual exhibitions, on which The Bulletin commented that of works inducing 'pleasant feelings,' only one belonged to a member of the A.A.A., but that "The true-blue three As. can't be said to have justified their claim to being a national institution. They are not Argonauts in search of the Golden Fleece, but more like a party on an ocean liner exchanging current gossip. The Old Guard weigh in with portraits in their accepted manner, and a disquietening feature is that the young portrait-painters, who are not A.A.As., appear to be trying to paint like the A.A.As." Fifth exhibition From 20–31 July 1943, the fifth annual exhibition of the Academy was held again at the Melbourne Athenaeum; it was opened by Menzies and featured war artists Adams, Dargie, Hele, Herbert, Hodgkinson, Murch and Norton. George Bell, reviewing it for the Melbourne Herald, remarked that "Although the catalogue says the show is restricted to the Southern Division, the walls are crowded — too crowded to show the pictures at their best. More stringent selection would have made a better show." While picking out paintings by Frater, Bryans, Ragless, Murch, Eldershaw, Watson, Whinnen, N. Heysen and Grant for favourable comment, Bell considered that "A number of well-known names are represented by works which, well enough painted though they be, call for no further comment than has been accorded many times. If the artist continually repeats himself there is no reason why the critic should follow suit." Sixth exhibition Again limited to artists from the Academy's southern division, as New South Wales and Queensland (the 'northern division') had decided not to exhibit for the duration of the war, the annual show held 11–22 July 1944 was again staged at the Melbourne Athenaeum. It was opened by Governor Winston Dugan, and Academy member Harold Herbert reviewed it in The Argus and conceded that, among a majority of landscapes, "There is a leaven of semi-modern or contemporary work which is not altogether lacking in interest - an admission hard to wring from this stone-hearted reviewer!" Also shown were recent acquisitions of works by Australian official war artists in Australia, the Pacific, and abroad, lent by the board of management of the Australian War Memorial. Herbert, also a war artist, considered that "the quality of some of the work, as painting, is open to question. They are vivid records, at all events." Seventh exhibition The seventh annual exhibition was held at the Athenaeum from 31 July – 11 August 1945 and again opened by the Governor of Victoria. At the height of the Pacific War it received little media attention.
Clive Turnbull's article in the Herald was headed 'Art Exhibition Is Not Outstanding,' with praise only for "a blood transfusion from a few non-members," and reacted to the 'remarkable' catalogue statement: "Recognition by the Federal Government of the Academy as the principal representative art body in Australia has been evidenced by an invitation to advise the Government on the appointment of war artists, on additions and alterations to be made to the War Memorial at Canberra, and on other cultural matters." "If this is so," he then asked, "it is an extraordinary and reactionary decision which ought to be annulled. An admirable advisory committee, however, could be made from artists who are not members of the Academy, according to its published list. It would include Rupert Bunny and George Bell in Melbourne and William Dobell and Russell Drysdale in Sydney. If the Academy has indeed been set up as a quasi-official advisory body it would be interesting to know what Minister made the decision, and why." Eighth exhibition Alan McCulloch welcomed the "smaller—and therefore better hung" eighth annual exhibition of the Australian Academy of Art, on 23 July 1946, and once again at the Athenaeum Gallery, Collins Street, Melbourne. Conductor Eugene Goossens officiated and encouraged attendees to purchase works "to take them home for refreshment of the soul." Only seven Academy members showed: Quinn, Power, Ragless, Rowell, Buckmaster, Dora Wilson, and Violet McInnes. McCulloch's review in The Argus concluded that: ...the business-like competence of academy members is considerably helped by some of the more modest, perhaps more inspired, invitees. Lina Bryan's rolling forms and lively colours attract attention. "Afternoon, Frankston," by Alan Moore, is a quietly poetic and charming work, and three small works, two low-toned lyrical pastels and a head study in pencil, by David Eager are quietly impressive. "Burke Road Bridge," by Annois, is outstanding in the water-colour section." Herald critic Clive Turnbull commented: "As now seems to be usual, outsiders show the best work — Charles Bush with two Koepang scenes, Alan Moore with a little beach scene, Lorna [sic] Bryans with a landscape. William Frater, strangely met in this company, livens up the ranks of the academics with a portrait and a couple of other works." The Academy's eighth annual exhibition was not its very last; in November that year a private viewing in Melbourne was arranged during the visit of the then Governor-General Prince Henry, Duke of Gloucester and the Duchess, herself an artist. From it, a loan of fifteen Academy works was hung at Government House, Yarralumla, in Canberra. The paintings the vice-regal couple selected were one by William Dargie, four by Will Rowell, three by Alfred Coleman, two by Violet McInnes and two by C. Dudley Wood, and others by W.B. McInnes, Ernest Buckmaster, and Gwen Barringer. Also that year the Australian Government commissioned three Australian artists, Academy member Colin Colahan, and war artists Stella Bowen and Lt. G. R. Mainwaring, to paint views of the Victory Parade for the Australian War Memorial Board. Legacy The controversy and confrontations between the modernist and antimodernist forces spilled into politics, as Herbert Vere (Doc) Evatt, largely at the prompting of his wife Mary, sole female trustee of the AGNSW, championed the modernists during his leadership (1951–1960) of the Labor opposition to Robert Menzies' Liberal Party.
As Sarah Scott argues, even after the collapse of the Academy, Menzies' views continued to affect Australia's modernist artists during his second term as prime minister from 1949. The 'conservative old guard' of which Menzies was a part continued its influence due to the government's monopoly on the selection of works for official overseas exhibitions. Twenty years after the disputes over the Academy, the conflict erupted again over which art should be Australia's first official representation at the 1958 Venice Biennale; the Commonwealth Arts Advisory Board sent outdated examples of the Heidelberg School and a few Arthur Boyd landscapes (and not the more radical Brides series he was then painting). A consequence of the ensuing critical rejection was that Australia refused an invitation to exhibit at the 1960 biennale and did not show in Venice again until 1978; the country was absent from the world's showcase of international art for twenty years. The ramifications for the nation's artists, and the cultural presentation of the nation through art, were profound, and deep divisions emerged between nationalist values represented by the heritage of the Heidelberg School and the internationalism of those aligned with European modernism. Gallery of works by founding members of the Academy 1930s-1940s in chronological order References Arts organizations established in 1937 1937 establishments in Australia 1947 disestablishments in Australia Australian art movements Australian art Censorship in Australia Conservatism in Australia Art and design organizations Academic art 1930s in art 1940s in art Modernism
Australian Academy of Art
Engineering
4,840
24,416,798
https://en.wikipedia.org/wiki/Syringoderma
Syringoderma is a genus in the family Syringodermataceae of the brown algae (class Phaeophyceae). The genus contains four species. References Further reading Brown algae Brown algae genera
Syringoderma
Biology
44
55,633,698
https://en.wikipedia.org/wiki/Legal%20status%20of%20psychoactive%20Amanita%20mushrooms
This is a list of the legality of psychoactive Amanita mushrooms by country. In addition to muscimol and ibotenic acid, some species of Amanita mushrooms, including Amanita muscaria and Amanita citrina, may contain bufotenine which is illegal in many countries and is not included on this list. References Drug control law Drug policy by country Amanita mushrooms Psychoactive Amanita mushrooms Psychoactive fungi
Legal status of psychoactive Amanita mushrooms
Chemistry
97
609,865
https://en.wikipedia.org/wiki/Tape%20head
A tape head is a type of transducer used in tape recorders to convert electrical signals to magnetic fluctuations and vice versa. They can also be used to read credit/debit/gift cards because the strip of magnetic tape on the back of a credit card stores data the same way that other magnetic tapes do. Cassettes, reel-to-reel tapes, 8-tracks, VHS tapes, and even floppy disks and early hard drive disks all use the same principle of physics to store and read back information. The medium is magnetized in a pattern. It then moves at a constant speed over an electromagnet. Since the moving tape is carrying a changing magnetic field with it, it induces a varying voltage across the head. That voltage can then be amplified and connected to speakers in the case of audio, or measured and sorted into ones and zeroes in the case of digital data. Principles of operation The electromagnetic arrangement of a tape head is generally similar for all types, though the physical design varies considerably depending on the application - for example videocassette recorders (VCRs) use rotating heads which implement a helical scan, whereas most audio recorders have fixed heads. A head consists of a core of magnetic material arranged into a doughnut shape or toroid, into which a very narrow gap has been let. This gap is filled with a diamagnetic material, such as gold. This forces the magnetic flux out of the gap into the magnetic tape medium more than air would, and also forces the magnetic flux out of the magnetic tape medium into the gap. The flux thus magnetises the tape or induces current in the coil at that point. A coil of wire wrapped around the core opposite the gap interfaces to the electrical side of the apparatus. The basic head design is fully reversible - a variable magnetic field at the gap will induce an electric current in the coil, and an electric current in the coil will induce a magnetic field at the gap. Reversibility While a head is reversible in principle, and very often in practice, there are desirable characteristics that differ between the playback and recording phases. One of these is the impedance of the coil - playback preferring a high impedance, and recording a low one. In the very best tape recorders, separate heads are used to avoid compromising these desirable characteristics. Having separate heads for recording and playback has other advantages, such as off-tape monitoring during recording, etc. Head gap width The width of the head gap is also critical - the narrower the gap, the better the head will be - a narrow gap gives much better transcription in the magnetic domain (which equates to more output with high-frequency signals in the case of playback heads). The desirability of a narrow gap means that most practical heads are made by forming a narrow V-shaped groove in the back face of the core, and grinding away the front face until the V-groove is just breached. In this way, gaps of the order of micrometres are achievable. A record head, on the other hand, has a gap typically six times larger than that of the replay head; this gives a larger flux to magnetise the tape. The ideal gap sizes in a cassette deck are a wide record-head gap and a narrow playback-head gap. The larger gap does not affect frequency response because the 'image' is largely made by the trailing edge of the gap. A combined record/replay head has a compromise gap size, typically three times that of a replay-only head. There are also negative aspects of narrow head gaps, particularly for magnetic recording.
The narrower the head gap, the more bias signal must be used to maintain linearity of the signal on tape, which in turn will reduce the high-frequency headroom or SOL (Saturated Output Level), particularly with slower tape speeds. Manufacturers must find a compromise between intended tape speeds and head gaps for this reason. Types The physical design of a head depends on whether it is fixed or rotating. In either case, the face of the head where the gap is must be made hard-wearing and highly smooth to avoid excessive head wear. It can also be seen that due to the construction method of the head gap, head wear will tend to widen the gap, reducing the head's performance over time. The vertical alignment of the heads (the azimuth) must also match between recording and playback for good fidelity, and the gap should be as close to exactly vertical as possible for highest frequency response. Most tape transport mechanisms will allow fine mechanical adjustment of the azimuth of the heads. Sometimes this can be achieved by automatic circuitry - the actual mechanical azimuth adjustment being carried out by taking advantage of the piezoelectric effect of certain types of crystal material. Rotating heads Rotating play heads, as used in video recorders, digital audio tape and other applications, are used to achieve a high relative head/tape speed while maintaining a low overall tape transport speed. One or more transducers are mounted on a rotating drum set at an angle to the tape. The drum spins rapidly compared with the speed at which the tape moves past it, so that the transducers describe a path of stripes across the tape, rather than linearly along it as a fixed head does. The wear characteristics of such helical scan heads are even more critical, and highly polished heads and tapes are required. The electrical signals of rotating heads are coupled either inductively or capacitively - there is no direct connection to the head coils. Erase heads An erase head is constructed in a similar manner to a record or replay head, but has a much larger gap, or more frequently, two large gaps. The erase head is powered during recording from a high-frequency source (usually the same oscillator that provides the AC bias). In some inexpensive cassette recorder designs, the erase head is a permanent magnet that is mechanically moved into contact with the moving tape only during recording. Permanent magnet erase heads are also sometimes used in machines that are equipped with DC bias. Cross-field heads Instead of feeding both the bias signal and the audio signal into the same recording head, a few brands of audio tape recorder, notably Tandberg, Akai and its US cousin Roberts, used a separate bias head on the opposite side of the tape from the recording head; this system was termed cross-field. Head materials Record and replay heads are traditionally made of soft iron (the magnetic softness is an essential requisite for good record and replay characteristics). This material features extremely good electro-acoustical properties, but wears away fairly rapidly with a consequent deterioration of performance. Some higher-end recorders featured heads made from ferrite, which features excellent electro-acoustical properties while being a very hard material which resists wear. Its two main disadvantages are that it is brittle and easily damaged, and that it has a much higher noise output due to the Barkhausen effect.
In more recent years, more exotic materials have appeared, some involving ceramics, which offer the best of both of the traditional materials. Cleaning With use, the head becomes contaminated with loose oxide shed by the tape, which distorts the sound. The tape head can be cleaned using a cloth moistened with alcohol. Video head cleaner can be used to clean video, audio, erase, or control track heads. Gallery See also Recording head Magnetic tape sound recording References Magnetic devices Audio storage Tape recording
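The effect of gap width on high-frequency response can be illustrated with the classical gap-loss relation, in which playback output falls as the recorded wavelength (tape speed divided by frequency) approaches the gap length. The following Python sketch is illustrative only: the gap widths, tape speed and frequencies are hypothetical example values, not figures taken from the article.

```python
import math

def gap_loss_db(gap_m: float, tape_speed_m_s: float, freq_hz: float) -> float:
    """Classical playback gap loss: response scales as sin(pi*g/lambda) / (pi*g/lambda),
    where lambda = tape speed / frequency is the recorded wavelength on the tape."""
    wavelength = tape_speed_m_s / freq_hz
    x = math.pi * gap_m / wavelength
    if x == 0.0:
        return 0.0
    response = abs(math.sin(x) / x)          # fraction of the long-wavelength output
    return 20.0 * math.log10(response) if response > 0.0 else float("-inf")

# Hypothetical example: a 2 micrometre replay gap versus a 6 micrometre gap
# at compact cassette tape speed (4.76 cm/s).
for gap_um in (2, 6):
    for freq in (1_000, 10_000, 15_000):
        print(f"gap {gap_um} um, {freq:6d} Hz: {gap_loss_db(gap_um * 1e-6, 0.0476, freq):6.1f} dB")
```

With these example numbers the narrow gap loses only a few decibels at 15 kHz while the wide gap loses most of its output, which matches the behaviour the section above describes for replay heads.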
Tape head
Technology
1,502
11,556,158
https://en.wikipedia.org/wiki/Concrete%20landscape%20curbing
Concrete landscape curbing (or concrete landscape bordering) is an alternative to plastic or metal landscape edging. Landscape curbing is made with various concrete mixes depending on the climate where it is being used. Concrete landscape curbing has become more popular over the last decade with suppliers offering a variety of styling options. Concrete landscape curbing, or 'stamped concrete edging', has been installed in every climate in the United States and throughout the world. It is usually installed using specialized equipment that is expensive and requires a skilled and experienced operator. The equipment most often utilized in landscape curbing is based on a design that originated in Australia around the 1970s. Due to the need for professional installation, concrete landscape curbing is usually utilized as a complete system to create a permanent border. Concrete landscape curbing can be used to highlight and emphasize a flowerbed or other landscaping area. Various colors and styles are available, and the final look achieved will vary from installer to installer based on their level of training and experience. A lawn mower wheel can be run on the curbing, which helps eliminate the need for edging where a curb is installed. Because of its weight and depth in the ground, the concrete landscape curbing border acts as a root barrier, looks more elegant, and will last for years. Paving Concrete curbing can contain decomposed granite, pavers, brick, mulch, and other pavement and walking surfaces for paths, walkways, driveways, and other outdoor circulation. References Concrete Lawn care Garden features Masonry
Concrete landscape curbing
Engineering
310
23,277,743
https://en.wikipedia.org/wiki/Zion%20%E2%80%93%20Mount%20Carmel%20Highway
The Zion – Mount Carmel Highway is a long road in Washington and Kane counties in southern Utah, United States, that is listed on the National Register of Historic Places and is a National Historic Civil Engineering Landmark. Description The highway consists of the eastern half of Utah State Route 9. It begins northeast of Springdale and runs east into Zion National Park, where it passes through the long Zion-Mount Carmel Tunnel. After exiting the park, the highway continues east to U.S. Route 89 at Mount Carmel Junction. The road became part of a loop tour of Zion, Bryce Canyon National Park, Cedar Breaks National Monument, and the North Rim of Grand Canyon National Park. Design and construction The route was surveyed in 1923 by B.J. Finch, district engineer of the US Bureau of Public Roads, Howard C. Means, a Utah state engineer, and John Winder, a local rancher. The National Park Service evaluated alternative routes, including one that used Parunuweap Canyon (following the East Fork Virgin River), but settled on the Pine Creek route, which required a tunnel through the Great Arch. Detailed design work on the road was carried out by the Bureau of Public Roads. Details including bridges, retaining walls, culverts, and other features were designed by the National Park Service Branch of Plans and Design under the supervision of Thomas Chalmers Vint. Work began in 1927 on a total of of road, which was completed in 1930. The highway features a tunnel that follows the profile of the Pine Creek Canyon wall at a consistent distance of from the outside face of the rock to the centerline of the tunnel. The west portal is framed by a masonry facade of cut sandstone, while the east portal is a naturalistically formed hole in the rock, entered directly from a bridge. Construction proceeded using mining techniques rather than traditional tunneling techniques, starting from a stope and working outward to the portals. The tunnel uses galleries to provide light and ventilation through the canyon wall to the outside air. The galleries also provided a place to dispose of rock generated during construction, which was dumped through the galleries into the canyon. Parking spaces were originally provided at the galleries, but were discontinued due to safety concerns. Some galleries have been repaired and partially closed with concrete due to damage from rockslides. The interior of the tunnel is rock-faced, with concrete reinforcement at selected locations. Work on the tunnel was started in 1927 by the Nevada Construction Company and was completed in 1930 at a cost of $503,000 (equivalent to $ million in ). At the time of its completion it was the longest non-urban road tunnel in the United States. The tunnel's restricted dimensions require that vehicles over in height or in width give advance notice so that two-way traffic can be shut down in the tunnel, allowing oversize vehicles to proceed down the center of the tunnel. Vehicles over tall and semi-trailers as well as bicycles and pedestrians are prohibited in the tunnel. Other significant structures include the Pine Creek and Virgin River Bridges and a second, short tunnel through a rock spur east of the main tunnel. The Zion – Mount Carmel Highway was added to the National Register of Historic Places on July 7, 1987. The Zion Mt. Carmel Tunnel and Highway was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2011. 
In popular culture Portions of the 1977 horror movie The Car were filmed along the highway, including the Zion - Mount Carmel Tunnel, through which the film's sinister, driverless matte-black car travels. See also East Entrance Sign (Zion National Park) Floor of the Valley Road List of bridges documented by the Historic American Engineering Record in Utah List of tunnels documented by the Historic American Engineering Record in Utah National Register of Historic Places listings in Kane County, Utah National Register of Historic Places listings in Zion National Park References External links (video of vehicle passing through Zion Tunnel) Historic American Engineering Record (HAER) documentation, filed under Springdale, Washington County, UT: Parkitecture in the Western Parks: Transportation Systems National Park Service Transportation buildings and structures on the National Register of Historic Places in Utah Historic American Engineering Record in Utah Roads on the National Register of Historic Places in Utah Transportation in Washington County, Utah Tunnels completed in 1930 Tunnels in Utah Road tunnels on the National Register of Historic Places Road tunnels in the United States National Register of Historic Places in Washington County, Utah National Register of Historic Places in Kane County, Utah National Register of Historic Places in Zion National Park 1930 establishments in Utah National Park Service rustic in Utah Historic Civil Engineering Landmarks
Zion – Mount Carmel Highway
Engineering
916
74,735,829
https://en.wikipedia.org/wiki/List%20of%20weights
A weight (also known as a mass) is an object, normally with high density, whose chief task is to have mass and exert weight (through gravity). It is used for different purposes, such as in: Anchor Balance weight Ballast Bob Counterweight Fishing sinker Paperweight Plumb bob Tuned mass damper Weight training equipment
List of weights
Physics
69
22,649,319
https://en.wikipedia.org/wiki/Hamming%20scheme
The Hamming scheme, named after Richard Hamming, is also known as the hyper-cubic association scheme, and it is the most important example for coding theory. In this scheme $X = \mathbb{F}_2^n$, the set of binary vectors of length $n$, and two vectors $x, y \in \mathbb{F}_2^n$ are $i$-th associates if they are Hamming distance $i$ apart. Recall that an association scheme is visualized as a complete graph with labeled edges. The graph has $|X|$ vertices, one for each point of $X$, and the edge joining vertices $x$ and $y$ is labeled $i$ if $x$ and $y$ are $i$-th associates. Each edge has a unique label, and the number of triangles with a fixed base labeled $k$ having the other edges labeled $i$ and $j$ is a constant $c_{ijk}$, depending on $i$, $j$ and $k$ but not on the choice of the base. In particular, each vertex is incident with exactly $c_{ii0}$ edges labeled $i$; $v_i = c_{ii0}$ is the valency of the relation $R_i$. The $c_{ijk}$ in a Hamming scheme are given by $$c_{ijk} = \binom{n-k}{\tfrac{i+j-k}{2}} \binom{k}{\tfrac{i-j+k}{2}},$$ with the convention that a binomial coefficient is zero unless its lower entry is an integer between $0$ and its upper entry. Here, $v = |X| = 2^n$ and $v_i = \binom{n}{i}$. The matrices in the Bose-Mesner algebra are $2^n \times 2^n$ matrices, with rows and columns labeled by the vectors $x \in \mathbb{F}_2^n$. In particular, the $(x, y)$-th entry of the $k$-th associate matrix $A_k$ is $1$ if and only if the Hamming distance $d(x, y) = k$. References Coding theory
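The constancy of the triangle counts described above can be checked by brute force for small n. The sketch below (Python, with helper names chosen for this example) enumerates all binary vectors of length 4 and verifies that the number of vectors z at distance i from x and distance j from y depends only on i, j and the distance k between x and y, in agreement with the closed form for c_{ijk}.

```python
from itertools import product

def hamming(x, y):
    """Hamming distance between two binary tuples."""
    return sum(a != b for a, b in zip(x, y))

def intersection_numbers(n):
    """Brute-force check that the triangle count c_{ijk} is independent of the base pair."""
    points = list(product((0, 1), repeat=n))
    table = {}
    for x in points:
        for y in points:
            k = hamming(x, y)
            counts = {}
            for z in points:
                key = (hamming(x, z), hamming(y, z))
                counts[key] = counts.get(key, 0) + 1
            for (i, j), c in counts.items():
                # The count must be the same for every base pair at distance k.
                assert table.setdefault((i, j, k), c) == c
    return table

nums = intersection_numbers(4)
print(nums[(1, 1, 2)])   # c_{112} = 2 for n = 4
print(nums[(2, 2, 0)])   # c_{220} = 6, the valency v_2 = binomial(4, 2)
```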
Hamming scheme
Mathematics
203
32,312,231
https://en.wikipedia.org/wiki/Fire%20staff
A fire staff is a staff constructed out of wood or metal with Kevlar wick added to one or both ends. Fire staffs are used for fire performance. Manipulation There are two predominant styles for manipulating a fire staff: rotational and contact. In rotational fire staff manipulation, the performer's hands are used to manipulate the motion and rotation of the staff. Contact fire staff is a technique whereby the performer rolls the staff over parts of the arms, legs and body. Both techniques can be used in a performance. Another technique is staff juggling, in which three staffs are thrown and caught. Construction Fire staffs can vary in length, weight, balance, and wick arrangements. A staff can range anywhere from a half-metre in length to two or more metres. Fire staffs contrast from fire knives in that their centre of balance rests in the middle of their length. The most common wick arrangement for a fire staff is two wicks of equal size and thickness on either end of the staff. However, multiple wicks may be placed on the staff, and may even be placed "out of balance", displacing the point of equilibrium. A "fire staff" with wicks on only one end of the staff, is not a fire staff – it is either a fire spear or fire spade, both of which employ vastly different movement styles than fire staff. Dragonstaff One of the more extreme wick arrangements for fire staff is the Dragonstaff. A cross of three, four or more wicks on spokes is added to the ends, which gives the staff more rotational inertia. The dragon staff is able to create incredibly intricate patterns of fire. A Dragonstaff is similar to a contact staff in that it rolls over a performer's body and isn't held, but requires a different set of skills to manipulate because of the larger ends and the extra momentum generated by the rotation or rolling of the staff. Instruction Fire staff technique is taught around the world at fire dance festivals, workshops and retreats. Instructional DVDs and online videos are also available. Burnoff A burnoff is a technique performed by a fire staff performer and aims to create two fireballs when excess fuel from the wicks of the staff spray outward from the spinning staff. A number of variations of the usual burnoff are possible, including the 'circular burnoff' and the 'helix burnoff'. References External links Instructional Fire Staff DVD (2012) Fire staff basics Basic fire staff construction Object manipulation Fire arts Stick and staff weapons
Fire staff
Biology
513
33,020,981
https://en.wikipedia.org/wiki/Eris%20%28simulation%29
Eris is a computer simulation of the Milky Way galaxy's physics. It was done by astrophysicists from the Institute for Theoretical Physics at the University of Zurich, Switzerland and University of California, Santa Cruz. The simulation project was undertaken at the NASA Advanced Supercomputer Division's Pleiades and the Swiss National Supercomputing Centre for nearly eight months, which would have otherwise taken 570 years in a personal computer. The Eris simulation is the first successful detailed simulation of a Milky Way like galaxy. The results of the simulation were announced in August 2011. Background Simulation projects intending to simulate spiral galaxies have been undertaken for the past 20 years. All of these projects had failed as the simulation results showed central bulges which are huge compared to the disk size. Simulation The simulation was undertaken using supercomputers which include the Pleiades supercomputer, the Swiss National Supercomputing Centre and the supercomputers at the University of California, Santa Cruz. The simulation used 1.4 million processor-hours of the Pleiades supercomputer. It is based on the theory that in the early universe, cold and slow moving dark matter particles clumped together. These dark matter clumps then formed the "scaffolding" around galaxies and galactic clusters. The motions of more than 60 million particles which represented dark matter and galactic gas were simulated for a period of 13 billion years. The software platform Gasoline was used for the simulation. Simulation results The Eris simulation is the first successful simulation to have resolved the high-density gas clouds where stars formed. The simulation result consisted of a galaxy which is very similar to the Milky Way galaxy. Some of the parameters which were similar to Milky Way are stellar content, gas content, kinematic decomposition, brightness profile and the bulge-to-disk ratio. References External links Institute of Theoretical Physics, University of Zurich Cosmological simulation
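Eris itself was run with the Gasoline smoothed-particle hydrodynamics code on supercomputers; the toy Python sketch below is not that code, only a minimal direct-summation N-body integrator that illustrates the kind of particle update such simulations repeat over many timesteps. All particle counts, masses and units here are made up for the example.

```python
import numpy as np

def leapfrog_step(pos, vel, mass, dt, softening=0.05, G=1.0):
    """One kick-drift-kick step of a toy direct-summation gravitational N-body model.
    Real galaxy simulations (tree/SPH codes such as Gasoline) use far more particles
    and far cleverer force solvers; this only illustrates the basic particle update."""
    def accel(p):
        d = p[None, :, :] - p[:, None, :]          # pairwise displacement vectors
        r2 = (d ** 2).sum(-1) + softening ** 2     # softened squared distances
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)              # no self-force
        return G * (d * inv_r3[..., None] * mass[None, :, None]).sum(axis=1)

    vel = vel + 0.5 * dt * accel(pos)   # kick
    pos = pos + dt * vel                # drift
    vel = vel + 0.5 * dt * accel(pos)   # kick
    return pos, vel

# Toy example: 1,000 equal-mass particles in arbitrary units (nothing like Eris's 60 million).
rng = np.random.default_rng(0)
pos = rng.normal(size=(1000, 3))
vel = np.zeros((1000, 3))
mass = np.full(1000, 1.0 / 1000)
for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, mass, dt=0.01)
print(pos.shape)
```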
Eris (simulation)
Physics
391
7,768,546
https://en.wikipedia.org/wiki/Death-inducing%20signaling%20complex
The death-inducing signaling complex (DISC) is a multi-protein complex formed by members of the death receptor family of apoptosis-inducing cellular receptors. A typical example is FasR, which forms the DISC upon trimerization as a result of its ligand (FasL) binding. The DISC is composed of the death receptor, FADD, and caspase 8. It transduces a downstream signal cascade resulting in apoptosis. Description The Fas ligands, or cytotoxicity-dependent APO-1-associated proteins, physically associate with APO-1 (also known as the Fas receptor, or CD95), a member of the tumor necrosis factor receptor family containing a functional death domain. This association leads to the formation of the DISC, thereby inducing apoptosis. The entire process is initiated when the cell registers the presence of CD95L, the cognate ligand for APO-1. Upon binding, the CAP proteins and procaspase-8 (composed of FLICE, MACH, and Mch5) bind to CD95 through death domain and death effector domain interactions. Procaspase-8 activation is thought to occur through a dimerization process with other procaspase-8 molecules, known as an induced proximity model. Forming complex The CAP proteins associate only with the oligomerized version of APO-1 when forming the complex. The CAP1 and CAP2 proteins are also known as FADD/MORT1, an adaptor molecule with a death domain. CAP4 is also called FLICE, a cysteine protease with two death effector domains. CAP3 is the prodomain of FLICE generated during proteolytic activation. Once the DISC assembles, it allows APO-1 signaling to occur, which triggers cell death. In order to do this, downstream targets such as FLICE must be activated. In its inactive state, FLICE's two death effector domains are thought to bind together and prevent its activation. Once APO-1 aggregates within the cytosol, it recruits FADD, CAP3, and FLICE to the receptor, where FLICE is modified into several active subunits, which have the ability to cleave a variety of substrates. This proteolytic activity then results in a cascade of caspase activation, and ultimately cell death. This apoptotic activity is critical for tissue homeostasis and immune function. Inhibiting factors APO-1-mediated apoptosis can be inhibited by a variety of factors, including the viral caspase inhibitors CrmA and p35, as well as viral FLICE-inhibitory proteins known as v-FLIPs. When in the presence of APO-1, v-FLIPs preferentially bind and prevent procaspase-8 from being recruited; as such, apoptosis is stalled. Humans have a homolog for v-FLIP known as c-FLIP, which occurs in two endogenous forms (c-FLIPL (long) and c-FLIPS (short)). These are similar in structure to procaspase-8, but lack the amino acids necessary for caspase-8 catalytic activity. It is thought that c-FLIP may be involved in modulating the immune system, as c-FLIPS is upregulated upon stimulation of the T cell receptor. Furthermore, as high expression of FLIP is known to promote tumor growth, these inhibitor molecules play a role in cancer proliferation. The DISC has been implicated as a possible drug development target for various cancers, including leukemia, glioma, and colon cancer. In glioma cells, the effects of TRAIL (tumor necrosis factor-related apoptosis-inducing ligand) have been shown to induce DISC-mediated apoptosis. Specifically, TRAIL works by activating two death receptors, DR4 and DR5; these bind to FADD, which then interacts with caspase-8 to assemble the DISC. Tumor cells show varying sensitivity to TRAIL-modulated apoptosis, depending on the presence of the antiapoptotic FLIP proteins.
Additionally, studies in leukemia have indicated that the histone deacetylase inhibitor LAQ824 increases apoptosis by decreasing the expression levels of the c-FLIPs. As such, these inhibitors are promising targets for anti-cancer therapy. References External links Programmed cell death Apoptosis
Death-inducing signaling complex
Chemistry,Biology
912
53,115,925
https://en.wikipedia.org/wiki/Microflotation
Microflotation is a further development of standard dissolved air flotation (DAF). Microflotation is a water treatment technology operating with microbubbles of 10–80 μm in size instead of the 80-300 μm used by conventional DAF units. The general operating method of microflotation is similar to standard recycled stream DAF units. The advancements of microflotation are lower pressure operation, smaller footprints and less energy consumption. Process description The method of microflotation is comparable to recycled stream DAF. A portion of the clarified effluent water leaving the microflotation tank is pumped into a small pressure vessel into which compressed air is also introduced. This results in saturating the pressurized effluent water with air. The air-saturated water stream is recycled to the front of the microflotation cell and flows through a pressure release valve just as it enters the front of the float tank, which results in the air being released in the form of tiny bubbles. Bubbles form at nucleation sites on the surface of the suspended particles, adhering to the particles. As more bubbles form, the lift from the bubbles eventually overcomes the force of gravity. This causes the suspended matter to float to the surface where it forms a froth layer which is then removed by a skimmer. The froth-free water exits the float tank as the clarified effluent from the microflotation unit. A particular circular DAF system is called "zero speed", as it allows quiescent water conditions and therefore the highest performance; a typical example is an Easyfloat 2K DAF system. Advantages Microflotation is an enhanced method of floating particles to the surface with the aid of adherent air bubbles. The adherence of suspended solids to bubbles is easier and more intensive the smaller the bubbles are. Because of the improved adherence capacity of small microbubbles, the saturation of the introduced air as well as the improved capture of particles lead to better suspended solids reduction, a higher solids content in the float sludge and a more stable float sludge on the surface of the microflotation cell. A distinction has to be made from dispersed flotation, used in the mining industry for mineral segregation processes, where the bubbles are bigger, being 500-2000 μm in size, and the volume of air is many times the volume of the water. Traditional dissolved air flotation (DAF) mainly operates with bubble sizes ranging from 80 to 300 μm with a very inhomogeneous bubble size distribution. A major difference between low pressure dissolved air flotation and other flotation processes lies in the bubble volumes, the amount of air and the rise speeds. One macro bubble can be 1000 times bigger in volume than one micro bubble, and, vice versa, the number of micro bubbles can be 1000-fold that of macro bubbles for the same total volume. Microflotation produces bubbles of 40-70 μm in size with rise rates of 3–10 m/h. The rise rate is slow enough not to destroy the fragile flocs - agglomerations of particles with weak mutual bonding - and high enough to allow time for separation of the agglomerates. With the attachment of particles to bubbles the size of the floc-bubble aggregate grows, and its rise velocity grows simultaneously. The separation rate is accelerated, leading to residence times for combined chemical precipitation and flotation of 10 to 60 minutes, reducing the footprint of treatment plants and the cost of the treatment process.
A distribution of bubble sizes between 20 and 50 microns is the necessary requirement for an optimum flotation result. Even a small number of bubbles with diameters above 100 microns can disable a flotation separation process, because larger bubbles rise more quickly and cause turbulence, which severely disrupts air-floc agglomerates that have already formed. Applications Microflotation is technically appropriate, and primarily economical, as a substitute for classic technology such as sand filtration and sedimentation. Beyond that, there are several applications in which low pressure microflotation is an alternative to membrane technology or represents a convincing addition to it. Microflotation can be used for: Non-chemical/chemical industrial pre-treatment (COD, BOD, F.O.G. and TSS reduction, heavy metal and color removal) Primary treatment Tertiary treatment Replacement or protection of filtration units Sludge thickening Protection and performance improvement of MBR units, aerobic and anaerobic biological stages References Flotation processes Water treatment Waste treatment technology
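The quoted rise rates of a few metres per hour for 40-70 μm bubbles follow roughly from Stokes' law for a small sphere rising in water. The sketch below assumes nominal room-temperature water properties (density about 998 kg/m3, viscosity about 1.0 mPa·s); it is an order-of-magnitude illustration, not part of the referenced material.

```python
def stokes_rise_velocity(d_bubble_m, rho_liquid=998.0, rho_gas=1.2, mu=1.0e-3, g=9.81):
    """Terminal rise velocity (m/s) of a small spherical bubble from Stokes' law:
    v = g * d^2 * (rho_liquid - rho_gas) / (18 * mu). Valid only at low Reynolds number."""
    return g * d_bubble_m ** 2 * (rho_liquid - rho_gas) / (18.0 * mu)

# Illustrative diameters: microflotation-sized (40-70 um) versus a conventional DAF bubble (150 um).
for d_um in (40, 50, 70, 150):
    v = stokes_rise_velocity(d_um * 1e-6)
    print(f"{d_um:4d} um bubble: {v * 3600:5.1f} m/h")
```

With these assumptions a 50 μm bubble rises at roughly 5 m/h, within the 3-10 m/h range given above, while a 150 μm bubble typical of conventional DAF rises roughly an order of magnitude faster.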
Microflotation
Chemistry,Engineering,Environmental_science
931
38,556,220
https://en.wikipedia.org/wiki/Q%20Scorpii
Q Scorpii, also designated as HD 159433, is an astrometric binary (100% chance) located in the southern zodiac constellation Scorpius. It has an apparent magnitude of 4.27, making it readily visible to the naked eye under ideal conditions. It lies in the tail of Scorpius, between the stars λ Scorpii and μ Scorpii and is located away from the faint globular cluster Tonantzintla 2. Based on parallax measurements from Gaia DR3, the system is estimated to be 158 light years distant, but is approaching the Solar System with a heliocentric radial velocity of . The visible component is a red giant with a stellar classification of K0 IIIb. The IIIb luminosity class indicates that it is a lower luminosity giant star. Q Scorpii is a red clump star located on the cool end of the horizontal branch, fusing helium at its core. It has 110% the mass of the Sun but has expanded to 12.4 times its girth. It radiates 62 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it an orange hue. Q Scorpii has an iron abundance half of the Sun's, making it metal deficient. Like most giant stars, it spins slowly, having a projected rotational velocity lower than . References K-type giants Horizontal-branch stars Scorpius Scorpii, Q CD-38 12044 159433 086170 6546 Scorpii, 159
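The radius and luminosity quoted above are linked by the Stefan-Boltzmann relation, under which luminosity scales as the square of the radius times the fourth power of the effective temperature. In the sketch below the temperature is a placeholder chosen so that a 12.4 solar-radius star reproduces roughly the quoted 62 solar luminosities; it is not a published value for Q Scorpii.

```python
T_SUN = 5772.0  # K, nominal solar effective temperature

def luminosity_solar(radius_rsun: float, teff_k: float) -> float:
    """Luminosity in solar units from the Stefan-Boltzmann law: L scales as R^2 * T^4."""
    return radius_rsun ** 2 * (teff_k / T_SUN) ** 4

# Placeholder effective temperature (about 4,600 K, typical of an early K-type giant).
print(round(luminosity_solar(12.4, 4600.0), 1))   # roughly 62 solar luminosities
```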
Q Scorpii
Astronomy
333
44,356,103
https://en.wikipedia.org/wiki/Founterior
Founterior is an American-based online interior design magazine that covers the field of design. The four major subjects of the magazine are interior design, furniture, decorations, and architecture. It was established in December 2012 and is updated on a daily basis. The founders of the magazine are Martin Patzekov and Cvetelina Todorova. History The first issue was presented as an interior design magazine and was focused only on interiors. The magazine is located in New York City, but it is not limited only to American readers. Recognition In a recent chart of interior design and architecture magazines, Founterior was listed as a source of information. Also, a Los Angeles-based magazine credited Founterior for its research on school architecture. Editors Martin Patzekov (2012–2013) Max Titch (2013) Gas Tontch (2013–2014) References External links Architecture magazines Lifestyle magazines published in the United States Design magazines Magazines established in 2012 Magazines published in New York City Online magazines published in the United States
Founterior
Engineering
206
1,515,853
https://en.wikipedia.org/wiki/Food%20engineering
Food engineering is a scientific, academic, and professional field that interprets and applies principles of engineering, science, and mathematics to food manufacturing and operations, including the processing, production, handling, storage, conservation, control, packaging and distribution of food products. Given its reliance on food science and broader engineering disciplines such as electrical, mechanical, civil, chemical, industrial and agricultural engineering, food engineering is considered a multidisciplinary field. Due to the complex nature of food materials, food engineering also combines the study of more specific chemical and physical concepts such as biochemistry, microbiology, food chemistry, thermodynamics, transport phenomena, rheology, and heat transfer. Food engineers apply this knowledge to the cost-effective design, production, and commercialization of sustainable, safe, nutritious, healthy, appealing, affordable and high-quality ingredients and foods, as well as to the development of food systems, machinery, and instrumentation. History Although food engineering is a relatively recent and evolving field of study, it is based on long-established concepts and activities. The traditional focus of food engineering was preservation, which involved stabilizing and sterilizing foods, preventing spoilage, and preserving nutrients in food for prolonged periods of time. More specific traditional activities include food dehydration and concentration, protective packaging, canning and freeze-drying. The development of food technologies was greatly influenced and driven by wars and long voyages, including space missions, where long-lasting and nutritious foods were essential for survival. Other ancient activities include milling, storage, and fermentation processes. Although several traditional activities remain of concern and form the basis of today’s technologies and innovations, the focus of food engineering has recently shifted to food quality, safety, taste, health and sustainability. Application and practices The following are some of the applications and practices used in food engineering to produce safe, healthy, tasty, and sustainable food: Refrigeration and freezing The main objective of food refrigeration and/or freezing is to preserve the quality and safety of food materials. Refrigeration and freezing contribute to the preservation of perishable foods, and to the conservation of some food quality factors such as visual appearance, texture, taste, flavor and nutritional content. Freezing food slows the growth of bacteria that could potentially harm consumers. Evaporation Evaporation is used to pre-concentrate, increase the solid content, change the color, and reduce the water content of food and liquid products. This process is mostly seen when processing milk, starch derivatives, coffee, fruit juices, vegetable pastes and concentrates, seasonings, sauces, sugar, and edible oil. Evaporation is also used in food dehydration processes. The purpose of dehydration is to prevent the growth of molds in food, which only grow when moisture is present. This process can be applied to vegetables, fruits, meats, and fish, for example. Packaging Food packaging technologies are used to extend the shelf-life of products, to stabilize food (preserve taste, appearance, and quality), and to keep the food clean, protected, and appealing to the consumer.
This can be achieved, for example, by packaging food in cans and jars. Because food production creates large amounts of waste, many companies are transitioning to eco-friendly packaging to preserve the environment and attract the attention of environmentally conscious consumers. Some types of environmentally friendly packaging include plastics made from corn or potato, bio-compostable plastic and paper products which disintegrate, and recycled content. Even though transitioning to eco-friendly packaging has positive effects on the environment, many companies are finding other benefits such as reducing excess packaging material, helping to attract and retain customers, and showing that companies care about the environment. Energy for food processing To increase sustainability of food processing there is a need for energy efficiency and waste heat recovery. The replacement of conventional energy-intensive food processes with new technologies like thermodynamic cycles and non-thermal heating processes provide another potential to reduce energy consumption, reduce production costs, and improve the sustainability in food production. Heat transfer in food processing Heat transfer is important in the processing of almost every commercialized food product and is important to preserve the hygienic, nutritional and sensory qualities of food. Heat transfer methods include induction, convection, and radiation. These methods are used to create variations in the physical properties of food when freezing, baking, or deep frying products, and also when applying ohmic heating or infrared radiation to food. These tools allow food engineers to innovate in the creation and transformation of food products. Food Safety Management Systems (FSMS) A Food Safety Management System (FSMS) is "a systematic approach to controlling food safety hazards within a business in order to ensure that the food product is safe to consume." In some countries FSMS is a legal requirement, which obliges all food production businesses to use and maintain a FSMS based on the principles of Hazard Analysis Critical Control Point (HACCP). HACCP is a management system that addresses food safety through the analysis and control of biological, chemical, and physical hazards in all stages of the food supply chain. The ISO 22000 standard specifies the requirements for FSMS. Emerging technologies The following technologies, which continue to evolve, have contributed to the innovation and advancement of food engineering practices: Three-dimensional printing of food Three-dimensional (3D) printing, also known as additive manufacturing, is the process of using digital files to create three dimensional objects. In the food industry, 3D printing of food is used for the processing of food layers using computer equipment. The process of 3D printing is slow, but is improving over time with the goal of reducing costs and processing times. Some of the successful food items that have been printed through 3D technology are: chocolate, cheese, cake frosting, turkey, pizza, celery, among others. This technology is continuously improving, and has the potential of providing cost-effective, energy efficient food that meets nutritional stability, safety and variety. Biosensors Biosensors can be used for quality control in laboratories and in different stages of food processing. Biosensor technology is one way in which farmers and food processors have adapted to the worldwide increase in demand for food, while maintaining their food production and quality high. 
Furthermore, since millions of people are affected by food-borne diseases caused by bacteria and viruses, biosensors are becoming an important tool to ensure the safety of food. They help track and analyze food quality during several parts of the supply chain: in food processing, shipping and commercialization. Biosensors can also help with the detection of genetically modified organisms (GMOs), to help regulate GMO products. With the advancement of technologies, like nanotechnology, the quality and uses of biosensors are constantly being improved. Milk pasteurization by microwave When storage conditions of milk are controlled, milk tends to have a very good flavor. However, oxidized flavor is a problem that affects the taste and safety of milk in a negative way. To prevent the growth of pathogenic bacteria and extend the shelf life of milk, pasteurization processes were developed. Microwaved milk has been studied and developed to prevent oxidation compared to traditional pasteurized milk methods, and it has been concluded that milk has a better quality when it has microwaved milk pasteurization. Education and training In the 1950s, food engineering emerged as an academic discipline, when several U.S. universities included food science and food technology in their curricula, and important works on food engineering appeared. Today, educational institutions throughout the world offer bachelors, masters, and doctoral degrees in food engineering. However, due to the unique character of food engineering, its training is more often offered as a branch of broader programs on food science, food technology, biotechnology, or agricultural and chemical engineering. In other cases, institutions offer food engineering education through concentrations, specializations, or minors. Food engineering candidates receive multidisciplinary training in areas like mathematics, chemistry, biochemistry, physics, microbiology, nutrition, and law. Food engineering is still growing and developing as a field of study, and academic curricula continue to evolve. Future food engineering programs are subject to change due to the current challenges in the food industry, including bio-economics, food security, population growth, food safety, changing eating behavior, globalization, climate change, energy cost and change in value chain, fossil fuel prices, and sustainability. To address these challenges, which require the development of new products, services, and processes, academic programs are incorporating innovative and practical forms of training. For example, innovation laboratories, research programs, and projects with food companies and equipment manufacturers are being adopted by some universities. In addition, food engineering competitions and competitions from other scientific disciplines are appearing. With the growing demand for safe, sustainable, and healthy food, and for environmentally friendly processes and packaging, there is a large job market for food engineering prospective employees. Food engineers are typically employed by the food industry, academia, government agencies, research centers, consulting firms, pharmaceutical companies, healthcare firms, and entrepreneurial projects. Job descriptions include but are not limited to food engineer, food microbiologist, bioengineering/biotechnology, nutrition, traceability, food safety and quality management. 
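Thermal treatments such as the pasteurization processes discussed above are commonly sized using first-order microbial death kinetics: holding the product at a reference temperature for one decimal reduction time (the D-value) cuts the surviving population tenfold. The initial load and D-value in the sketch below are hypothetical illustration values, not data from the sources cited here.

```python
def survivors(n0_cfu_per_ml: float, hold_time_s: float, d_value_s: float) -> float:
    """First-order thermal death model: N = N0 * 10^(-t / D)."""
    return n0_cfu_per_ml * 10.0 ** (-hold_time_s / d_value_s)

# Hypothetical example: initial load of 1e6 CFU/mL and a D-value of 10 s
# at the chosen process temperature.
for hold in (15, 30, 60):
    print(f"{hold:3d} s hold: {survivors(1e6, hold, 10.0):.2e} CFU/mL surviving")
```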
Challenges Sustainability Food engineering has negative impacts on the environment such as the emission of large quantities of waste and the pollution of water and air, which must be addressed by food engineers in the future development of food production and processing operations. Scientists and engineers are experimenting in different ways to create improved processes that reduce pollution, but these must continue to be improved in order to achieve a sustainable food supply chain. Food engineers must reevaluate current practices and technologies to focus on increasing productivity and efficiency while reducing the consumption of water and energy, and decreasing the amount of waste produced. Population growth Even though food supply expands yearly, there has also been an increase in the number of hungry people. The world population is expected to reach 9-10 billion people by 2050 and the problem of malnutrition remains a priority. To achieve food security, food engineers are required to address land and water scarcity to provide enough growth and food for undernourished people. In addition, food production depends on land and water supply, which are under stress as the population size increases. There is a growing pressure on land resources driven by expanding populations, leading to expansions of croplands; this usually involves the destruction of forests and exploitation of arable land. Food engineers face the challenge of finding sustainable ways to produce to adapt to the growing population. Human health Food engineers must adapt food technologies and operations to the recent consumer trend toward the consumption of healthy and nutritious food. To supply foods with these qualities, and for the benefit of human health, food engineers must work collaboratively with professionals in other domains such as medicine, biochemistry, chemistry, and consumerism. New technologies and practices must be developed to increase the production of foods that have a positive impact on human health. See also Pharmaceuticals Food science Food technology Aseptic processing Dietary supplement Food and biological process engineering Food fortification Food preservation Food rheology Food supplements Future food technology Nutraceutical Nutrification Food and Bioprocess Technology Food safety Food chemistry Food physical chemistry Pasteurization Food dehydration Biosensors Biochemistry Microbiology Food quality Stabiliser References Engineering disciplines Food science Food industry
Food engineering
Engineering
2,391
24,339,134
https://en.wikipedia.org/wiki/C18H14Cl4N2O
{{DISPLAYTITLE:C18H14Cl4N2O}} The molecular formula C18H14Cl4N2O (molar mass: 416.13 g/mol, exact mass: 413.9860 u) may refer to: Isoconazole Miconazole Molecular formulas
C18H14Cl4N2O
Physics,Chemistry
69
49,575,742
https://en.wikipedia.org/wiki/Stefan%20Tyszkiewicz
Stefan Eugeniusz Tyszkiewicz, in Polish, Stefan Eugeniusz Maria Tyszkiewicz-Łohojski z Landwarowa, Leliwa coat of arms, (born 24 November 1894 in Warsaw, died 6 February 1976 in London) was a member of the Polish nobility, landowner, engineer, inventor and an early pioneer of the Polish automotive industry. He was a decorated veteran of both World Wars and the Polish–Bolshevik War and social activist. After 1945, he became an exile, turned to publishing and politics as a member of the Polish National Council in London. He was an internationally feted inventor to the end of his life. Background Stefan was the first born son, and second child of four, of Count Władysław Tyszkiewicz and his wife, Princess Krystyna Maria Lubomirska. The family were not merely members of the nobility, but had been magnates, and Stefan was to be the final heir to the Lentvaris Manor in Lithuania. His older sister, Zofia Róża, was famed as a great beauty and married Count Klemens Potocki. Already in childhood, Stefan showed a remarkable aptitude for technology. At the age of 14 years, he managed to gain a professional driving permit in Milan. In 1911, at only 17, he took out patents for two heating systems, one for cars, the other for flying machines. In 1913 he began undergraduate studies in engineering at Oxford University. World War I and the Russian Revolution He was in Poland, during his first summer vacation from Oxford, when World War I was declared. He was never to return to the dreaming spires, but volunteered instead for the Russian branch of the International Red Cross. For conspicuous bravery in saving, under fire, seven gravely injured soldiers, he was awarded the Order of St. George. In 1915 he was conscripted into the army. He passed out of the Page Corps in St Petersburg. From late 1916 he was adjutant to Gen. Grand Duke Nikolay Romanov, commander-in-chief of the Caucasus Army. He met the step-daughter of Grand Duke Nikolay, who was herself related to several royal houses of Europe, Princess Elena of Leuchtenberg (1892–1971). She was the daughter of George Maximilianovich, 6th Duke of Leuchtenberg and Princess Anastasia of Montenegro. They married in Yalta in July 1917. The couple had one surviving child, Countess Natalia Tyszkiewicz (1921–2003), who was to spend much of her later life in Switzerland. Tyszkiewicz and his wife were in Crimea at the outbreak of the October Revolution. Stefan was able to help many of his countrymen to leave there and return to Poland. He himself left the peninsula with his wife aboard a British ship along with other members of the extended Russian royal family, after King George V was able to prevail upon the then resistant British prime minister, Lloyd George, to rescue them. 1920s After a brief stay in Italy, the Tyszkiewicz returned to their Lentvaris Manor in 1919. Shortly after, the Polish–Bolshevik war broke out and Tyszkiewicz volunteered for the cavalry in the Vilnius Region. In 1921 as a delegate of the Chief-of-staff of the Polish Army, he took part in the League of Nations commission charged with defining the new Polish–Lithuanian frontier. Following the birth of his daughter, Tyszkiewicz left for Paris to resume his academic studies interrupted seven years earlier at Oxford. In 1921 he was admitted to the École des Sciences Politiques for a politics degree and simultaneously attended lectures at the Ecole centrale des arts et manufactures on automotive technology. 
Ralf Stetysz In 1924 he began working on a project to design a car with the idea of developing motorised road transport in Poland. He founded a partnership for the purpose in Boulogne-Billancourt, a Paris suburb. He named it Automobiles Ralf Stetysz (a contraction of his name with the acronym in Polish of: Rolniczo Automobilowo-Lotnicza Fabryka Stefana Tyszkiewicza, translating as: the 'Agricultural-Aero-Automotive Factory of Stefan Tyszkiewicz'). His prototype vehicle used an American engine of the Continental Motors Company. The aim was to construct an all terrain passenger carrier adapted to a very poor road infra-structure and easy to maintain and repair. In 1925 he succeeded in producing two models: The model TC with a 6-cylinder engine of 2760 cm³ capacity and 42 horsepower The model TA with a 4-cylinder engine of 1500 cm³ capacity and 20 horsepower. The car was exhibited at both the 1926 and 1927 international Paris Car Show, where it gained the reputation of a good quality 'Colonial' car. It also met with success at Polish sporting and trial events and it participated in the 1929 Eighth Monte Carlo Rally when it won recognition and a prize for comfort and its adaptability for long-distance travel. In 1928 production was transferred to Warsaw, to an existing factory, 'K. Rudzki & S-ka'. The car-bodies were manufactured by the aero specialist, Plage & Laskiewicz of Lublin. In moving the operation to Poland, Tyszkiewicz had to place his entire family fortune as collateral. In support, his wife parted with her own family heirloom, an 86 carat emerald brooch, which had once belonged to Catherine the Great. On 11 February 1929, the Warsaw factory was destroyed by fire. Six completed cars were lost and 27 nearing completion. One or two were saved. Tyszkiewicz intended to restart production at his estate in Lentvaris, but failed to convince the shareholders of the 'Stetysz' company. In all, 200 Ralf Stetysz vehicles had been produced. So after the fire, Tyszkiewicz resigned himself to importing Mercedes-Benz and Fiat cars into Poland. He co-founded the 'Road League' – 'Liga Drogowa', and became its president in 1933. He also wrote about Motorisation in Poland. World War II The outbreak of war in 1939 found him and his family on the territory of the Lithuanian republic, where he was involved in converting petrol engines into gas-powered ones, owing to the petrol shortage. Thanks to his contacts with the Italian legation in Lithuania, he helped Poles escape the conflict. After the occupation of Lithuania by the Soviet Union in June 1940, Tyszkiewicz was arrested and taken to Moscow where he was invited to collaborate with the NKVD, an offer he rejected requesting instead a laissez-passer to Italy. He was released from Lubianka Prison in October 1941 and made his way to the General Anders' army being formed then in preparation for a mass exodus from the USSR. He was appointed officer-in-charge of the motorised unit of the Polish 2nd Corps. Having traversed into Iran, thence to Palestine and Egypt, as a Rotmistrz – captain – in the 1st Krechowce Uhlan Regiment and as Communications Officer, (or personal adjutant) to gen. Anders, in 1944 he was especially useful connecting with Italian units fighting on the Allied side. During the Italian campaign he was director of the Red Cross in Anders' Army. During the buildup to the Battle of Monte Cassino, he invented a mechanism for discovering and destroying non-magnetic anti-personnel mines. His regiment was awarded the Virtuti Militari Order. 
Post-War years Having reached the UK with Anders' army after the cessation of hostilities, Tyszkiewicz settled in London. In 1949, with Stanisław Mackiewicz he published a weekly entitled Lwów i Wilno – 'Lviv and Vilnius', a campaigning publication following the Soviet annexation of these two ethnically diverse cities in the Second Polish Republic and the displacement of hundreds of thousands of their residents, some of whom had found refuge in Britain. His wartime alliances drew him to the Poles settled in the UK, but his wife preferred to live in Rome, where he visited often. Meanwhile, his daughter chose to live in Geneva and Broż reports that when they met, they conversed in Russian. The family had a villa in Antibes. In the 1950s Tyszkiewicz did a stint in Turin working for Fiat, after which he concentrated on electronics. He took out a number of new patents, for example, 'stenovox', an early recording device, later improved as the 'Stetyphone'. Both inventions earned him the Grand Prix at the Brussels World's Fair in 1958. He also worked on power-assisted steering. He went on to design a wheel-chair which, thanks to an automatically variable axle length, could go up and down stairs, including escalators. Another patented invention was a luggage trolley very like the ones that are ubiquitous in contemporary airports. For these devices he was awarded gold medals at the inventors' fairs in Geneva in 1972 and New York in 1973. He designed an industrial stapler that was noted in Brussels in 1965 and in Geneva in 1972. His improvements to fuel efficiency in combustion engines, the 'Stetair', were recognised at the Geneva Motor Show in 1974. Tyszkiewicz had a long-standing interest in aeronautics and collaborated with the former European Launcher Development Organisation on load-bearing rockets, and was later connected to the European Space Research Organisation and European Space Agency projects. He belonged to the Polish National Council, which advised the Polish Government in exile. He was a Knight of Malta and was elected on three occasions to its Grand Council. Stefan Tyszkiewicz died in London in 1976 and was buried in the family plot at London's Brompton Cemetery. See also Karol Anders List of Poles Sovereign Military Order of Malta References http://www.tygodnik.lt/200951/bliska5.htm Narkowicz Liliana 'Uchodźca i bezpaństwowiec', access 2012-04-14, review: Tygodnik Wileńszczyzny vol. 51 (483) December, 2009 Bibliography External links Russian Imperial Corps of Pages. An Online Exhibition Catalog // Rare Book & Manuscript Library (RBML) of Columbia University Patent by Stefano Tyszkiewicz from 1955 for sound recording machines. Google Patents. Patent by Stefan Tyszkiewicz from 1972 for an industrial paper stapler. Google Patents. Patent by S. E. Tyszkiewicz from 1974 for a luggage trolley. Google Patents. Zdjęcia samochodów Ralf-Stetysz Photographs of Stetysz cars. 1894 births 1976 deaths Businesspeople from Warsaw Polish military personnel in the Imperial Russian Army of World War I 20th-century Polish landowners Polish people of the Polish–Soviet War Polish deportees to Soviet Union Polish Army officers Polish prisoners of war Polish emigrants to the United Kingdom Polish military personnel of World War II 20th-century Polish military personnel Polish anti-communists Polish exiles Polish inventors Mechanical engineers Electrical engineers Automotive industry in Poland Nobility from Warsaw Polish Roman Catholics Stefan Knights of Malta Burials at Brompton Cemetery
Stefan Tyszkiewicz
Engineering
2,252
26,391,771
https://en.wikipedia.org/wiki/Henry%20Stommel%20Research%20Award
The Henry Stommel Research Award is awarded by the American Meteorological Society to researchers in recognition of outstanding contributions to the advancement of the understanding of the dynamics and physics of the ocean. The award is in the form of a medallion and was named for Henry Stommel. Recipients See also List of oceanography awards References Notes A. The information in the table is according to the "Past winners" web page at the official website of the American Meteorological Society, unless otherwise specified by additional citations. External links AMS Awards and Nominations American science and technology awards Meteorology awards American Meteorological Society Oceanography awards
Henry Stommel Research Award
Technology
130
47,390,141
https://en.wikipedia.org/wiki/UZ%20Pyxidis
UZ Pyxidis (HD 75021) is a semiregular variable star in the constellation Pyxis. It is located about 3,600 light-years (1,100 parsecs) away from the Earth. UZ Pyxidis lies directly between α and γ Pyxidis. It has a common proper motion companion, HD 75022, less than 2' away, but the two are not listed in double star catalogues. UZ Pyxidis is a carbon star. These types of stars are known for having large amounts of carbon in their atmospheres, forming carbon compounds that make the star appear strikingly red. It was first recognised as having an unusual spectrum in 1893. Under the Morgan–Keenan classification of carbon stars, UZ Pyxidis' spectral type is C55; if it were a normal giant star, this would correspond to a spectral type of about K5. It is also unusual in that it has very strong isotopic bands of C2 and CN. There were hints that the star is variable as early as the late 19th century, and its variability was firmly established by Olin J. Eggen in 1972. The variable star designation UZ Pyxidis was assigned in 1978. UZ Pyxidis is classified as a semiregular variable with a dominant period of 159.6 days. It varies in brightness between magnitude 6.99 and 7.63. References Pyxis Semiregular variable stars Pyxidis, UZ 075021 043093 CD-29 06735 Carbon stars
UZ Pyxidis
Astronomy
332
2,606,371
https://en.wikipedia.org/wiki/Vacuum%20insulated%20evaporator
A vacuum insulated evaporator (VIE) is a form of pressure vessel that allows the bulk storage of cryogenic liquids including oxygen, nitrogen and argon for industrial processes and medical applications. The purpose of the vacuum insulation is to prevent heat transfer between the inner shell, which holds the liquid, and surrounding atmosphere. Without functioning insulation, the stored liquid will rapidly warm and undergo a phase transition to gas, increasing significantly in volume and potentially causing a catastrophic failure to the vessel due to an increase in pressure. To combat such an event, VIEs are installed with a pressure safety valve. To remain a liquid, the vessel contents must be kept at or below its critical temperature. The critical temperature of oxygen is −118 °C; above this temperature, applying more pressure will not result in a liquid, but rather a supercritical fluid. References Pressure vessels Medical equipment
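A rough sense of why the pressure relief described above matters can be had from the liquid density and the ideal-gas law: the sketch below estimates the liquid-to-gas expansion ratio when a cryogen boils off and warms to ambient conditions. The densities, temperature and pressure are nominal textbook values, not figures from the article.

```python
R = 8.314  # J/(mol*K), universal gas constant

def expansion_ratio(liquid_density_kg_m3: float, molar_mass_kg_mol: float,
                    t_gas_k: float = 288.15, p_pa: float = 101_325.0) -> float:
    """Approximate liquid-to-gas volume expansion ratio: ideal-gas volume of the
    vaporised liquid at (t_gas_k, p_pa) divided by the original liquid volume."""
    moles_per_m3_liquid = liquid_density_kg_m3 / molar_mass_kg_mol
    gas_volume_per_mol = R * t_gas_k / p_pa
    return moles_per_m3_liquid * gas_volume_per_mol

# Nominal values: liquid oxygen about 1141 kg/m3, O2 molar mass 0.032 kg/mol.
print(round(expansion_ratio(1141.0, 0.032)))   # roughly an 800-900 fold expansion
```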
Vacuum insulated evaporator
Physics,Chemistry,Engineering,Biology
176
26,528,420
https://en.wikipedia.org/wiki/ACM/IEEE%20Supercomputing%20Conference
SC (formerly Supercomputing), the International Conference for High Performance Computing, Networking, Storage and Analysis, is the annual conference established in 1988 by the Association for Computing Machinery and the IEEE Computer Society. In 2019, about 13,950 people participated overall; by 2022 attendance had rebounded to 11,830 both in-person and online. The not-for-profit conference is run by a committee of approximately 600 volunteers who spend roughly three years organizing each conference. Sponsorship and Governance SC is sponsored by the Association for Computing Machinery and the IEEE Computer Society. From its formation through 2011, ACM sponsorship was managed through ACM's Special Interest Group on Computer Architecture (SIGARCH). Sponsors are listed on each proceedings page in the ACM DL; see for example. Beginning in 2012, ACM began the process of transitioning sponsorship from SIGARCH to the recently formed Special Interest Group on High Performance Computing (SIGHPC). This transition was completed after SC15, and for SC16 ACM sponsorship was vested exclusively in SIGHPC (IEEE sponsorship remained unchanged). The conference is non-profit. The conference is governed by a steering committee that includes representatives of the sponsoring societies, the current conference general chair, the general chairs of the preceding two years, the general chairs of the next two conference years, and a number of elected members. All steering committee members are volunteers, with the exception of the two representatives of the sponsoring societies, who are employees of those societies. The committee selects the conference general chair, approves each year's conference budget, and is responsible for setting policy and strategy for the conference. Conference Components Although each conference committee introduces slight variations on the program each year, the core components of the conference remain largely unchanged from year to year. Technical Program The SC Technical Program is competitive with an acceptance rate around 20% for papers (see History). Traditionally, the program includes invited talks, panels, research papers, tutorials, workshops, posters, and Birds of a Feather (BoF) sessions. Awards Each year, SC hosts the following conference and sponsoring society awards: ACM Gordon Bell Prize ACM/IEEE-CS George Michael Memorial HPC Fellowship ACM/IEEE-CS Ken Kennedy Award ACM SIGHPC Computational & Data Science Fellowships ACM SIGHPC Outstanding Doctoral Dissertation Award ACM SIGHPC Emerging Woman Leader in Technical Computing Award IEEE-CS Seymour Cray Computer Engineering Award IEEE-CS Sidney Fernbach Memorial Award IEEE CS TCHPC Award for Excellence for Early Career Researchers in HPC Test of Time Award Exhibits In addition to the technical program, SC hosts a research exhibition each year that includes universities, state-sponsored computing research organizations (such as the Federal labs in the US), and vendors of HPC-related hardware and software from many countries around the world. There were 353 exhibitors at SC16 in Salt Lake City, UT. Student Program SC's program for students has gone through a variety of changes and emphases over the years. Beginning with SC15 the program is called "Students@SC", and is oriented toward undergraduate and graduate students in computing related fields, and computing-oriented students in science and engineering. 
The program includes professional development programs, opportunities to learn from mentors, and engagement with SC's technical sessions. SCinet SCinet is SC's research network. Started in 1991, SCinet features emerging technologies for very high bandwidth, low latency wide area network communications in addition to operational services necessary to provide conference attendees with connectivity to the commodity Internet and to many national research and engineering networks. Name changes Since its establishment in 1988, and until 1995, the full name of the conference was the "ACM/IEEE Supercomputing Conference" (sometimes: "ACM/IEEE Conference on Supercomputing"). The conference's abbreviated (and more commonly used) formal name was "Supercomputing 'XY", where XY denotes the last two digits of the year. In 1996, according to the archived front matter of the conference proceedings, the full name was changed to the ACM/IEEE "International Conference on High Performance Computing and Communications". The latter document further announced that, as of 1997, the conference will undergo a name change and will be called "SC97: High Performance Networking and Computing". The document explained that A 1997 HPC Wire article discussed at length the reasoning, considerations, and concerns that accompanied the decision to change the name of the conference series from "Supercomputing 'XY" to "SC 'XY", stating that Despite these concerns, the abbreviated name of the conference, "SC", is still used today, a reminiscent of the abbreviation of the conference's original name—"Supercomputing Conference". The full name, in contrast, underwent several changes. Between 1997 and 2003, the name "High Performance Networking and Computing" was specified in the front matter of the archived conference proceedings in some years (1997, 1998, 2000, 2002), whereas in other years it was omitted altogether in favor of the abbreviated name (1999, 2001, 2003). In 2004, the stated front matter full name was changed to "High Performance Computing, Networking and Storage Conference". In 2005, this name was replaced by the original name of the conference—"supercomputing"— in the front matter. Finally, in 2006, the current full name, as used today, emerged: "The International Conference for High Performance Computing, Networking, Storage and Analysis". Despite all of the name variances in the proceedings through the years, the digital library of ACM, the co-sponsoring society, records the name of the conference as "The ACM/IEEE Conference on Supercomputing" from 1998 - 2008, when it changes to ""The International Conference for High Performance Computing, Networking, Storage and Analysis". It is these two names that are used in the full citations to the conference proceedings provided in this article. History The table below provides the location, name of the general chair, and acceptance statistics for each year of SC. Note that references for data in these tables apply to data preceding the reference to the left on the same row; for example, for SC17 the single reference substantiates all the information in that row, but for SC05 the source for the convention center and chair is different than the source for the acceptance statistics. Originally slated to be held in Atlanta, GA, SC20 was converted to a fully virtual conference due to the COVID-19 pandemic; the conference agenda spread across two weeks instead of the typical one week for an in-person conference. Over 7,440 attendees participated from 115 countries. 
SC21 was held as a hybrid conference with both in-person attendance in St. Louis, MO, and virtual attendance options available. Keynote speakers The following table details the keynote speakers during the history of the conference; as of SC23, 16.7% of the keynote speakers have been female, with a mix of speakers from corporate, academic, and national government organizations. See also Gordon Bell Prize Sidney Fernbach Award Seymour Cray Award Ken Kennedy Award TOP500 Green500 HPC Challenge Awards SCinet References External links The SC Conference Website SC Conference Series on YouTube Computer science conferences Computer conferences Recurring events established in 1988 1988 establishments in Florida Annual events in the United States IEEE conferences Association for Computing Machinery conferences Supercomputing
ACM/IEEE Supercomputing Conference
Technology
1,514
11,436,457
https://en.wikipedia.org/wiki/Cercospora%20corylina
Cercospora corylina is a fungal plant pathogen. References corylina Fungal plant pathogens and diseases Fungus species
Cercospora corylina
Biology
25
42,889,559
https://en.wikipedia.org/wiki/Hydronalium
Hydronalium is a family of aluminium-magnesium alloys. It is an alloy predominantly of aluminium, with between 1% and 12% magnesium as the primary alloying ingredient. It also includes a secondary addition of manganese, usually between 0.4% and 1%. The Hydronalium alloys originated in Germany in the 1930s and are best known, at least by that name, in Eastern Europe. They were widely used for shipbuilding in Poland. There are many alloys within this family, one standard reference listing over twenty. Applications The alloy family is noted for its resistance to seawater corrosion. As such it is used in sheet form for boatbuilding and light shipbuilding. As castings it is used for marine fittings. The reliable strength of some grades is sufficient for aerospace use, and so they are used for wetted components of seaplane aircraft, such as floats and propellers, where marine corrosion resistance is also needed. Some variants of the alloy are ductile enough to be drawn into wire. This, combined with their resistance to corrosion by salty sweat, has led to an application as violin strings, as an alternative to silver. See also 5083 aluminium alloy References Aluminium–magnesium alloys Aluminium alloys
Hydronalium
Chemistry
242
10,241,715
https://en.wikipedia.org/wiki/Flying%20Scooters
Flying Scooters, also known simply as Flyers, is an amusement ride consisting of a center post with ride vehicles suspended from arms attached to the center post. The ride dates back to the 1930s and 1940s when Bisch-Rocco manufactured the ride. In the early 2000s, Larson International revived the concept. In the early 2010s, Larson partnered with Majestic Manufacturing, Inc. to create a portable version of the ride. When the ride is in operation, a motor causes the arms to spin, with centrifugal forces causing the ride vehicles to fly outwards. Each ride vehicle is equipped with a large rudder, allowing riders to control the motion of their vehicle. The minimum rider height requirement is usually 36 inches. Cable snapping Although Flying Scooters are generally considered a mild ride, a skilled rider can "snap" the cables suspending the vehicle, and thus gain a more extreme and out-of-control experience. Snapping is caused by the cables slacking due to quick motions of the vehicle. Snapping is made easier on older and faster Flying Scooters rides such as the Flyer at Knoebels or others manufactured by Bisch-Rocco. Some newer models, such as those manufactured by Larson, are designed to prevent snapping. Snapping is sometimes discouraged for maintenance and safety reasons, and at some parks it is punishable by the ride cycle being stopped early and the offending rider being removed from the ride. Installations References External links Cable snapping performed on the Flyer at Knoebels Amusement rides
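As a rough illustration of the swing-out behaviour described above, the sketch below treats each suspended vehicle as a conical pendulum on a rotating arm and estimates its steady-state swing angle. This is a simplified model of the physics mentioned in the article, not a description of any actual ride; the rotation rate, arm radius, and cable length used here are illustrative assumptions, not manufacturer specifications.

```python
import math

def swing_angle(rpm, arm_radius_m, cable_length_m, tol=1e-10):
    """Estimate the steady-state swing-out angle of a suspended vehicle.

    Model: conical pendulum on a rotating arm. At equilibrium the angle
    theta satisfies tan(theta) = omega**2 * (R + L*sin(theta)) / g,
    solved here by simple fixed-point iteration.
    All numeric inputs are illustrative assumptions.
    """
    g = 9.81                        # gravitational acceleration, m/s^2
    omega = rpm * 2 * math.pi / 60  # angular speed in rad/s
    theta = 0.0
    for _ in range(1000):
        radius = arm_radius_m + cable_length_m * math.sin(theta)
        new_theta = math.atan(omega ** 2 * radius / g)
        if abs(new_theta - theta) < tol:
            break
        theta = new_theta
    return math.degrees(theta)

# Example with assumed values: 11 rpm, 6 m arms, 4 m cables
print(f"swing-out angle ~ {swing_angle(11, 6.0, 4.0):.1f} degrees")
```

With these assumed numbers the vehicles hang out at roughly 50 degrees from vertical; slack in the cables during quick rudder-induced motions is what produces the "snapping" described above.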
Flying Scooters
Physics,Technology
312
69,963,616
https://en.wikipedia.org/wiki/Anixia%20wallrothii
Anixia wallrothii is a species of fungus belonging to the genus Anixia. It was described in 1870 by the German mycologist Karl Wilhelm Gottlieb Leopold Fuckel. Anixia is part of a larger family of fungi known for their diverse habitats and ecological roles. References Agaricomycetes Fungi described in 1870 Taxa named by Karl Wilhelm Gottlieb Leopold Fuckel Fungus species
Anixia wallrothii
Biology
184
66,445,175
https://en.wikipedia.org/wiki/NGC%20788
NGC 788 is a lenticular galaxy located in the constellation Cetus. Its velocity with respect to the cosmic microwave background is 3938 ± 30 km/s, which corresponds to a Hubble distance of . It was discovered by William Herschel in a sky survey on September 10, 1785. Studies of NGC 788 indicate that, while itself classified as a Seyfert 2, it contains an obscured Seyfert 1 nucleus, following the detection of a broad Hα emission line in the polarized flux spectrum. The observation also indicated the lowest radio luminosities observed in an obscured Seyfert 1. Supernova One supernova has been observed in NGC 788: SN 1998dj (type Ia, mag. 16) was discovered by the Lick Observatory Supernova Search (LOSS) on 8 August 1998. NGC 788 Group NGC 788 is the largest and brightest galaxy in a group of at least five galaxies that bears its name. The other four galaxies in the NGC 788 group (also known as LGG 44) are IC 183, NGC 829, NGC 830 and NGC 842. See also List of NGC objects (1–1000) References Lenticular galaxies Cetus 0788 007656 -01-06-025 F01586-0703 Seyfert galaxies
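The Hubble distance mentioned above (the figure itself is missing from the text) follows from Hubble's law, d = v / H0, and the exact value depends on the Hubble constant the source adopted. A minimal sketch, assuming an illustrative H0 of 70 km/s/Mpc rather than whatever value the original reference used:

```python
def hubble_distance(velocity_km_s, h0=70.0):
    """Distance from Hubble's law, d = v / H0.

    velocity_km_s : recession velocity relative to the CMB, in km/s
    h0            : Hubble constant in km/s/Mpc (70 is an assumed,
                    illustrative value, not the source's adopted H0)
    Returns the distance in megaparsecs and in millions of light-years.
    """
    mpc = velocity_km_s / h0
    mly = mpc * 3.2616  # 1 Mpc is about 3.2616 million light-years
    return mpc, mly

mpc, mly = hubble_distance(3938)
print(f"~ {mpc:.1f} Mpc (~ {mly:.0f} million light-years)")
# ~ 56.3 Mpc (~ 184 million light-years) under the assumed H0
```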
NGC 788
Astronomy
279
60,930,901
https://en.wikipedia.org/wiki/Estradiol%20benzoate/estradiol%20dienanthate/testosterone%20enanthate%20benzilic%20acid%20hydrazone
Estradiol benzoate/estradiol dienanthate/testosterone enanthate benzilic acid hydrazone (EB/EDE/TEBH), sold under the brand names Climacteron, Lactimex, Lactostat, and Amenose, is an injectable combination medication of estradiol benzoate (EB), an estrogen, estradiol dienanthate (EDE), an estrogen, and testosterone enanthate benzilic acid hydrazone (TEBH), an androgen/anabolic steroid, which is used in menopausal hormone therapy for peri- and postmenopausal women and to suppress lactation in postpartum women. Clinical studies have assessed this formulation. Climacteron and Amenose contained 1.0 mg EB, 7.5 mg EDE, and 150 mg TEBH (69 mg free testosterone) and was used to treat menopausal symptoms. They were administered by intramuscular injection typically once every 6 weeks but with a range of every 4 to 8 weeks or less frequently. Climacteron was marketed in Canada in 1961 but was withdrawn in this country in October 2005 due to risk of endometrial hyperplasia and cancer from unopposed estrogen exposure (i.e., no concomitant progestogen) as well as induction of supraphysiological testosterone levels. Lactimex and Lactostat contained 6 mg EB, 15 mg EDE, and 300 mg TEBH in 2 mL of corn oil and were used to suppress lactation. They were administered as a single intramuscular injection after childbirth or during breastfeeding. They were previously available in Germany and Canada. Estradiol and testosterone levels following a single intramuscular injection of EB/EDE/TEBH versus 10 mg estradiol valerate have been studied over 28 days. See also List of combined sex-hormonal preparations References Abandoned drugs Combined estrogen–androgen formulations
Estradiol benzoate/estradiol dienanthate/testosterone enanthate benzilic acid hydrazone
Chemistry
428
35,298,389
https://en.wikipedia.org/wiki/Dalian%20Institute%20of%20Chemical%20Physics
The Dalian Institute of Chemical Physics (DICP), also called Huawusuo, is a research centre specialized in physical chemistry, chemical physics, biophysical chemistry, chemical engineering and materials science belonging to the Chinese Academy of Sciences. It is located in Dalian, Liaoning, China. General Information Having its origin in South Manchuria Railway's research department, which later became the Central Research Centre, the Dalian Institute of Chemical Physics received its current name in 1961 and moved its location from 129 Street (at Zhongshan Road) to the current address in 1995. Dalian Institute of Chemical Physics is one of the leading research institutes in China. In the past half century, the institute has become internationally recognised for its research in catalytic chemistry, chemical engineering, chemical laser and molecular reaction dynamics, organic synthesis and chromatography for modern analytic chemistry and biotechnology. The institute houses one national laboratory, two state key laboratories, and five national engineering research centres. The Dalian National Lab of Clean Energy (DNL) is the first national laboratory in the field of energy research and integrates laboratories across DICP and other institutions. DNL is subdivided into 10 divisions and its research is focused on the efficient conversion and optimal utilisation of fossil energy, clean energy conversion technologies and the economically viable use of solar and biomass energy. DICP's other main laboratories include the Laboratory of Instrumentation and Analytical Chemistry, the Laboratory of Fine Chemicals, the State Key Laboratory of Catalysis, the Laboratory of Chemical Lasers, the State Key Laboratory of Molecular Reaction Dynamics, the Laboratory of Aerospace Catalysis and New Materials, and the Laboratory of Biotechnology. In 1979, Chinese scientists at the Dalian Institute of Chemical Physics first proposed the structure of the nitroamine explosive hexanitrohexaazaisowurtzitane, an explosive with greater energy than conventional HMX or RDX. In December 2019, a Chinese team involving scientists from the Dalian Institute of Chemical Physics and the company Feye UAV Technology developed a methanol-powered fuel system that kept a drone, the FY-36, in the air for 12 hours. Fuel cell research at the institute had first started in the 1960s. Since mid-2010 the Institute and its spin-off company Rongke Power have been the world's leading developer and manufacturer of vanadium redox flow batteries. Basic Data Name: Dalian Institute of Chemical Physics, Chinese Academy of Sciences Established: 1949 Director: Liu Zhongmin Address: No. 457, Zhongshan Road, Shahekou District, Dalian, Liaoning, China. Postal code: 116023 Transportation Bus: Huawusuo Stop, No. 16, 22, 23, 28, 37, 406, 531, 901 Tramway: Huawusuo Stop, No. 202 Line (between Xinghai Square and Dalian Medical University's No. 2 Hospital) See also South Manchuria Railway Chinese Academy of Sciences Dalian Hi-Tech Zone References External links Education in Dalian Research institutes of the Chinese Academy of Sciences 1949 establishments in China Chemical physics Physics research institutes
Dalian Institute of Chemical Physics
Physics,Chemistry
641
8,251,184
https://en.wikipedia.org/wiki/Basophil%20cell
An anterior pituitary basophil is a type of cell in the anterior pituitary which manufactures hormones. It is called a basophil because it is basophilic (readily takes up bases), and typically stains a relatively deep blue or purple. These basophils are further classified by the hormones they produce. (It is usually not possible to distinguish between these cell types using standard staining techniques.) See also Chromophobe cell Melanotroph Chromophil Acidophil cell Oxyphil cell Oxyphil cell (parathyroid) Pituitary gland Neuroendocrine cell Basophilic References External links Histology
Basophil cell
Chemistry
151
62,705,949
https://en.wikipedia.org/wiki/Sahasra
A Sahasra (Sanskrit: सहस्र) is a Vedic measure of count, chiefly used in ancient and medieval India. One sahasra denotes a count of one thousand (1,000). See also Hindu cosmology History of measurement systems in India Hindu units of time Palya Rajju Sayana List of numbers in Hindu scriptures References Customary units in India Hindu astronomy Obsolete units of measurement Units of length
Sahasra
Mathematics
91
4,888,510
https://en.wikipedia.org/wiki/Critical%20point%20%28set%20theory%29
In set theory, the critical point of an elementary embedding of a transitive class into another transitive class is the smallest ordinal which is not mapped to itself. Suppose that j : N → M is an elementary embedding where N and M are transitive classes and j is definable in N by a formula of set theory with parameters from N. Then j must take ordinals to ordinals and j must be strictly increasing. Also j(ω) = ω. If j(α) = α for all α < κ and j(κ) > κ, then κ is said to be the critical point of j. If N is V, then κ (the critical point of j) is always a measurable cardinal, i.e. an uncountable cardinal number κ such that there exists a κ-complete, non-principal ultrafilter over κ. Specifically, one may take the filter to be {S ⊆ κ : κ ∈ j(S)}. Generally, there will be many other <κ-complete, non-principal ultrafilters over κ. However, M might be different from the ultrapower(s) arising from such filter(s). If N and M are the same and j is the identity function on N, then j is called "trivial". If the transitive class N is an inner model of ZFC and j has no critical point, i.e. every ordinal maps to itself, then j is trivial. References Large cardinals
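The link between the critical point and measurability stated above can be spelled out. The following LaTeX fragment is a brief sketch of the standard argument (standard textbook material, not quoted from the article), assuming a nontrivial elementary embedding j : V → M with critical point κ:

```latex
% The ultrafilter induced by an elementary embedding j : V -> M
% with critical point \kappa (sketch of the standard argument).
\[
  U \;=\; \{\, S \subseteq \kappa : \kappa \in j(S) \,\}
\]
% Non-principal: for \alpha < \kappa we have j(\{\alpha\}) = \{\alpha\},
% which does not contain \kappa, so no singleton lies in U.
% \kappa-complete: if \gamma < \kappa and S_\alpha \in U for all
% \alpha < \gamma, then j(\gamma) = \gamma and
\[
  j\Bigl(\bigcap_{\alpha < \gamma} S_\alpha\Bigr)
  \;=\; \bigcap_{\alpha < \gamma} j(S_\alpha) \;\ni\; \kappa ,
\]
% so the intersection is again in U.  Hence U witnesses that
% \kappa is a measurable cardinal.
```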
Critical point (set theory)
Mathematics
254
8,299,558
https://en.wikipedia.org/wiki/Septin
Septins are a group of GTP-binding proteins expressed in all eukaryotic cells except plants. Different septins form protein complexes with each other. These complexes can further assemble into filaments, rings and gauzes. Assembled as such, septins function in cells by localizing other proteins, either by providing a scaffold to which proteins can attach, or by forming a barrier preventing the diffusion of molecules from one compartment of the cell to another, or in the cell cortex as a barrier to the diffusion of membrane-bound proteins. Septins have been implicated in the localization of cellular processes at the site of cell division, and at the cell membrane at sites where specialized structures like cilia or flagella are attached to the cell body. In yeast cells, they compartmentalize parts of the cell and build scaffolding to provide structural support during cell division at the septum, from which they derive their name. Research in human cells suggests that septins build cages around pathogenic bacteria, that immobilize and prevent them from invading other cells. As filament forming proteins, septins can be considered part of the cytoskeleton. Apart from forming non-polar filaments, septins associate with cell membranes, the cell cortex, actin filaments and microtubules. Structure Septins are P-Loop-NTPase proteins that range in weight from 30-65 kDa. Septins are highly conserved between different eukaryotic species. They are composed of a variable-length proline rich N-terminus with a basic phosphoinositide binding motif important for membrane association, a GTP-binding domain, a highly conserved Septin Unique Element domain, and a C-terminal extension including a coiled coil domain of varying length. Septins interact either via their respective GTP-binding domains, or via both their N- and C-termini. Different organisms express a different number of septins, and from those symmetric oligomers are formed. For example, in yeast the octameric complex formed is Cdc11-Cdc12-Cdc3-Cdc10-Cdc10-Cdc3-Cdc12-Cdc11. In humans, hexameric or octameric complexes are possible. Initially, it was indicated that the human complex was Sept7-Sept6-Sept2-Sept2-Sept6-Sept7; but recently this order has been revised to Sept2-Sept6-Sept7-Sept7-Sept6-Sept2 (or Sept2-Sept6-Sept7-Sept3-Sept3-Sept7-Sept6-Sept2 in case of octameric hetero-oligomers). These complexes then associate to form non-polar filaments, filament bundles, cages or ring structures in cells. Occurrence Septins are found in fungi, animals, and some eukaryotic algae but are not found in plants. In yeast There are seven different septins in Saccharomyces cerevisiae. Five of those are involved in mitosis, while two (Spr3 and Spr28) are specific to sporulation. Mitotic septins (Cdc3, Cdc10, Cdc11, Cdc12, Shs1) form a ring structure at the bud neck during cell division. They are involved in the selection of the bud-site, the positioning of the mitotic spindle, polarized growth, and cytokinesis. The sporulating septins (Spr3, Spr28) localize together with Cdc3 and Cdc11 to the edges of prospore membranes. Organization Septins form a specialised region in the cell cortex known as the septin cortex. The septin cortex undergoes several changes throughout the cell cycle: The first visible septin structure is a distinct ring which appears ~15 min before bud emergence. After bud emergence, the ring broadens to assume the shape of an hourglass around the mother-bud neck. 
During cytokinesis, the septin cortex splits into a double ring which eventually disappears. How can the septin cortex undergo such dramatic changes, although some of its functions may require it to be a stable structure? FRAP analysis has revealed that the turnover of septins at the neck undergoes multiple changes during the cell cycle. The predominant, functional conformation is characterized by a low turnover rate (frozen state), during which the septins are phosphorylated. Structural changes require a destabilization of the septin cortex (fluid state) induced by dephosphorylation prior to bud emergence, ring splitting and cell separation. The composition of the septin cortex does not only vary throughout the cell cycle but also along the mother-bud axis. This polarity of the septin network allows concentration of some proteins primarily to the mother side of the neck, some to the center and others to the bud site. Functions Scaffold The septins act as a scaffold, recruiting many proteins. These protein complexes are involved in cytokinesis, chitin deposition, cell polarity, spore formation, in the morphogenesis checkpoint, spindle alignment checkpoint and bud site selection. Cytokinesis Budding yeast cytokinesis is driven through two septin dependent, redundant processes: recruitment and contraction of the actomyosin ring and formation of the septum by vesicle fusion with the plasma membrane. In contrast to septin mutants, disruption of one single pathway only leads to a delay in cytokinesis, not complete failure of cell division. Hence, the septins are predicted to act at the most upstream level of cytokinesis. Cell polarity After the isotropic-apical switch in budding yeast, cortical components, supposedly of the exocyst and polarisome, are delocalized from the apical pole to the entire plasma membrane of the bud, but not the mother cell. The septin ring at the neck serves as a cortical barrier that prevents membrane diffusion of these factors between the two compartments. This asymmetric distribution is abolished in septin mutants. Some conditional septin mutants do not form buds at their normal axial location. Moreover, the typical localization of some bud-site-selection factors in a double ring at the neck is lost or disturbed in these mutants. This indicates that the septins may serve as anchoring site for such factors in axially budding cells. In filamentous fungi Since their discovery in S. cerevisiae, septin homologues have been found in other eukaryotic species, including filamentous fungi. Septins in filamentous fungi display a variety of different shapes within single cells, where they control aspects of filamentous morphology. Candida albicans The genome of C. albicans encodes homologues to all S. cerevisiae septins. Without Cdc3 and Cdc12 genes Candida albicans cannot proliferate, other septins affect morphology and chitin deposition, but are not essential. Candida albicans can display different morphologies of vegetative growth, which determines the appearance of septin structures. Newly forming hyphae form a septin ring at the base, Double rings form at sites of hyphal septation, and a septin cap forms at hyphal tips. Elongated septin-filaments encircle the spherical chlamydospores. Double rings of septins at the septation site also bear growth polarity, with the growing tip ring disassembling, while the basal ring remaining intact. Aspergillus nidulans Five septins are found in A. nidulans (AnAspAp, AnAspBp, AnAspCp, AnAspDp, AnAspEp). 
AnAspBp forms single rings at septation sites that eventually split into double rings. Additionally, AnAspBp forms a ring at sites of branch emergence which broadens into a band as the branch grows. Like in C. albicans, double rings reflect polarity of the hypha. In the case of Aspergillus nidulans polarity is conveyed by disassembly of the more basal ring (the ring further away from the hyphal growth tip), leaving the apical ring intact, potentially as a growth guidance cue. Ashbya gossypii The ascomycete A. gossypii possesses homologues to all S. cerevisiae septins, with one being duplicated (AgCDC3, AgCDC10, AgCDC11A, AgCDC11B, AgCDC12, AgSEP7). In vivo studies of AgSep7p-GFP have revealed that septins assemble into discontinuous hyphal rings close to growing tips and sites of branch formation, and into asymmetric structures at the base of branching points. Rings are made of filaments which are long and diffuse close to growing tips and short and compact further away from the tip. During septum formation, the septin ring splits into two to form a double ring. Agcdc3Δ, Agcdc10Δ and Agcdc12Δ deletion mutants display aberrant morphology and are defective for actin-ring formation, chitin-ring formation, and sporulation. Due to the lack of septa, septin deletion mutants are highly sensitive, and damage of a single hypha can result in complete lysis of a young mycelium. In animals In contrast to septins in yeast, and in contrast to other cytoskeletal components of animals, septins do not form a continuous network in cells, but several dispersed ones in the cytoplasm of the cell cortex. These are integrated with actin bundles and microtubules. For example, the actin bundling protein anillin is required for correct spatial control of septin organization. In the sperm cells of mammals, septins form a stable ring called annulus in the tail. In mice (and potentially in humans, too), defective annulus formation leads to male infertility. Human In humans, septins are involved in cytokinesis, cilium formation and neurogenesis through the capability to recruit other proteins or serve as a diffusion barrier. There are 13 different human genes coding for septins. The septin proteins produced by these genes are grouped into four subfamilies each named after its founding member: (i) SEPT2 (SEPT1, SEPT4, SEPT5), (ii) SEPT3 (SEPT9, SEPT12), (iii) SEPT6 (SEPT8, SEPT10, SEPT11, SEPT14), and (iv) SEPT7. Septin protein complexes are assembled to form either hetero-hexamers (incorporating monomers selected from three different groups and the monomer from each group is present in two copies; 3 x 2 = 6) or hetero-octamers (monomers from four different groups, each monomer present in two copies; 4 x 2 = 8). These hetero-oligomers in turn form higher-order structures such as filaments and rings. Septins form cage-like structures around bacterial pathogens, immobilizing harmful microbes and preventing them from invading healthy cells. This cellular defence system could potentially be exploited to create therapies for dysentery and other illnesses. For example, Shigella is a bacterium that causes lethal diarrhoea in humans. To propagate from cell to cell, Shigella bacteria develop actin-polymer 'tails', which propel the microbes and allow them to gain entry into neighbouring host cells. As part of the immune response, human cells produce a cell-signalling protein called TNF-α which trigger thick bundles of septin filaments to encircle the microbes within the infected host cell. 
Microbes that become trapped in these septin cages are broken down by autophagy. Disruptions in septins and mutations in the genes that code for them could be involved in causing leukaemia, colon cancer and neurodegenerative conditions such as Parkinson's disease and Alzheimer's disease. Potential therapies for these, as well as for bacterial conditions such as dysentery caused by Shigella, might bolster the body’s immune system with drugs that mimic the behaviour of TNF-α and allow the septin cages to proliferate. Caenorhabditis elegans In the nematode worm Caenorhabditis elegans there are two genes coding for septins, and septin complexes contain the two different septins in a tetrameric UNC59-UNC61-UNC61-UNC59 complex. Septins in C.elegans concentrate at the cleavage furrow and the spindle midbody during cell division. Septins are also involved in cell migration and axon guidance in C.elegans. In mitochondria The septin localized in the mitochondria is called mitochondrial septin (M-septin). It was identified as a CRMP/CRAM-interacting protein in the developing rat brain. History The septins were discovered in 1970 by Leland H. Hartwell and colleagues in a screen for temperature-sensitive mutants affecting cell division (cdc mutants) in yeast (Saccharomyces cerevisiae). The screen revealed four mutants which prevented cytokinesis at restrictive temperature. The corresponding genes represent the four original septins, ScCDC3, ScCDC10, ScCDC11, and ScCDC12. Despite disrupted cytokinesis, the cells continued budding, DNA synthesis, and nuclear division, which resulted in large multinucleate cells with multiple, elongated buds. In 1976, analysis of electron micrographs revealed ~20 evenly spaced striations of 10-nm filaments around the mother-bud neck in wild-type but not in septin-mutant cells. Immunofluorescence studies revealed that the septin proteins colocalize into a septin ring at the neck. The localization of all four septins is disrupted in conditional Sccdc3 and Sccdc12 mutants, indicating interdependence of the septin proteins. Strong support for this finding was provided by biochemical studies: The four original septins co-purified on affinity columns, together with a fifth septin protein, encoded by ScSEP7 or ScSHS1. Purified septins from budding yeast, Drosophila, Xenopus, and mammalian cells are able to self associate in vitro to form filaments. How the septins interact in vitro to form hetero-oligomers that assemble into filaments was studied in detail in S. cerevisiae. Micrographs of purified filaments raised the possibility that the septins are organized in parallel to the mother-bud axis. The 10-nm striations seen on electron micrographs may be the result of lateral interaction between the filaments. Mutant strains lacking factors important for septin organization support this view. Instead of continuous rings, the septins form bars oriented along the mother-bud axis in deletion mutants of ScGIN4, ScNAP1 and ScCLA4. References Further reading Cell biology Cell cycle Proteins Cellular processes Cytoskeleton
Septin
Chemistry,Biology
3,206
29,089
https://en.wikipedia.org/wiki/Snuff%20film
A snuff film, snuff movie, or snuff video is a type of film, sometimes defined as being produced for profit or financial gain, that shows, or purports to show, scenes of actual homicide. The concept of snuff films became known to the general public during the 1970s, when a conspiracy theory alleged that a clandestine industry was producing such films for profit. The rumor was amplified in 1976 by the release of a film called Snuff, which capitalized on the legend through a disingenuous marketing campaign. But that film, like others on the topic, relied on special effects to simulate murder. According to the fact-checking website Snopes, there has never been a verified example of a genuine commercially produced snuff film. Videos of actual murders (such as beheading videos) have been made available to the public, generally through the Internet. However, those videos have been made and broadcast by the murderers either for their own gratification or for propaganda purposes, and not for financial gain and thus do not qualify, according to one author, as a "snuff film". Definitions A snuff film is a movie in a purported genre of films in which a person is actually murdered, though some variations of the definition may include films that show people dying by suicide. According to existing definitions, snuff films can be pornographic and are made for financial gain but are supposedly "circulated amongst a jaded few for the purpose of entertainment". The Collins English Dictionary defines a "snuff movie" as "a pornographic film in which an unsuspecting actress or actor is murdered at the climax of the film"; the Cambridge Dictionary defines it more broadly as "a violent film that shows a real murder". Horror film magazine Fangoria defined snuff movies as "films in which a person is killed on camera. The death is premeditated, with the purpose of being filmed in order to make money. Often times, there is a sexual aspect to the murder, either on film (as in, a porn scene that ends horribly) or that the final project is used for sexual gratification." Films featuring deaths that are authentic but accidental "are not considered snuff because the deaths were not planned. Other death on video, such as terrorists beheading victims, are done to fulfill an ideology, not to earn money." Reality Some filmed records of executions and deaths in war exist, but in those cases the death was not specifically staged for financial gain or entertainment. There have been a number of "amateur-made" snuff films available on the Internet. However, such videos are produced by the murderers to make an impact on an audience or for their own satisfaction, and not for financial profit. Some specialized websites show videos of actual killings for profit, as their shock value will attract an audience; but these websites are not operated by the perpetrators of the murders. According to Snopes, the idea of an actual snuff film "industry" clandestinely producing such "entertainment" for monetary gain is preposterous because "capturing a murder on film would be foolhardy at best. Only the most deranged would consider preserving for a jury a perfect video record of a crime they could go to the executioner for. Even if the murderer stays completely out of the camera's way, too much of who the killer is, how the murder was carried out, and where it took place would be part of such a film, and these details would quickly lead police to the right door. 
Though someone whose mania has caused them to lose touch with reality might skip over this point, those who are supposedly in the business for the money would be all too aware of this. It doesn't make sense to flirt with the electric chair for the profits derived from a video." Furthermore, Fangoria has also described the very concept as a "myth" and "a scare tactic, dreamt up by the media to terrify the public." History of the concept Origins of the urban legend The noun snuff originally meant the part of a candle wick that has already burned; the verb snuff meant to cut this off, and by extension to extinguish or kill. The word has been used in this sense in English slang for hundreds of years. It was defined in 1874 as a "term very common among the lower orders of London, meaning to die from disease or accident". Film studies professor Boaz Hagin argues that the concept of films showing actual murders originated decades earlier than is commonly believed, at least as early as 1907. That year, Polish-French writer Guillaume Apollinaire published the short story "A Good Film" about newsreel photojournalists who stage and film a murder due to public fascination with crime news; in the story, the public believes the murder is real but police determine that the crime was faked. Hagin also proposes that the film Network (1976) contains an explicit (fictional) snuff film depiction when television news executives orchestrate the on-air murder of a news anchor to boost ratings. According to film critic Geoffrey O'Brien, "whether or not commercially distributed 'snuff' movies actually exist, the possibility of such movies is implicit in the stock B-movie motif of the mad artist killing his models, as in A Bucket of Blood (1959), Color Me Blood Red (1965), or Decoy for Terror (1967) also known as Playgirl Killer." Likewise, the protagonist of Peeping Tom (1960) films the murders he commits, though he does so as part of his mania and not for financial gain: a 1979 article in The New York Times described the character's activity as making "private 'snuff' films". The first known use of the term snuff movie is in a 1971 book by Ed Sanders, The Family: The Story of Charles Manson's Dune Buggy Attack Battalion. This book included the interview of an anonymous one-time member of Charles Manson's "Family", who claimed that the group once made such a film in California, by recording the murder of a woman. However, the interviewee later added that he had not watched the film himself and had just heard rumors of its existence. In later editions of the book, Sanders clarified that no films depicting real murders or murder victims had been found. During the first half of the 1970s, urban legends started to allege that snuff films were being produced in South America for commercial gain, and circulated clandestinely in the United States. Snuff controversy (1976) The idea of movies showing actual murders for profit became more widely known in 1976 with the release of the exploitation film Snuff. This low-budget horror film, loosely based on the Manson murders and originally titled Slaughter, was shot in Argentina by Michael and Roberta Findlay. The film's distribution rights were bought by Allan Shackleton, who eventually found the picture unfit for release and shelved it. Several years later, Shackleton read about snuff films being imported from South America and decided to cash in on the rumor as an attempt to recoup his investment in Slaughter. 
Shackleton retitled Slaughter to Snuff and released it with a new ending that purported to depict an actual murder committed on a film set. Snuff'''s promotional material suggested, without stating outright, that the film featured the real murder of a woman, which amounted to false advertising. The film's slogan read: "The film that could only be made in South America... where life is CHEAP". Shackleton put out false newspaper clippings that reported a citizens group's crusading against the film, and hired people to act as protesters to picket screenings. Shackleton's efforts succeeded in generating a media frenzy about the film: real feminist and citizens groups eventually started protesting the movie and picketing theaters.David A. Cook, Lost Illusions: American Cinema in The Shadow of Watergate and Vietnam, page 233 (University of California Press, Ltd., 2000). As a result, New York District Attorney Robert M. Morgenthau investigated the picture, establishing that it was a hoax.Charles Lyons, The New Censors: Movies and the Culture Wars, Temple University Press, 1997, pages 64-70 The controversy nevertheless made the film financially profitable. Rumors related to serial killers and other controversies In subsequent years, more urban legends emerged about snuff movies. Notably, multiple serial killers were rumored to have produced snuff films: however, no such videos were proven to exist. Henry Lee Lucas and his accomplice Otis Toole claimed to have filmed their crimes, but both men were "pathological liars" and the purported films were never found. Charles Ng and Leonard Lake videotaped their interactions with some of their future victims, but not the murders. Lawrence Bittaker and Roy Norris made an audio recording of their encounter with one victim, though not of her death. Likewise, Paul Bernardo and Karla Homolka made videos of Bernardo sexually abusing two victims, but did not film the murders. In all those cases, the recordings were not intended for public consumption and were used as evidence during the murderers' trials. Over the years, several films were suspected of being "snuff movies", though none of these accusations turned out to be true. A similar controversy concerned the filming of the video for the 1989 song "Down in It" by Nine Inch Nails, in which Trent Reznor acted in a scene which ended with the implication that Reznor's character had fallen off a building and died. To film the scene, a camera was tied to a balloon with ropes. Minutes after filming started, the ropes snapped and the balloons and camera flew away, eventually landing on a farmer's field in Michigan. The farmer later handed it to the FBI, who began investigating whether the footage was a snuff film portraying a person committing suicide.Welcome to the Machine (transcript). Industrial Introspection (June 1991). Retrieved 2011-06-18. The FBI identified Reznor and the investigation ended when it was confirmed that Reznor was alive and the footage was not related to crime. Internet age The advent of the Internet, by allowing anyone to broadcast self-made videos to an international audience, also changed the means of production of films that may be categorized as "snuff". There have been several cases of murders being filmed by their perpetrators and later finding their way online. 
These include videos made by Mexican cartels or jihadist groups, at least one of the videos shot by the Dnepropetrovsk maniacs in mid-2000s Ukraine, the video shot by Luka Magnotta in 2012, the video shot by Vester Lee Flanagan II in 2015, as well as cases of livestreamed murders, including videos made by mass shooters. Author Steve Lillebuen, who wrote a book on the Magnotta case, commented that social media had created a new trend in crime where killers who crave an audience can become "online broadcasters" by showing their crimes to the world.Fangoria commented that Magnotta's 2012 video, which showed him mutilating the corpse of his victim, was the closest thing in existence to an actual snuff movie, especially as Magnotta had done some crude editing and used a song as a soundtrack, which amounted to minimal production values. However, it did not show the murder itself and was originally published to attract attention and not for monetary gain. The charges of which Magnotta was found guilty included "publishing obscene materials". In 2016, the owner of Bestgore.com, the website that originally hosted Magnotta's video, pleaded guilty to an obscenity charge and was sentenced to a six-month conditional sentence, half of which was served under house arrest. In fiction Since the concept became familiar to the general public, snuff films being made for profit or entertainment have been used as a core plot element or at least mentioned in numerous works of fiction, including the 1979 films Hardcore and Bloodline, and Bret Easton Ellis's 1985 novel Less than Zero. The making or discovery of one or several snuff films is the premise of various horror, thriller or crime films, such as Last House on Dead End Street (1977), Videodrome (1983), Tesis (1996), 8mm (1999), Vacancy (2007), Snuff 102 (2007), A Serbian Film (2010), Sinister (2012), The Counselor (2013), Luther: The Fallen Sun (2023), and the episode "The Devil of Christmas" (2016) in the black comedy series Inside No. 9. The 2003 video game Manhunt sees the main character being forced to participate in a series of snuff films to guarantee his freedom. The 2005 video game Grand Theft Auto: Liberty City Stories features a mission titled "Snuff", where the main character kills a few gangsters while unknowingly being filmed for a snuff movie by a third party, which may be a reference to Manhunt. Also, pretend snuff porn is sometimes filmed as a fetish. Several horror films such as Cannibal Holocaust (1980) and August Underground (2001) have depicted "snuff movie" situations, coupled with found footage aesthetics used as a narrative device. Though some of these films have generated controversy as to their nature and content, none were, nor have officially purported to be, actual snuff movies. False snuff films Faces of Death The 1978 pseudo-documentary film Faces of Death, which spawned several sequels, is one of the films most commonly associated with the "snuff movie" concept, even though it was not produced by murderers nor clandestinely distributed. Purporting to be an educational film about death, it mixed footage of actual deadly accidents, suicides, autopsies, or executions, with "outright fake scenes" obtained with the help of special effects. 
The Guinea Pig films The first two films in the Japanese Guinea Pig series, Guinea Pig: Devil's Experiment and Guinea Pig 2: Flower of Flesh and Blood (both released in 1985) are designed to look like snuff films; the video is grainy and unsteady, as if recorded by amateurs, and extensive practical and special effects are used to imitate such features as internal organs and graphic wounds. The sixth film in the series, Mermaid in a Manhole (1988), allegedly served as an inspiration for Japanese serial killer Tsutomu Miyazaki, who murdered several preschool girls in the late 1980s. In 1991, actor Charlie Sheen became convinced that Flower of Flesh and Blood depicted an actual homicide and contacted the FBI. The FBI initiated an investigation but closed it after the series' producers released a "making of" film demonstrating the special effects used to simulate the murders. Cannibal Holocaust The Italian director Ruggero Deodato was charged after rumors that the depictions of the killing of the main actors in his film Cannibal Holocaust (1980) were real. He was able to clear himself of the charges after the actors made an appearance in court and on television. Other than graphic gore, the film contains several scenes of sexual violence and the genuine deaths of six animals onscreen and one off screen, issues which find Cannibal Holocaust in the midst of controversy to this day. It has also been claimed that Cannibal Holocaust is banned in over 50 countries, although this has never been verified. In 2006, Entertainment Weekly magazine named Cannibal Holocaust as the 20th most controversial film of all time. August Underground trilogy This trilogy of horror films, which depict graphic tortures and murders, is shot as if it were amateur footage made by a serial killer and his accomplices. In 2005, director and lead actor Fred Vogel, who was traveling with copies of the first two films to attend a horror film festival in Canada, was arrested by Canadian customs pending charges of transporting obscene materials into the country. The charges were eventually dropped after Vogel had spent ten hours in custody. See also Ero guro Shock site Livestreamed crime Hurtcore Crush film Splatter film Cannibal film Dnepropetrovsk maniacs Martyrdom video Mondo films Beheading video Ricardo López, celebrity stalker who filmed himself committing suicide R. Budd Dwyer, politician who committed suicide during a live press conference Murder of Jun Lin, committed by Luka Magnotta who filmed himself mutilating the victim's corpse Peter Scully, Australian sex offender and murderer who made a film featuring the torture and rape of three children Extreme cinema Shot-on-video film Analog horror References Further reading David Kerekes and David Slater. Killing for Culture: From Edison to ISIS: A New History of Death on Film''. London: Headpress, 2016. External links 1971 neologisms 1976 controversies in the United States Film genres Filmed killings Filmed suicides Films about murder Film controversies Obscenity controversies in film Urban legends Violence
Snuff film
Biology
3,505
7,829,039
https://en.wikipedia.org/wiki/Digital%20program%20insertion
Digital program insertion (DPI) allows cable headends and broadcast affiliates to insert locally generated commercials and short programs into remotely distributed regional programs before they are delivered to home viewers. Digital program insertion also refers to a specific technology which allows an MPEG transport stream to be spliced into a currently flowing MPEG transport stream seamlessly and with few or no artifacts. The controlling signaling used to initiate an MPEG splice is referred to as an SCTE-35 message. The communication API between MPEG splicers and content delivery servers or ad insertion servers uses SCTE-30 messages. References Television technology
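As a rough illustration of the splicing workflow described above, the sketch below scans an MPEG transport stream for packets on the PID that carries SCTE-35 cue messages. The 188-byte packet size, the 0x47 sync byte, and the 13-bit PID field are part of the MPEG-TS format; the PID value, file name, and function names here are illustrative assumptions. Real equipment learns the cue PID from the PMT, and full parsing of splice_info_sections is omitted.

```python
TS_PACKET_SIZE = 188   # MPEG-TS packets are always 188 bytes
SYNC_BYTE = 0x47       # every packet starts with the sync byte 0x47

def find_cue_packets(ts_bytes, scte35_pid):
    """Yield (packet_index, packet) for TS packets on the SCTE-35 cue PID.

    ts_bytes   : raw transport-stream bytes
    scte35_pid : PID carrying splice_info_sections; assumed known here,
                 in practice it is discovered from the PMT.
    Adaptation fields and section (pointer_field) handling are omitted
    for brevity, so this only locates the cue packets.
    """
    for i in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = ts_bytes[i:i + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            continue  # lost sync; a real splicer would resynchronise
        # the 13-bit PID spans the low 5 bits of byte 1 and all of byte 2
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid == scte35_pid:
            yield i // TS_PACKET_SIZE, pkt

# Illustrative usage (file name and PID value are assumptions):
# with open("channel.ts", "rb") as f:
#     for index, packet in find_cue_packets(f.read(), scte35_pid=0x1F4):
#         print("SCTE-35 cue packet at TS packet", index)
```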
Digital program insertion
Technology
125
7,876,585
https://en.wikipedia.org/wiki/Exchange%20matrix
In mathematics, especially linear algebra, the exchange matrices (also called the reversal matrix, backward identity, or standard involutory permutation) are special cases of permutation matrices, where the 1 elements reside on the antidiagonal and all other elements are zero. In other words, they are 'row-reversed' or 'column-reversed' versions of the identity matrix. Definition If J_n is the n × n exchange matrix, then its elements are J_{i,j} = 1 if j = n + 1 − i, and J_{i,j} = 0 otherwise. Properties Premultiplying a matrix by an exchange matrix flips vertically the positions of the former's rows, i.e. J_n A reverses the order of the rows of A. Postmultiplying a matrix by an exchange matrix flips horizontally the positions of the former's columns, i.e. A J_n reverses the order of the columns of A. Exchange matrices are symmetric; that is, J_n^T = J_n. For any integer k: J_n^k = I_n if k is even, and J_n^k = J_n if k is odd. In particular, J_n is an involutory matrix; that is, J_n^{-1} = J_n and J_n^2 = I_n. The trace of J_n is 1 if n is odd and 0 if n is even. In other words, tr(J_n) = n mod 2. The determinant of J_n is (−1)^{⌊n/2⌋} = (−1)^{n(n−1)/2}. As a function of n, it has period 4, giving 1, 1, −1, −1 when n is congruent modulo 4 to 0, 1, 2, and 3 respectively. The characteristic polynomial of J_n is (x − 1)^{⌈n/2⌉} (x + 1)^{⌊n/2⌋}; its eigenvalues are 1 (with multiplicity ⌈n/2⌉) and −1 (with multiplicity ⌊n/2⌋). The adjugate matrix of J_n is adj(J_n) = det(J_n) J_n = (−1)^{⌊n/2⌋} J_n (the sign being that of the order-reversing permutation of n elements). Relationships An exchange matrix is the simplest anti-diagonal matrix. Any matrix A satisfying the condition A J_n = J_n A is said to be centrosymmetric. Any matrix A satisfying the condition A J_n = J_n A^T is said to be persymmetric. Symmetric matrices A that satisfy the condition A J_n = J_n A are called bisymmetric matrices. Bisymmetric matrices are both centrosymmetric and persymmetric. See also Pauli matrices (the first Pauli matrix is a 2 × 2 exchange matrix) References Matrices
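A small sketch, assuming NumPy is available, that builds J_n and checks a few of the properties listed above; the example values are illustrative and not part of the original article.

```python
import numpy as np

def exchange_matrix(n):
    """Return the n x n exchange matrix J_n (ones on the antidiagonal)."""
    return np.fliplr(np.eye(n, dtype=int))

n = 5
J = exchange_matrix(n)
A = np.arange(n * n).reshape(n, n)

assert np.array_equal(J @ A, A[::-1, :])            # premultiplying reverses rows
assert np.array_equal(A @ J, A[:, ::-1])            # postmultiplying reverses columns
assert np.array_equal(J @ J, np.eye(n, dtype=int))  # involutory: J^2 = I
assert np.trace(J) == n % 2                         # trace is 1 for odd n, 0 for even n
assert round(np.linalg.det(J)) == (-1) ** (n // 2)  # det is (-1)^floor(n/2)

eigvals = np.linalg.eigvalsh(J.astype(float))  # J is symmetric, eigenvalues are +/-1
print(sorted(eigvals.round().astype(int)))     # [-1, -1, 1, 1, 1] for n = 5
```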
Exchange matrix
Mathematics
380
26,006,858
https://en.wikipedia.org/wiki/Fatal%20System%20Error
Fatal System Error (2010) is a book by Joseph Menn, an investigative technology reporter at The Washington Post, and previously with Reuters, the Financial Times and Los Angeles Times. The book investigates the espionage network of international mobsters and hackers who use the Internet to extort money from businesses, steal from tens of millions of consumers, and attack government networks. The main focus of the book is on Barrett Lyon and Andy Crocker and the capture of cybercriminals Ivan Maksakov, Alexander Petrov, and Denis Stepanov. References External links STAYING SAFER ONLINE fserror.com Fatal System Error PublicAffairs Books Interview with Joseph Menn WNYC The Leonard Lopate Show Fighting Cybercrime, One Digital Thug At A Time NPR 2010 non-fiction books
Fatal System Error
Technology
163
53,831
https://en.wikipedia.org/wiki/Missionary
A missionary is a member of a religious group who is sent into an area in order to promote its faith or provide services to people, such as education, literacy, social justice, health care, and economic development. In the Latin translation of the Bible, Jesus Christ says the word when he sends the disciples into areas and commands them to preach the gospel in his name. The term is most commonly used in reference to Christian missions, but it can also be used in reference to any creed or ideology. The word mission originated in 1598 when Jesuits, the members of the Society of Jesus sent members abroad, derived from the Latin (nom. ), meaning 'act of sending' or , meaning 'to send'. By religion Buddhist missions The first Buddhist missionaries were called "Dharma Bhanaks", and some see a missionary charge in the symbolism behind the Buddhist wheel, which is said to travel all over the earth bringing Buddhism with it. The Emperor Ashoka was a significant early Buddhist missioner. In the 3rd century BCE, Dharmaraksita—among others—was sent out by emperor Ashoka to proselytize and initially the Buddhist tradition through the Indian Maurya Empire, but later into the Mediterranean as far as Greece. Gradually, all India and the neighboring island of Ceylon were converted. Then, in later periods, Buddhism spread eastward and southeastward to the present lands of Burma, Thailand, Laos, Cambodia, Vietnam, and Indonesia. Buddhism was spread among the Turkic people during the 2nd and 3rd centuries BCE into modern-day Pakistan, Kashmir, Afghanistan, eastern and coastal Iran, Uzbekistan, Turkmenistan, and Tajikistan. It was also taken into China brought by Kasyapa Matanga in the 2nd century CE, Lokaksema and An Shigao translated Buddhist sutras into Chinese. Dharmarakṣa was one of the greatest translators of Mahayana Buddhist scriptures into Chinese. Dharmaraksa came to the Chinese capital of Luoyang in 266 CE, where he made the first known translations of the Lotus Sutra and the Dasabhumika Sutra, which were to become some of the classic texts of Chinese Mahayana Buddhism. Altogether, Dharmaraksa translated around 154 Hīnayāna and Mahāyāna sutras, representing most of the important texts of Buddhism available in the Western Regions. His proselytizing is said to have converted many to Buddhism in China, and made Chang'an, present-day Xi'an, a major center of Buddhism. Buddhism expanded rapidly, especially among the common people, and by 381 most of the people of northwest China were Buddhist. Winning converts also among the rulers and scholars, by the end of the Tang dynasty Buddhism was found everywhere in China. Marananta brought Buddhism to the Korean Peninsula in the 4th century. Seong of Baekje, known as a great patron of Buddhism in Korea, built many temples and welcomed priests bringing Buddhist texts directly from India. In 528, Baekje officially adopted Buddhism as its state religion. He sent tribute missions to Liang in 534 and 541, on the second occasion requesting artisans as well as various Buddhist works and a teacher. According to Chinese records, all these requests were granted. A subsequent mission was sent in 549, only to find the Liang capital in the hands of the rebel Hou Jing, who threw them in prison for lamenting the fall of the capital. He is credited with having sent a mission in 538 to Japan that brought an image of Shakyamuni and several sutras to the Japanese court. This has traditionally been considered the official introduction of Buddhism to Japan. 
An account of this is given in Gangōji Garan Engi. First supported by the Soga clan, Buddhism rose over the objections of the pro-Shinto Mononobe and Buddhism entrenched itself in Japan with the conversion of Prince Shotoku Taishi. When in 710 Emperor Shomu established a new capital at Nara with urban grid plan modeled after the capital of China, Buddhism received official support and began to flourish. Padmasambhava, The Lotus Born, was a sage guru from Oḍḍiyāna who is said to have transmitted Vajrayana Buddhism to Bhutan and Tibet and neighbouring countries in the 8th century. The use of missions, councils, and monastic institutions influenced the emergence of Christian missions and organizations, which developed similar structures in places that were formerly Buddhist missions. During the 19th and 20th centuries, Western intellectuals such as Schopenhauer, Henry David Thoreau, Max Müller, and esoteric societies such as the Theosophical Society of H.P. Blavatsky, The Buddhist Society of Great Britain and Ireland and the Buddhist Society, London spread interest in Buddhism. Writers such as Hermann Hesse and Jack Kerouac, in the West, and the hippie generation of the late 1960s and early 1970s led to a re-discovery of Buddhism. During the 20th and 21st centuries Buddhism has again been propagated by missionaries into the West such as Ananda Metteyya (Theravada Buddhism), Suzuki Daisetsu Teitarō (Zen Buddhism), the Dalai Lama and monks including Lama Surya Das (Tibetan Buddhism). Tibetan Buddhism has been significantly active and successful in the West since the Chinese takeover of Tibet in 1959. Today Buddhists make a decent proportion of several countries in the West such as New Zealand, Australia, Canada, the Netherlands, France, and the United States. In Canada, the immense popularity and goodwill ushered in by Tibet's Dalai Lama (who has been made honorary Canadian citizen) put Buddhism in a favourable light in the country. Many non-Asian Canadians embraced Buddhism in various traditions and some have become leaders in their respective sanghas. In the early 1990s, the French Buddhist Union (UBF, founded in 1986) estimated that there are 600,000 to 650,000 Buddhists in France, with 150,000 French converts among them. In 1999, sociologist Frédéric Lenoir estimated there are 10,000 converts and up to five million "sympathizers", although other researchers have questioned these numbers. Taisen Deshimaru was a Japanese Zen Buddhist who founded numerous zendos in France. Thich Nhat Hanh, a Nobel Peace Prize-nominated, Vietnamese-born Zen Buddhist, founded the Unified Buddhist Church (Eglise Bouddhique Unifiée) in France in 1969. The Plum Village Monastery in the Dordogne in southern France was his residence and the headquarters of his international sangha. In 1968 Leo Boer and Wener van de Wetering founded a Zen group, and through two books made Zen popular in the Netherlands. The guidance of the group was taken over by Erik Bruijn, who is still in charge of a flourishing community. The largest Zen group now is the Kanzeon Sangha, led by Nico Tydeman under the supervision of the American Zen master Dennis Genpo Merzel, Roshi, a former student of Maezumi Roshi in Los Angeles. This group has a relatively large centre where a teacher and some students live permanently. 
Many other groups are also represented in the Netherlands, like the Order of Buddhist Contemplatives in Apeldoorn, the Thich Nhat Hanh Order of Interbeing and the International Zen Institute Noorderpoort monastery/retreat centre in Drenthe, led by Jiun Hogen Roshi. Perhaps the most widely visible Buddhist leader in the world is Tenzin Gyatso, the current Dalai Lama, who first visited the United States in 1979. As the exiled political leader of Tibet, he has become a popular cause célèbre. His early life was depicted in Hollywood films such as Kundun and Seven Years in Tibet. He has attracted celebrity religious followers such as Richard Gere and Adam Yauch. The first Western-born Tibetan Buddhist monk was Robert A. F. Thurman, now an academic supporter of the Dalai Lama. The Dalai Lama maintains a North American headquarters at Namgyal Monastery in Ithaca, New York. Lewis M. Hopfe in his "Religions of the World" suggested that "Buddhism is perhaps on the verge of another great missionary outreach" (1987:170). Christian missions A Christian missionary can be defined as "one who is to witness across cultures". The Lausanne Congress of 1974, defined the term, related to Christian mission as, "to form a viable indigenous church-planting movement". Missionaries can be found in many countries around the world. In the Bible, Jesus Christ is recorded as instructing the apostles to make disciples of all nations (, ). This verse is referred to by Christian missionaries as the Great Commission and inspires missionary work. Historic The Christian Church expanded throughout the Roman Empire already in New Testament times and is said by tradition to have reached even further, to Persia (Church of the East) and to India (Saint Thomas Christians). During the Middle Ages, the Christian monasteries and missionaries such as Saint Patrick (5th century), and Adalbert of Prague (c. 956–997) propagated learning and religion beyond the European boundaries of the old Roman Empire. In 596, Pope Gregory the Great (in office 590–604) sent the Gregorian Mission (including Augustine of Canterbury) into England. In their turn, Christians from Ireland (the Hiberno-Scottish mission) and from Britain (Saint Boniface (c. 675–754), and the Anglo-Saxon mission, for example) became prominent in converting the inhabitants of central Europe. During the Age of Discovery, the Catholic Church established a number of missions in the Americas and in other Western colonies through the Augustinians, Franciscans, and Dominicans to spread Christianity in the New World and to convert the Native Americans and other indigenous people. About the same time, missionaries such as Francis Xavier (1506–1552) as well as other Jesuits, Augustinians, Franciscans, and Dominicans reached Asia and the Far East, and the Portuguese sent missions into Africa. Emblematic in many respects is Matteo Ricci's Jesuit mission to China from 1582, which was totally peaceful and non-violent. These missionary movements should be distinguished from others, such as the Baltic Crusades of the 12th and 13th centuries, which were arguably compromised in their motivation by designs of military conquest. Much contemporary Catholic missionary work has undergone profound change since the Second Vatican Council of 1962–1965, with an increased push for indigenization and inculturation, along with social justice issues as a constitutive part of preaching the Gospel. 
As the Catholic Church normally organizes itself along territorial lines and had the human and material resources, religious orders, some even specializing in it, undertook most missionary work, especially in the era after the collapse of the Roman Empire in the West. Over time, the Holy See gradually established a normalized Church structure in the mission areas, often starting with special jurisdictions known as apostolic prefectures and apostolic vicariates. At a later stage of development these foundations are raised to regular diocesan status with a local bishops appointed. On a global front, these processes were often accelerated in the later 1960s, in part accompanying political decolonization. In some regions, however, they are still in course. Just as the Bishop of Rome had jurisdiction also in territories later considered to be in the Eastern sphere, so the missionary efforts of the two 9th-century saints Cyril and Methodius were largely conducted in relation to the West rather than the East, though the field of activity was central Europe. The Eastern Orthodox Church, under the Orthodox Church of Constantinople undertook vigorous missionary work under the Roman Empire and its successor the Byzantine Empire. This had lasting effects and in some sense is at the origin of the present relations of Constantinople with some sixteen Orthodox national churches including the Romanian Orthodox Church, the Georgian Orthodox and Apostolic Church, and the Ukrainian Orthodox Church (both traditionally said to have been founded by the missionary Apostle Andrew), the Bulgarian Orthodox Church (said to have been founded by the missionary Apostle Paul). The Byzantines expanded their missionary work in Ukraine after the mass baptism in Kiev in 988. The Serbian Orthodox Church had its origins in the conversion by Byzantine missionaries of the Serb tribes when they arrived in the Balkans in the 7th century. Orthodox missionaries also worked successfully among the Estonians from the 10th to the 12th centuries, founding the Estonian Orthodox Church. Under the Russian Empire of the 19th century, missionaries such as Nicholas Ilminsky (1822–1891) moved into the subject lands and propagated Orthodoxy, including through Belarus, Latvia, Moldova, Finland, Estonia, Ukraine, and China. The Russian St. Nicholas of Japan (1836–1912) took Eastern Orthodoxy to Japan in the 19th century. The Russian Orthodox Church also sent missionaries to Alaska beginning in the 18th century, including Saint Herman of Alaska (died 1836), to minister to the Natives. The Russian Orthodox Church Outside Russia continued missionary work outside Russia after the 1917 Russian Revolution, resulting in the establishment of many new dioceses in the diaspora, from which numerous converts have been made in Eastern Europe, North America, and Oceania. Early Protestant missionaries included John Eliot and contemporary ministers including John Cotton and Richard Bourne, who ministered to the Algonquin natives who lived in lands claimed by representatives of the Massachusetts Bay Colony in the early 17th century. Quaker "publishers of truth" visited Boston and other mid-17th century colonies, but were not always well received. The Danish government began the first organized Protestant mission work through its College of Missions, established in 1714. This funded and directed Lutheran missionaries such as Bartholomaeus Ziegenbalg in Tranquebar, India, and Hans Egede in Greenland. 
In 1732, while on a visit to Copenhagen for the coronation of his cousin King Christian VI, the Moravian Church's patron Nicolas Ludwig, Count von Zinzendorf, was deeply struck by the effects of this mission work, and particularly by two visiting Inuit children converted by Hans Egede. He also got to know a slave from the Danish colony in the West Indies. When he returned to Herrnhut in Saxony, he inspired the inhabitants of the village (it had fewer than thirty houses then) to send out "messengers" to the slaves in the West Indies and to the Moravian missions in Greenland. Within thirty years, Moravian missionaries had become active on every continent, and this at a time when there were fewer than three hundred people in Herrnhut. They are famous for their selfless work, living as slaves among the slaves and together with Native Americans, including the Lenape and Cherokee Indian tribes. Today, the work in the former mission provinces of the worldwide Moravian Church is carried on by native workers. The fastest-growing area of the work is in Tanzania in Eastern Africa. The Moravian work in South Africa inspired William Carey and the founders of the British Baptist missions. Today, seven of every ten Moravians live in a former mission field and belong to a race other than Caucasian. Much Anglican mission work came about under the auspices of the Society for the Propagation of the Gospel in Foreign Parts (SPG, founded in 1701), the Church Missionary Society (CMS, founded 1799) and the Intercontinental Church Society (formerly the Commonwealth and Continental Church Society, originating in 1823). Modern With a dramatic increase in efforts since the 20th century, and a strong push since the Lausanne I: The International Congress on World Evangelization in Switzerland in 1974, modern evangelical groups have focused efforts on sending missionaries to every ethnic group in the world. While this effort has not been completed, increased attention has brought larger numbers of people distributing Bibles and Jesus videos and establishing evangelical churches in more remote areas. Internationally, the focus for many years in the later 20th century was on reaching every "people group" with Christianity by 2000. Bill Bright's leadership with Campus Crusade, the Southern Baptist International Mission Board, The Joshua Project, and others brought about the need to know who these "unreached people groups" are and how those wanting to tell about the Christian God and share a Christian Bible could reach them. The focus for these organizations transitioned from a "country focus" to a "people group focus". (From "What is a People Group?" by Dr. Orville Boyd Jenkins: A "people group" is an ethnolinguistic group with a common self-identity that is shared by the various members. There are two parts to that word: ethno and linguistic. Language is a primary and dominant identifying factor of a people group. But there are other factors that determine or are associated with ethnicity.) What can be viewed as a success by those inside and outside the church from this focus is a higher level of cooperation and friendliness among churches and denominations. It is very common for those working on international fields not only to cooperate in efforts to share their gospel message, but also to view the work of their groups in a similar light. 
Also, with the increased study and awareness of different people groups, western mission efforts have become far more sensitive to the cultural nuances of those they are going to and those they are working with in the effort. Over the years, as indigenous churches have matured, the church of the Global South (Africa, Asia, and Latin America) has become the driving force in missions. Korean and African missionaries can now be found all over the world. These missionaries represent a major shift in church history, as the nations they came from were not historically Christian. Another major shift in the form of modern missionary work takes shape in the conflation of spiritual with contemporary military metaphors and practices. Missionary work as spiritual warfare (Ephesians, Chapter 6), waged with weapons of a spiritual sense, is the primary concept in a long-standing relationship between Christian missions and militarization. When the Church establishes a governance, this usually results in the formation of a national or regional military (Romans, Chapter 13). Despite the seeming opposition between the submissive and morally upstanding associations with prayer and violence associated with militarism, these two spheres interact in a dialectical way. Yet, when properly implemented, they are entangled and support one another in upholding a civilization's morality and in the prosecution and punishment of criminals. In some cases a nation's military may fail to operate according to godly principles and is not supported by the Church or missionaries; in other cases the military is made up of the Church's congregants. The results of spiritual conflict are then present in different ways, as prayer can be strategically used for or against a military. Nigeria and other countries have had large numbers of their Christian adherents go to other countries and start churches. These non-Western missionaries often have unparalleled success because they need few Western resources and comforts to sustain their livelihood while doing the work they have chosen among a new culture and people. One of the first large-scale missionary endeavors of the British colonial age was the Baptist Missionary Society, founded in 1792 as the Particular Baptist Society for the Propagation of the Gospel Amongst the Heathen. The London Missionary Society was an evangelical organisation, bringing together from its inception both Anglicans and Nonconformists; it was founded in England in 1795 with missions in Africa and the islands of the South Pacific. The Colonial Missionary Society was created in 1836, and directed its efforts towards promoting Congregationalist forms of Christianity among "British or other European settlers" rather than indigenous peoples. Both of these merged in 1966, and the resultant organisation is now known as the Council for World Mission. The Church Mission Society, first known as the Society for Missions to Africa and the East, was founded in 1799 by evangelical Anglicans centred around the anti-slavery activist William Wilberforce. It bent its efforts to the Coptic Church, the Ethiopian Church, and India, especially Kerala; it continues to this day. Many of the network of churches they established became the Anglican Communion. In 1809, the London Society for Promoting Christianity Amongst the Jews was founded, which pioneered mission amongst the Jewish people; it continues today as the Church's Ministry Among Jewish People. 
In 1865, the China Inland Mission was founded, going well beyond British controlled areas; it continues as the OMF, working throughout East Asia. The Church of Jesus Christ of Latter-day Saints (LDS Church) has an active missionary program. Young men between the ages of eighteen and twenty-five are encouraged to prepare themselves to serve a two-year, self-funded, full-time proselytizing mission. Young women who desire to serve as missionaries can serve starting at the age of nineteen, for one and a half years. Retired couples also have the option of serving a mission. Missionaries typically spend two weeks in a Missionary Training Center (or two to three months for those learning a new language) where they study the scriptures along with the Book of Mormon, learn new languages when applicable, prepare themselves to teach the Gospel of Jesus Christ, and learn more about the culture and the people they live among. As of December 2019, the LDS Church had over 67,000 full-time missionaries worldwide and over 31,000 Service Missionaries. Maryknoll In Montreal in 1910, Father James Anthony Walsh, a priest from Boston, met Father Thomas Frederick Price, from North Carolina. They agreed on the need to build a seminary for the training of young American men for the foreign Missions. Countering arguments that the Church needed workers here, Fathers Walsh and Price insisted the Church would not flourish until it sent missioners overseas. Independently, the men had written extensively about the concept, Father Price in his magazine Truth, and Father Walsh in the pages of A Field Afar, an early incarnation of Maryknoll Magazine. Winning the approval of the American hierarchy, the two priests traveled to Rome in June 1911 to receive final approval from Pope Pius X for the formation of the Catholic Foreign Mission Society of America, now better known as the Maryknoll Fathers and Brothers. Hindu missions Hinduism was introduced into Java by travellers from India in ancient times. Several centuries ago, many Hindus left Java for Bali rather than convert to Islam. Hinduism has survived in Bali ever since. Dang Hyang Nirartha was responsible for facilitating a refashioning of Balinese Hinduism. He was an important promoter of the idea of moksha in Indonesia. He founded the Shaivite priesthood that is now ubiquitous in Bali, and is now regarded as the ancestor of all Shaivite pandits. Shantidas Adhikari was a Hindu preacher from Sylhet who converted King Pamheiba of Manipur to Hinduism in 1717. Historically, Hinduism has only recently had a large influence in western countries such as the United Kingdom, New Zealand, and Canada. Since the 1960s, many westerners attracted by the world view presented in Asian religious systems have converted to Hinduism. Many native-born Canadians of various ethnicities have converted during the last 50 years through the actions of the Ramakrishna Mission, ISKCON, Arya Samaj and other missionary organizations as well as due to the visits and guidance of Indian gurus such as Guru Maharaj, Sai Baba, and Rajneesh. The International Society for Krishna Consciousness has a presence in New Zealand, running temples in Auckland, Hamilton, Wellington and Christchurch. Paramahansa Yogananda, an Indian yogi and guru, introduced many westerners to the teachings of meditation and Kriya Yoga through his book, Autobiography of a Yogi. Swami Vivekananda, the founder of the Ramakrishna Mission is one of the greatest Hindu missionaries to the West. 
Ananda Marga missions Ānanda Mārga, organizationally known as Ānanda Mārga Pracaraka Samgha (AMPS), meaning the samgha (organization) for the propagation of the marga (path) of ananda (bliss), is a social and spiritual movement founded in Jamalpur, Bihar, India, in 1955 by Prabhat Ranjan Sarkar (1921–1990), also known by his spiritual name, Shrii Shrii Ánandamúrti. Ananda Marga counts hundreds of missions around the world through which its members carry out various forms of selfless service on Relief. (The social welfare and development organization under AMPS is Ananda Marga Universal Relief Team, or AMURT.) Education and women's welfare The service activities of this section founded in 1963 are focused on: Education: creating and managing primary, post-primary, and higher schools, research institutes Relief: creating and managing children's and students' homes for destitute children and for poor students, cheap hostels, retiring homes, academies of light for deaf dumb and crippled, invalid homes, refugee rehabilitation Tribal: tribal welfare units, medical camps Women's welfare: women welfare units, women's homes, nursing homes Islamic missions Dawah means to "invite" (in Arabic, literally "calling") to Islam, which is the second largest religion with 2.0 billion members. From the 7th century, it spread rapidly from the Arabian Peninsula to the rest of the world through the initial Muslim conquests and subsequently with traders and explorers after the death of Muhammad. Initially, the spread of Islam came through the Dawah efforts of Muhammad and his followers. After his death in 632 CE, much of the expansion of the empire came through conquest such as that of North Africa and later Iberia (Al-Andalus). The Islamic conquest of Persia put an end to the Sassanid Empire and spread the reach of Islam to as far east as Khorasan, which would later become the cradle of Islamic civilization during the Islamic Golden Age (622–1258 CE) and a stepping-stone towards the introduction of Islam to the Turkic tribes living in and bordering the area. The missionary movement peaked during the Islamic Golden Age, with the expansion of foreign trade routes, primarily into the Indo-Pacific and as far south as the isle of Zanzibar as well as the Southeastern shores of Africa. With the coming of the Sufism tradition, Islamic missionary activities increased. Later, the Seljuk Turks' conquest of Anatolia made it easier for missionaries to go lands that formerly belonged to the Byzantine Empire. In the earlier stages of the Ottoman Empire, a Turkic form of Shamanism was still widely practiced in Anatolia, but soon lost ground to Sufism. During the Ottoman presence in the Balkans, missionary movements were taken up by people from aristocratic families hailing from the region, who had been educated in Constantinople or other major city within the Empire such as the famed madrassahs and kulliyes. Primarily, individuals were sent back to the place of their origin and were appointed important positions in the local governing body. This approach often resulted in the building of mosques and local kulliyes for future generations to benefit from, as well as spreading the teachings of Islam. The spread of Islam towards Central and West Africa had until the early 19th century has been consistent but slow. Previously, the only connection was through Trans-Saharan trade routes. 
The Mali Empire, consisting predominantly of African and Berber tribes, stands as a strong example of the early Islamic conversion of the Sub-Saharan region. The gateways prominently expanded to include the aforementioned trade routes through the Eastern shores of the African continent. With the European colonization of Africa, Muslim missionaries were almost in competition with the European Christian missionaries operating in the colonies. There is evidence of Arab Muslim traders entering Indonesia as early as the 8th century. Indonesia's early people were animists, Hindus, and Buddhists. However, it was not until the end of the 13th century that the process of Islamization began to spread throughout the area's local communities and port towns. The spread, although at first introduced through Arab Muslim traders, continued to saturate the Indonesian people as local rulers and royalty began to adopt the religion, subsequently leading their subjects to mirror their conversion. Recently, Muslim groups have engaged in missionary work in Malawi. Much of this is performed by the African Muslim Agency based in Angola. The Kuwait-sponsored AMA has translated the Qur'an into Chichewa (Cinyanja), one of the official languages of Malawi, and has engaged in other missionary work in the country. All of the major cities in the country have mosques and there are several Islamic schools. Several South African, Kuwaiti, and other Muslim agencies are active in Mozambique, with one important one being the African Muslim Agency. The spread of Islam into West Africa, beginning with ancient Ghana in the 9th century, was mainly the result of the commercial activities of North African Muslims. The empires of both Mali and Songhai that followed ancient Ghana in the Western Sudan adopted the religion. Islam made its entry into the northern territories of modern Ghana around the 15th century. Mande-speaking traders and clerics (who in Ghana are known as Wangara) carried the religion into the area. The northeastern sector of the country was also influenced by an influx of Hausa Muslim traders from the 16th century onwards. Islamic influence first occurred in India in the early 7th century with the advent of Arab traders. Trade relations have existed between Arabia and the Indian subcontinent from ancient times. Even in the pre-Islamic era, Arab traders used to visit the Malabar region, which linked them with the ports of Southeast Asia. According to the historians Elliot and Dowson in their book The History of India as told by its own Historians, the first ship bearing Muslim travelers was seen on the Indian coast as early as 630 CE. H. G. Rawlinson, in his book Ancient and Medieval History of India, claims that the first Arab Muslims settled on the Indian coast in the last part of the 7th century. Shaykh Zainuddin Makhdum's "Tuhfat al-Mujahidin" is also a reliable work. This fact is corroborated by J. Sturrock in his South Kanara and Madras Districts Manuals, and also by Haridas Bhattacharya in Cultural Heritage of India Vol. IV. It was with the advent of Islam that the Arabs became a prominent cultural force in the world. The Arab merchants and traders became the carriers of the new religion, and they propagated it wherever they went. Islam in Bulgaria can be traced back to the mid-ninth century when there were Islamic missionaries in Bulgaria, evidenced by a letter from Pope Nicholas to Boris of Bulgaria calling for the extirpation of Saracens. 
Pioneer Muslim missionaries to the Kenyan interior were largely Tanganyikans, who coupled their missionary work with trade in centres that grew up along the railway line, such as Kibwezi, Makindu, and Nairobi. Outstanding among them was Maalim Mtondo, a Tanganyikan credited with being the first Muslim missionary to Nairobi. Reaching Nairobi at the close of the 19th century, he led a group of other Muslims and enthusiastic missionaries from the coast to establish a "Swahili village" in present-day Pumwani. A small mosque was built to serve as a starting point and he began preaching Islam in earnest. He soon attracted several Kikuyus and Wakambas, who became his disciples. In 1380, Karim ul' Makhdum, the first Arabian Islamic missionary, reached the Sulu Archipelago and Jolo in the Philippines and established Islam in the country. In 1390, the Minangkabau's Prince Rajah Baguinda and his followers preached Islam on the islands. The Sheik Karimal Makdum Mosque was the first mosque established in the Philippines on Simunul in Mindanao in the 14th century. Subsequent settlements by Arab missionaries traveling to Malaysia and Indonesia helped strengthen Islam in the Philippines and each settlement was governed by a Datu, Rajah, and a Sultan. Islamic provinces founded in the Philippines included the Sultanate of Maguindanao, Sultanate of Sulu, and other parts of the southern Philippines. Modern missionary work in the United States has increased greatly in the last one hundred years, with much of the recent demographic growth driven by conversion. Up to one-third of American Muslims are African Americans who have converted to Islam during the last seventy years. Conversion to Islam in prisons and in large urban areas has also contributed to Islam's growth over the years. An estimated US$45 billion has been spent by the Saudi Arabian government financing mosques and Islamic schools in foreign countries. Ain al-Yaqeen, a Saudi newspaper, reported in 2002 that Saudi funds may have contributed to building as many as 1,500 mosques and 2,000 other Islamic centers. Early Islamic missionaries during Muhammad's era During the Expedition of Al Raji in 625, the Islamic prophet Muhammad sent some men as missionaries to various tribes. Some men had come to Muhammad and requested that he send instructors to teach them Islam, but the men were bribed by the two tribes of Khuzaymah, who wanted revenge for the assassination of Khalid bin Sufyan (chief of the Banu Lahyan tribe) by Muhammad's followers. Eight Muslim missionaries were killed in this expedition; another version says ten Muslims were killed. Then, during the Expedition of Bir Maona in July 625, Muhammad sent some missionaries at the request of some men from the Banu Amir tribe, but the Muslims were again killed in revenge for the assassination of Khalid bin Sufyan by Muhammad's followers; seventy Muslims were killed during this expedition. During the Expedition of Khalid ibn al-Walid (Banu Jadhimah) in January 630, Muhammad sent Khalid ibn Walid to invite the Banu Jadhimah tribe to Islam. This is mentioned in the Sunni Hadith. Ahmadiyya Islam missions Missionaries belonging to the Ahmadiyya thought of Islam often study at International Islamic seminaries and educational institutions, known as Jamia Ahmadiyya. 
Upon completion of their degrees, they are sent to various parts of the world including South America, Africa, North America, Europe, and the Far East as appointed by Mirza Masroor Ahmad, present head and Caliph of the worldwide Ahmadiyya Muslim community. Jamia students may be appointed by the Caliph either as Missionaries of the community (often called Murrabi, Imam, or Mawlana) or as Qadis or Muftis of the Ahmadiyya Muslim community with a specialisation in matters of fiqh (Islamic Jurisprudence). Some Jamia alumni have also become Islamic historians such as the late Dost Muhammad Shahid, former Official Historian of the Ahmadiyya Muslim community, with a specialisation in tarikh (Islamic historiography). Missionaries stay with their careers as appointed by the Caliph for the rest of their lives, as per their commitment to the community. Jain missions According to Jaina tradition, Mahavira's following had swelled to 14,000 monks and 36,000 nuns by the time of his death in 527 BCE For some two centuries the Jains remained a small community of monks and followers. However, in the 4th century BCE, they gained strength and spread from Bihar to Orissa, then so South India and westwards to Gujarat and the Punjab, where Jain communities became firmly established, particularly among the mercantile classes. The period of the Mauryan dynasty to the 12th century was the period of Jainism's greatest growth and influence. Thereafter, the Jainas in the South and Central regions lost ground in face of rising Hindu devotional movements. Jainism retreated to the West and Northwest, which have remained its stronghold to the present. Emperor Samprati is regarded as the "Jain Ashoka" for his patronage and efforts to spreading Jainism in east India. Samprati, according to Jain historians, is considered more powerful and famous than Ashoka himself. Samprati built thousands of Jain Temples in India, many of which remain in use, such as the Jain temples at Viramgam and Palitana (Gujarat), Agar Malwa (Ujjain). Within three and a half years, he got one hundred and twenty-five thousand new temples built, thirty-six thousand repaired, twelve and a half million murtis, holy statues, consecrated and ninety-five thousand metal murtis prepared. Samprati is said to have erected Jain temples throughout his empire. He founded Jain monasteries even in non-Aryan territory, and almost all ancient Jain temples or monuments of unknown origin are popularly attributed to him. It may be noted that all the Jain monuments of Rajasthan and Gujarat, with unknown builders are also attributed to Emperor Samprati. Virachand Gandhi (1864–1901) from Mahuva represented Jains at the first Parliament of the World's Religions in Chicago in 1893 and won a silver medal. Gandhi was most likely the first Jain and the first Gujarati to travel to the United States, and his statue still stands at the Jain temple in Chicago. In his time he was a world-famous personality. Gandhi represented Jains in Chicago because the Great Jain Saint Param Pujya Acharya Vijayanandsuri, also known as Acharya Atmaram, was invited to represent the Jain religion at the first World Parliament of Religions. As Jain monks do not travel overseas, he recommended the bright young scholar Virchand Gandhi to be the emissary for the religion. Today there are 100,000 Jains in the United States. There are also tens of thousands of Jains located in the UK and Canada. 
Judaism Historically, various Jewish sects and movements have been consistent in avoiding or even forbidding proselytization (religion-to-religion conversion propaganda) to convert gentiles (non-Jews). They believe that gentiles do not need to convert to Judaism, due to Abrahamic religions being already under the Seven Laws of Noah. Chabad Lubavitch has a sub-sect that has engaged in an effort to spread Noahidism (Seven Laws of Noah) among non-Jews who follow none of the existing Abrahamic religions. Orthodox Judaism outreach (kiruv) encourages non-practicing Jews to become more knowledgeable and observant of halakha (Jewish law). Outreach is done worldwide, by organizations such as Chabad Lubavitch, Aish HaTorah, Ohr Somayach, and Partners In Torah. Members of Reform Judaism began a program to convert to their brand of Judaism the non-Jewish spouses of its intermarried members and non-Jews who have an interest in Reform Judaism. Their rationale is that so many Jews were lost during the Holocaust that newcomers must be sought out and welcomed. This approach has been rejected by both Orthodox Judaism and Conservative Judaism as unrealistic and posing a danger on the entire Jewish faith. Sikh missions According to Sikhs, when he was twenty-eight, Guru Nanak went as usual down to the river to bathe and meditate. It was said that he was gone for three days. When he reappeared, it is said he was "filled with the spirit of God". His first words after his re-emergence were: "there is no Hindu, there is no Muslim". With this secular principle he began his missionary work. He made four distinct major journeys, in the four different directions, which are called Udasis, spanning many thousands of kilometres, preaching the message of God. Currently there are gurdwaras in over 50 countries. Of missionary organizations, the most famous is probably The Sikh Missionary Society UK. The aim of the Sikh Missionary Society is the Advancement of the Sikh faith in the U.K. and abroad, engages in various activities: Produce and distribute books on the Sikh faith in English and Panjabi, and other languages to enlighten the younger generation of Sikhs as well as non-Sikhs. Advise and support young students in schools, colleges, and universities on Sikh issues and Sikh traditions. Arrange classes, lectures, seminars, conferences, Gurmat camps and the celebration of holy Sikh events, the basis of their achievement and interest in the field of the Sikh faith and the Panjabi language. Make available all Sikh artifacts, posters, literature, music, educational videos, DVDs, and multimedia CD-ROMs. There have been several Sikh missionaries: Bhai Gurdas (1551–1636), Punjabi Sikh writer, historian, missionary, and religious figure; the original scribe of the Guru Granth Sahib and a companion of four of the Sikh Gurus Giani Pritam Singh Dhillon, Indian freedom fighter Bhai Amrik Singh, devoted much of his life to Sikh missionary activities; one of the Sikh community's most prominent leaders along with Sant Jarnail Singh Bhindranwale Jathedar Sadhu Singh Bhaura (1905–1984), Sikh missionary who rose to be the Jathedar or high priest of Sri Akal Takhat, Amritsar Sikhs have emigrated to many countries of the world since Indian independence in 1947. Sikh communities exist in Britain, East Africa, Canada, the United States, Malaysia, and most European countries. Tenrikyo missions Tenrikyo conducts missionary work in approximately forty countries. Its first missionary was a woman named Kokan who worked on the streets of Osaka. 
In 2003, it operated approximately twenty thousand mission stations worldwide. Criticism Contact of Christian missionaries with isolated tribes is asserted as a cause of the extinction of some tribes, such as extinction from infections and even simple diseases such as flu, to which many tribes have no immunity. Documented cases of European contact with isolated tribes show rapid health deterioration, such as the Nambikwara tribe. Christian missionary work is criticized as a form of colonialism. Some Christian missionary thinkers have recognized complicity between colonialism and missions with roots in 'colonial paternalism'. Aspects of Christian missionary activity have come under criticism, including concerns about a lack of respect for other cultures. The potential destruction of social structure among the converts is also a concern. The Huaorani people of Amazonian Ecuador have had a well-documented mixed relation with Evangelical Christian missionaries and the contacts they brought to their communities, which some have argued led to the dissolving of unique Huaorani tribes and cultural practices. Impact of missions A 2020 study by Elena Nikolova and Jakub Polansky replicates Woodberry's analysis using twenty-six alternative democracy measures and extends the time period over which the democracy measures are averaged. These two simple modifications lead to the breakdown of Woodberry's results. Overall, no significant relationship between Protestant missions and the development of democracy can be established. A 2017 study found that areas of colonial Mexico that had Mendicant missions have higher rates of literacy and educational attainment today than regions that did not have missions. Areas that had Jesuit missions are today indistinct from the areas that had no missions. The study also found that "the share of Catholics is higher in regions where Catholic missions of any kind were a historical present." A 2016 study found that regions in Sub-Saharan Africa that Protestant missionaries brought printing presses to are today "associated with higher newspaper readership, trust, education, and political participation." Missionaries have also made significant contributions to linguistics and the description and documentation of many languages. "Many languages today exist only in missionary records. More than anywhere else, our knowledge of the native languages in South America has been the product of missionary activity… Without missionary documentation the reclamation [of several languages] would have been completely impossible" "A satisfactory history of linguistics cannot be written before the impressive contribution of missionaries is recognised." Lists of prominent missionaries American missionaries Gerónimo Boscana, (Roman Catholic Franciscan) missionary Isabel Crawford, (Baptist) missionary Antonio de Olivares, (Roman Catholic Franciscan) missionary Anton Docher, (Roman Catholic) missionary Elisabeth Elliot, American Protestant missionary in Ecuador, author and speaker, widow of Jim Elliot of Operation Auca Mary H. Fulton, female medical missionary to China, founder of Hackett Medical College for Women (夏葛女子醫學院) in Guangzhou, China Adoniram Judson, first significant missionary in Burma Eusebio Kino, (Roman Catholic Jesuit) missionary Zenas Sanford Loftis, medical missionary to Tibet Ajahn Sumedho, Theravada monk and established Thai Forest Tradition in UK Robert E. 
Longacre, Christian linguist missionary to Mexico Dada Maheshvarananda, Ananda Marga yoga missionary Fred Prosper Manget, medical missionary to China, founder of Houzhou General Hospital, Houzhou, China, also a doctor with the Flying Tigers and U.S. Army in Kunming, China, during World War II Lottie Moon, Baptist missionary to China Arthur Lewis Piper, medical missionary to the Belgian Congo Dada Pranakrsnananda, Ananda Marga yoga missionary Darlene Rose, missionary in Papua New Guinea John Stewart, (Methodist) missionary José de Anchieta, (Roman Catholic Jesuit) missionary Peter of Saint Joseph de Betancur, (Roman Catholic Franciscan) missionary John Allen Chau, (evangelical Christian) missionary killed while attempting to convert the uncontacted Sentinelese British Christian missionaries John Hobbis Harris, with his wife Alice Seeley, he used photography to expose colonial abuses Benjamin Hobson, medical missionary to China, set up a highly successful Wai Ai Clinic (惠愛醫館) in Guangzhou, China. Teresa Kearney, Sister in Uganda Olive Hilda Miller, missionary to Jamaica and the Cayman Islands William Milne, Bible translator to China Robert Morrison, Bible translator to China George Piercy, Methodist missionary to China Sam Pollard, Bible translator to China James Hudson Taylor, missionary to China, insist on going into the inland of China. John Wesley Thomas Henry Sparshott, missionary to East Africa. See also John McKendree Springer – Pioneer missionary in Africa List of Protestant missionaries in China List of Protestant missionaries in India List of Roman Catholic missionaries List of Roman Catholic missionaries in China List of Roman Catholic missionaries in India List of Eastern Orthodox missionaries List of missionaries to Hawaii List of missionaries to the South Pacific List of Slovenian missionaries List of Russian Orthodox missionaries List of Protestant missionaries to Southeast Asia List of Roman Catholic missions in Africa Christian missionaries in New Zealand Christian missionaries in Oceania Timeline of Christian missions Catholic missions Christianity and colonialism Christian missionaries Christianisation Evangelism History of Christian missions Indigenous church mission theory Mission (Christianity) Missiology Missionary kid Missionary religious institutes and societies Portuguese Inquisition in Goa and Bombay-Bassein Religious conversion Short-term mission Timeline of Christian missions References Further reading Dunch, Ryan. "Beyond cultural imperialism: Cultural theory, Christian missions, and global modernity." History and Theory 41.3 (2002): 301–325. online Dwight, Henry Otis et al. eds., The Encyclopedia of Missions (2nd ed. 1904) Online, Global coverage Of Protestant and Catholic missions. Robinson, David Muslim Societies in African History (The Press Syndicate of the University of Cambridge Cambridge, UK 2004) Sharma, Arvind (2014). Hinduism as a missionary religion. New Delhi: Dev Publishers & Distributors. Shourie, Arun. (2006). Missionaries in India: Continuities, changes, dilemmas. New Delhi: Rupa. Madhya Pradesh (India)., & Niyogi, M. B. (1956). Vindicated by time: The Niyogi Committee report on Christian missionary activities. Nagpur: Government Printing, Madhya Pradesh. External links Missionary eTexts Project on Religion and Economic Change, Protestant Mission Stations LFM. 
Social sciences & Missions Henry Martyn Centre for the study of mission & world Christianity William Carey Library, Mission Resources Hiney, Thomas: On the Missionary Trail, New York: Atlantic Monthly Press (2000), pp. 5–22. EtymologyOnLine (word history) Christian terminology Religious practices Religious occupations
Missionary
Biology
9,680
3,024,922
https://en.wikipedia.org/wiki/Computer%20security%20model
A computer security model is a scheme for specifying and enforcing security policies. A security model may be founded upon a formal model of access rights, a model of computation, a model of distributed computing, or no particular theoretical grounding at all. A computer security model is implemented through a computer security policy. For a more complete list of available articles on specific security models, see :Category:Computer security models. Selected topics Access control list (ACL) Attribute-based access control (ABAC) Bell–LaPadula model Biba model Brewer and Nash model Capability-based security Clark-Wilson model Context-based access control (CBAC) Graham-Denning model Harrison-Ruzzo-Ullman (HRU) High-water mark (computer security) Lattice-based access control (LBAC) Mandatory access control (MAC) Multi-level security (MLS) Non-interference (security) Object-capability model Protection ring Relationship-based access control (ReBAC) Role-based access control (RBAC) Take-grant protection model Discretionary access control (DAC) See also Security modes Protection mechanism References Krutz, Ronald L. and Vines, Russell Dean, The CISSP Prep Guide; Gold Edition, Wiley Publishing, Inc., Indianapolis, Indiana, 2003. CISSP Boot Camp Student Guide, Book 1 (v.082807), Vigilar, Inc.
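To make the idea of enforcing a security policy concrete, the sketch below expresses one of the listed models, the Bell–LaPadula confidentiality model, as a pair of policy checks ("no read up", "no write down"). It is an illustrative toy under assumed names and levels, not a description of any particular system's implementation.

```python
from enum import IntEnum

class Level(IntEnum):
    """Ordered sensitivity levels (an example lattice for this sketch)."""
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

def blp_read_allowed(subject_level: Level, object_level: Level) -> bool:
    # Simple Security Property: a subject may not read objects above its level.
    return subject_level >= object_level

def blp_write_allowed(subject_level: Level, object_level: Level) -> bool:
    # *-Property: a subject may not write to objects below its level.
    return subject_level <= object_level

if __name__ == "__main__":
    # A SECRET-cleared subject may read CONFIDENTIAL data but not write to it.
    print(blp_read_allowed(Level.SECRET, Level.CONFIDENTIAL))   # True
    print(blp_write_allowed(Level.SECRET, Level.CONFIDENTIAL))  # False
```

Other models in the list above (Biba, RBAC, lattice-based access control) can be sketched the same way by changing which comparison the policy check performs.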
Computer security model
Engineering
287
29,649,460
https://en.wikipedia.org/wiki/Aschbacher%20block
In finite group theory, a branch of mathematics, a block, sometimes called an Aschbacher block, is a subgroup giving an obstruction to Thompson factorization and pushing up. Blocks were introduced by Michael Aschbacher. Definition A group L is called short if it has the following properties: (1) L has no subgroup of index 2; (2) the generalized Fitting subgroup F*(L) is a 2-group O_2(L); (3) the subgroup U = [O_2(L), L] is an elementary abelian 2-group in the center of O_2(L); (4) L/O_2(L) is quasisimple or of order 3; (5) L acts irreducibly on U/C_U(L). An example of a short group is the semidirect product of a quasisimple group with an irreducible module over the 2-element field F_2. A block of a group G is a short subnormal subgroup. References Finite groups
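For readers who prefer symbolic notation, the defining conditions can be transcribed as follows. This is only a restatement of the list above, reading the second condition as F*(L) = O_2(L) and writing C_U(L) for the centralizer of L in U.

```latex
% Conditions for a finite group L to be "short" (transcription of the definition above)
\begin{itemize}
  \item $L$ has no subgroup of index $2$;
  \item $F^{*}(L) = O_{2}(L)$, i.e.\ the generalized Fitting subgroup is a $2$-group;
  \item $U = [O_{2}(L), L]$ is elementary abelian and $U \le Z(O_{2}(L))$;
  \item $L/O_{2}(L)$ is quasisimple or of order $3$;
  \item $L$ acts irreducibly on $U / C_{U}(L)$.
\end{itemize}
```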
Aschbacher block
Mathematics
195
767,086
https://en.wikipedia.org/wiki/Cotinine
Cotinine is an alkaloid found in tobacco and is also the predominant metabolite of nicotine, typically used as a biomarker for exposure to tobacco smoke. Cotinine is currently being studied as a treatment for depression, post-traumatic stress disorder (PTSD), schizophrenia, Alzheimer's disease and Parkinson's disease. Cotinine was developed as an antidepressant as a fumaric acid salt, cotinine fumarate, to be sold under the brand name Scotine, but it was never marketed. Similarly to nicotine, cotinine binds to, activates, and desensitizes neuronal nicotinic acetylcholine receptors, though at much lower potency in comparison. It has demonstrated nootropic and antipsychotic-like effects in animal models. Cotinine treatment has also been shown to reduce depression, anxiety, and fear-related behavior as well as memory impairment in animal models of depression, post-traumatic stress disorder, and Alzheimer's disease. Nonetheless, treatment with cotinine in humans was reported to have no significant physiologic, subjective, or performance effects in one study, though others suggest that this may not be the case. Because cotinine is the main metabolite to nicotine and has been shown to be pharmacologically active, it has been suggested that some of nicotine's effects in the nervous system may be mediated by cotinine and/or complex interactions with nicotine itself. Pharmacology A few studies indicate that the affinity for cotinine to the nicotinic acetylcholine receptors (nAChRs) is about 100 times lower than nicotine's. Some work suggests that cotinine may be a positive allosteric modulator of α7 nAChRs. If this is true, cotinine would facilitate endogenous neurotransmission without directly stimulating nAChRs. Pharmacokinetics Cotinine has an in vivo half-life of approximately 20 hours, and is typically detectable for several days (up to one week) after the use of tobacco. The level of cotinine in the blood, saliva, and urine is proportionate to the amount of exposure to tobacco smoke, so it is a valuable indicator of tobacco smoke exposure, including secondary (passive) smoke. People who smoke menthol cigarettes may retain cotinine in the blood for a longer period because menthol can compete with enzymatic metabolism of cotinine. African American smokers generally have higher plasma cotinine levels than Caucasian smokers. Males generally have higher plasma cotinine levels than females. These systematic differences in cotinine levels were attributed to variation in CYP2A6 activity. At steady state, plasma cotinine levels are determined by the amount of cotinine formation and the rate of cotinine removal, which are both mediated by the enzyme CYP2A6. Since CYP2A6 activity differs by sex (estrogen induces CYP2A6) and genetic variation, cotinine accumulates in individuals with slower CYP2A6 activity, resulting in substantial differences in cotinine levels for a given tobacco exposure. Detection in body fluids Drug tests can detect cotinine in the blood, urine, or saliva. Salivary cotinine concentrations are highly correlated to blood cotinine concentrations, and can detect cotinine in a low range, making it the preferable option for a less invasive method of tobacco exposure testing. Urine cotinine concentrations average four to six times higher than those in blood or saliva, making urine a more sensitive matrix to detect low-concentration exposure. Cotinine levels <10 ng/mL are considered to be consistent with no active smoking. 
Values of 10 ng/mL to 100 ng/mL are associated with light smoking or moderate passive exposure, and levels above 300 ng/mL are seen in heavy smokers — more than 20 cigarettes a day. In urine, values between 11 ng/mL and 30 ng/mL may be associated with light smoking or passive exposure, and levels in active smokers typically reach 500 ng/mL or more. In saliva, values between 1 ng/mL and 30 ng/mL may be associated with light smoking or passive exposure, and levels in active smokers typically reach 100 ng/mL or more. Cotinine assays provide an objective quantitative measure that is more reliable than smoking histories or counting the number of cigarettes smoked per day. Cotinine also permits the measurement of exposure to second-hand smoke (passive smoking). However, tobacco users attempting to quit with the help of nicotine replacement therapies (i.e., gum, lozenge, patch, inhaler, and nasal spray) will also test positive for cotinine, since all common NRT therapies contain nicotine that is metabolized in the same way. Therefore, the presence of cotinine is not a conclusive indication of tobacco use. Cotinine levels can be used in research to explore the question of the amount of nicotine delivered to the user of e-cigarettes, where laboratory smoking machines have many problems replicating real-life conditions. Serum cotinine concentration has been used for decades in US population surveys of the Centers for Disease Control and Prevention to monitor tobacco use, to monitor levels and trends in exposure to environmental tobacco smoke, and to study the relationship between tobacco smoke and chronic health conditions. An estimated one in four nonsmokers (approximately 58 million persons) were exposed to secondhand smoke during 2013-2014. Nearly 40% of children aged 3–11 years were exposed as were 50% of non-Hispanic blacks. References Pyrrolidones Alkaloids found in Nicotiana Nicotinic agonists Pyridine alkaloids Recreational drug metabolites Biomarkers 3-Pyridyl compounds
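As a purely illustrative translation of the cutoff values quoted above into code, the sketch below classifies a cotinine concentration by sample type. The function name and the handling of values that fall between the quoted ranges are assumptions made for the example; this is not clinical guidance.

```python
def classify_cotinine(matrix: str, ng_per_ml: float) -> str:
    """Rough exposure category from a cotinine concentration in ng/mL.

    Thresholds mirror the ranges quoted in the text; values falling in gaps
    between the quoted ranges are reported as 'intermediate/unclassified'.
    """
    matrix = matrix.lower()
    if matrix in ("blood", "serum", "plasma"):
        if ng_per_ml < 10:
            return "consistent with no active smoking"
        if ng_per_ml <= 100:
            return "light smoking or moderate passive exposure"
        if ng_per_ml > 300:
            return "heavy smoker (more than 20 cigarettes a day)"
        return "intermediate/unclassified"
    if matrix == "urine":
        if 11 <= ng_per_ml <= 30:
            return "light smoking or passive exposure"
        if ng_per_ml >= 500:
            return "typical of active smokers"
        return "intermediate/unclassified"
    if matrix == "saliva":
        if 1 <= ng_per_ml <= 30:
            return "light smoking or passive exposure"
        if ng_per_ml >= 100:
            return "typical of active smokers"
        return "intermediate/unclassified"
    raise ValueError("unknown sample matrix: " + matrix)
```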
Cotinine
Chemistry,Biology
1,223
30,498,118
https://en.wikipedia.org/wiki/UTRdb
UTRdb is a database of 5' and 3' untranslated sequences of eukaryotic mRNAs. See also Five prime untranslated region Three prime untranslated region UTRome References External links data Biological databases RNA Gene expression
UTRdb
Chemistry,Biology
55
2,245,430
https://en.wikipedia.org/wiki/Intrinsic%20semiconductor
An intrinsic semiconductor, also called a pure semiconductor, undoped semiconductor or i-type semiconductor, is a semiconductor without any significant dopant species present. The number of charge carriers is therefore determined by the properties of the material itself instead of the amount of impurities. In intrinsic semiconductors the number of excited electrons and the number of holes are equal: n = p. This may be the case even after doping the semiconductor, though only if it is doped with both donors and acceptors equally. In this case, n = p still holds, and the semiconductor remains intrinsic, though doped. This means that some conductors are both intrinsic as well as extrinsic but only if n (electron donor dopant/excited electrons) is equal to p (electron acceptor dopant/vacant holes that act as positive charges). The electrical conductivity of chemically pure semiconductors can still be affected by crystallographic defects of technological origin (like vacancies), some of which can behave similar to dopants. Their effect can often be neglected, though, and the number of electrons in the conduction band is then exactly equal to the number of holes in the valence band. The conduction of current of intrinsic semiconductor is enabled purely by electron excitation across the band-gap, which is usually small at room temperature except for narrow-bandgap semiconductors, like . The conductivity of a semiconductor can be modeled in terms of the band theory of solids. The band model of a semiconductor suggests that at ordinary temperatures there is a finite possibility that electrons can reach the conduction band and contribute to electrical conduction. A silicon crystal is different from an insulator because at any temperature above absolute zero, there is a non-zero probability that an electron in the lattice will be knocked loose from its position, leaving behind an electron deficiency called a "hole". If a voltage is applied, then both the electron and the hole can contribute to a small current flow. Electrons and holes In an intrinsic semiconductor such as silicon at temperatures above absolute zero, there will be some electrons which are excited across the band gap into the conduction band and these electrons can support charge flowing. When the electron in pure silicon crosses the gap, it leaves behind an electron vacancy or "hole" in the regular silicon lattice. Under the influence of an external voltage, both the electron and the hole can move across the material. In an n-type semiconductor, the dopant contributes extra electrons, dramatically increasing the conductivity. In a p-type semiconductor, the dopant produces extra vacancies or holes, which likewise increase the conductivity. It is however the behavior of the p-n junction which is the key to the enormous variety of solid-state electronic devices Semiconductor current The current which will flow in an intrinsic semiconductor consists of both electron and hole current. That is, the electrons which have been freed from their lattice positions into the conduction band can move through the material. In addition, other electrons can hop between lattice positions to fill the vacancies left by the freed electrons. This additional mechanism is called hole conduction because it is as if the holes are migrating across the material in the direction opposite to the free electron movement. 
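Both conduction mechanisms trace back to thermal excitation across the band gap, so the intrinsic carrier concentration n = p = n_i rises steeply with temperature, following n_i = sqrt(N_c N_v) exp(−E_g / 2kT). The sketch below gives a rough estimate for silicon; the numerical values and the T^(3/2) scaling of the effective densities of states are textbook approximations assumed for the example, not figures taken from this article.

```python
import math

K_B = 8.617e-5      # Boltzmann constant, eV/K
E_G = 1.12          # band gap of silicon near 300 K, eV (textbook approximation)
NC_300 = 2.8e19     # effective density of states, conduction band, cm^-3 (approx.)
NV_300 = 1.04e19    # effective density of states, valence band, cm^-3 (approx.)

def intrinsic_carrier_concentration(temp_k: float) -> float:
    """Approximate n_i (cm^-3) for silicon, scaling N_c and N_v as T^(3/2)."""
    scale = (temp_k / 300.0) ** 1.5
    n_c = NC_300 * scale
    n_v = NV_300 * scale
    return math.sqrt(n_c * n_v) * math.exp(-E_G / (2.0 * K_B * temp_k))

if __name__ == "__main__":
    for t in (250, 300, 350, 400):
        print(t, "K :", f"{intrinsic_carrier_concentration(t):.2e}", "cm^-3")
    # Near 300 K this gives n_i on the order of 10^10 cm^-3, and the value
    # changes by several orders of magnitude over this modest temperature range.
```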
The current flow in an intrinsic semiconductor is influenced by the density of energy states which in turn influences the electron density in the conduction band. This current is highly temperature dependent. References See also Extrinsic semiconductor N-type semiconductor P-type semiconductor Semiconductor material types
Intrinsic semiconductor
Chemistry
731
66,099,877
https://en.wikipedia.org/wiki/Point%20Processes
Point Processes is a book on the mathematics of point processes, randomly located sets of points on the real line or in other geometric spaces. It was written by David Cox and Valerie Isham, and published in 1980 by Chapman & Hall in their Monographs on Applied Probability and Statistics book series. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries. Topics Although Point Processes covers some of the general theory of point processes, that is not its main focus, and it avoids any discussion of statistical inference involving these processes. Instead, its aim is to present the properties and descriptions of several specific processes arising in applications of this theory, which had not been previously collected in texts in this area. Three of its six chapters concern more general material, while the final three are more specific. The first chapter includes introductory material on standard processes: Poisson point processes, renewal processes, self-exciting processes, and doubly stochastic processes. The second chapter provides some general theory including stationarity, orderliness (meaning that the probability of multiple arrivals in short intervals is sublinear in the interval length), Palm distributions, Fourier analysis, and probability-generating functions. Chapter four (the third of the more general chapters) concerns point process operations, methods of modifying or combining point processes to generate other processes. Chapter three, the first of the three chapters on more specific models, is titled "Special models". The special models that it covers include non-stationary Poisson processes, compound Poisson processes, and the Moran process, along with additional treatment of doubly stochastic processes and renewal processes. Until this point, the book focuses on point processes on the real line (possibly also with a time dimension), but the two final chapters concern multivariate processes and on point processes for higher dimensional spaces, including spatio-temporal processes and Gibbs point processes. Audience and reception The book is primarily a reference for researchers. It could also be used to provide additional examples for a course on stochastic processes, or as the basis for an advanced seminar. Although it uses relatively little advanced mathematics, readers are expected to understand advanced calculus and have some familiarity with probability theory and Markov chains. Writing some ten years after its original publication, reviewer Fergus Daly of The Open University writes that his copy has been well used, and that it "still is a very good book: lucid, relevant and still not matched in its approach by any other text". References Mathematics books 1980 non-fiction books Point processes
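Since the opening chapter surveys the Poisson process and its relatives, a small simulation may help connect the definitions to something concrete. The function below samples a homogeneous Poisson point process on an interval; it is a generic illustration written for this summary, not code from or associated with the book.

```python
import numpy as np

def homogeneous_poisson(rate: float, t_max: float, rng=None) -> np.ndarray:
    """Sample a homogeneous Poisson point process on [0, t_max).

    The number of points is Poisson(rate * t_max); conditional on that count,
    the points are independent and uniformly distributed on the interval.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_points = rng.poisson(rate * t_max)
    return np.sort(rng.uniform(0.0, t_max, size=n_points))

if __name__ == "__main__":
    events = homogeneous_poisson(rate=2.0, t_max=10.0)
    gaps = np.diff(events)
    # For a Poisson process the inter-event gaps are exponential with mean 1/rate.
    print(len(events), "events; mean gap:", gaps.mean() if len(gaps) else float("nan"))
```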
Point Processes
Mathematics
515
25,069,467
https://en.wikipedia.org/wiki/Bennett%20acceptance%20ratio
The Bennett acceptance ratio method (BAR) is an algorithm for estimating the difference in free energy between two systems (usually the systems will be simulated on the computer). It was suggested by Charles H. Bennett in 1976. Preliminaries Take a system in a certain super (i.e. Gibbs) state. By performing a Metropolis Monte Carlo walk it is possible to sample the landscape of states that the system moves between, using the equation p(State_x → State_y) = min(exp(−β ΔU), 1) = M(β ΔU), where ΔU = U(State_y) − U(State_x) is the difference in potential energy, β = 1/kT (T is the temperature in kelvins, while k is the Boltzmann constant), and M(x) = min(exp(−x), 1) is the Metropolis function. The resulting states are then sampled according to the Boltzmann distribution of the super state at temperature T. Alternatively, if the system is dynamically simulated in the canonical ensemble (also called the NVT ensemble), the resulting states along the simulated trajectory are likewise distributed. Averaging along the trajectory (in either formulation) is denoted by angle brackets ⟨...⟩. Suppose that two super states of interest, A and B, are given. We assume that they have a common configuration space, i.e., they share all of their micro states, but the energies associated to these (and hence the probabilities) differ because of a change in some parameter (such as the strength of a certain interaction). The basic question to be addressed is, then, how can the Helmholtz free energy change (ΔF = F_B − F_A) on moving between the two super states be calculated from sampling in both ensembles? The kinetic energy part in the free energy is equal between states so can be ignored. Also the Gibbs free energy corresponds to the NpT ensemble. The general case Bennett shows that for every function f satisfying the condition f(x)/f(−x) = exp(−x) (which is essentially the detailed balance condition), and for every energy offset C, one has the exact relationship ΔF = kT ln[⟨f(β(U_A − U_B + C))⟩_B / ⟨f(β(U_B − U_A − C))⟩_A] + C, where U_A and U_B are the potential energies of the same configurations, calculated using potential function A (when the system is in superstate A) and potential function B (when the system is in the superstate B) respectively. The basic case Substituting for f the Metropolis function defined above (which satisfies the detailed balance condition), and setting C to zero, gives ΔF = kT ln[⟨M(β(U_A − U_B))⟩_B / ⟨M(β(U_B − U_A))⟩_A] (a small numerical sketch of this estimator is given at the end of this article). The advantage of this formulation (apart from its simplicity) is that it can be computed without performing two simulations, one in each specific ensemble. Indeed, it is possible to define an extra kind of "potential switching" Metropolis trial move (taken every fixed number of steps), such that the single sampling from the "mixed" ensemble suffices for the computation. The most efficient case Bennett explores which specific expression for ΔF is the most efficient, in the sense of yielding the smallest standard error for a given simulation time. He shows that the optimal choice is to take f(x) = 1/(1 + exp(x)), which is essentially the Fermi–Dirac distribution (satisfying indeed the detailed balance condition), together with the offset C = ΔF. This value, of course, is not known (it is exactly what one is trying to compute), but it can be approximately chosen in a self-consistent manner. Some assumptions needed for the efficiency are the following: The densities of the two super states (in their common configuration space) should have a large overlap. Otherwise, a chain of super states between A and B may be needed, such that the overlap of each two consecutive super states is adequate. The sample size should be large. In particular, as successive states are correlated, the simulation time should be much larger than the correlation time. 
The cost of simulating both ensembles should be approximately equal - and then, in fact, the system is sampled roughly equally in both super states. Otherwise, the optimal expression for C is modified, and the sampling should devote equal times (rather than equal numbers of time steps) to the two ensembles. Multistate Bennett acceptance ratio The multistate Bennett acceptance ratio (MBAR) is a generalization of the Bennett acceptance ratio that calculates the (relative) free energies of several super states. It essentially reduces to the BAR method when only two super states are involved. Relation to other methods The perturbation theory method This method, also called Free energy perturbation (or FEP), involves sampling from state A only. It requires that all the high probability configurations of super state B are contained in high probability configurations of super state A, which is a much more stringent requirement than the overlap condition stated above. The exact (infinite order) result is ΔF = kT ln⟨exp(−β(U_A − U_B))⟩_B, or equivalently ΔF = −kT ln⟨exp(−β(U_B − U_A))⟩_A. This exact result can be obtained from the general BAR method, using (for example) the Metropolis function, in the limit C → ∞. Indeed, in that case, the denominator of the general case expression above tends to 1, while the numerator tends to exp(−βC)⟨exp(−β(U_A − U_B))⟩_B. A direct derivation from the definitions is more straightforward, though. The second order (approximate) result Assuming that β(U_B − U_A) is small and Taylor expanding the second exact perturbation theory expression to the second order, one gets the approximation ΔF ≈ ⟨U_B − U_A⟩_A − (β/2)(⟨(U_B − U_A)²⟩_A − ⟨U_B − U_A⟩_A²). Note that the first term is the expected value of the energy difference, while the second is essentially its variance. The first order inequalities Using the convexity of the log function appearing in the exact perturbation analysis result, together with Jensen's inequality, gives an inequality in the linear level; combined with the analogous result for the B ensemble one gets the following version of the Gibbs-Bogoliubov inequality: ⟨U_B − U_A⟩_B ≤ ΔF ≤ ⟨U_B − U_A⟩_A. Note that the inequality agrees with the negative sign of the coefficient of the (positive) variance term in the second order result. The thermodynamic integration method Writing the potential energy as U(λ), depending on a continuous parameter λ with U(0) = U_A and U(1) = U_B, one has the exact result dF/dλ = ⟨∂U(λ)/∂λ⟩_λ. This can either be directly verified from definitions or seen from the limit of the above Gibbs-Bogoliubov inequalities when the super states A and B approach each other. We can therefore write ΔF = ∫₀¹ ⟨∂U(λ)/∂λ⟩_λ dλ, which is the thermodynamic integration (or TI) result. It can be approximated by dividing the range between states A and B into many values of λ at which the expectation value is estimated, and performing numerical integration. Implementation The Bennett acceptance ratio method is implemented in modern molecular dynamics systems, such as Gromacs. Python-based code for MBAR and BAR is available for download at . See also Parallel tempering References External links Bennett Acceptance Ratio from AlchemistryWiki. Multistate Bennett Acceptance Ratio from AlchemistryWiki. Weighted Histogram Analysis Method (MBAR being the unbinned case) from AlchemistryWiki. Thermodynamics Statistical mechanics
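As a numerical illustration of the basic-case estimator above (C = 0 with the Metropolis function), the following sketch computes ΔF from samples of the potential-energy differences collected in the two ensembles, together with the simple forward FEP estimator for comparison. The array names, the kT convention, and the toy Gaussian data are assumptions made for the example, not part of Bennett's original presentation.

```python
import numpy as np

def metropolis(x):
    # M(x) = min(1, exp(-x)), the Metropolis function
    return np.minimum(1.0, np.exp(-x))

def bar_basic(du_forward, du_reverse, kT=1.0):
    """Basic-case BAR estimate of dF = F_B - F_A (C = 0, Metropolis function).

    du_forward: samples of U_B - U_A evaluated on configurations drawn from ensemble A
    du_reverse: samples of U_A - U_B evaluated on configurations drawn from ensemble B
    """
    beta = 1.0 / kT
    numerator = metropolis(beta * du_reverse).mean()    # <M(beta (U_A - U_B))>_B
    denominator = metropolis(beta * du_forward).mean()  # <M(beta (U_B - U_A))>_A
    return kT * np.log(numerator / denominator)

def fep_forward(du_forward, kT=1.0):
    """Exponential-averaging (FEP) estimate, sampling from ensemble A only."""
    beta = 1.0 / kT
    return -kT * np.log(np.mean(np.exp(-beta * du_forward)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: U_B - U_A Gaussian with unit variance, mean 1 in ensemble A and mean 0 in B
    # (a self-consistent Gaussian model for which the exact answer is dF = 0.5 kT).
    du_A = rng.normal(loc=1.0, scale=1.0, size=50_000)  # U_B - U_A sampled in A
    du_B = rng.normal(loc=0.0, scale=1.0, size=50_000)  # U_A - U_B sampled in B
    print("BAR estimate:", bar_basic(du_A, du_B))
    print("FEP estimate:", fep_forward(du_A))
```

Both estimates should come out close to 0.5 for this toy model; for poorly overlapping ensembles the FEP estimate degrades much faster than the BAR estimate, which is the practical motivation for the method.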
Bennett acceptance ratio
Physics,Chemistry,Mathematics
1,325
20,559,667
https://en.wikipedia.org/wiki/Photoacoustic%20Doppler%20effect
The photoacoustic Doppler effect is a type of Doppler effect that occurs when an intensity-modulated light wave induces a photoacoustic wave of a specific frequency on moving particles. The observed frequency shift is a good indicator of the velocity of the illuminated moving particles. A potential biomedical application is measuring blood flow. Specifically, when an intensity-modulated light wave is exerted on a localized medium, the resulting heat can induce an alternating and localized pressure change. This periodic pressure change generates an acoustic wave with a specific frequency. Among the various factors that determine this frequency, the velocity of the heated area, and thus of the moving particles in this area, can induce a frequency shift proportional to the relative motion. Thus, from the perspective of an observer, the observed frequency shift can be used to derive the velocity of the illuminated moving particles. Theory For simplicity, consider first a clear medium. The medium contains small optical absorbers moving with velocity vector . The absorbers are irradiated by a laser with intensity modulated at frequency . Thus, the intensity of the laser can be described by: When is zero, an acoustic wave with the same frequency as the light intensity wave is induced. Otherwise, there is a frequency shift in the induced acoustic wave. The magnitude of the frequency shift depends on the relative velocity , the angle between the velocity and the photon density wave propagation direction, and the angle between the velocity and the ultrasonic wave propagation direction. The frequency shift is given by: where is the speed of light in the medium and is the speed of sound. The first term on the right side of the expression represents the frequency shift in the photon density wave observed by the absorber acting as a moving receiver. The second term represents the frequency shift in the photoacoustic wave due to the motion of the absorbers observed by the ultrasonic transducer. In practice, since and , only the second term is detectable. Therefore, the above equation reduces to: In this approximation, the frequency shift is not affected by the direction of the optical radiation. It is only affected by the magnitude of the velocity and the angle between the velocity and the acoustic wave propagation direction. This equation also holds for a scattering medium. In this case, the photon density wave becomes diffusive due to light scattering. Although the diffusive photon density wave has a slower phase velocity than the speed of light, its wavelength is still much longer than that of the acoustic wave. Experiment In the first demonstration of the photoacoustic Doppler effect, a continuous-wave diode laser was used in a photoacoustic microscopy setup with an ultrasonic transducer as the detector. The sample was a solution of absorbing particles moving through a tube. The tube was in a water bath containing scattering particles. Figure 2 shows the relationship between the average flow velocity and the experimental photoacoustic Doppler frequency shift. In a scattering medium, such as the experimental phantom, fewer photons reach the absorbers than in an optically clear medium. This affects the signal intensity but not the magnitude of the frequency shift. Another demonstrated feature of this technique is that it is capable of measuring flow direction relative to the detector based on the sign of the frequency shift. The reported minimum detected flow rate is 0.027 mm/s in the scattering medium. 
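In the small-velocity approximation discussed above, the detectable shift is commonly written as Δf ≈ f0 (v / vs) cos θ, where f0 is the intensity-modulation frequency, vs the speed of sound, and θ the angle between the flow velocity and the acoustic propagation direction toward the transducer. The sketch below simply evaluates that expression; the modulation frequency and speed of sound are placeholder values chosen for illustration, not parameters of the experiment described here.

```python
import math

SPEED_OF_SOUND_WATER = 1480.0  # m/s, assumed value for the acoustic medium

def pad_frequency_shift(f_mod_hz, speed_m_s, angle_deg, v_sound=SPEED_OF_SOUND_WATER):
    """
    Approximate photoacoustic Doppler shift, keeping only the acoustic term:
        df ~= f_mod * (v / v_sound) * cos(theta)
    where theta is the angle between the particle velocity and the direction
    of acoustic propagation toward the transducer.
    """
    return f_mod_hz * (speed_m_s / v_sound) * math.cos(math.radians(angle_deg))

# A capillary-scale flow of 1 mm/s observed head-on with a 2.45 MHz modulation frequency
print(pad_frequency_shift(2.45e6, 1e-3, 0.0))   # ~1.66 Hz
```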
Application One promising application is the non-invasive measurement of flow. This is related to an important problem in medicine: the measurement of blood flow through arteries, capillaries, and veins. Measuring blood velocity in capillaries is an important component to clinically determining how much oxygen is delivered to tissues and is potentially important to the diagnosis of a variety of diseases including diabetes and cancer. However, a particular difficulty of measuring flow velocity in capillaries is caused by the low blood flow rate and micrometre-scale diameter. Photoacoustic Doppler effect based imaging is a promising method for blood flow measurement in capillaries. Existing techniques Based on either ultrasound or light there are several techniques currently being used to measure blood velocity in a clinical setting or other types of flow velocities. Doppler ultrasound The Doppler ultrasound technique uses Doppler frequency shifts in ultrasound wave. This technique is currently used in biomedicine to measure blood flow in arteries and veins. It is limited to high flow rates (cm/s) generally found in large vessels due to the high background ultrasound signal from biological tissue. Laser doppler flowmetry Laser Doppler Flowmetry utilizes light instead of ultrasound to detect flow velocity. The much shorter optical wavelength means this technology is able to detect low flow velocities out of the range of Doppler ultrasound. But this technique is limited by high background noise and low signal due to multiple scattering. Laser Doppler flowmetry can measure only the averaged blood speed within 1mm3 without information about flow direction. Wideband laser Doppler imaging by digital holography with a high-speed camera can overcome some of the limitations of laser Doppler flowmetry and achieve blood flow measurements in superficial vessels at higher spatial and temporal resolution. Doppler optical coherence tomography Doppler Optical coherence tomography is an optical flow measurement technique that improves on the spatial resolution of laser Doppler flowmetry by rejecting multiple scattering light with coherent gating. This technique is able to detect flow velocity as low as m/s with the spatial resolution of m. The detection depth is usually limited by the high optical scattering coefficient of biological tissue to mm. Photoacoustic doppler flowmetry Photoacoustic Doppler effect can be used to measure the blood flow velocity with the advantages of Photoacoustic imaging. Photoacoustic imaging combines the spatial resolution of ultrasound imaging with the contrast of optical absorption in deep biological tissue. Ultrasound has good spatial resolution in deep biological tissue since ultrasonic scattering is much weaker than optical scattering, but it is insensitive to biochemical properties. Conversely, optical imaging is able to achieve high contrast in biological tissue via high sensitivity to small molecular optical absorbers, such as hemoglobin found in red blood cells, but its spatial resolution is compromised by the strong scattering of light in biological tissue. By combining the optical imaging with ultrasound, it is possible to achieve both high contrast and spatial resolution. The photoacoustic Doppler flowmetry could use the power of photoacoustics to measure flow velocities that are usually inaccessible to pure light-based or ultrasound techniques. 
The high spatial resolution could make it possible to pinpoint only a few absorbing particles localized to a single capillary. High contrast from the strong optical absorbers make it possible to clearly resolve the signal from the absorbers over the background. See also Photoacoustic spectroscopy Photoacoustic imaging in biomedicine Photoacoustic tomography Doppler effect laser Doppler imaging Doppler optical coherence tomography References Doppler effects Radio frequency propagation Wave mechanics Radar signal processing
Photoacoustic Doppler effect
Physics
1,425
23,414,619
https://en.wikipedia.org/wiki/Y%20Chromosome%20Consortium
The Y Chromosome Consortium (YCC) was a collection of scientists who worked toward the understanding of human Y chromosomal phylogenetics and evolution. The consortium had the following objective: web resources that communicate information relating to the non-recombinant region of the Y-chromosome including new variants and changes in the nomenclature. The consortium sponsored literature regarding updates in the phylogenetics and nomenclature. See also Human Y-chromosome DNA haplogroup International Society of Genetic Genealogy (ISOGG) References External links Y-DNA Haplogroup Tree at ISOGG International scientific organizations Phylogenetics Population genetics organizations
Y Chromosome Consortium
Biology
122
4,504,020
https://en.wikipedia.org/wiki/Weight%20machine
A weight machine is an exercise machine used for weight training that uses gravity as the primary source of resistance and a combination of simple machines to convey that resistance to the person using the machine. Each of the simple machines (pulley, lever, wheel, incline) changes the mechanical advantage of the overall machine relative to the weight. Stack machines A stack machine—also called a stack or rack—has a set of rectangular plates that are pierced by a vertical bar which has holes drilled in it to accept a pin. Each of the plates has a channel on its underside (or a hole through the middle, as visible in the picture) that aligns with one of the holes. When the pin is inserted through the channel into the hole, all of the plates above the pin rest upon it, and are lifted when the bar rises. The plates below do not rise. This allows the same machine to provide several levels of resistance over the same range of motion with an adjustment that requires very little force to accomplish in itself. The means of lifting the bar varies. Some machines have a roller at the top of the bar that sits on a lever. When the lever is raised the bar can go up and the roller moves along the lever, allowing the bar to stay vertical. On some machines the bar is attached to a hinge on the lever, which causes swaying in the bar and the plates as the lever goes up and down. On other machines the bar is attached to a cable or belt, which runs through pulleys or over a wheel. The other end of the cable will either be a handle or strap that the user holds or wraps around some body part, or will be attached to a lever, adding further simple machines to the mechanical chain. Usually, each plate is marked with a number. On some machines these numbers give the actual weight of the plate and those above it. On some, the number gives the force at the user's actuation point with the machine. And on some machines the number is simply an index counting the number of plates being lifted. The early Nautilus machines were a combination of lever and cable machines. They also had optional, fixed elements such as a chinning bar. Universal Gym Equipment pioneered the multi-station style of machines. Plate-loaded machines Plate-loaded machines (such as the Smith machine or sled-type leg press) use standard barbell plates instead of captive stacks of plates. They combine a bar-end on which to hang the plates with several simple machines to convey the force to the user. The plate-loaded machines will often have a very high mechanical advantage, due to the need to make room for large plates over a large range of motion following a path that causes them to converge at one end or the other. Also, the motion will generally not be vertical, and the net resistance is equal to the cosine of the angle at which it is moving relative to vertical. For example, consider an incline press machine that is a single-lever machine that has the plates halfway up the lever from the handles to the fulcrum, and begins moving the plates at a 45-degree angle from vertical. The lever will provide a leverage advantage of 2:1, and the incline will have an advantage of 1:√2/2, for a net mechanical advantage of . Thus () of plates will apply to the user only an equaling weight of or a force of at the beginning of the motion. On the other end of the spectrum may be a bent-over-row machine that is designed with the user's grip between the plates and the fulcrum. This amplifies the force needed by the user relative to the weight of the plates. 
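The incline-press example in the paragraph above can be made concrete with a few lines of arithmetic. The sketch below assumes the stated geometry (a 2:1 lever with the plates moving at 45 degrees from vertical) and ignores friction and inertia; the function name and the sample numbers are illustrative only.

```python
import math

def effective_resistance(plate_mass_kg, lever_ratio, angle_from_vertical_deg, g=9.81):
    """
    Force (in newtons) the user must apply to hold the plates still, for a
    plate-loaded machine combining a simple lever and an inclined travel path.

    lever_ratio: mechanical advantage of the lever (2.0 means the plates sit
                 halfway between the handles and the fulcrum).
    angle_from_vertical_deg: direction the plates move, measured from vertical;
                 only the vertical component of the motion works against gravity.
    """
    weight_n = plate_mass_kg * g
    return weight_n * math.cos(math.radians(angle_from_vertical_deg)) / lever_ratio

# Incline-press example: 2:1 lever, plates starting to move at 45 degrees from vertical
print(effective_resistance(100.0, 2.0, 45.0))   # ~346.8 N, i.e. roughly a 35 kg "feel"
```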
See also Cable machine Smith machine Personal trainer References Weight training equipment Machines
Weight machine
Physics,Technology,Engineering
766
8,140,616
https://en.wikipedia.org/wiki/Dvoretzky%E2%80%93Kiefer%E2%80%93Wolfowitz%20inequality
In the theory of probability and statistics, the Dvoretzky–Kiefer–Wolfowitz–Massart inequality (DKW inequality) provides a bound on the worst case distance of an empirically determined distribution function from its associated population distribution function. It is named after Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz, who in 1956 proved the inequality with an unspecified multiplicative constant C in front of the exponent on the right-hand side. In 1990, Pascal Massart proved the inequality with the sharp constant C = 2, confirming a conjecture due to Birnbaum and McCarty. In 2021, Michael Naaman proved the multivariate version of the DKW inequality and generalized Massart's tightness result to the multivariate case, which results in a sharp constant of twice the dimension k of the space in which the observations are found: C = 2k. The DKW inequality Given a natural number n, let X1, X2, …, Xn be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let Fn denote the associated empirical distribution function defined by so is the probability that a single random variable is smaller than , and is the fraction of random variables that are smaller than . The Dvoretzky–Kiefer–Wolfowitz inequality bounds the probability that the random function Fn differs from F by more than a given constant ε > 0 anywhere on the real line. More precisely, there is the one-sided estimate which also implies a two-sided estimate This strengthens the Glivenko–Cantelli theorem by quantifying the rate of convergence as n tends to infinity. It also estimates the tail probability of the Kolmogorov–Smirnov statistic. The inequalities above follow from the case where F corresponds to be the uniform distribution on [0,1] as Fn has the same distributions as Gn(F) where Gn is the empirical distribution of U1, U2, …, Un where these are independent and Uniform(0,1), and noting that with equality if and only if F is continuous. Multivariate case In the multivariate case, X1, X2, …, Xn is an i.i.d. sequence of k-dimensional vectors. If Fn is the multivariate empirical cdf, then for every ε, n, k > 0. The (n + 1) term can be replaced with a 2 for any sufficiently large n. Kaplan–Meier estimator The Dvoretzky–Kiefer–Wolfowitz inequality is obtained for the Kaplan–Meier estimator which is a right-censored data analog of the empirical distribution function for every and for some constant , where is the Kaplan–Meier estimator, and is the censoring distribution function. Building CDF bands The Dvoretzky–Kiefer–Wolfowitz inequality is one method for generating CDF-based confidence bounds and producing a confidence band, which is sometimes called the Kolmogorov–Smirnov confidence band. The purpose of this confidence interval is to contain the entire CDF at the specified confidence level, while alternative approaches attempt to only achieve the confidence level on each individual point, which can allow for a tighter bound. The DKW bounds runs parallel to, and is equally above and below, the empirical CDF. The equally spaced confidence interval around the empirical CDF allows for different rates of violations across the support of the distribution. In particular, it is more common for a CDF to be outside of the CDF bound estimated using the DKW inequality near the median of the distribution than near the endpoints of the distribution. 
The interval that contains the true CDF, , with probability is often specified as which is also a special case of the asymptotic procedure for the multivariate case, whereby one uses the following critical value for the multivariate test; one may replace 2k with k(n + 1) for a test that holds for all n; moreover, the multivariate test described by Naaman can be generalized to account for heterogeneity and dependence. See also Concentration inequality – a summary of bounds on sets of random variables. References Asymptotic theory (statistics) Statistical inequalities Empirical process
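Inverting the two-sided estimate at a chosen confidence level 1 − α gives the usual band half-width ε = sqrt(ln(2/α) / (2n)), which is the construction described in this section. The following is a minimal sketch of that band for a one-dimensional sample; the function and variable names are ad hoc and not taken from any particular statistics library.

```python
import numpy as np

def dkw_band(sample, alpha=0.05):
    """
    Empirical CDF of a 1-D sample together with the DKW confidence band:
    with probability at least 1 - alpha the true CDF lies entirely inside
    [ecdf - eps, ecdf + eps], where eps = sqrt(ln(2/alpha) / (2 n)).
    """
    x = np.sort(np.asarray(sample))
    n = x.size
    ecdf = np.arange(1, n + 1) / n
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))
    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)
    return x, ecdf, lower, upper

# 95% band for 1000 standard-normal draws
rng = np.random.default_rng(1)
x, ecdf, lo, hi = dkw_band(rng.standard_normal(1000))
print(hi[0] - ecdf[0])   # band half-width eps, ~0.043 for n = 1000
```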
Dvoretzky–Kiefer–Wolfowitz inequality
Mathematics
911
19,079,639
https://en.wikipedia.org/wiki/Kyriakos%20Tamvakis
Kyriakos Tamvakis (; born in 1950) is a Greek theoretical physicist and professor at the University of Ioannina. Kyriakos Tamvakis studied at the University of Athens and gained his Ph.D. at Brown University, Providence, Rhode Island, USA in 1978. His thesis title was "Induced Boson Selfcouplings In Four Fermion And Yukawa Theories". Since then he has held several positions at CERN’s Theory Division in Geneva, Switzerland. He has been Professor of Theoretical Physics at the University of Ioannina, Greece, since 1982. Professor Tamvakis has published more than 100 articles on theoretical high-energy physics in various journals and has written two textbooks in Greek, on quantum mechanics and on classical electrodynamics. References External links Publications in Inspire Ph.D.Thesis (Brown University, 1978) 21st-century Greek physicists Living people 1950 births Theoretical physicists People associated with CERN National and Kapodistrian University of Athens alumni Brown University alumni 20th-century Greek physicists
Kyriakos Tamvakis
Physics
215
33,663,886
https://en.wikipedia.org/wiki/ChiRP-Seq
ChiRP-Seq (Chromatin Isolation by RNA purification) is a high-throughput sequencing method to discover regions of the genome which are bound by a specific RNA (or by a ribonucleoprotein containing the RNA of interest). Recent studies have shown that a significant proportion of some genomes (including mouse and human genomes) synthesize RNAs that apparently do not code for proteins. The function of most of these non-coding RNAs still has to be ascertained. Various genomic methods are being developed to map the functional association of these novel RNAs to distinct regions of the genome to gain a better understanding of their function. ChiRP-Seq is one of these new methods; it uses the massively parallel sequencing capability of second-generation sequencers to catalog the binding sites of these novel RNA molecules on a genome. Although many have believed that RNAs mainly code for proteins, a very large portion of the eukaryotic genome is composed of RNAs that do not. These RNAs were originally considered junk until new advances led to the realization that they may indeed have a biological purpose. Over the last few years, lncRNAs have been among the least explored and least functionally characterized of the emerging regulatory molecules, especially in comparison with their short counterparts, small ncRNAs. ChiRP-Seq is a new technique that has allowed researchers to map long RNA occupancy across the genome at a higher resolution than ever before. ChiRP-Seq works via affinity capture of a target complex of lncRNA and chromatin by tiling antisense oligonucleotides. The technique allows researchers to map genomic binding sites at a resolution of several hundred bases, with high sensitivity and low background. Overview of method Tens of oligonucleotide probes are designed to be complementary to the RNA of interest. These oligos are labeled with biotin. Cells are cross-linked by UV or formalin and nuclei are isolated from these treated cells. The isolated nuclei are lysed and the released chromatin is fragmented by sonication to produce fragments of approximately 100–500 bp. These chromatin fragments are hybridized to the biotinylated probe set. Complexes containing biotin-probe + RNA of interest + DNA fragment are captured by magnetic beads coated with streptavidin. DNA is isolated from an aliquot of the bound complex by treatment with RNase (or proteinase followed by RNase) to digest associated protein and RNA. RNA may also be isolated from an additional aliquot of the bound complex to detect other RNA molecules associated with the RNA of interest. The purified DNA is then used to prepare a sequencing library and the library is sequenced on a next-generation DNA sequencing system. The sequencing reads are then mapped to the genome. A pile-up of reads at specific locations on the genome indicates that the RNA of interest was bound to that region of the genome. This helps delineate specific genomic regions that interact with RNA. For example, genomic targets of enhancer RNAs, which act at a distance from their site of synthesis, can be readily evaluated by ChiRP-Seq. References DNA sequencing
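The final mapping step, in which a pile-up of sequencing reads marks a putative RNA-bound region, can be caricatured in a few lines of code. The sketch below is only a toy illustration of that idea, with hypothetical read coordinates and an arbitrary fold-change cutoff; real ChiRP-Seq analyses use dedicated alignment and peak-calling software.

```python
from collections import Counter

def binned_coverage(read_positions, bin_size=500):
    """Count mapped-read start positions per genomic bin (toy pile-up)."""
    return Counter(pos // bin_size for pos in read_positions)

def enriched_bins(chirp_reads, control_reads, bin_size=500, min_fold=5, pseudo=1):
    """
    Flag bins where the ChiRP library is enriched over a control library,
    using a simple fold change with a pseudocount. This only illustrates the
    "pile-up of reads indicates a binding site" idea from the text.
    """
    chirp = binned_coverage(chirp_reads, bin_size)
    ctrl = binned_coverage(control_reads, bin_size)
    return {b: c for b, c in chirp.items()
            if (c + pseudo) / (ctrl.get(b, 0) + pseudo) >= min_fold}

# Hypothetical read start coordinates on one chromosome
print(enriched_bins([1200, 1260, 1310, 1350, 1399, 9000], [600, 9010, 9050]))
# -> {2: 5}: the five reads piling up around 1.2-1.4 kb flag bin 2 as enriched
```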
ChiRP-Seq
Chemistry,Biology
652
67,432,243
https://en.wikipedia.org/wiki/Convex%20compactification
In mathematics, specifically in convex analysis, the convex compactification is a compactification which is simultaneously a convex subset in a locally convex space in functional analysis. The convex compactification can be used for relaxation (as continuous extension) of various problems in variational calculus and optimization theory. The additional linear structure allows e.g. for developing a differential calculus and more advanced considerations e.g. in relaxation in variational calculus or optimization theory. It may capture both fast oscillations and concentration effects in optimal controls or solutions of variational problems. They are known under the names of relaxed or chattering controls (or sometimes bang-bang controls) in optimal control problems. The linear structure gives rise to various maximum principles as first-order necessary optimality conditions, known in optimal-control theory as Pontryagin's maximum principle. In variational calculus, the relaxed problems can serve for modelling of various microstructures arising in modelling Ferroics, i.e. various materials exhibiting e.g. Ferroelasticity (as Shape-memory alloys) or Ferromagnetism. The first-order optimality conditions for the relaxed problems leads Weierstrass-type maximum principle. In partial differential equations, relaxation leads to the concept of measure-valued solutions. The notion was introduced by Roubíček in 1991. Example The set of Young measures arising from bounded sets in Lebesgue spaces. The set of DiPerna-Majda measures arising from bounded sets in Lebesgue spaces. See also Young measures References Notes Sources convex analysis Compactification (mathematics)
Convex compactification
Mathematics
325
5,595,712
https://en.wikipedia.org/wiki/Digital%20cross-connect%20system
A digital cross-connect system (DCS or DXC) is a piece of circuit-switched network equipment, used in telecommunications networks, that allows lower-level TDM bit streams, such as DS0 bit streams, to be rearranged and interconnected among higher-level TDM signals, such as DS1 bit streams. DCS units are available that operate on both older T-carrier/E-carrier bit streams, as well as newer SONET/SDH bit streams. DCS devices can be used for "grooming" telecommunications traffic, switching traffic from one circuit to another in the event of a network failure, supporting automated provisioning, and other applications. Having a DCS in a circuit-switched network provides important flexibility that can otherwise only be obtained at higher cost using manual "DSX" cross-connect patch panels. DCS devices "switch" traffic, but they are not packet switches—they switch circuits, not packets, and the circuit arrangements they are used to manage tend to persist over very long time spans, typically months or longer, as compared to packet switches, which can route every packet differently, and operate on micro- or millisecond time spans. DCS units are also sometimes colloquially called "DACS" units, after a proprietary brand name of DCS units created and sold by AT&T's Western Electric division, now Alcatel-Lucent. Modern digital access and cross-connect systems are not limited to the T-carrier system, and may accommodate high data rates such as those of SONET. Transmuxing Transmuxing (transmux: transcode multiplexing) is a telecommunications signaling format change between two signaling methods, typically synchronous optical network signals, SONET, and various time-division multiplexing, TDM, signals. Transmuxing changes the “container” without changing the “contents.” Transmuxing provides the carrier the capability to embed a telecommunications signal from one logical TDM circuit to another within SONET without physically breaking down the TDM circuit into its components and reconstructing it. There are two types of transmuxing – electrical transmuxing and Optical transmuxing (sometimes called portless transmuxing). In electrical transmuxing, TDM signals (typically DS1/T1 or DS3) are brought in using copper connections, transmuxed to SONET and transported across the network until the reverse occurs. In optical transmuxing, TDM signals (DS1/T1, DS3, OCx) are brought in using fiber optics, transmuxed to SONET and transported across the network until the reverse occurs. In the U.S. and Japan, DS1/T1 signals are transmuxed into a SONET virtual tributary called a VT1.5. Traffic grooming Traffic grooming is the process of grouping smaller telecommunications signals into larger. This is typically done to minimize the number of connections and circuits needed to optimize the total cost. In TDM, 24 DS0 signals are grouped into a DS1/T1 signal and 28 DS1/T1 signals are groomed into a DS3 signal. A single DS3 signal carries 44.736 Mbit/s of data (672 DS0) and can be sent using a single cable. Circuit switching Circuit switching is the process of redirecting data signals from one input location to another. Mixed traffic handling In a Central Office DCS system, all kinds of signals connect into a DCS. Common signals connecting to a DCS are at the Electrical - DS1, DS3 levels and Optical (OCx) - OC3, OC12, OC48, and OC192. The DCS must be able to groom the traffic, economically and quickly, at the most efficient and desired levels. This is performed at the lowest level possible - DS1 level (or VT1.5) is preferred. 
A SONET 3/1 DCS will transmux and carry DS3 signals as STS-1 signals and groom TDM DS1/T1s using VT1.5 signals. The Central Office is where signals are generally switched and groomed to route DS1s needing to be mapped to other Optical or Electrical signals to get to different equipment or sent along to other Central Offices. If an Electrical DS3 is received, it would be connected to an Electrical Transmux port in the DCS where it would be converted from a DS3, demultiplexed back down to the DS1 level (28 DS1s), overhead would be added to the DS1s to make them VT1.5s and the VT1.5s would be put into an STS-1 and sent to the DCS Matrix as a VT mapped STS-1. If a DS3 is delivered to the Central Office inside a STS-1 (DS3 mapped STS1) carried in an OCx signal, the OCx would be connected to the DCS where the DS3 mapped STS-1 would be Optically Transmuxed and converted to a VT mapped STS-1, inside the DCS without terminating the electrical signal, and sent to the DCS Matrix as a VT mapped STS-1. In the DCS VT Matrix, the VT1.5s would be groomed from any VT mapped STS-1 to any other VT mapped STS-1s that are provisioned in the DCS VT Matrix. In diagram A, a Traverse DCS is shown receiving mixed traffic into I/O shelves. In those I/O shelves, the signals are prepared to be sent to the central Matrix shelf as VT mapped STSs. In the case of receiving an Electrical DS3, where 28 DS1s were muxed into a DS3 by means of an external M13 multiplexer (like a WideBank28 or TransAccess200), it will connect to an Electrical Tmux port on the I/O shelf to be Electrically Transmuxed. And, when a DS3 is connected to an I/O shelf via an optical OCx signal, the I/O shelf will Optically Transmux the DS3. All the VT mapped STSs from an I/O shelf are then sent to the central DCS Matrix shelf, where VT1.5s (DS1s) are groomed directly from one VT mapped STS1 to another VT mapped STSs in the VT Matrix and sent back out to an I/O shelf for further routing. See also Optical cross-connect References Cisco Technical Glossary iQor MarketPlace web site Network architecture Telecommunications equipment Cross connect system
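The grooming arithmetic mentioned above (24 DS0 channels per DS1/T1 and 28 DS1s per DS3, i.e. 672 DS0 per DS3) is easy to sketch in code. The example below is purely illustrative and is not modelled on any real DCS provisioning software.

```python
import math

DS0_PER_DS1 = 24   # 64 kbit/s voice-grade channels per DS1/T1
DS1_PER_DS3 = 28   # DS1s multiplexed into one DS3 (so 672 DS0 per DS3)

def groom(ds0_count):
    """
    How many DS1 and DS3 "containers" are needed to carry a given number of
    DS0 circuits if they are packed as tightly as possible - the arithmetic
    behind the traffic-grooming step a DCS performs.
    """
    ds1 = math.ceil(ds0_count / DS0_PER_DS1)
    ds3 = math.ceil(ds1 / DS1_PER_DS3)
    return ds1, ds3

print(groom(1000))   # (42, 2): 1000 DS0s fill 42 DS1s, groomed into 2 DS3s
```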
Digital cross-connect system
Technology,Engineering
1,367
41,548,415
https://en.wikipedia.org/wiki/Sulfinalol
Sulfinalol is a beta adrenergic receptor antagonist. Synthesis The methyl group on a sulfoxide is sufficiently acidic to substitute for phenolic hydroxyl. The preparation of this combined α- and β-blocker sulfinalol begins by protection of the phenolic hydroxyl as its benzoate ester. Bromination followed by condensation with 4-(4-methoxyphenyl)butan-2-amine (not PMA) gives the aminoketone 3. Successive catalytic reduction and saponification affords aminoalcohol 4. Oxidation of the sulfide to the sulfoxide with a reagent such as metaperiodate gives sulfinalol (5). References Beta blockers Sulfoxides Phenols 4-Methoxyphenyl compounds
Sulfinalol
Chemistry
178
73,570,383
https://en.wikipedia.org/wiki/Plant-based%20leather
Plant-based leather, also known as vegan leather or eco-leather, is a type of material made from plant-based sources as an alternative to traditional leather, which is typically made from animal hides. Plant-based leather can be made from a variety of sources, including pineapple leaves, mushrooms, corn, apple peels, and recycled plastic. The growing interest in sustainable and environmentally friendly products has led to increased demand for plant-based leather in recent years. Apple leather Apple leather, also known as AppleSkin, is a plant-based leather invented by Alberto Volcan from Bolzano, Italy. Working with the waste-recycling company Frumat and the manufacturer Mabel, Volcan began his research on turning waste from the apple industry into usable material in 2004. The first products made with apple leather were manufactured in 2019; the material is most commonly used for small accessories such as wallets. One of the leading producers of apple leather is OLIVER CO, based in Bermondsey, South London; the company creates sustainable accessories such as wallets, cardholders, and phone cases. Production There are two processes that can turn apple waste into leather. The first process turns the apple waste into a purée, which is then spread flat on a sheet and dehydrated; next, the sheet is combined with polyurethane to add durability. The second process turns the apple waste into a powder, which is then combined with polyurethane and coated onto a cotton and polyester backing. Sustainability AppleSkin apple leather is PETA-approved vegan, USDA BioPreferred approved, and OEKO-TEX certified. Despite the name, apple leather is not entirely biodegradable. After being combined with polyurethane, the leather is only 50% plant-based. However, apple leather production emits less carbon dioxide (CO2) than PU leather; for every of apple waste used as a substitute for PU, of CO2 is saved. The majority of the sustainability benefit of apple leather lies in its consumption of waste; by repurposing part of the 4 million metric tonnes per year of waste that comes from apple peels and stalks, the process keeps the surplus from decomposing and producing methane, which contributes to climate change. Cactus leather Cactus leather is a plant-based leather produced from the mature leaves of the nopal (prickly-pear) cactus native to Mexico. Founded by entrepreneurs Adrián López Velarde and Marte Cázarez, Desserto was the first company to manufacture cactus leather. Their goal was to create a sustainable material that fit the specifications required by the industries that utilize animal and/or synthetic leather. Following two years of research and development, the leather was completed in July 2019 and was first showcased in Milan, Italy, in October 2019. It is now used in a variety of fashion and automotive products, marking a significant step towards sustainable alternatives in these industries. Production The cactus needs only 200 liters of water to grow one kilogram of biomass; those 200 liters are absorbed by the plant from the humidity of the environment, so the plant does not have to be irrigated. The hygroscopic mechanism of the cactus absorbs CO2 at night, when the environment is cool: the plant opens its stomata, capturing CO2, generating oxygen, and absorbing water present in the atmosphere, which normally comes from the morning dew. The process of producing cactus leather has several steps. First, the mature pads of the cactus are harvested, cleaned, and ground down. 
Next, the pads are dried under the sun for three to five days. Then, fibers are separated from the dried pads and mixed with chemicals to form a bio-resin, which is then poured over a carrier such as cotton or polyester. Winner Nippon leatherette Pvt. Ltd. manufactures cactus leather in India. Sustainability Desserto cactus leather is mostly biodegradable, consisting of 92% organic carbon content and has a tested durability of ten years. Most steps in the cactus leather production process are also sustainable in practice; the Desserto farm generates only of carbon dioxide annually while absorbing over per year. When harvesting the mature leaves, the cactus is not harmed, so it continues to grow. The cacti do not require herbicides or pesticides. Of the of water required to grow of cactus biomass, the majority is absorbed naturally from atmospheric humidity. Cork leather Cork leather is a plant-based leather made from bark harvested from cork oak trees native to many parts of Europe. There is little information regarding which company originally created the idea for cork leather, but current companies that produce it include Mahi Leather in Kanpur, Northern India, and HZcork located in Dongguan, China which produces both cork leather and cork fabrics. Production The process for harvesting and manufacturing cork leather is much simpler than apple and cactus leather. First, the cork tree bark is stripped into planks, these planks are then air-dried for six months; next the boards are boiled in water and pressed into thinner sheets. After this, the sheet of cork is adhered onto a fabric backing, usually cotton or polyester, with suberin, an adhesive naturally produced by the cork. When extra durability is needed, the cork is bonded to the backing with polyurethane, which decreases the fabric's biodegradability. Sustainability Both Mahi brand cork leather and HZcork brand leather have a sustainable production process. When done correctly, the oak is not harmed when the bark is harvested; additionally, a single cork tree will produce usable bark for over 200 years. The process of turning bark into leather does not involve toxic chemicals nor does it emit pollution; cork trees also do not release harmful chemicals when burned. The downside to the use of cork leather are that it is not as durable as animal leather, and despite being one of the most environmentally friendly plant-based leathers, it is underutilized by fashion companies due to its unique texture. Mushroom leather Mushroom leather is a plant-based leather made from mycelium, the vegetative filaments that make up the branches of fungi. Mushroom leather was first developed in 2013 by Philip Ross and Jonas Edvard and called MYX, which was made from the waste of the oyster mushroom industry. About of mushroom leather is produced per year, at an average of $50.00 per square foot ($540.00 per square meter). Current mushroom leather producers include Mylo by Bolt Threads, MycoWorks, which patented their product in 2015, and MuSkin. Mushroom leather is primarily produced in Indonesia. Production Mushroom leather has one of the most complicated production processes of the plant-based leathers. First the substrate, the materials used as food for the mushroom, such as corn or any agriculture waste, is put into a bag, dampened, and pasteurized; this causes the mycelium to grow and colonize the substrate for two to three weeks, at which point it is harvested. 
The harvested mycelium is then compressed; during the compression, dyes or textures can be added to create the desired color and texture. Sustainability The main bonuses of sustainability in the production of mushroom leather come from the fact that the production is closed-loop, which means that the materials needed to make the substrate can come from consumer or industry waste, and that the end product can also be repurposed as fertilizer. Mylo brand mushroom leather is 80% bio based without synthetic backings or adhesives. In most cases, mushroom leathers are completely biodegradable; however, similar to cork leathers, when extra durability is needed, the mushroom leather is reinforced with polyurethane, which decreases its biodegradability. Pineapple leather Pineapple leather is a plant-based leather made from the cellulose fibers of pineapple leaves. The pineapple leather, Piñatex, was developed by Carmen Hijosa and is produced by textile company Ananas Anam. Production To create the pineapple leather, the fibers are extracted from the leaves and felted together to produce a non-woven mat; the mat is then washed, pressed, and dyed; this is considered the raw Piñafelt. The felt is then combined with non-biodegradable polyurethane resin for durability. Sustainability Piñatex is a certified Benefit Corporation, B-Corp, meaning that the company is high in transparency, sustainability, and standards of performance. Production of pineapple leather emits less carbon dioxide than the production of traditional vegan leather, as each meter (3.28 feet) of Piñatex prevents of CO2 emissions. Despite the Piñafelt consisting of 100% plant-based materials, the combination with polyurethane in the final stage means that Piñatex will not naturally biodegrade. Additional leathers Additional plant-based leathers, for which there is limited production information available, include agave, coffee, grape, and olive leathers. See also Artificial leather References Artificial leather
Plant-based leather
Chemistry
1,848
72,389,705
https://en.wikipedia.org/wiki/4-HO-MALT
4-HO-MALT (4-hydroxy-N-methyl-N-allyltryptamine) is a tryptamine derivative which has been sold as a designer drug, first being detected in Slovenia in 2021. See also 4-HO-MiPT 4-HO-McPT 4-HO-MPT 4-HO-NALT 4-AcO-DALT 5-MeO-MALT References Hydroxyarenes Tryptamines Tertiary amines
4-HO-MALT
Chemistry
101
76,734,649
https://en.wikipedia.org/wiki/Autograph%20of%20Nicolaus%20Copernicus%27%20De%20revolutionibus
The autograph of Nicolaus Copernicus' De revolutionibus is a manuscript of six books of De revolutionibus orbium coelestium (1543) by Nicolaus Copernicus written between 1520 and 1541. Since 1956, it is kept in the Jagiellonian Library in Kraków (signature 10,000). The autograph was handwritten by Nicolaus Copernicus in Latin and Greek, using humanistic cursive. The manuscript consists of 213 paper leaves sized 28 × 19 centimeters, two endpapers, and four protective cards. The binding of the manuscript dates back to the early 17th century and is made of cardboard glued with waste paper and a parchment document from the 16th century. It is a unique object on a global scale, inscribed in 1999 on the UNESCO Memory of the World list. It is also the most valuable and famous autograph kept in the collections of the Jagiellonian Library of the Jagiellonian University, of which it has been the property since 1956. The text of the autograph, which was first published in 1543 in Nuremberg in the first edition of De revolutionibus orbium coelestium, revolutionized the perception of the universe from a historical point of view and was a starting point for modern astronomy and science. Description The autograph contains the text of the six books authored by Nicolaus Copernicus that make up the work De revolutionibus. Contents of the manuscript of De revolutionibus Source: Leaf 1 contains the incipit: (I)nter multa ac varia litteraturum artiumaque studia, Leaf 1 verso: Capitulum primum. Quod mundus sit sphaericus. Principio advertendum nobisest globusm est mundum. Leaf 212 verso contains the : remanebit praepollens latitudo quaesita. Provenance notes The endpaper of the front cover contains an ex libris with the coat of arms of the Nostitz family and the inscription:Ex Bibliotheca Maioratus Familiae Nostitzianae 1774Below the ex libris, there is a note written in ink over the previous one made in pencil, stating:Das Manuscript enthält: 212 Blätter, ausserdem 3 Vorblätter von denen das 1-e leer ist, das 2-e die Aufzählung der verschiedenen Eigenthümer und das 3-e Blatt den Namhen Otto F. v. Nostitz The inserted leaf b contains a note attributed to Jakob Christmann:Venerabilis et eximii Iuris utriusque Doctoris, Dni Nicolai Copernick Canonici Varmiensis, in Borussia Germaniae mathematici celeberrimi opus de revolutionibus coelestibus propria manu exparatum et haectenus in bibliotheca Georgii Ioachimii Rhetici item Valentini Othonis conservatum, ad usum studii mathematici procurauit M. Iakobus Christmannus Decanus Facultatis artium, anno 1603, die 19 DecembrisThe reverse side of leaf b contains a note by John Amos Comenius:Hunc librum a vidua pie defuncti M. Jac. Cristmanni digno redemptum pretio, in suam transtulit Bibliothecam Johannes Amos Nivanus: Anno 1614. 17 Januarii. Heidelbergae.On leaf c, there is the signature Otto F. v. Nostitz mp. Paper The manuscript consists of four types of paper with characteristic watermarks designated in the literature by letters: C, D, E, and F. These symbols were used in their descriptions by Ludwik Birkenmajer, followed by his son Aleksander Birkenmajer. Types of paper used for the manuscript of De revolutionibus Source: Papers of different types occur irregularly in the manuscript. Type C paper Paper C is the oldest used in the manuscript. 
Its watermark depicts a rather thick snake, resembling the stance and curvature of a seahorse, with its head bent beyond the axis towards the margin, while its tail points in the opposite direction, although its end is also directed towards the margin. On the snake's head, there is a fleur-de-lis resembling a crown, and a tongue protrudes from its mouth directed upwards and ending in a blade-shaped tip. The snake's body is divided by a dorsal line into two parts, which are further divided into segments by oblique horizontal lines descending downwards. In Charles Briquet's catalogue, the most similar watermark is cataloged as number 10,738. As established, this watermark was often found on papers from southern France, Spain, and Italy in the 15th and 16th centuries. However, the most similar ones to those found in the De revolutionibus manuscript were discovered on paper known from Middelburg from 1525. There, an even earlier paper was discovered, with a watermark featuring a similar depiction of the snake's tongue, dated to 1520. Dating the paper is also facilitated by the fact that the text written on paper C, on leaf 88 verso, discusses an astronomical observation made by Copernicus on 11 March 1516. The occurrence of paper C ends on leaf 89. Type D paper Paper D contains a watermark depicting a hand protruding from a cross with 9 pinnacles, with fingers raised upwards and spread out, placed beneath a crown. This paper is of inferior quality. Analysis of watermarks on similar paper has shown its origin from the town of Tulle in France, dating back to the years 1523 and 1526. Presumably, this paper reached Copernicus through the Netherlands, similar to paper C. In Charles Briquet's catalogue, the most similar watermarks are cataloged as number 10,944 and 10,946. On paper D, Copernicus provided information and comments on astronomical observations from 27 September 1522 (leaf 128), 22 February 1523 (leaf 166), and 12 March 1529 (leaf 173). It is assumed that this paper was used by Copernicus from 1523 to 1533. Paper D occurs between leaves 9 and 192. Type E paper Paper E contains a watermark in the shape of the letter P, with a fleuron placed above it. This watermark is almost identical to the one known from Maastricht in 1540. However, this paper reached Copernicus earlier, as he wrote letters in August 1537 and March 1539, and extensive portions of the manuscript were written on this paper before 1540. Paper E occurs between leaves 22 and 213. In Charles Briquet's catalogue, the most similar watermark is cataloged as number 8,698. Type F paper Paper F contains a watermark similar to the one on paper D. It depicts a hand with fingers spread out, with a sleeve ending in a circular fold, above which is a three-leaf clover. This watermark has been identified in full accordance with other watermarks on papers from Osnabrück from 1538 and from Lorraine from 1540. Copernicus used the same paper in a letter to Duke Albert of Prussia dated 15 June 1541. Pages from paper F were therefore used in either 1540 or 1541. In Charles Briquet's catalogue, the most similar watermark is cataloged as number 11,466. This watermark appears only once – on leaf 24, but paper F was used three times – on leaves 24, 25, and 209. Sections The paper block containing Copernicus' autograph is divided into 21 sections, which were marked in the 16th century with consecutive letters of the Latin alphabet from a to x. 
The numbering of the pages in individual sections was probably added around 1854 in the Nostitz Library, as the list of the number and completeness of the manuscript's leaves placed under the ex libris dates from the same year. Characteristics of sections in the manuscript of De revolutionibus Source: The notation of the sections with letters occurred in the final stage of editing the autograph. Aleksander Birkenmajer expressed the view that for many years of working on the autograph, Copernicus completely managed without any numbering of its leaves or notebooks. The letter signatures of the sections, except for the letter a on leaf 1, were applied by Copernicus in 1539. Researchers paid particular attention to the absence of the first leaf from paper D in section a, which was very carefully cut out – almost without a visible trace – and its remaining edge was glued to the preceding protective leaf. Aleksander Birkenmajer leans towards the view that the leaf, conventionally called zero, served for some time as the title page of Copernicus' work. Presumably, the leaf was removed during the binding of the manuscript, in 1603 or 1604 in Heidelberg. The content of the zero leaf remains unknown. It is not known whether it contained only the title written by Copernicus' hand, or perhaps a dedication or notes about the history of the autograph or its successive owners. It cannot be ruled out that the zero leaf contained some glaring damage, stains, or doubtful notes. The structure of the autograph itself and the changes made by Copernicus in individual sections suggest that if Copernicus himself removed the zero leaf, he would have replaced it with another leaf and corrected the content on it. Copernicus made frequent changes in the structure of the sections. He most often exchanged and rewrote sheets, and sometimes added additional leaves within the section. As a result, various types of paper from different periods of its acquisition are encountered in different sections. Writing The autograph is written by the hand of Nicolaus Copernicus in humanistic cursive. Marginalia and interlinear notes made by Georg Joachim Rheticus are found on leaves 21, 24, 71, 72, 188, and presumably 87 verso and 187 verso. Leaves 107 verso and 109 contain marginalia – two words written in the 17th century, attributed to Jakob Christmann. One of the primary pieces of evidence supporting the assertion that this is Copernicus' handwritten autograph is a note attributed to Jakob Christmann: Nicolai Copernik [...] opus [...] propria manu exaratum. To confirm or exclude the authenticity of Nicolaus Copernicus' handwriting, various autographs of Copernicus were examined, including, in particular, his letters, which serve as unquestionable and authoritatively attributed comparative material. Six handwritten letters of Nicolaus Copernicus to Johannes Dantiscus, preserved in Kraków and held at the National Museum in Kraków, were selected for this analysis. These letters were written over three years – 1536 and 1539 – between the ages of 63 and 66, and they bear his signature and date. The signature and handwriting are undoubtedly original. As a comparative material for De revolutionibus, handwritten notes made and signed by Copernicus, providing samples of his handwriting from the years 1503, 1511, 1512, 1513, 1518, 1521, and 1529, were also considered. 
These notes, which raise no doubts about authorship or authenticity, have been preserved in the accounts of the Warmia Chapter and in the locational entries of Locationes mansorum desertorum. Their chronology complements the chronology covered by Copernicus' letters. From the comparison of Copernicus' handwriting samples from 1503 to 1541, written between the ages of 30 and 68, it can be inferred that this is the handwriting of a mature individual, with well-formed shapes and no significant differences dependent on chronology. There are no signs of immature handwriting in these examples, even in later years. During the examination of the De revolutionibus autograph, Birkenmajer identified certain differences depending on the speed at which Copernicus wrote: Two ducts of handwriting appear in the autograph: a hasty but well-shaped and legible cursive, and a calmer, more vertical handwriting, which is a typical humanistic antiqua. The most noticeable are the characteristic shapes of letters, their inclinations, elements of cursive writing, the sweep of the handwriting, the direction of pen strokes, and the spacing of lines and margins. The analysis also revealed that in certain periods and fixed records of the autograph, some characteristics became fixed. Copernicus did not adhere to the custom of a fixed number of lines per page, as professional copyists of manuscripts contemporaneous to him did. The number of lines per page varies between 37 and 43. The text block averages 19 centimeters in width and 28 centimeters in height. The autograph text is primarily written in ink shades ranging from brown to full black. Red ink was used in the tables crossed out from leaves 15 verso to 70 verso. Examiners emphasize that the notation style indicates the writer's preference for order, cleanliness, and harmonious arrangement of text columns and accompanying drawings. Despite the aesthetic form of the manuscript, some strange mistakes were found. Between leaves 125 and 175, Copernicus incorrectly wrote the word iusta instead of iuxta at least six times. This error was described as an interesting case of perpetuating a once-made mistake and unconsciously repeating it. Many ink stains and blots of various sizes appear on clean and carefully written leaves. From their distribution, it is inferred that they were created during later erasures and corrections made in haste. Geometric drawings, of which there are 162 in the entire manuscript, distributed over 129 pages, also deserve attention. The drawings are also made by the hand of Nicolaus Copernicus, as evidenced by the style of letters used to label them. The drawings are made carefully, using a compass and ruler, although minor flaws occasionally occur. Copernicus extensively uses lines in astronomical tables, which appear on 118 pages of the manuscript. The conclusion of the conducted research was that the handwriting of the De revolutionibus manuscript is – except for minor foreign annotations – the handwritten autograph of Nicolaus Copernicus. Binding During the work on the De revolutionibus autograph, the sections and leaves accumulated by Copernicus did not have a binding. Presumably – according to the customs prevailing in that era – Copernicus stored loose sections of his work in a cover such as a bag, envelope, or folder made of leather or parchment. It could also have been a box or chest. This is indicated by the significantly greater dirtiness of the external leaves of the sections, which was the result of their separate storage. 
The cover of the manuscript consists of 4 protective leaves (a, b, c, and d) and 2 endpapers. However, it is more accurate to consider that the manuscript has an endpaper and protective leaf a at the beginning and protective leaf e and endpaper at the end because leaves b and c are not strictly protective leaves but substitutes for a title page. Leaf a was made from paper A, while cards b, c, and e were made from paper B. The watermark of paper A depicts a large letter P, split at the bottom, with a rosette above it in the form of a four-petaled flower on a single stem. Below the letter is a faint drawing of an object – either a trumpet or a pine cone. In Briquet's catalog, the watermark most similar is cataloged as number 8,833. The watermark of paper B depicts a heraldic shield cut by a horizontal band, above which is a rod pointing upwards topped with a three-leaf clover entwined by a snake sticking out its long tongue. In Briquet's catalog, the watermark most similar is cataloged as number 1,451. These papers date from 1580 to 1600 and originate from Württemberg paper mills. During conservation work, after removing the endpaper, it was discovered that the cover made of cardboard consisting of parchment pulp under an external parchment cover contained a parchment document of Emperor Maximilian II from 1566 and a corrected printout of De inquisitione Hispanica, Heidelberg 1603. The fact that correction leaves of a book published in 1603 were found inside the cover proves that the cover certainly was not made before that year, and at the same time, its creation could not have been too far from the 1603/1604 transition. Before the binding from 1603/1604, the manuscript was neither sewn nor trimmed, and the current binding is its first binding. History of the manuscript The preserved form of the De revolutionibus autograph represents a certain stage in the work on this piece. It is the stage closest to completion and closest to the death of Nicolaus Copernicus, which occurred on 24 May 1543. The preserved copy of the De revolutionibus autograph was not used for publishing purposes either in Wittenberg in 1542 or in Nuremberg in 1543 during the printing of the first edition, nor in Basel during the printing of the second edition. This is evidenced by the cleanliness of the manuscript and the absence of traces associated with contemporary printing work such as stains, marks, etc. Not only does the appearance of the manuscript indicate that it was not a copy used in the printing process, but also the lists of printer's errors included in the published copies and referring to a comparison with the manuscript, which contain words not present in the original autograph. Owners of the De revolutionibus manuscript and where it is stored Source: Period from Copernicus' death to 1600 After Copernicus' death, his papers and books were inherited by his close friend Tiedemann Giese, the Bishop of Chełmno, and later, from 1548, the Bishop of Warmia. Giese passed away on 23 October 1550, and according to his will, his library was bequeathed to the . However, the De revolutionibus autograph did not end up in the chapter library. Instead, during Tiedemann Giese's lifetime, it came into the possession of Georg Joachim Rheticus (also known as von Lauchen). This could have happened as early as 1545 or as late as 1550. Rheticus actually had access to the content of the autograph in the form of a copy even earlier, when in 1540 he published the first account of Copernicus' work in Gdańsk in the Narratio prima. 
He also used a copy of the De revolutionibus autograph when he published De lateribus et angulis triangulorum in Wittenberg in 1542, which was intended to be the second book of Copernicus' work for some time. The exact moment when Georg Joachim Rheticus received the De revolutionibus autograph remains unknown. What is certain is that the remaining collection of Copernican materials in Tiedemann Giese's possession was transferred to the Warmia Chapter library. Georg Joachim Rheticus undoubtedly played a major role in disseminating Copernicus' thoughts and works. It was under his influence that Copernicus agreed to publish his autograph and helped in making a copy of it. However, Copernicus did not directly pass on the autograph to Rheticus during his lifetime. In 1551, Rheticus had to urgently leave Leipzig and abandon his further career at the university. Eventually, in 1554, he found himself in Kraków. Around 1569, Valentinus Otho, a student of Johannes Praetorius, joined Rheticus as his collaborator. During this time, the De revolutionibus autograph was in Kraków, together with Rheticus. Shortly before his death, Georg Joachim Rheticus left Kraków for Košice, where he stayed as a guest of Albrecht Łaski, the voivode of Sieradz, and the Hungarian magnate Jan Rüber. During Rheticus' stay in Košice, Valentinus Otho brought the De revolutionibus manuscript from Kraków, left there by Rheticus. This happened on 28 November 1574. A few days later, on 4 December 1574, Georg Joachim Rheticus died, and Valentinus Otho became his heir and the next owner of the manuscript. Otho soon left Košice and sought new employment. He obtained a position as a professor of mathematics at the Calvinist University of Heidelberg. During his stay there, the De revolutionibus manuscript and other papers acquired from Rheticus were stored haphazardly among stacks of other books and papers. This disorderly state of storage was reported by Otho's associate, Bartholomaeus Pitiscus, in the preface to his work Thesaurus Mathematicus. From 1600 to 1945 When Valentinus Otho died, his collections were acquired by the orientalist professor Jakob Christmann, who wrote a note on leaf b attributed to him, stating ad usum studii mathematici procuravit dated 16 December 1603. It is likely that Christmann did not include the manuscript in the university library but instead kept it for personal use when Simon Petiscus, who held the chair of mathematics at the time, took possession of it. Petiscus died in 1608, and at that point, the manuscript most likely returned to Christmann's possession. Christmann unquestionably owned the manuscript at the time of his death on 16 June 1613, and his widow, who took over the manuscript, sold it to John Amos Comenius on 17 January 1614. Comenius acquired the manuscript just over half a year after his enrollment at the Heidelberg University (which occurred on 19 June 1613). The purchase transaction was completed for an unspecified "fair price" paid to Christmann's widow. Comenius noted this information on leaf b verso of the manuscript, signing himself as Johannes Amos Nivanus (from his birthplace – Nivnice in Moravia). It is known that he resided in Poland several times (Leszno 1626–1641, Elbląg 1642–1648, again Leszno 1648–1656), but it is not known whether he had the De revolutionibus manuscript with him during any of his stays. The moment when Comenius lost Copernicus' autograph is also unknown. The next owner of the De revolutionibus autograph was Otto von Nostitz. 
The autograph is mentioned in the inventory document of the Nostitz Library under the signature MS e 21 (leaf 360 verso) with an entry from 5 October 1667. This date is later than the moment when von Nostitz became the owner, as he had already passed away at the time of the entry, but he left his signature on the protective leaf c of the De revolutionibus autograph. At that time, the manuscript was kept at Jawor Castle. Later on, along with the Nostitz Library, the autograph became part of the estate created by them and was transferred to the Nostitz Palace in Prague. The autograph remained the property of the Nostitz family for nearly 300 years and was mentioned several times in the inventories of this library in the 17th and 18th centuries. From 1945 to the present day In 1945, the Prague collections of the Nostitz family, along with the autograph of De revolutionibus, were nationalized. Eleven years later, on 5 July 1956, the government of Czechoslovakia offered this artifact as a gift to the Polish nation. On 25 October 1956, the autograph of De revolutionibus by Nicolaus Copernicus was handed over to the Jagiellonian University in Kraków, and since then it has been kept in the special collections of the Jagiellonian Library. Owen Gingerich, who examined almost all copies of the first and second editions of De revolutionibus from 1543 from Nuremberg and from 1566 from Basel, and who saw the autograph in Kraków around 1976, states in his book that this priceless treasure was lent to Poland by Czechoslovakia, and the Poles simply kept it and deposited it in the Jagiellonian Library at the Alma Mater of Copernicus. Since it was not customary for one communist country to protest too vehemently against the conduct of a brother nation, the valuable manuscript remained in Poland. However, this opinion is not confirmed by the facts and is even contradictory to the findings in this regard made and conveyed by UNESCO. Autograph storage in modern times The autograph of De revolutionibus is stored in a secured, fireproof vault, in a specially prepared room within the Jagiellonian Library, where a constant temperature and humidity are maintained. Protection of objects from special collections is one of the most important statutory obligations of the university and the Jagiellonian Library. Direct access to the autograph of De revolutionibus is permitted only for scientific and editorial purposes. Access to this object is strictly controlled and limited. According to the regulations governing access to special collections, independent academic staff and individuals with a doctoral degree, doctoral students, adjuncts, and assistants with a master's degree can use them after presenting a letter of recommendation from their supervisor or academic advisor. Students preparing master's theses can also access them after presenting a letter of recommendation from their supervisor, and employees of scientific, cultural institutions, or publishing houses can access them after presenting a letter of recommendation or a certificate informing about the research topic and purpose. Individuals outside of the above groups can only access the special collections with the permission of the head of the manuscripts department. Due to its unique value and the need for protection from external factors, the autograph of De revolutionibus is rarely displayed in public exhibitions. 
The last time this occurred was in 2012 during the 6th European Congress of Mathematics, and previously in 2005 during the Lesser Poland Days of Cultural Heritage. During the last exhibition, the autograph of De revolutionibus, as one of the most valuable treasures, was exhibited only temporarily, after which, for safety and conservation reasons, it was replaced with facsimile editions. Facsimile editions 1944 – Munich The facsimile was produced using the photolithography technique, in monochrome. This edition does not capture all the details of the original and has been assessed as not possessing significant aesthetic qualities. Full bibliographic information for the edition: Nicolaus Copernicus, Gesamtausgabe, vol. 1: Opus de revolutionibus caelestibus manu propria. Faksimile-Wiedergabe, München, Berlin 1944. Introduction by Fritz Kubach, afterword by Karl Zeller. 1972 – Kraków Volume I of the Complete Works of Nicolaus Copernicus, also published in foreign languages (Latin 1973, French 1973, Russian 1973), containing images of all pages of the autograph, printed on third-class offset paper. The volume is accompanied by an introduction by Jerzy Zathey. The publication was initiated on the occasion of the five hundredth anniversary of Nicolaus Copernicus' birth in 1973. 1974 – Hildesheim Edition of De revolutionibus: mit einem Vorwort zur Gesamtausgabe und einem Vorbericht über das Manuskript – the first volume containing a facsimile of the autograph. 1976 – Kraków The reproduction was made using offset printing, using a halftone contact screen, ensuring full tonal compliance of the facsimile background with the original. The contact screen used allowed for the preservation of writing shades from brown to black and faithful reproduction of even small dots, spots, and distortions. Printing was done on 120 gsm offset paper in shades adjusted to the four types of paper used in the manuscript. The dimensions of the pages are faithful to the original and cropped according to the prototype. The fidelity of the reproduction to the original is as follows: Background texture – 95% Background shades – 90% Low black intensity writing texture – 85% Black writing texture and shades – 95% Red writing shade and texture – 95% This facsimile version looks so authentic that some people mistake it for the original when viewing it. Owen Gingerich mentions in his book about a certain bookseller from Chicago who donated the facsimile to the Adler Planetarium and wanted to deduct the value of the donation from taxes. Believing he was donating the original autograph of Copernicus, he asked Gingerich to appraise it. 1996 – electronic Neurosoft Published as a "digital reprint" of the autograph. The publication consists of a CD-ROM containing images of all manuscript pages. It also includes an article by Marian Zwiercan entitled The History of Nicolaus Copernicus' De revolutionibus Autograph. De revolutionibus autograph online Images of all pages of Nicolaus Copernicus' De revolutionibus autograph are available in the online collections of the Jagiellonian Library. Autograph on the Memory of the World list The manuscript of De revolutionibus handwritten by Nicolaus Copernicus has been inscribed on the UNESCO Memory of the World list since 1999, as one of twelve Polish objects on the list and three hundred globally. The entry emphasizes that De revolutionibus is one of the greatest achievements of an individual that shaped new eras and influenced the development of civilization and culture. 
References Bibliography Studies External links Nicolaus Copernicus History of astronomy 16th-century manuscripts Memory of the World Register Jagiellonian University
Autograph of Nicolaus Copernicus' De revolutionibus
Astronomy
6,109
418,179
https://en.wikipedia.org/wiki/Whyte%20notation
The Whyte notation is a classification method for steam locomotives, and some internal combustion locomotives and electric locomotives, by wheel arrangement. It was devised by Frederick Methvan Whyte, and came into use in the early twentieth century following a December 1900 editorial in American Engineer and Railroad Journal. The notation was adopted and remains in use in North America and the United Kingdom to describe the wheel arrangements of steam locomotives, but for modern locomotives, multiple units and trams it has been supplanted by the UIC system in Europe and by the AAR system (essentially a simplification of the UIC system) in North America. However, geared steam locomotives do not use the notation. They are classified by their model and their number of trucks. Structure of the system Basic form The notation in its basic form counts the number of leading wheels, then the number of driving wheels, and finally the number of trailing wheels, numbers being separated by dashes. For example, a locomotive with two leading axles (four wheels) in front, then three driving axles (six wheels) and then one trailing axle (two wheels) is classified as a 4-6-2 locomotive, and is commonly known as a Pacific. Denotation of other locomotives Articulated locomotives For articulated locomotives that have two wheelsets, such as Garratts, which are effectively two locomotives joined by a common boiler, each wheelset is denoted separately, with a plus sign (+) between them. Thus a 4-6-2-type Garratt is a 4-6-2+2-6-4. For Garratt locomotives, the plus sign is used even when there are no intermediate unpowered wheels, e.g. the LMS Garratt 2-6-0+0-6-2. This is because the two engine units are more than just power bogies. They are complete engines, carrying fuel and water tanks. The plus sign represents the bridge (carrying the boiler) that links the two engines. Simpler articulated types, such as Mallets, have a jointed frame under a common boiler where there are no unpowered wheels between the sets of powered wheels. Typically, the forward frame is free to swing, whereas the rear frame is rigid with the boiler. Thus, a Union Pacific Big Boy is a 4-8-8-4: four leading wheels, one group of eight driving wheels, another group of eight driving wheels, and then four trailing wheels. Sometimes articulated locomotives of this type are denoted with a "+" between each set of driving wheels (so in the previous case, the Big Boy would be a 4-8+8-4). This convention may have been developed to distinguish articulated from duplex arrangements: duplex arrangements, being rigid, would get a "-", and articulated locomotives, being flexible, would get a "+". However, because the wheel arrangements used for duplex locomotives have so far been exclusive to them, the distinction is usually considered unnecessary and another "-" is normally used instead. Triplex locomotives, and any theoretical larger ones, simply expand on basic articulated locomotives, for example, 2-8-8-8-2. In the case of the Belgian quadruplex locomotive, the arrangement is listed as 0-6-2+2-4-2-4-2+2-6-0. Duplex locomotives For duplex locomotives, which have two sets of coupled driving wheels mounted rigidly on the same frame, the same method is used as for Mallet articulated locomotives – the number of leading wheels is placed first, followed by the leading set of driving wheels, followed by the trailing set of driving wheels, followed by the trailing wheels, each number being separated by a hyphen.
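To make the counting rule and the "+" convention concrete, here is a minimal Python sketch (illustrative only; the function name and its behaviour are not part of any standard library or of Whyte's original scheme) that splits a Whyte code into engine units and wheel groups:

```python
def parse_whyte(code: str):
    """Split a Whyte code into engine units and wheel-count groups.

    '+' separates complete engine units (Garratt-style articulation);
    '-' separates the leading, driving, and trailing wheel groups
    inside a single unit.
    """
    return [[int(group) for group in unit.split("-")] for unit in code.split("+")]

# Examples taken from the text above:
print(parse_whyte("4-6-2"))        # [[4, 6, 2]]             a Pacific
print(parse_whyte("4-8-8-4"))      # [[4, 8, 8, 4]]          Union Pacific Big Boy
print(parse_whyte("4-6-2+2-6-4"))  # [[4, 6, 2], [2, 6, 4]]  a double-Pacific Garratt
```

Summing the groups of a unit gives its total wheel count, so a 4-6-2 has twelve wheels and each half of the Garratt above likewise has twelve.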
Tank locomotives A number of standard suffixes can be used to extend the Whyte notation for tank locomotives: Other steam locomotives Various other types of steam locomotive can be also denoted through suffixes: Internal combustion locomotives The wheel arrangement of small diesel and petrol locomotives can be classified using the same notation as steam locomotives, e.g. 0-4-0, 0-6-0, 0-8-0. Where the axles are coupled by chains or shafts (rather than side rods) or are individually driven, the terms 4w (4-wheeled), 6w (6-wheeled) or 8w (8-wheeled) are generally used. For larger locomotives, the UIC classification is more commonly used. Various suffixes are also used to denote the different types of internal combustion locomotives: Electric locomotives The wheel arrangement of small electric locomotives can be denoted using this notation, like with internal combustion locomotives. Suffixes used for electric locomotives include: Wheel arrangement names In American (and to a lesser extent British) practice, most wheel arrangements in common use were given names, sometimes from the name of the first such locomotive built. For example, the 2-2-0 type arrangement is named Planet, after the 1830 locomotive on which it was first used. (This naming convention is similar to the naming of warship classes.) Note that several wheel arrangements had multiple names, and some names were only used in some countries. Wheel arrangements under the Whyte system are listed below. In the diagrams, the front of the locomotive is to the left. See also AAR wheel arrangement Swiss locomotive and railcar classification UIC classification Wheel arrangement References Further reading External links In the various names above of a 4-8-4, omitted was the letters "F E F" which simply means: four eight four. 1900s introductions Locomotive classification systems Notation
Whyte notation
Mathematics
1,096
7,491,719
https://en.wikipedia.org/wiki/European%20Cultivated%20Potato%20Database
The European Cultivated Potato Database (ECPD) is an online collaborative database of potato variety descriptions. The information that it contains can be searched by variety name, or by selecting one or more required characteristics. 159,848 observations 29 contributors 91 characters 4,119 cultivated varieties 1,354 breeding lines The data is indexed by variety, character, country of origin, and contributor. There is a facility to select a variety and to find similar varieties based upon botanical characteristics. ECPD is the result of collaboration between participants in eight European Union countries and five East European countries. It is intended to be a source of information on varieties maintained by them. More than twenty-three scientific organisations are contributing to this information source. The database is maintained and updated by the Scottish Agricultural Science Agency within the framework of the European Cooperative Programme for Crop Genetic Resources Networks (ECP/GR), which is organised by Bioversity International. The European Cultivated Potato Database was created to advance the conservation and use of genetic diversity for the well-being of present and future generations. External links The European Cultivated Potato Database Biodiversity databases Databases in Scotland Government databases in the United Kingdom Information technology organizations based in Europe Online databases Potatoes
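The kind of search facility described above (selecting one or more required characteristics, or looking for varieties similar to a chosen one) can be sketched in a few lines of Python. Everything below is hypothetical: the variety names, the character names, and the similarity measure are invented for illustration and are not taken from the ECPD schema.

```python
# Hypothetical records; real ECPD entries use their own character lists.
varieties = [
    {"name": "Variety A", "tuber_shape": "oval",  "flesh_colour": "cream",  "maturity": "early"},
    {"name": "Variety B", "tuber_shape": "round", "flesh_colour": "yellow", "maturity": "late"},
    {"name": "Variety C", "tuber_shape": "oval",  "flesh_colour": "yellow", "maturity": "early"},
]

def search(records, **required):
    """Return records matching every requested character/value pair."""
    return [r for r in records if all(r.get(k) == v for k, v in required.items())]

def most_similar(records, target_name):
    """Rank other varieties by the number of shared character values."""
    target = next(r for r in records if r["name"] == target_name)
    others = [r for r in records if r["name"] != target_name]
    score = lambda r: sum(r[k] == target[k] for k in target if k != "name")
    return sorted(others, key=score, reverse=True)

print([r["name"] for r in search(varieties, tuber_shape="oval", maturity="early")])
print([r["name"] for r in most_similar(varieties, "Variety A")])
```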
European Cultivated Potato Database
Biology,Environmental_science
239
51,143
https://en.wikipedia.org/wiki/Giant-impact%20hypothesis
The giant-impact hypothesis, sometimes called the Theia Impact, is an astrogeology hypothesis for the formation of the Moon first proposed in 1946 by Canadian geologist Reginald Daly. The hypothesis suggests that the Early Earth collided with a Mars-sized protoplanet of the same orbit approximately 4.5 billion years ago in the early Hadean eon (about 20 to 100 million years after the Solar System coalesced), and the ejecta of the impact event later accreted to form the Moon. The impactor planet is sometimes called Theia, named after the mythical Greek Titan who was the mother of Selene, the goddess of the Moon. Analysis of lunar rocks published in a 2016 report suggests that the impact might have been a direct hit, causing a fragmentation and thorough mixing of both parent bodies. The giant-impact hypothesis is currently the favored hypothesis for lunar formation among astronomers. Evidence that supports this hypothesis includes: The Moon's orbit has a similar orientation to Earth's rotation, both of which are at a similar angle to the ecliptic plane of the Solar System. The stable isotope ratios of lunar and terrestrial rock are identical, implying a common origin. The Earth–Moon system contains an anomalously high angular momentum, meaning the momentum contained in Earth's rotation, the Moon's rotation and the Moon revolving around Earth is significantly higher than the other terrestrial planets. A giant impact might have supplied this excess momentum. Moon samples indicate that the Moon was once molten to a substantial, but unknown, depth. This might have required much more energy than predicted to be available from the accretion of a celestial body of the Moon's size and mass. An extremely energetic process, such as a giant impact, could provide this energy. The Moon has a relatively small iron core, which gives it a much lower density than Earth. Computer models of a giant impact of a Mars-sized body with Earth indicate the impactor's core would likely penetrate deep into Earth and fuse with its own core. This would leave the Moon, which was formed from the ejecta of lighter crust and mantle fragments that went beyond the Roche limit and were not pulled back by gravity to re-fuse with Earth, with less remaining metallic iron than other planetary bodies. The Moon is depleted in volatile elements compared to Earth. Vaporizing at comparably lower temperatures, they could be lost in a high-energy event, with the Moon's smaller gravity unable to recapture them while Earth did. There is evidence in other star systems of similar collisions, resulting in debris discs. Giant collisions are consistent with the leading theory of the formation of the Solar System. However, several questions remain concerning the best current models of the giant-impact hypothesis. The energy of such a giant impact is predicted to have heated Earth to produce a global magma ocean, and evidence of the resultant planetary differentiation of the heavier material sinking into Earth's mantle has been documented. However, there is no self-consistent model that starts with the giant-impact event and follows the evolution of the debris into a single moon. Other remaining questions include when the Moon lost its share of volatile elements and why Venuswhich experienced giant impacts during its formationdoes not host a similar moon. History In 1898, George Darwin made the suggestion that Earth and the Moon were once a single body. 
Darwin's hypothesis was that a molten Moon had been spun from Earth because of centrifugal forces, and this became the dominant academic explanation. Using Newtonian mechanics, he calculated that the Moon had orbited much more closely in the past and was drifting away from Earth. This drifting was later confirmed by American and Soviet experiments, using laser ranging targets placed on the Moon. Nonetheless, Darwin's calculations could not resolve the mechanics required to trace the Moon back to the surface of Earth. In 1946, Reginald Aldworth Daly of Harvard University challenged Darwin's explanation, adjusting it to postulate that the creation of the Moon was caused by an impact rather than centrifugal forces. Little attention was paid to Professor Daly's challenge until a conference on satellites in 1974, during which the idea was reintroduced and later published and discussed in Icarus in 1975 by William K. Hartmann and Donald R. Davis. Their models suggested that, at the end of the planet formation period, several satellite-sized bodies had formed that could collide with the planets or be captured. They proposed that one of these objects might have collided with Earth, ejecting refractory, volatile-poor dust that could coalesce to form the Moon. This collision could potentially explain the unique geological and geochemical properties of the Moon. A similar approach was taken by Canadian astronomer Alastair G. W. Cameron and American astronomer William R. Ward, who suggested that the Moon was formed by the tangential impact upon Earth of a body the size of Mars. It is hypothesized that most of the outer silicates of the colliding body would be vaporized, whereas a metallic core would not. Hence, most of the collisional material sent into orbit would consist of silicates, leaving the coalescing Moon deficient in iron. The more volatile materials that were emitted during the collision probably would escape the Solar System, whereas silicates would tend to coalesce. Eighteen months prior to an October 1984 conference on lunar origins, Bill Hartmann, Roger Phillips, and Jeff Taylor challenged fellow lunar scientists: "You have eighteen months. Go back to your Apollo data, go back to your computer, and do whatever you have to, but make up your mind. Don't come to our conference unless you have something to say about the Moon's birth." At the 1984 conference at Kona, Hawaii, the giant-impact hypothesis emerged as the most favored hypothesis. Theia The name of the hypothesised protoplanet is derived from the mythical Greek titan Theia , who gave birth to the Moon goddess Selene. This designation was proposed initially by the English geochemist Alex N. Halliday in 2000 and has become accepted in the scientific community. According to modern theories of planet formation, Theia was part of a population of Mars-sized bodies that existed in the Solar System 4.5 billion years ago. One of the attractive features of the giant-impact hypothesis is that the formation of the Moon and Earth align; during the course of its formation, Earth is thought to have experienced dozens of collisions with planet-sized bodies. The Moon-forming collision would have been only one such "giant impact" but certainly the last significant impactor event. The Late Heavy Bombardment by much smaller asteroids may have occurred laterapproximately 3.9 billion years ago. 
Basic model Astronomers think the collision between Earth and Theia happened at about 4.4 to 4.45 billion years ago (bya); about 0.1 billion years after the Solar System began to form. In astronomical terms, the impact would have been of moderate velocity. Theia is thought to have struck Earth at an oblique angle when Earth was nearly fully formed. Computer simulations of this "late-impact" scenario suggest an initial impactor velocity below at "infinity" (far enough that gravitational attraction is not a factor), increasing as it approached to over at impact, and an impact angle of about 45°. However, oxygen isotope abundance in lunar rock suggests "vigorous mixing" of Theia and Earth, indicating a steep impact angle. Theia's iron core would have sunk into the young Earth's core, and most of Theia's mantle accreted onto Earth's mantle. However, a significant portion of the mantle material from both Theia and Earth would have been ejected into orbit around Earth (if ejected with velocities between orbital velocity and escape velocity) or into individual orbits around the Sun (if ejected at higher velocities). Modelling has hypothesised that material in orbit around Earth may have accreted to form the Moon in three consecutive phases; accreting first from the bodies initially present outside Earth's Roche limit, which acted to confine the inner disk material within the Roche limit. The inner disk slowly and viscously spread back out to Earth's Roche limit, pushing along outer bodies via resonant interactions. After several tens of years, the disk spread beyond the Roche limit, and started producing new objects that continued the growth of the Moon, until the inner disk was depleted in mass after several hundreds of years. Material in stable Kepler orbits was thus likely to hit the Earth–Moon system sometime later (because the Earth–Moon system's Kepler orbit around the Sun also remains stable). Estimates based on computer simulations of such an event suggest that some twenty percent of the original mass of Theia would have ended up as an orbiting ring of debris around Earth, and about half of this matter coalesced into the Moon. Earth would have gained significant amounts of angular momentum and mass from such a collision. Regardless of the speed and tilt of Earth's rotation before the impact, it would have experienced a day some five hours long after the impact, and Earth's equator and the Moon's orbit would have become coplanar. Not all of the ring material need have been swept up right away: the thickened crust of the Moon's far side suggests the possibility that a second moon about in diameter formed in a Lagrange point of the Moon. The smaller moon may have remained in orbit for tens of millions of years. As the two moons migrated outward from Earth, solar tidal effects would have made the Lagrange orbit unstable, resulting in a slow-velocity collision that "pancaked" the smaller moon onto what is now the far side of the Moon, adding material to its crust. Lunar magma cannot pierce through the thick crust of the far side, causing fewer lunar maria, while the near side has a thin crust displaying the large maria visible from Earth. Above a high resolution threshold for simulations, a study published in 2022 finds that giant impacts can immediately place a satellite with similar mass and iron content to the Moon into orbit far outside Earth's Roche limit. 
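As a rough consistency check on the fractions quoted earlier in this section (about twenty percent of Theia's original mass left as an orbiting debris ring, and about half of that coalescing into the Moon), the following Python sketch runs the arithmetic with a Mars-like mass assumed for Theia. The round-number inputs are assumptions for illustration, not values taken from the simulations cited above.

```python
# Order-of-magnitude check of the debris-ring fractions quoted in the text.
theia_mass = 6.4e23        # kg, roughly the mass of Mars (Theia assumed Mars-sized)
moon_mass  = 7.35e22       # kg, the present-day Moon, for comparison

ring_fraction      = 0.20  # ~20% of Theia's mass ends up as an orbiting ring
coalesced_fraction = 0.50  # ~half of the ring material coalesces into the Moon

predicted_moon = theia_mass * ring_fraction * coalesced_fraction
print(f"predicted Moon mass: {predicted_moon:.2e} kg")   # ~6.4e22 kg
print(f"actual Moon mass:    {moon_mass:.2e} kg")        # ~7.4e22 kg
```

The predicted mass comes out on the same order as the real Moon, which is one reason these fractions are considered plausible.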
Even satellites that initially pass within the Roche limit can reliably and predictably survive, by being partially stripped and then torqued onto wider, stable orbits. Furthermore, the outer layers of these directly formed satellites are molten over cooler interiors and are composed of around 60% proto-Earth material. This could alleviate the tension between the Moon's Earth-like isotopic composition and the different signature expected for the impactor. Immediate formation opens up new options for the Moon's early orbit and evolution, including the possibility of a highly tilted orbit to explain the lunar inclination, and offers a simpler, single-stage scenario for the origin of the Moon. Composition In 2001, a team at the Carnegie Institution of Washington reported that the rocks from the Apollo program carried an isotopic signature that was identical with rocks from Earth, and were different from almost all other bodies in the Solar System. In 2014, a team in Germany reported that the Apollo samples had a slightly different isotopic signature from Earth rocks. The difference was slight, but statistically significant. One possible explanation is that Theia formed near Earth. This empirical data showing close similarity of composition can be explained only by the standard giant-impact hypothesis, as it is extremely unlikely that two bodies prior to collision had such similar composition. Equilibration hypothesis In 2007, researchers from the California Institute of Technology showed that the likelihood of Theia having an identical isotopic signature as Earth was very small (less than 1 percent). They proposed that in the aftermath of the giant impact, while Earth and the proto-lunar disc were molten and vaporised, the two reservoirs were connected by a common silicate vapor atmosphere and that the Earth–Moon system became homogenised by convective stirring while the system existed in the form of a continuous fluid. Such an "equilibration" between the post-impact Earth and the proto-lunar disc is the only proposed scenario that explains the isotopic similarities of the Apollo rocks with rocks from Earth's interior. For this scenario to be viable, however, the proto-lunar disc would have to endure for about 100 years. Work is ongoing to determine whether or not this is possible. Direct collision hypothesis According to research (2012) to explain similar compositions of the Earth and the Moon based on simulations at the University of Bern by physicist Andreas Reufer and his colleagues, Theia collided directly with Earth instead of barely swiping it. The collision speed may have been higher than originally assumed, and this higher velocity may have totally destroyed Theia. According to this modification, the composition of Theia is not so restricted, making a composition of up to 50% water ice possible. Synestia hypothesis One effort, in 2018, to homogenise the products of the collision was to energise the primary body by way of a greater pre-collision rotational speed. This way, more material from the primary body would be spun off to form the Moon. Further computer modelling determined that the observed result could be obtained by having the pre-Earth body spinning very rapidly, so much so that it formed a new celestial object which was given the name 'synestia'. This is an unstable state that could have been generated by yet another collision to get the rotation spinning fast enough. 
Further modelling of this transient structure has shown that the primary body spinning as a doughnut-shaped object (the synestia) existed for about a century (a very short time) before it cooled down and gave birth to Earth and the Moon. Terrestrial magma ocean hypothesis Another model, in 2019, to explain the similarity of Earth and the Moon's compositions posits that shortly after Earth formed, it was covered by a sea of hot magma, while the impacting object was likely made of solid material. Modelling suggests that this would lead to the impact heating the magma much more than solids from the impacting object, leading to more material being ejected from the proto-Earth, so that about 80% of the Moon-forming debris originated from the proto-Earth. Many prior models had suggested 80% of the Moon coming from the impactor. Evidence Indirect evidence for the giant impact scenario comes from rocks collected during the Apollo Moon landings, which show oxygen isotope ratios nearly identical to those of Earth. The highly anorthositic composition of the lunar crust, as well as the existence of KREEP-rich samples, suggest that a large portion of the Moon once was molten; and a giant impact scenario could easily have supplied the energy needed to form such a magma ocean. Several lines of evidence show that if the Moon has an iron-rich core, it must be a small one. In particular, the mean density, moment of inertia, rotational signature, and magnetic induction response of the Moon all suggest that the radius of its core is less than about 25% the radius of the Moon, in contrast to about 50% for most of the other terrestrial bodies. Appropriate impact conditions satisfying the angular momentum constraints of the Earth–Moon system yield a Moon formed mostly from the mantles of Earth and the impactor, while the core of the impactor accretes to Earth. Earth has the highest density of all the planets in the Solar System; the absorption of the core of the impactor body explains this observation, given the proposed properties of the early Earth and Theia. Comparison of the zinc isotopic composition of lunar samples with that of Earth and Mars rocks provides further evidence for the impact hypothesis. Zinc is strongly fractionated when volatilised in planetary rocks, but not during normal igneous processes, so zinc abundance and isotopic composition can distinguish the two geological processes. Moon rocks contain more heavy isotopes of zinc, and overall less zinc, than corresponding igneous Earth or Mars rocks, which is consistent with zinc being depleted from the Moon through evaporation, as expected for the giant impact origin. Collisions between ejecta escaping Earth's gravity and asteroids would have left impact heating signatures in stony meteorites; analysis based on assuming the existence of this effect has been used to date the impact event to 4.47 billion years ago, in agreement with the date obtained by other means. Warm silica-rich dust and abundant SiO gas, products of high velocity impactsover between rocky bodies, have been detected by the Spitzer Space Telescope around the nearby (29 pc distant) young (~12 My old) star HD 172555 in the Beta Pictoris moving group. A belt of warm dust in a zone between 0.25AU and 2AU from the young star HD 23514 in the Pleiades cluster appears similar to the predicted results of Theia's collision with the embryonic Earth, and has been interpreted as the result of planet-sized objects colliding with each other. 
A similar belt of warm dust was detected around the star BD+20°307 (HIP 8920, SAO 75016). On 1 November 2023, scientists reported that, according to computer simulations, remnants of Theia could be still visible inside the Earth as two giant anomalies of the Earth's mantle. Difficulties This lunar origin hypothesis has some difficulties that have yet to be resolved. For example, the giant-impact hypothesis implies that a surface magma ocean would have formed following the impact. Yet there is no evidence that Earth ever had such a magma ocean and it is likely there exists material that has never been processed in a magma ocean. Composition A number of compositional inconsistencies need to be addressed. The ratios of the Moon's volatile elements are not explained by the giant-impact hypothesis. If the giant-impact hypothesis is correct, these ratios must be due to some other cause. The presence of volatiles such as water trapped in lunar basalts and carbon emissions from the lunar surface is more difficult to explain if the Moon was caused by a high-temperature impact. The iron oxide (FeO) content (13%) of the Moon, intermediate between that of Mars (18%) and the terrestrial mantle (8%), rules out most of the source of the proto-lunar material from Earth's mantle. If the bulk of the proto-lunar material had come from an impactor, the Moon should be enriched in siderophilic elements, when, in fact, it is deficient in them. The Moon's oxygen isotopic ratios are essentially identical to those of Earth. Oxygen isotopic ratios, which may be measured very precisely, yield a unique and distinct signature for each Solar System body. If a separate proto-planet Theia had existed, it probably would have had a different oxygen isotopic signature than Earth, as would the ejected mixed material. The Moon's titanium isotope ratio (50Ti/47Ti) appears so close to Earth's (within 4 ppm), that little if any of the colliding body's mass could likely have been part of the Moon. Lack of a Venusian moon If the Moon was formed by such an impact, it is possible that other inner planets also may have been subjected to comparable impacts. A moon that formed around Venus by this process would have been unlikely to escape. If such a moon-forming event had occurred there, a possible explanation of why the planet does not have such a moon might be that a second collision occurred that countered the angular momentum from the first impact. Another possibility is that the strong tidal forces from the Sun would tend to destabilise the orbits of moons around close-in planets. For this reason, if Venus's slow rotation rate began early in its history, any satellites larger than a few kilometers in diameter would likely have spiraled inwards and collided with Venus. Simulations of the chaotic period of terrestrial planet formation suggest that impacts like those hypothesised to have formed the Moon were common. For typical terrestrial planets with a mass of 0.5 to 1 Earth masses, such an impact typically results in a single moon containing 4% of the host planet's mass. The inclination of the resulting moon's orbit is random, but this tilt affects the subsequent dynamic evolution of the system. For example, some orbits may cause the moon to spiral back into the planet. Likewise, the proximity of the planet to the star will also affect the orbital evolution. The net effect is that it is more likely for impact-generated moons to survive when they orbit more distant terrestrial planets and are aligned with the planetary orbit. 
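The iron oxide figure in the list of compositional difficulties above can be turned into a simple two-component mixing estimate. The sketch below assumes linear mixing between Earth's mantle and a Mars-like impactor (both assumptions made here for illustration, not claims from the cited literature) and solves for the fraction of proto-lunar material that would have to come from Earth's mantle to reproduce the Moon's FeO content.

```python
# f * FeO(Earth mantle) + (1 - f) * FeO(Mars-like impactor) = FeO(Moon)
feo_earth_mantle = 8.0    # wt%, terrestrial mantle (figure quoted in the text)
feo_impactor     = 18.0   # wt%, Mars, used here as a proxy for the impactor
feo_moon         = 13.0   # wt%, bulk Moon (figure quoted in the text)

f_earth = (feo_moon - feo_impactor) / (feo_earth_mantle - feo_impactor)
print(f"fraction of lunar material from Earth's mantle: {f_earth:.2f}")  # 0.50
```

With these end-members, only about half of the Moon could be terrestrial mantle material, which is why the FeO content is listed as a constraint on where the proto-lunar material came from.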
Possible origin of Theia In 2004, Princeton University mathematician Edward Belbruno and astrophysicist J. Richard Gott III proposed that Theia coalesced at the or Lagrangian point relative to Earth (in about the same orbit and about 60° ahead or behind), similar to a trojan asteroid. Two-dimensional computer models suggest that the stability of Theia's proposed trojan orbit would have been affected when its growing mass exceeded a threshold of approximately 10% of Earth's mass (the mass of Mars). In this scenario, gravitational perturbations by planetesimals caused Theia to depart from its stable Lagrangian location, and subsequent interactions with proto-Earth led to a collision between the two bodies. In 2008, evidence was presented that suggests that the collision might have occurred later than the accepted value of 4.53 Gya, at approximately 4.48 Gya. A 2014 comparison of computer simulations with elemental abundance measurements in Earth's mantle indicated that the collision occurred approximately 95 My after the formation of the Solar System. It has been suggested that other significant objects might have been created by the impact, which could have remained in orbit between Earth and the Moon, stuck in Lagrangian points. Such objects might have stayed within the Earth–Moon system for as long as 100 million years, until the gravitational tugs of other planets destabilised the system enough to free the objects. A study published in 2011 suggested that a subsequent collision between the Moon and one of these smaller bodies caused the notable differences in physical characteristics between the two hemispheres of the Moon. This collision, simulations have supported, would have been at a low enough velocity so as not to form a crater; instead, the material from the smaller body would have spread out across the Moon (in what would become its far side), adding a thick layer of highlands crust. The resulting mass irregularities would subsequently produce a gravity gradient that resulted in tidal locking of the Moon so that today, only the near side remains visible from Earth. However, mapping by the GRAIL mission has ruled out this scenario. In 2019, a team at the University of Münster reported that the molybdenum isotopic composition in Earth's primitive mantle originates from the outer Solar System, hinting at the source of water on Earth. One possible explanation is that Theia originated in the outer Solar System. Alternative hypotheses Other mechanisms that have been suggested at various times for the Moon's origin are that the Moon was spun off from Earth's molten surface by centrifugal force; that it was formed elsewhere and was subsequently captured by Earth's gravitational field; or that Earth and the Moon formed at the same time and place from the same accretion disk. None of these hypotheses can account for the high angular momentum of the Earth–Moon system. Another hypothesis attributes the formation of the Moon to the impact of a large asteroid with Earth much later than previously thought, creating the satellite primarily from debris from Earth. In this hypothesis, the formation of the Moon occurs 60–140 million years after the formation of the Solar System (as compared to hypothesized Theia impact at 4.527 ± 0.010 billion years). The asteroid impact in this scenario would have created a magma ocean on Earth and the proto-Moon with both bodies sharing a common plasma metal vapor atmosphere. 
The shared metal vapor bridge would have allowed material from Earth and the proto-Moon to exchange and equilibrate into a more common composition. Yet another hypothesis proposes that the Moon and Earth formed together, not from the collision of once-distant bodies. This model, published in 2012 by Robin M. Canup, suggests that the Moon and Earth formed from a massive collision of two planetary bodies, each larger than Mars, which then re-collided to form what is now called Earth. After the re-collision, Earth was surrounded by a disk of material which accreted to form the Moon. See also References Notes Further reading Academic articles Non-academic books External links Planetary Science Institute: Giant Impact Hypothesis Origin of the Moon by Prof. AGW Cameron Klemperer rosette and Lagrangian point simulations using JavaScript SwRI giant impact hypothesis simulation (.wmv and .mov) Origin of the Moon – computer model of accretion Moon Archive – Including articles about the giant impact hypothesis Planet Smash-Up Sends Vaporized Rock, Hot Lava Flying (2009-08-10 JPL News) How common are Earth–Moon planetary systems? : 23 May 2011 The Surprising State of the Earth after the Moon-Forming Giant Impact – Sarah Stewart (SETI Talks), Jan 28, 2015 Lunar science Hypothetical impact events Earth sciences Solar System dynamic theories
Giant-impact hypothesis
Astronomy,Biology
5,167
1,620,176
https://en.wikipedia.org/wiki/La%20Grande%20River
La Grande River (, ; ; both meaning "great river") is a river in northwestern Quebec, Canada, rising in the highlands of the north-central part of the province and flowing roughly west to its drainage at James Bay. It is the second-longest river in the province, surpassed only by the Saint Lawrence River. Originally, the La Grande River drained an area of , and had a mean discharge of . Since the 1980s, when hydroelectric development diverted the Eastmain and Caniapiscau rivers into the La Grande, its total catchment area has increased to about , with its mean discharge being more than . In November 2009, the Rupert River was also (partially) diverted, adding another to the basin. At one time, the La Grande was known as the "Fort George River". The Hudson's Bay Company operated a trading post on the river, at Big River House, between 1803 and 1824. In 1837, a larger trading post was established at Fort George, on an island at the mouth of the river. In the early 20th century, this trading post became a village as the Crees of the James Bay region abandoned their nomadic way of life and settled nearby. The modern Cree village of Chisasibi, which replaced Fort George in 1980, is situated on the southern shore of the La Grande River, several kilometers to the East. Tributaries Significant tributaries of La Grande River include: Kanaaupscow River Sakami River Eastmain River (diverted) Opinaca River Rupert River (diverted) Rivière de Pontois Rivière de la Corvette Laforge River Caniapiscau River (diverted) Hydro-electric development The river has been extensively developed as a source of hydroelectric power by Hydro-Québec, starting in 1974. An area of was flooded and almost all of the flow of the Eastmain River and approximately 70% of the flows of the Rupert River were diverted into the La Grande watershed. The following generating stations are on the La Grande River and its tributaries in upstream order: La-Grande-1 (LG-1) Robert-Bourassa La Grande-2A (LG-2A) La Grande-3 (LG-3) La Grande-4 (LG-4) Laforge-1 (LF-1) Laforge-2 (LF-2) Brisay Eastmain-1 As a result of the development projects, the Cree people of the region lost some parts of their traditional hunting and trapping territories (about 10% of the hunting and trapping territories used by the Cree of Chisasibi). Organic mercury levels increased in the fish, which forms an important part of their diet, as the organic material trapped by the rising waters in the new reservoirs began to filter into the food chain. Careful follow-up by Cree health authorities since the 1980s has been largely successful. The authorities continue to promote the regular consumption of fish, with the notable exception of the predatory species living in the reservoirs, which still show high levels of mercury. Climate See also James Bay Project List of longest rivers of Canada References External links Hydro-Québec's La Grande Complex The Grand River at LG-1 (YouTube Video) Rivers of Nord-du-Québec James Bay Project Tributaries of James Bay
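The relationship the La Grande complex exploits between diverted flow and generating capacity is the standard hydropower formula P = ρ g Q H η. The sketch below is generic: the discharge, head, and efficiency values are placeholders chosen for illustration and are not the parameters of any of the stations listed above.

```python
# Generic hydropower estimate; all numeric inputs are illustrative placeholders.
RHO = 1000.0   # kg/m^3, density of water
G   = 9.81     # m/s^2, gravitational acceleration

def hydro_power_mw(discharge_m3s: float, head_m: float, efficiency: float = 0.9) -> float:
    """Electrical output in megawatts for a given turbined flow and head."""
    return RHO * G * discharge_m3s * head_m * efficiency / 1e6

# Hypothetical example: 1,000 m^3/s falling through a 100 m head.
print(f"{hydro_power_mw(1000.0, 100.0):.0f} MW")   # ~883 MW
```

Diverting additional rivers into the basin raises the available discharge Q in this relation, which is why the Eastmain, Caniapiscau, and Rupert diversions increased the complex's potential output.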
La Grande River
Engineering
665
26,956,160
https://en.wikipedia.org/wiki/Priority%20effect
In ecology, a priority effect refers to the impact that a particular species can have on community development as a result of its prior arrival at a site. There are two basic types of priority effects: inhibitory and facilitative. An inhibitory priority effect occurs when a species that arrives first at a site negatively affects a species that arrives later by reducing the availability of space or resources. In contrast, a facilitative priority effect occurs when a species that arrives first at a site alters abiotic or biotic conditions in ways that positively affect a species that arrives later. Inhibitory priority effects have been documented more frequently than facilitative priority effects. Studies indicate that both abiotic (e.g., resource availability) and biotic (e.g., predation) factors can affect the strength of priority effects. . Priority effects are a central and pervasive element of ecological community development that have significant implications for natural systems and ecological restoration efforts. Theoretical foundation Community succession theory Early in the 20th century, Frederic Clements and other plant ecologists suggested that ecological communities develop in a linear, directional manner towards a final, stable endpoint: the climax community. Clements indicated that a site's climax community would reflect local climate. He conceptualized the climax community as a "superorganism" that followed a defined developmental sequence. Early ecological succession theory maintained that the directional shifts from one stage of succession to the next were induced by the plants themselves. In this sense, succession theory implicitly recognized priority effects; the prior arrival of certain species had important impacts on future community composition. At the same time, the climax concept implied that species shifts were predetermined. This implies that a given species would always appear at the same point during the development of the climax community and have a predictable impact on community development. This static view of priority effects remained essentially unchanged by the concept of patch dynamics, introduced by Alex Watt in 1947. Watt conceived of plant communities as dynamic "mechanisms" that followed predetermined succession cycles. He viewed succession as a process driven by facilitation, in which each species made local conditions more suitable for another species. Individualistic approach In 1926, Henry Gleason presented an alternative hypothesis in which plants were conceptualized as individuals rather than components of a superorganism. This hypothesis suggested that the distribution of various species across the landscape reflected species-specific dispersal limitations and environmental requirements rather than predetermined associations among species. Gleason contested the idea of a predetermined climax community, recognizing that different colonizing species could produce alternative trajectories of community development. For example, initially identical ponds colonized by different species could develop through succession into very different communities. The Initial Floristic Composition model was put forward by Frank Egler to describe community development in abandoned agricultural fields. According to this model, the set of species present in a field immediately after abandonment had strong influences on community development and final community composition. 
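One way to see how arrival order alone can steer a community toward one of several possible outcomes, in line with the inhibitory priority effects defined above, is a minimal two-species competition model in which each species strongly suppresses the other. The Python sketch below is purely illustrative; the parameter values are arbitrary and are not drawn from any of the studies cited in this article.

```python
# Lotka-Volterra competition with strong mutual inhibition (alpha > 1), so the
# species that establishes first excludes the later arrival. Parameters are
# arbitrary illustrative choices.
def simulate(first_arrival: str, steps: int = 4000, dt: float = 0.01) -> dict:
    r, K, alpha = 1.0, 1.0, 1.5          # growth rate, carrying capacity, competition
    n = {"A": 0.0, "B": 0.0}
    n[first_arrival] = 0.05              # the early colonist starts growing first
    for t in range(steps):
        if t == 1000:                    # the second species arrives later
            late = "B" if first_arrival == "A" else "A"
            n[late] = 0.05
        dA = r * n["A"] * (1 - (n["A"] + alpha * n["B"]) / K)
        dB = r * n["B"] * (1 - (n["B"] + alpha * n["A"]) / K)
        n["A"] += dA * dt
        n["B"] += dB * dt
    return n

print(simulate("A"))   # species A near carrying capacity, B suppressed
print(simulate("B"))   # the outcome flips: an inhibitory priority effect
```

Identical environments and identical species can thus end up as two different final communities depending only on which species arrived first, which is the alternative-stable-state picture discussed in the next section.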
Alternative stable states In the 1970s, it was suggested that natural communities could be characterized by multiple or alternative stable states. Multiple stable state models suggested that the same environment could support several combinations of species. Theorists argued that historical context could play a central role in determining which stable state would be present at any given time. Robert May explained, "If there is a unique stable state, historical accidents are unimportant; if there are many alternative locally stable states, historical accidents can be of overriding significance." Community assembly theory Assembly theory explains community development processes in the context of multiple stable states: it asks why a particular type of community developed when other stable community types are possible. In contrast to succession theory, assembly theory was developed largely by animal ecologists and explicitly incorporated historical context. In 1975, Jared Diamond developed quantitative "assembly rules" to predict avian community composition on an archipelago. This approach emphasizes historical contingency and multiple stable states. Although the idea of deterministic community assembly initially drew criticism, the approach continued to gain support. In 1991, Drake used an assembly model to demonstrate that different community types result from different sequences of species invasions. In this model, early invaders have major impacts on the invasion success of species that arrive later. Other modelling studies suggested that priority effects may be especially important when invasion frequency is low enough to allow species to become established before replacement, or when other factors that could drive assembly (e.g., competition, abiotic stress) are relatively unimportant. In a 1999 review, Belyea and Lancaster described three basic determinants of community assembly: dispersal constraints, environmental constraints, and internal dynamics. They identified priority effects as a manifestation of the interaction between dispersal constraints and internal dynamics. Empirical evidence Although early research focused on animals and aquatic systems, more recent studies have begun to examine terrestrial and plant-based priority effects. Marine Most of the earliest empirical evidence for priority effects came from studies on aquatic animals. Sutherland (1974) found that final community composition varied depending on the initial order of larval recruitment in a community of small marine organisms (sponges, tunicates, hydroids, and other species). Shulman (1983) found strong priority effects among coral reef fish. The study found that prior establishment by a territorial damselfish reduced establishment rates of other fish. The authors also identified cross-trophic priority effects; prior establishment by a predator fish reduced establishment rates of prey fishes. In the late 1980s, several studies examined priority effects in marine microcosms. Robinson and Dickerson (1987) found that priority effects were important in some cases, but suggested, "Being the first to invade a habitat does not guarantee success; there must be sufficient time for the early colonist to increase its population size for it to pre-empt further colonization." Robinson and Edgemon (1988) later developed 54 communities of phytoplankton species by varying invasion order, rate, and timing. 
They found that although invasion order (priority effects) could explain a small fraction of the resulting variation in community composition, most of the variation was explained by changes in invasion rate and invasion timing. These studies indicate that priority effects may not be the only or the most important historical factor affecting the trajectory of community development. In a striking example of cross-trophic priority effects, Hart (1992) found that priority effects explain the maintenance of two alternate stable states in stream ecosystems. While a macroalga is dominant in some patches, sessile grazers maintain a "lawn" of small microalgae in others. If the sessile grazers colonize a patch first, they exclude the macroalga, and vice versa. Amphibian In two of the most commonly cited empirical studies on priority effects, Alford and Wilbur documented inhibitory and facilitative priority effects among toad larvae in experimental ponds. They found that hatchlings of a toad species (Bufo americanus) exhibited higher growth and survivorship when introduced to a pond before those of a frog species (Rana sphenocephala). The frog larvae, however, did best when introduced after the toad larvae. Thus, prior establishment by the toad species facilitated the frog species, while prior establishment by the frog species inhibited the toad species. Studies on tree frogs have also documented both types of priority effects. Morin (1987) also observed that priority effects became less important in the presence of a predatory salamander. He hypothesized that predation mediated priority effects by reducing competition between frog species. Studies on larval insects and frogs in water-filled tree holes and stumps found that abiotic factors such as space, resource availability, and toxin levels can also be important in mediating priority effects. Terrestrial Terrestrial studies on priority effects are rare, with most studies focusing on arthropods or grassland plant species. In a lab experiment, Shorrocks and Bingley (1994) showed that prior arrival increased survivorship for two species of fruit flies; each fly species had inhibitory impacts on the other. A 1996 field study on desert spiders by Ehmann and MacMahon showed that the presence of species from one spider guild reduced establishment of spiders from a different guild. Palmer (2003) demonstrated that priority effects allowed a competitively subordinate ant species to avoid exclusion by a competitively dominant species. If the competitively subordinate ants were able to colonize first, they altered their host tree’s morphology in ways that made it less suitable for other ant species. This study was especially important because it was able to identify a mechanism driving observed priority effects. A study on two species of introduced grasses in Hawaiian woodlands found that the species with inferior competitive abilities may be able to persist through priority effects. At least three studies have come to similar conclusions about the coexistence of native and exotic grasses in California grassland ecosystems. If given time to establish, native species can successfully inhibit the establishment of exotics. The authors of the various studies attributed the prevalence of exotic grasses in California to the low seed production and relatively poor dispersal ability of native species. 
Emerging concepts Long-term implications: convergence and divergence Although many studies have documented priority effects, the persistence of these effects over time often remains unclear. Young (2001) indicated that both convergence (in which "communities proceed towards a pre-disturbance state regardless of historical conditions") and divergence (in which historical factors continue to affect the long-term trajectory of community development) are present in nature. Among studies of priority effects, both trends seem to have been observed. Fukami (2005) argued that a community could be both convergent and divergent at different levels of community organization. The authors studied experimentally assembled plant communities and found that while the identities of individual species remained unique across different community replicates, species traits generally became more similar. Trophic ecology Some studies indicate that priority effects can occur across guilds or trophic levels. Such priority effects could have dramatic impacts on community composition and food web structure. Even intra-guild priority effects could have important consequences at multiple trophic levels if the affected species are associated with unique predator or prey species. Consider, for example, a plant species that is eaten by a host-specific herbivore. Priority effects that influence the ability of the plant species to establish would indirectly affect the establishment success of the associated herbivore. Theoretical models have described cyclical assembly dynamics in which species associated with different suites of predators can repeatedly replace one another. Intra-specific aggregation In situations where two species are introduced at the same time, spatial aggregation of a species' propagules could cause priority effects by initially reducing interspecific competition. Aggregation during recruitment and establishment could allow inferior competitors to coexist with or even displace competitive dominants over the long-term. Several modelling efforts have begun to examine the implications of spatial priority effects for species coexistence. Mechanisms and new organisms A few studies have begun to explore the mechanisms driving observed priority effects. Moreover, although past studies focused on a small subset of species, recent papers indicate that priority effects may be important for a wide range of organisms, including fungi, birds, lizards, and salamanders. Ecological restoration Priority effects have important implications for ecological restoration. In many systems, information about priority effects can help practitioners identify cost-effective strategies for improving the survival and persistence of certain species, especially species of inferior competitive ability. For example, in a study on the restoration of native Californian grasses and forbs, Lulow (2004) found that forbs could not establish in plots where bunchgrasses had been previously planted. When bunchgrasses were added to plots where forbs had already been growing for a year, forbs were able to coexist with grasses for at least 3–4 years. References Ecology
Priority effect
Biology
2,394
1,868,834
https://en.wikipedia.org/wiki/Symmetrical%20components
In electrical engineering, the method of symmetrical components simplifies analysis of unbalanced three-phase power systems under both normal and abnormal conditions. The basic idea is that an asymmetrical set of N phasors can be expressed as a linear combination of N symmetrical sets of phasors by means of a complex linear transformation. Fortescue's theorem (symmetrical components) is based on superposition principle, so it is applicable to linear power systems only, or to linear approximations of non-linear power systems. In the most common case of three-phase systems, the resulting "symmetrical" components are referred to as direct (or positive), inverse (or negative) and zero (or homopolar). The analysis of power system is much simpler in the domain of symmetrical components, because the resulting equations are mutually linearly independent if the circuit itself is balanced. Description In 1918 Charles Legeyt Fortescue presented a paper which demonstrated that any set of N unbalanced phasors (that is, any such polyphase signal) could be expressed as the sum of N symmetrical sets of balanced phasors, for values of N that are prime. Only a single frequency component is represented by the phasors. In 1943 Edith Clarke published a textbook giving a method of use of symmetrical components for three-phase systems that greatly simplified calculations over the original Fortescue paper. In a three-phase system, one set of phasors has the same phase sequence as the system under study (positive sequence; say ABC), the second set has the reverse phase sequence (negative sequence; ACB), and in the third set the phasors A, B and C are in phase with each other (zero sequence, the common-mode signal). Essentially, this method converts three unbalanced phases into three independent sources, which makes asymmetric fault analysis more tractable. By expanding a one-line diagram to show the positive sequence, negative sequence, and zero sequence impedances of generators, transformers and other devices including overhead lines and cables, analysis of such unbalanced conditions as a single line to ground short-circuit fault is greatly simplified. The technique can also be extended to higher order phase systems. Physically, in a three phase system, a positive sequence set of currents produces a normal rotating field, a negative sequence set produces a field with the opposite rotation, and the zero sequence set produces a field that oscillates but does not rotate between phase windings. Since these effects can be detected physically with sequence filters, the mathematical tool became the basis for the design of protective relays, which used negative-sequence voltages and currents as a reliable indicator of fault conditions. Such relays may be used to trip circuit breakers or take other steps to protect electrical systems. The analytical technique was adopted and advanced by engineers at General Electric and Westinghouse, and after World War II it became an accepted method for asymmetric fault analysis. As shown in the figure to the above right, the three sets of symmetrical components (positive, negative, and zero sequence) add up to create the system of three unbalanced phases as pictured in the bottom of the diagram. The imbalance between phases arises because of the difference in magnitude and phase shift between the sets of vectors. Notice that the colors (red, blue, and yellow) of the separate sequence vectors correspond to three different phases (A, B, and C, for example). 
To arrive at the final plot, the sum of vectors of each phase is calculated. This resulting vector is the effective phasor representation of that particular phase. This process, repeated, produces the phasor for each of the three phases. The three-phase case Symmetrical components are most commonly used for analysis of three-phase electrical power systems. The voltage or current of a three-phase system at some point can be indicated by three phasors, called the three components of the voltage or the current. This article discusses voltage; however, the same considerations also apply to current. In a perfectly balanced three-phase power system, the voltage phasor components have equal magnitudes but are 120 degrees apart. In an unbalanced system, the magnitudes and phases of the voltage phasor components are different. Decomposing the voltage phasor components into a set of symmetrical components helps analyze the system as well as visualize any imbalances. If the three voltage components are expressed as phasors (which are complex numbers), a complex vector can be formed in which the three phase components are the components of the vector. A vector for three phase voltage components can be written as V_abc = (V_a, V_b, V_c)^T, and decomposing the vector into three symmetrical components gives (V_a, V_b, V_c)^T = (V_a0, V_b0, V_c0)^T + (V_a1, V_b1, V_c1)^T + (V_a2, V_b2, V_c2)^T, where the subscripts 0, 1, and 2 refer respectively to the zero, positive, and negative sequence components. The sequence components differ only by their phase angles, which are symmetrical and so are 2π/3 radians or 120°. A matrix Define a phasor rotation operator α, which rotates a phasor vector counterclockwise by 120 degrees when multiplied by it: α = e^(j2π/3) = -1/2 + j√3/2. Note that α^3 = 1 so that α^(-1) = α^2. The zero sequence components have equal magnitude and are in phase with each other, therefore: V_a0 = V_b0 = V_c0, and the other sequence components have the same magnitude, but their phase angles differ by 120°. If the original unbalanced set of voltage phasors have positive or abc phase sequence, then: V_b1 = α^2 V_a1, V_c1 = α V_a1, V_b2 = α V_a2, V_c2 = α^2 V_a2, meaning that (V_a1, V_b1, V_c1)^T = (V_a1, α^2 V_a1, α V_a1)^T and (V_a2, V_b2, V_c2)^T = (V_a2, α V_a2, α^2 V_a2)^T. Thus, V_abc = A V_012, where V_012 = (V_0, V_1, V_2)^T = (V_a0, V_a1, V_a2)^T and A = [[1, 1, 1], [1, α^2, α], [1, α, α^2]]. If instead the original unbalanced set of voltage phasors have negative or acb phase sequence, the following matrix can be similarly derived: A_acb = [[1, 1, 1], [1, α, α^2], [1, α^2, α]]. Decomposition The sequence components are derived from the analysis equation V_012 = A^(-1) V_abc, where A^(-1) = (1/3) [[1, 1, 1], [1, α, α^2], [1, α^2, α]]. The above two equations tell how to derive symmetrical components corresponding to an asymmetrical set of three phasors: Sequence 0 is one-third the sum of the original three phasors. Sequence 1 is one-third the sum of the original three phasors rotated counterclockwise 0°, 120°, and 240°. Sequence 2 is one-third the sum of the original three phasors rotated counterclockwise 0°, 240°, and 120°. Visually, if the original components are symmetrical, sequences 0 and 2 will each form a triangle, summing to zero, and sequence 1 components will sum to a straight line. Intuition The phasors form a closed triangle (e.g., outer voltages or line to line voltages). To find the synchronous and inverse components of the phases, take any side of the outer triangle and draw the two possible equilateral triangles sharing the selected side as base. These two equilateral triangles represent a synchronous and an inverse system. If the phasors V were a perfectly synchronous system, the vertex of the outer triangle not on the base line would be at the same position as the corresponding vertex of the equilateral triangle representing the synchronous system. Any amount of inverse component would mean a deviation from this position. The deviation is exactly 3 times the inverse phase component. 
The synchronous component is in the same manner 3 times the deviation from the "inverse equilateral triangle". The directions of these components are correct for the relevant phase. It seems counterintuitive that this works for all three phases regardless of the side chosen, but that is the beauty of this illustration. The graphic is from Napoleon's Theorem, which matches a graphical calculation technique that sometimes appears in older reference books. Poly-phase case It can be seen that the transformation matrix A above is a DFT matrix, and as such, symmetrical components can be calculated for any poly-phase system. Contribution of harmonics to symmetrical components in 3-phase power systems Harmonics often occur in power systems as a consequence of non-linear loads. Each order of harmonics contributes to different sequence components. The fundamental and harmonics of order 3k + 1 (the 4th, 7th, 10th, ...) will contribute to the positive sequence component. Harmonics of order 3k - 1 (the 2nd, 5th, 8th, ...) will contribute to the negative sequence. Harmonics of order 3k (the 3rd, 6th, 9th, ...) contribute to the zero sequence. Note that the rules above are only applicable if the phase values (or distortion) in each phase are exactly the same. Please further note that even harmonics are not common in power systems. Consequence of the zero sequence component in power systems The zero sequence represents the component of the unbalanced phasors that is equal in magnitude and phase. Because they are in phase, zero sequence currents flowing through an n-phase network will sum to n times the magnitude of the individual zero sequence current components. Under normal operating conditions this sum is small enough to be negligible. However, during large zero sequence events such as lightning strikes, this nonzero sum of currents can lead to a larger current flowing through the neutral conductor than through the individual phase conductors. Because neutral conductors are typically not larger than individual phase conductors, and are often smaller than these conductors, a large zero sequence component can lead to overheating of neutral conductors and to fires. One way to prevent large zero sequence currents is to use a delta connection, which appears as an open circuit to zero sequence currents. For this reason, most transmission, and much sub-transmission, is implemented using delta. Much distribution is also implemented using delta, although "old work" distribution systems have occasionally been "wyed-up" (converted from delta to wye) so as to increase the line's capacity at a low conversion cost, but at the expense of a higher central station protective relay cost. See also Symmetry Direct-quadrature-zero transformation Alpha–beta transformation References Notes Bibliography J. Lewis Blackburn, Symmetrical Components for Power Systems Engineering, Marcel Dekker, New York (1993). William D. Stevenson, Jr., Elements of Power System Analysis, Third Edition, McGraw-Hill, New York (1975). History article from IEEE on early development of symmetrical components, retrieved May 12, 2005. Westinghouse Corporation, Applied Protective Relaying, 1976, Westinghouse Corporation, no ISBN, Library of Congress card no. 76-8060 - a standard reference on electromechanical protective relays Electrical engineering Three-phase AC power
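To make the decomposition above concrete, the following short Python sketch (assuming NumPy is available; the example phasor values are invented for illustration) builds the synthesis matrix A for an abc phase sequence, recovers the zero, positive and negative sequence components of an unbalanced set of phasors, and applies the harmonic-order rule described in this section.

import numpy as np

# Rotation operator: multiplying a phasor by alpha rotates it counterclockwise by 120 degrees.
ALPHA = np.exp(2j * np.pi / 3)

# Synthesis matrix for an abc (positive) phase sequence: V_abc = A @ V_012.
A = np.array([[1, 1,        1       ],
              [1, ALPHA**2, ALPHA   ],
              [1, ALPHA,    ALPHA**2]])

def to_sequence_components(v_abc):
    # Analysis equation V_012 = A^-1 V_abc, with A^-1 = (1/3) [[1,1,1],[1,a,a^2],[1,a^2,a]].
    return np.linalg.solve(A, np.asarray(v_abc, dtype=complex))

def harmonic_sequence(order):
    # Assumes identical distortion in every phase, as noted in the text above.
    return {1: "positive", 2: "negative", 0: "zero"}[order % 3]

# An unbalanced example: phase b is sagged to 80% magnitude and shifted off its nominal angle.
v_abc = [1.0,
         0.8 * np.exp(-1j * np.deg2rad(125)),
         1.0 * np.exp(+1j * np.deg2rad(120))]
v0, v1, v2 = to_sequence_components(v_abc)
print("zero / positive / negative magnitudes:", abs(v0), abs(v1), abs(v2))
assert np.allclose(A @ np.array([v0, v1, v2]), v_abc)   # synthesis reproduces the original phasors
print({h: harmonic_sequence(h) for h in range(1, 8)})   # 1,4,7 -> positive; 2,5 -> negative; 3,6 -> zero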
Symmetrical components
Engineering
2,068
36,423,257
https://en.wikipedia.org/wiki/Aerographite
Aerographite is a synthetic foam consisting of a porous interconnected network of tubular carbon. With a density of 180 g/m3 it is one of the lightest structural materials ever created. It was developed jointly by a team of researchers at the University of Kiel and the Technical University of Hamburg in Germany, and was first reported in a scientific journal in June 2012. Structure and properties Aerographite is a black freestanding material that can be produced in various shapes occupying a volume of up to several cubic centimeters. It consists of a seamless interconnected network of carbon tubes that have micron-scale diameters and a wall thickness of about 15 nm. Because of the relatively lower curvature and larger wall thickness, these walls differ from the graphene-like shells of carbon nanotubes and resemble vitreous carbon in their properties. These walls are often discontinuous and contain wrinkled areas that improve the elastic properties of aerographite. The carbon bonding in aerographite has an sp2 character, as confirmed by electron energy loss spectroscopy and electrical conductivity measurements. Upon external compression, the conductivity increases, along with material density, from ~0.2 S/m at 0.18 mg/cm3 to 0.8 S/m at 0.2 mg/cm3. The conductivity is higher for a denser material, 37 S/m at 50 mg/cm3. Owing to its interconnected tubular network structure, aerographite resists tensile forces much better than other carbon foams as well as silica aerogels. It sustains extensive elastic deformations and has a very low Poisson's ratio. A complete shape recovery of a 3-mm-tall sample after it was compressed down to 0.1 mm is possible. Its ultimate tensile strength (UTS) depends on material density and is about 160 kPa at 8.5 mg/cm3 and 1 kPa at 0.18 mg/cm3; in comparison, the strongest silica aerogels have a UTS of 16 kPa at 100 mg/cm3. The Young's modulus is ca. 15 kPa at 0.2 mg/cm3 in tension, but is much lower in compression, increasing from 1 kPa at 0.2 mg/cm3 to 7 kPa at 15 mg/cm3. The density given by the authors is based a mass measurement and the determination of the outer volume of the synthetic foams as usually performed also for other structures. Aerographite is superhydrophobic, thus its centimeter-sized samples repel water; they are also rather sensitive to electrostatic effects and spontaneously jump to charged objects. Synthesis Common aspects of synthesis: With the aerographite's chemical vapor deposition (CVD) process metal oxides had been shown in 2012 to be a suitable template for deposition of graphitic structures. The templates can be in situ removed. Basic mechanism is the reduction of metal oxide to a metallic constituent, the nucleation of carbon in and on top of metal and the simultaneous evaporation of metal component. Requirements for the metal oxides are: a low activation energy for chemical reduction, a metal phase, which can nucleate graphite, a low evaporation point of metal phase (ZnO, SnO). From engineering perspective, the developed CVD process enables the use of ceramic powder processing (use of custom particles and sintering bridges) for creation of templates for 3D carbon via CVD. Key advantages compared to commonly used metal templates are: shape variety of particle shapes, the creation of sintering bridges and the removal without acids. Originally demonstrated on just μm-sized meshed graphite networks, the CVD mechanism had been adopted after 2014 by other scientists to create nm-sized carbon structures. 
Details specific to reference: Aerographite is produced by chemical vapor deposition, using a ZnO template. The template consists of micron-thick rods, often in the shape of multipods, that can be synthesized by mixing comparable amounts of Zn and polyvinyl butyral powders and heating the mixture at 900 °C. The aerographite synthesis is carried out at ~760 °C, under an argon gas flow, to which toluene vapors are injected as a carbon source. A thin (~15 nm), discontinuous layer of carbon is deposited on ZnO which is then etched away by adding hydrogen gas to the reaction chamber. Thus the remaining carbon network closely follows the morphology of the original ZnO template. In particular, the nodes of the aerographite network originate from the joints of the ZnO multipods. Potential applications Aerographite electrodes have been tested in an electric double-layer capacitor (EDLC, also known as supercapacitor) and endured the mechanical shocks related to loading-unloading cycles and crystallization of the electrolyte (that occurs upon evaporation of the solvent). Their specific energy of 1.25 Wh/kg is comparable to that of carbon nanotube electrodes (~2.3 Wh/kg). Space travel Because aerographite is both black and light, it was proposed as a light-sail material. Simulations show that 1 kg spacecraft with aerographite solar sail can reach Mars in 26 days. Separately, it was proposed to release 1 μm particles from the solar altitude reached by the Parker solar probe. The solar wind would accelerate them to over 2% of lightspeed or 6000 km/sec. A steady stream of pellets could be used by plasma magnet propulsion systems to accelerate payloads to 6% of lightspeed, or 18000 km/sec. See also Aerogels Graphene Metallic microlattice References External links Nanomaterials Chemical vapor deposition Aerogels
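A few of the numbers quoted in this entry can be cross-checked with simple arithmetic. The short Python sketch below uses only the figures given above (density, tensile strength, and the percent-of-lightspeed claims) plus standard unit conversions; it is a sanity check on those quoted values, not data from the cited studies.

# Speed of light in km/s.
C_KM_S = 299_792.458

# Density: 180 g/m^3 expressed in mg/cm^3 (x1000 for g -> mg, /1e6 for m^3 -> cm^3).
density_mg_per_cm3 = 180.0 * 1000.0 / 1_000_000.0
print(density_mg_per_cm3)                          # 0.18, matching the 0.18 mg/cm^3 figure above

# Strength-to-density ratio (kPa per mg/cm^3): aerographite vs. the silica aerogel value quoted above.
aerographite_ratio = 160.0 / 8.5                   # ~18.8
silica_aerogel_ratio = 16.0 / 100.0                # 0.16
print(aerographite_ratio / silica_aerogel_ratio)   # roughly two orders of magnitude higher

# Percent-of-lightspeed figures for the solar-wind-accelerated pellets.
print(0.02 * C_KM_S, 0.06 * C_KM_S)                # ~5996 and ~17988 km/s, i.e. the 6000 and 18000 km/s figures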
Aerographite
Chemistry,Materials_science
1,189
39,747
https://en.wikipedia.org/wiki/Stomach
The stomach is a muscular, hollow organ in the upper gastrointestinal tract of humans and many other animals, including several invertebrates. The stomach has a dilated structure and functions as a vital organ in the digestive system. The stomach is involved in the gastric phase of digestion, following the cephalic phase in which the sight and smell of food and the act of chewing are stimuli. In the stomach a chemical breakdown of food takes place by means of secreted digestive enzymes and gastric acid. The stomach is located between the esophagus and the small intestine. The pyloric sphincter controls the passage of partially digested food (chyme) from the stomach into the duodenum, the first and shortest part of the small intestine, where peristalsis takes over to move this through the rest of the intestines. Structure In the human digestive system, the stomach lies between the esophagus and the duodenum (the first part of the small intestine). It is in the left upper quadrant of the abdominal cavity. The top of the stomach lies against the diaphragm. Lying behind the stomach is the pancreas. A large double fold of visceral peritoneum called the greater omentum hangs down from the greater curvature of the stomach. Two sphincters keep the contents of the stomach contained: the lower esophageal sphincter (found in the cardiac region), at the junction of the esophagus and stomach, and the pyloric sphincter at the junction of the stomach with the duodenum. The stomach is surrounded by parasympathetic (stimulant) and sympathetic (inhibitor) plexuses (networks of blood vessels and nerves in the anterior gastric, posterior, superior and inferior, celiac and myenteric), which regulate both the secretory activity of the stomach and the motor (motion) activity of its muscles. The stomach is distensible, and can normally expand to hold about one litre of food. In a newborn human baby the stomach will only be able to hold about 30 millilitres. The maximum stomach volume in adults is between 2 and 4 litres, although volumes of up to 15 litres have been observed in extreme circumstances. Sections The human stomach can be divided into four sections, beginning at the cardia followed by the fundus, the body and the pylorus. The gastric cardia is where the contents of the esophagus empty from the gastroesophageal sphincter into the cardiac orifice, the opening into the gastric cardia. A cardiac notch at the left of the cardiac orifice marks the beginning of the greater curvature of the stomach. A horizontal line across from the cardiac notch gives the dome-shaped region called the fundus. The cardia is a very small region of the stomach that surrounds the esophageal opening. The fundus () is formed in the upper curved part. The body or corpus is the main, central region of the stomach. The body of the stomach opens into the pylorus. The pylorus () connects the stomach to the duodenum at the pyloric sphincter. The cardia is defined as the region following the "z-line" of the gastroesophageal junction, the point at which the epithelium changes from stratified squamous to columnar. Near the cardia is the lower esophageal sphincter. Anatomical proximity The stomach bed refers to the structures upon which the stomach rests in mammals. These include the tail of the pancreas, splenic artery, left kidney, left suprarenal gland, transverse colon and its mesocolon, the left crus of the diaphragm, and the left colic flexure. 
The term was introduced around 1896 by Philip Polson of the Catholic University School of Medicine, Dublin. However, this was brought into disrepute by surgeon anatomist J Massey. Blood supply The lesser curvature of the human stomach is supplied by the right gastric artery inferiorly and the left gastric artery superiorly, which also supplies the cardiac region. The greater curvature is supplied by the right gastroepiploic artery inferiorly and the left gastroepiploic artery superiorly. The fundus of the stomach, and also the upper portion of the greater curvature, is supplied by the short gastric arteries, which arise from the splenic artery. Lymphatic drainage The two sets of gastric lymph nodes drain the stomach's tissue fluid into the lymphatic system. Microanatomy Wall Like the other parts of the gastrointestinal wall, the human stomach wall, from inner to outer, consists of a mucosa, submucosa, muscular layer, subserosa and serosa. The inner part of the stomach wall is the gastric mucosa, a mucous membrane that forms the lining of the stomach. The membrane consists of an outer layer of columnar epithelium, a lamina propria, and a thin layer of smooth muscle called the muscularis mucosa. Beneath the mucosa lies the submucosa, consisting of fibrous connective tissue. Meissner's plexus is in this layer interior to the oblique muscle layer. Outside of the submucosa lies the muscular layer. It consists of three layers of muscular fibres, with fibres lying at angles to each other. These are the inner oblique, middle circular, and outer longitudinal layers. The presence of the inner oblique layer is distinct from other parts of the gastrointestinal tract, which do not possess this layer. The stomach contains the thickest muscular layer consisting of three layers, thus maximum peristalsis occurs here. The inner oblique layer: This layer is responsible for creating the motion that churns and physically breaks down the food. It is the only layer of the three which is not seen in other parts of the digestive system. The antrum has thicker skin cells in its walls and performs more forceful contractions than the fundus. The middle circular layer: At this layer, the pylorus is surrounded by a thick circular muscular wall, which is normally tonically constricted, forming a functional (if not anatomically discrete) pyloric sphincter, which controls the movement of chyme into the duodenum. This layer is concentric to the longitudinal axis of the stomach. The myenteric plexus (Auerbach's plexus) is found between the outer longitudinal and the middle circular layer and is responsible for the innervation of both (causing peristalsis and mixing). The outer longitudinal layer is responsible for moving the semi-digested food towards the pylorus of the stomach through muscular shortening. To the outside of the muscular layer lies a serosa, consisting of layers of connective tissue continuous with the peritoneum. Smooth mucosa along the inside of the lesser curvature forms a passageway - the gastric canal - that fast-tracks liquids entering the stomach to the pylorus. Glands The mucosa lining the stomach is lined with gastric pits, which receive gastric juice, secreted by between 2 and 7 gastric glands. Gastric juice is an acidic fluid containing hydrochloric acid and digestive enzymes. The glands contain a number of cells, with the function of the glands changing depending on their position within the stomach. Within the body and fundus of the stomach lie the fundic glands. 
In general, these glands are lined by column-shaped cells that secrete a protective layer of mucus and bicarbonate. Additional cells present include parietal cells that secrete hydrochloric acid and intrinsic factor, chief cells that secrete pepsinogen (this is a precursor to pepsin- the highly acidic environment converts the pepsinogen to pepsin), and neuroendocrine cells that secrete serotonin. Glands differ where the stomach meets the esophagus and near the pylorus. Near the gastroesophageal junction lie cardiac glands, which primarily secrete mucus. They are fewer in number than the other gastric glands and are more shallowly positioned in the mucosa. There are two kinds - either simple tubular glands with short ducts or compound racemose resembling the duodenal Brunner's glands. Near the pylorus lie pyloric glands located in the antrum of the pylorus. They secrete mucus, as well as gastrin produced by their G cells. Gene and protein expression About 20,000 protein-coding genes are expressed in human cells and nearly 70% of these genes are expressed in the normal stomach. Just over 150 of these genes are more specifically expressed in the stomach compared to other organs, with only some 20 genes being highly specific. The corresponding specific proteins expressed in stomach are mainly involved in creating a suitable environment for handling the digestion of food for uptake of nutrients. Highly stomach-specific proteins include gastrokine-1 expressed in the mucosa; pepsinogen and gastric lipase, expressed in gastric chief cells; and a gastric ATPase and gastric intrinsic factor, expressed in parietal cells. Development In the early part of the development of the human embryo, the ventral part of the embryo abuts the yolk sac. During the third week of development, as the embryo grows, it begins to surround parts of the yolk sac. The enveloped portions form the basis for the adult gastrointestinal tract. The sac is surrounded by a network of vitelline arteries and veins. Over time, these arteries consolidate into the three main arteries that supply the developing gastrointestinal tract: the celiac artery, superior mesenteric artery, and inferior mesenteric artery. The areas supplied by these arteries are used to define the foregut, midgut, and hindgut. The surrounded sac becomes the primitive gut. Sections of this gut begin to differentiate into the organs of the gastrointestinal tract, and the esophagus, and stomach form from the foregut. As the stomach rotates during early development, the dorsal and ventral mesentery rotate with it; this rotation produces a space anterior to the expanding stomach called the greater sac, and a space posterior to the stomach called the lesser sac. After this rotation the dorsal mesentery thins and forms the greater omentum, which is attached to the greater curvature of the stomach. The ventral mesentery forms the lesser omentum, and is attached to the developing liver. In the adult, these connective structures of omentum and mesentery form the peritoneum, and act as an insulating and protective layer while also supplying organs with blood and lymph vessels as well as nerves. Arterial supply to all these structures is from the celiac trunk, and venous drainage is by the portal venous system. Lymph from these organs is drained to the prevertebral celiac nodes at the origin of the celiac artery from the aorta. 
Function Digestion In the human digestive system, a bolus (a small rounded mass of chewed up food) enters the stomach through the esophagus via the lower esophageal sphincter. The stomach releases proteases (protein-digesting enzymes such as pepsin), and hydrochloric acid, which kills or inhibits bacteria and provides the acidic pH of 2 for the proteases to work. Food is churned by the stomach through peristaltic muscular contractions of the wall – reducing the volume of the bolus, before looping around the fundus and the body of stomach as the boluses are converted into chyme (partially digested food). Chyme slowly passes through the pyloric sphincter and into the duodenum of the small intestine, where the extraction of nutrients begins. Gastric juice in the stomach also contains pepsinogen. Hydrochloric acid activates this inactive form of enzyme into the active form, pepsin. Pepsin breaks down proteins into polypeptides. Mechanical digestion Within a few moments after food enters the stomach, mixing waves begin to occur at intervals of approximately 20 seconds. A mixing wave is a unique type of peristalsis that mixes and softens the food with gastric juices to create chyme. The initial mixing waves are relatively gentle, but these are followed by more intense waves, starting at the body of the stomach and increasing in force as they reach the pylorus. The pylorus, which holds around 30 mL of chyme, acts as a filter, permitting only liquids and small food particles to pass through the mostly, but not fully, closed pyloric sphincter. In a process called gastric emptying, rhythmic mixing waves force about 3 mL of chyme at a time through the pyloric sphincter and into the duodenum. Release of a greater amount of chyme at one time would overwhelm the capacity of the small intestine to handle it. The rest of the chyme is pushed back into the body of the stomach, where it continues mixing. This process is repeated when the next mixing waves force more chyme into the duodenum. Gastric emptying is regulated by both the stomach and the duodenum. The presence of chyme in the duodenum activates receptors that inhibit gastric secretion. This prevents additional chyme from being released by the stomach before the duodenum is ready to process it. Chemical digestion The fundus stores both undigested food and gases that are released during the process of chemical digestion. Food may sit in the fundus of the stomach for a while before being mixed with the chyme. While the food is in the fundus, the digestive activities of salivary amylase continue until the food begins mixing with the acidic chyme. Ultimately, mixing waves incorporate this food with the chyme, the acidity of which inactivates salivary amylase and activates lingual lipase. Lingual lipase then begins breaking down triglycerides into free fatty acids, and mono- and diglycerides. The breakdown of protein begins in the stomach through the actions of hydrochloric acid, and the enzyme pepsin. The stomach can also produce gastric lipase, which can help digesting fat. The contents of the stomach are completely emptied into the duodenum within two to four hours after the meal is eaten. Different types of food take different amounts of time to process. Foods heavy in carbohydrates empty fastest, followed by high-protein foods. Meals with a high triglyceride content remain in the stomach the longest. 
Since enzymes in the small intestine digest fats slowly, food can stay in the stomach for 6 hours or longer when the duodenum is processing fatty chyme. However, this is still a fraction of the 24 to 72 hours that full digestion typically takes from start to finish. Absorption Although the absorption in the human digestive system is mainly a function of the small intestine, some absorption of certain small molecules nevertheless does occur in the stomach through its lining. This includes: Water, if the body is dehydrated Medication, such as aspirin Amino acids 10–20% of ingested ethanol (e.g. from alcoholic beverages) Caffeine To a small extent water-soluble vitamins (most are absorbed in the small intestine) The parietal cells of the human stomach are responsible for producing intrinsic factor, which is necessary for the absorption of vitamin B12. B12 is used in cellular metabolism and is necessary for the production of red blood cells, and the functioning of the nervous system. Control of secretion and motility Chyme from the stomach is slowly released into the duodenum through coordinated peristalsis and opening of the pyloric sphincter. The movement and the flow of chemicals into the stomach are controlled by both the autonomic nervous system and by the various digestive hormones of the digestive system: Other than gastrin, these hormones all act to turn off the stomach action. This is in response to food products in the liver and gall bladder, which have not yet been absorbed. The stomach needs to push food into the small intestine only when the intestine is not busy. While the intestine is full and still digesting food, the stomach acts as storage for food. Other Effects of EGF Epidermal growth factor (EGF) results in cellular proliferation, differentiation, and survival. EGF is a low-molecular-weight polypeptide first purified from the mouse submandibular gland, but since then found in many human tissues including the submandibular gland, and the parotid gland. Salivary EGF, which also seems to be regulated by dietary inorganic iodine, also plays an important physiological role in the maintenance of oro-esophageal and gastric tissue integrity. The biological effects of salivary EGF include healing of oral and gastroesophageal ulcers, inhibition of gastric acid secretion, stimulation of DNA synthesis, and mucosal protection from intraluminal injurious factors such as gastric acid, bile acids, pepsin, and trypsin and from physical, chemical, and bacterial agents. Stomach as nutrition sensor The human stomach has receptors responsive to sodium glutamate and this information is passed to the lateral hypothalamus and limbic system in the brain as a palatability signal through the vagus nerve. The stomach can also sense, independently of tongue and oral taste receptors, glucose, carbohydrates, proteins, and fats. This allows the brain to link nutritional value of foods to their tastes. Thyrogastric syndrome This syndrome defines the association between thyroid disease and chronic gastritis, which was first described in the 1960s. This term was coined also to indicate the presence of thyroid autoantibodies or autoimmune thyroid disease in patients with pernicious anemia, a late clinical stage of atrophic gastritis. 
In 1993, a more complete investigation on the stomach and thyroid was published, reporting that the thyroid is, embryogenetically and phylogenetically, derived from a primitive stomach, and that the thyroid cells, such as primitive gastroenteric cells, migrated and specialized in uptake of iodide and in storage and elaboration of iodine compounds during vertebrate evolution. In fact, the stomach and thyroid share iodine-concentrating ability and many morphological and functional similarities, such as cell polarity and apical microvilli, similar organ-specific antigens and associated autoimmune diseases, secretion of glycoproteins (thyroglobulin and mucin) and peptide hormones, the digesting and readsorbing ability, and lastly, similar ability to form iodotyrosines by peroxidase activity, where iodide acts as an electron donor in the presence of H2O2. In the following years, many researchers published reviews about this syndrome. Clinical significance Diseases A series of radiographs can be used to examine the stomach for various disorders. This will often include the use of a barium swallow. Another method of examination of the stomach is the use of an endoscope. A gastric emptying study is considered the gold standard to assess the gastric emptying rate. A large number of studies have indicated that most cases of peptic ulcers and gastritis in humans are caused by Helicobacter pylori infection, and an association has been seen with the development of stomach cancer. A stomach rumble is actually noise from the intestines. Surgery In humans, many bariatric surgery procedures involve the stomach, in order to lose weight. A gastric band may be placed around the cardia area, which can adjust to limit intake. The anatomy of the stomach may be modified, or the stomach may be bypassed entirely. Surgical removal of the stomach is called a gastrectomy, and removal of the cardia area is called a cardiectomy. "Cardiectomy" is a term that is also used to describe the removal of the heart. A gastrectomy may be carried out because of gastric cancer or severe perforation of the stomach wall. Fundoplication is stomach surgery in which the fundus is wrapped around the lower esophagus and stitched into place. It is used to treat gastroesophageal reflux disease (GERD). Etymology The word stomach is derived from Greek stomachos (), ultimately from stoma () 'mouth'. Gastro- and gastric (meaning 'related to the stomach') are both derived from Greek gaster () 'belly'. Other animals Although the precise shape and size of the stomach vary widely among different vertebrates, the relative positions of the esophageal and duodenal openings remain relatively constant. As a result, the organ always curves somewhat to the left before curving back to meet the pyloric sphincter. However, lampreys, hagfishes, chimaeras, lungfishes, and some teleost fish have no stomach at all, with the esophagus opening directly into the intestine. These animals all consume diets that require little storage of food, no predigestion with gastric juices, or both. The gastric lining is usually divided into two regions, an anterior portion lined by fundic glands and a posterior portion lined with pyloric glands. Cardiac glands are unique to mammals, and even then are absent in a number of species. The distributions of these glands vary between species, and do not always correspond with the same regions as in humans. 
Furthermore, in many non-human mammals, a portion of the stomach anterior to the cardiac glands is lined with epithelium essentially identical to that of the esophagus. Ruminants, in particular, have a complex four-chambered stomach. The first three chambers (rumen, reticulum, and omasum) are all lined with esophageal mucosa, while the final chamber functions like a monogastric stomach, which is called the abomasum. In birds and crocodilians, the stomach is divided into two regions. Anteriorly is a narrow tubular region, the proventriculus, lined by fundic glands, and connecting the true stomach to the crop. Beyond lies the powerful muscular gizzard, lined by pyloric glands, and, in some species, containing stones that the animal swallows to help grind up food. In insects, there is also a crop. The insect stomach is called the midgut. Information about the stomach in echinoderms or molluscs can be found under the respective articles. Additional images See also Gastroesophageal reflux disease Gastric microbiota Proton-pump inhibitor References External links Stomach at the Human Protein Atlas Digestion of proteins in the stomach or tiyan (archived 10 March 2007) Site with details of how ruminants process food (archived 27 October 2009) Control of Gastric Emptying () Abdomen Digestive system Organs (anatomy)
Stomach
Biology
4,871
38,282,904
https://en.wikipedia.org/wiki/Lemke%E2%80%93Howson%20algorithm
The Lemke–Howson algorithm is an algorithm that computes a Nash equilibrium of a bimatrix game, named after its inventors, Carlton E. Lemke and J. T. Howson. It is said to be "the best known among the combinatorial algorithms for finding a Nash equilibrium", although more recently the Porter-Nudelman-Shoham algorithm has outperformed it on a number of benchmarks. Description The input to the algorithm is a 2-player game G. Here, G is represented by two game matrices A and B, containing the payoffs for players 1 and 2 respectively, who have m and n pure strategies respectively. In the following, one assumes that all payoffs are positive. (By rescaling, any game can be transformed into a strategically equivalent game with positive payoffs.) G has two corresponding polytopes (called the best-response polytopes) P and Q, in m dimensions and n dimensions respectively, defined as follows: P is in R^m; let x = (x_1, ..., x_m) denote the coordinates. P is defined by m inequalities x_i ≥ 0, for all i in {1, ..., m}, and a further n inequalities (B^T x)_j ≤ 1 for all j in {1, ..., n}. Q is in R^n; let y = (y_1, ..., y_n) denote the coordinates. Q is defined by n inequalities y_j ≥ 0, for all j in {1, ..., n}, and a further m inequalities (A y)_i ≤ 1 for all i in {1, ..., m}. Here, P represents the set of unnormalized probability distributions over player 1's pure strategies, such that player 2's expected payoff is at most 1. The first m constraints require the probabilities to be non-negative, and the other n constraints require each of the pure strategies of player 2 to have an expected payoff of at most 1. Q has a similar meaning, reversing the roles of the players. Each vertex x of P is associated with a set of labels from the set {1, ..., m + n} as follows. For i in {1, ..., m}, vertex x gets the label i if x_i = 0 at vertex x. For j in {1, ..., n}, vertex x gets the label m + j if (B^T x)_j = 1. Assuming that G is nondegenerate, each vertex is incident to m facets of P and has m labels. Note that the origin, which is a vertex of P, has the labels 1, ..., m. Each vertex y of Q is associated with a set of labels from the set {1, ..., m + n} as follows. For i in {1, ..., m}, vertex y gets the label i if (A y)_i = 1 at vertex y. For j in {1, ..., n}, vertex y gets the label m + j if y_j = 0. Assuming that G is nondegenerate, each vertex is incident to n facets of Q and has n labels. Note that the origin, which is a vertex of Q, has the labels m + 1, ..., m + n. Consider pairs of vertices (x, y), x in P, y in Q. The pair of vertices (x, y) is said to be completely labeled if the sets of labels associated with x and y together contain all labels 1, ..., m + n. Note that if x and y are the origins of P and Q respectively, then (x, y) is completely labeled. The pair of vertices (x, y) is said to be almost completely labeled (with respect to some missing label g) if the sets associated with x and y contain all labels in {1, ..., m + n} other than g. Note that in this case, there will be a duplicate label that is associated with both x and y. A pivot operation consists of taking some pair (x, y) and replacing x with some vertex adjacent to x in P, or alternatively replacing y with some vertex adjacent to y in Q. This has the effect (in the case that x is replaced) of replacing some label of x with some other label. The replaced label is said to be dropped. Given any label of x, it is possible to drop that label by moving to a vertex adjacent to x that is not contained in the hyperplane associated with that label. The algorithm starts at the completely labeled pair consisting of the pair of origins. An arbitrary label is dropped via a pivot operation, taking us to an almost completely labeled pair (x, y). Any almost completely labeled pair admits two pivot operations corresponding to dropping one or other copy of its duplicated label, and each of these operations may result in another almost completely labeled pair, or a completely labeled pair. Eventually, the algorithm finds a completely labeled pair (x*, y*), which is not the pair of origins. 
The pair (x*, y*) corresponds to a pair of unnormalised probability distributions in which every strategy of player 1 either pays that player exactly 1, or pays less than 1 and is played with probability 0 by that player (and a similar observation holds for player 2). Normalizing these values to probability distributions, one has a Nash equilibrium (whose payoffs to the players are the inverses of the normalization factors). Properties The algorithm can find at most m + n different Nash equilibria. Any choice of initially-dropped label determines the equilibrium that is eventually found by the algorithm. The Lemke–Howson algorithm is equivalent to the following homotopy-based approach. Modify G by selecting an arbitrary pure strategy g, and giving the player who owns that strategy a large bonus payment to play it. In the modified game, the strategy g is played with probability 1, and the other player will play his best response to g with probability 1. Consider the continuum of games in which the bonus is continuously reduced to 0. There exists a path of Nash equilibria connecting the unique equilibrium of the modified game, to an equilibrium of G. The pure strategy g chosen to receive the bonus corresponds to the initially dropped label. While the algorithm is efficient in practice, in the worst case the number of pivot operations may need to be exponential in the number of pure strategies in the game. Subsequently, it has been shown that it is PSPACE-complete to find any of the solutions that can be obtained with the Lemke–Howson algorithm. References Game theory Non-cooperative games Combinatorial algorithms
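To make the labeling machinery concrete, here is a short Python sketch (assuming NumPy; the 2x2 payoff matrices and the candidate vertices are illustrative choices, not taken from the source). It computes the label sets of a vertex pair of the best-response polytopes, checks that the pair is completely labeled, and normalizes it into a Nash equilibrium; it verifies the characterization the algorithm relies on rather than carrying out the pivoting itself.

import numpy as np

def label_sets(A, B, x, y, tol=1e-9):
    # Labels 0..m-1 are player 1's strategies, labels m..m+n-1 are player 2's (0-indexed here).
    m, n = A.shape
    lx = {i for i in range(m) if x[i] <= tol}                    # x_i = 0
    lx |= {m + j for j in range(n) if (B.T @ x)[j] >= 1 - tol}   # (B^T x)_j = 1
    ly = {i for i in range(m) if (A @ y)[i] >= 1 - tol}          # (A y)_i = 1
    ly |= {m + j for j in range(n) if y[j] <= tol}               # y_j = 0
    return lx, ly

def is_completely_labeled(A, B, x, y):
    m, n = A.shape
    lx, ly = label_sets(A, B, x, y)
    return lx | ly == set(range(m + n))

# A 2x2 game with all payoffs positive (illustrative values only).
A = np.array([[3.0, 1.0], [1.0, 3.0]])   # player 1's payoffs
B = np.array([[1.0, 3.0], [3.0, 1.0]])   # player 2's payoffs

# Candidate non-origin vertices: both players mix over both strategies,
# so the binding constraints are B^T x = 1 and A y = 1.
x = np.linalg.solve(B.T, np.ones(2))     # (0.25, 0.25)
y = np.linalg.solve(A, np.ones(2))       # (0.25, 0.25)
assert is_completely_labeled(A, B, x, y)

# Normalize to probability distributions; player 1's equilibrium payoff is 1/sum(y)
# and player 2's is 1/sum(x), the inverses of the normalization factors.
x_star, y_star = x / x.sum(), y / y.sum()
print(x_star, y_star, 1 / y.sum(), 1 / x.sum())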
Lemke–Howson algorithm
Mathematics
1,064
4,832,804
https://en.wikipedia.org/wiki/Institute%20of%20Asian%20Research
The Institute of Asian Research (IAR) is a research institute founded in 1978 at the University of British Columbia (UBC). The institute conducts interdisciplinary research and teaching on multiple South Asian and East Asian nations. The institute is particularly known for its Master of Public Policy and Global Affairs program. List of faculty Timothy Brook, Republic of China, chair in Chinese Research Timothy Cheek, Louis Cha Chair in Chinese Research Cesi Cruz, assistant professor, Institute of Asian Research and department of political science Julian Dierkes, assistant professor and Keidanren Chair in Japanese Research Paul M. Evans, director of the Institute of Asian Research Hyung Gu Lynn, AECL/KEPCO Chair in Korean Research Jessica Main, Tung Lin Kok Yuen Canada Foundation Chair in Buddhism and Contemporary Society Kai Ostwald, assistant professor, Institute of Asian Research and department of political science Kyung-Ae Park, Korea Foundation Chair in Korean Research Pitman B. Potter, Hong Kong Bank Chair in Asian Research Tsering Shakya, CRC Chair in Religion and Contemporary Society of Asia Sara Shneiderman, assistant professor, Institute of Asian Research and department of anthropology Yves Tiberghien, associate professor, Department of Political Science; Senior Fellow, Global Summitry Project, Munk School of Global Affairs; Senior Fellow, Asia-Pacific Foundation of Canada (APFC); co-director, UBC China Council Ilan Vertinsky, Vinod Sood Professor, International Business Studies, Sauder School of Business, professor of Strategy and Business Economics, professor, Institute of Asian Research and Sauder School of Business C. K. Choi Building The institute is housed in the C. K. Choi Building, a 1995 building notable for its environmentally-friendly design. It was designed by Matsuzaki Wright Architects of Vancouver as the UBC's "flagship environmental building". It features Asian architectural motifs, such as curved roofs. The building is named after Dr. Cheung-Kok Choi, a businessman and philanthropist as well as a major donor to UBC. References 1978 establishments in British Columbia Research institutes of international relations Public policy schools Public policy schools in Canada University of British Columbia Research institutes in Canada Sustainable architecture Buildings and structures in Vancouver Asian-Canadian culture in British Columbia
Institute of Asian Research
Engineering,Environmental_science
457
6,223,185
https://en.wikipedia.org/wiki/Beta%20hairpin
The beta hairpin (sometimes also called beta-ribbon or beta-beta unit) is a simple protein structural motif involving two beta strands that look like a hairpin. The motif consists of two strands that are adjacent in primary structure, oriented in an antiparallel direction (the N-terminus of one sheet is adjacent to the C-terminus of the next), and linked by a short loop of two to five amino acids. Beta hairpins can occur in isolation or as part of a series of hydrogen bonded strands that collectively comprise a beta sheet. Researchers such as Francisco Blanco et al. have used protein NMR to show that beta-hairpins can be formed from isolated short peptides in aqueous solution, suggesting that hairpins could form nucleation sites for protein folding. Classification Beta hairpins were originally categorized solely by the number of amino acid residues in their loop sequences, such that they were named one-residue, two-residue, etc. This system, however, is somewhat ambiguous as it does not take into account whether the residues that signal the end of the hairpin are singly or doubly hydrogen bonded to one another. An improved means of classification has since been proposed by Milner-White and Poet. Beta hairpins are broken into four distinct classes as depicted in the publication's Figure 1. Each class begins with the smallest possible number of loop residues and progressively increases the loop size by removing hydrogen bonds in the beta sheet. The primary hairpin of class 1 is a one-residue loop where the bound residues share two hydrogen bonds. One hydrogen bond is then removed to create a three-residue loop, which is the secondary hairpin of class 1. Singly bound residues are counted in the loop sequence but also signal the end of the loop, thus defining this hairpin as a three-residue loop. This single hydrogen bond is then removed to create the tertiary hairpin; a five-residue loop with doubly bound residues. This pattern continues indefinitely and defines all beta hairpins within the class. Class 2 follows the same pattern beginning with a two-residue loop with terminating residues that share two hydrogen bonds. Class 3 begins with a three-residue, and class 4 with a four-residue. Class 5 does not exist as that primary hairpin is already defined in class 1. Pi This classification scheme not only accounts for various degrees of hydrogen bonding, but also says something about the biological behavior of the hairpin. Single amino acid replacements may destroy a particular hydrogen bond, but will not unfold the hairpin or change its class. On the other hand, amino acid insertions and deletions will have to unfold and reform the entire beta strand in order to avoid a beta bulge in the secondary structure. This will change the class of the hairpin in the process. As substitutions are the most common amino acid mutations, a protein could potentially undergo a conversion without affecting the functionality of the beta hairpin. Folding and binding dynamics Understanding the mechanism through which micro-domains fold can help to shed light onto the folding patterns of whole proteins. Studies of a beta hairpin called chignolin (see Chignolin on Proteopedia) have uncovered a stepwise folding process that drives beta-hairpin folding. This hairpin has sequence features similar to over 13,000 known hairpins, and thus may serve as a more general model for beta hairpin formation. 
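As a small illustration of the loop-size pattern in the Milner-White and Poet classification described above (a simplified reading of that counting rule, not any published software), the following Python sketch generates the loop lengths for each class and shows why a separate class 5 would be redundant.

def loop_length(hairpin_class, level):
    # Each class starts at a loop of `hairpin_class` residues (level 1, the primary hairpin)
    # and grows by two residues per level as hydrogen bonds are removed.
    return hairpin_class + 2 * (level - 1)

for c in range(1, 5):
    print(f"class {c}:", [loop_length(c, k) for k in (1, 2, 3)])
# class 1: [1, 3, 5]   class 2: [2, 4, 6]   class 3: [3, 5, 7]   class 4: [4, 6, 8]

# A hypothetical class 5 primary hairpin would be a five-residue loop with doubly
# hydrogen-bonded terminal residues -- exactly the tertiary hairpin of class 1,
# which is why class 5 is not defined.
assert loop_length(5, 1) == loop_length(1, 3)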
The formation of a native turn region signals the folding cascade to start, where a native turn is one that is present in the final folded structure. In the folding of overall proteins, the turn may originate not in the native turn region but in the C-strand of the beta-hairpin. This turn then propagates through the C-strand (the beta strand leading to C-terminus) until it reaches the native turn region. Sometimes the residue interactions leading up to the native turn region are too strong, causing reverse propagation. However, once the native turn does form, interactions between prolines and tryptophan residues (seen in image at right) in the region help to stabilize the turn, preventing "roll back" or dissolution. Researchers believe that turns do not originate in the N-strand, due to increased rigidity (often caused by a proline leading up to the native turn region) and less conformational options. The initial turn formation takes place in about 1 μs. Once the initial turn has been established, two mechanisms have been proposed as to how the rest of the beta-hairpin folds: a hydrophobic collapse with side-chain level rearrangements, or the more accepted zipper-like mechanism. The β-hairpin loop motif can be found in many macromolecular proteins. However, small and simple β-hairpins can exist on their own as well. To see this clearly, the Pin1 Domain protein is shown to the left as an example. Proteins that are β-sheet rich, also called WW domains, function by adhering to proline-rich and/or phosphorylated peptides to mediate protein–protein interactions. The "WW" refers to two tryptophan (W) residues that are conserved within the sequence and aid in the folding of the β-sheets to produce a small hydrophobic core. These tryptophan residues can be seen below (right) in red. This enzyme binds its ligand through van der Waals forces of the conserved tryptophans and the proline-rich areas of the ligand. Other amino acids can then associate with the hydrophobic core of the β-hairpin structure to enforce secure binding. It is also common to find proline residues within the actual loop portion of the β-hairpin, since this amino acid is rigid and contributes to the "turn" formation. These proline residues can be seen as red side chains in the image of the Pin1 WW domain below (left). Artificially designed beta-hairpin The design of peptides that adopt β-hairpin structure (without relying on metal binding, unusual amino acids, or disulfide crosslinks) has made significant progress and yielded insights into protein dynamics. Unlike α-helices, β-hairpins are not stabilized by a regular hydrogen bonding pattern. As a result, early attempts required at least 20–30 amino acid residues to attain stable tertiary folds of β-hairpins. However, this lower limit was reduced to 12 amino acids by the stability gains conferred by the incorporation of tryptophan-tryptophan cross-strand pairs. Two nonhydrogen-bonding tryptophan pairs have been shown to interlock in a zipper-like motif, stabilizing the β-hairpin structure while still allowing it to remain water-soluble. The NMR structure of a tryptophan zipper (trpzip) β-peptide shows the stabilizing effect of favorable interactions between adjacent indole rings. The synthesis of trpzip β-hairpin peptides has incorporated photoswitches that facilitate precise control over folding. Several amino acids in the turn are replaced by azobenzene, which can be induced to switch from the trans to the cis conformation by light at 360 nm. 
When the azobenzene moiety is in the cis conformation, the amino acid residues align correctly to adopt a β-hairpin formation. However, the trans conformation does not have proper turn geometry for the β-hairpin. This phenomenon can be used to investigate peptide conformational dynamics with femtosecond absorption spectroscopy. References Protein structural motifs
Beta hairpin
Biology
1,552
63,498,079
https://en.wikipedia.org/wiki/Axion%20%28brand%29
Axion is an American brand of dishwashing liquid product marketed by Colgate-Palmolive. It is available in Asia and Latin America. History Originally, Colgate's Axion brand was the name of an enzyme pre-soak, to be used before laundering clothes. It was introduced on March 18, 1968. See also Palmolive - a similar dishwashing liquid produced by C-P for the U.S., Canada and other markets. Axion - a hypothetical fundamental particle whose name was inspired by the detergent. References Colgate-Palmolive brands Cleaning products
Axion (brand)
Chemistry
124
72,263,956
https://en.wikipedia.org/wiki/Troubled%20teen%20industry
The troubled teen industry (also known as TTI) is a broad range of youth residential programs aimed at struggling teenagers. The term encompasses various facilities and programs, including youth residential treatment centers, wilderness programs, boot camps, and therapeutic boarding schools. These programs claim to rehabilitate and teach troubled teenagers through various practices. Troubled teen facilities are privately run, and the troubled teen industry constitutes a multi-billion dollar industry. They accept young people who are considered to have struggles with learning disabilities, emotional regulation, mental illness, and substance abuse. Young people may be labeled as "troubled teens", delinquents, or other language on their websites and other advertising materials. Sometimes, these therapies are used as a punishment for contravening family expectations. For example, one person was placed in a troubled teen program because her mother found her choice in boyfriends unacceptable. The troubled teen industry has encountered many scandals due to child abuse, institutional corruption, and deaths, and is highly controversial. Many critics of these facilities point to a lack of local, state, and federal laws in the United States and elsewhere governing them. Some countries, such as Bermuda, have been known to send teenagers to programs located in the United States. In addition to their controversial therapeutic practices, many former residents report being forcibly transported to troubled teen facilities by teen escort companies, a practice dubbed "gooning". History The troubled teen industry has a precursor in the drug rehabilitation program called Synanon, founded in 1958 by Charles Dederich. By the late 1970s, Synanon had developed into a cult and adopted a resolution proclaiming the Synanon Religion, with Dederich as the highest spiritual authority, allowing the organization to qualify as tax-exempt under US law. Synanon rejected the use of medication for drug rehabilitation, and instead relied on the "Synanon Game", group sessions of attack therapy where members were encouraged to criticize and humiliate each other. Synanon popularized "tough love" attack therapy as a treatment for addiction, and the idea that confrontation and verbal condemnation could cure adolescent misbehavior. Synanon disbanded in 1991, after its tax-exempt status was revoked by the IRS and it was bankrupted by having to pay US$17 million in back taxes. Synanon's techniques were highly influential and inspired human potential self-help organizations such as Erhard Seminars Training (est) and Lifespring. Synanon-style therapy was also used in Straight, Incorporated and The Seed, two drug rehabilitation programs for youth. Former Synanon member Mel Wasserman founded CEDU Educational Services in 1967, a company which operated within the troubled teens industry. CEDU owned several for-profit therapeutic boarding schools, group homes, and behavior modification programs. The techniques used by CEDU schools were derived from Synanon's; for example, long, confrontational large-group sessions called "Propheets" took cues from the Synanon Game. CEDU went out of business in 2005, amid lawsuits and state regulatory crackdowns. Joseph "Joe" Ricci, a dropout from a direct Synanon-descendent program, founded a therapeutic boarding school called Élan School in 1970. Élan closed down in 2011 amid persistent allegations of abuse. 
Synanon's techniques also inspired the World Wide Association of Specialty Programs (WWASP), an umbrella organization of facilities meant for rehabilitating troubled teenagers. WWASP is no longer in business, due to widespread allegations of physical and psychological abuse. Many WWASP programs were shut down by the Costa Rican, Jamaican, and Mexican governments after investigations into allegations of abuse. Practices Troubled teen programs have been criticized for failing to offer evidence-based therapies such as cognitive behavioral therapy or trauma- and violence-informed care. Many or most troubled teen programs share a common lineage descending from Synanon, and use some form of "the game," a group attack therapy session. Additionally, some TTI programs use a form of primal therapy, a discredited form of therapy which involves reenacting traumatic and painful moments such as rape. Many practices used in troubled teen programs, especially punishments, have been singled out as constituting child abuse or neglect. These include but are not limited to: restricting communication with family and peers; use of physical and chemical restraint (i.e., in the form of sedative drugs); use of seclusion as punishment; gay conversion therapy; excessive use of strip search and cavity search; denial of sleep and nutrition; aversion therapy; etc. In 2007, the Government Accountability Office published a study verifying thousands of reports of abuse and death in TTI facilities dating back to 1990. The National Disability Rights Network published a report in 2021 reporting common issues at troubled teen facilities including the aforementioned forms of abuse as well as chronic staffing shortages, deprivation of education, and unhygienic and unsafe facility conditions. Transportation Many troubled teen institutions offer youth transportation through teen escort companies, in which minors are transported to their facilities against their will. Parents who sign their children up for troubled teen camps will sign over temporary custody to the teen escort company. This transportation is a service offered in the United States and elsewhere, and is a practice that has been criticized on ethical and legal grounds as being akin to kidnapping. Some of the subjects report not realizing they were transported with permission of their parents until days afterward. Clients have reported being ambushed in their own beds at home, or tricked into believing they are going elsewhere. Those who have been in the troubled teen industry call this process "gooning". There have been incidents where transportation staff have impersonated government officials. Former clients of troubled teen programs have made efforts to pursue legal recourse through civil lawsuits targeting both parents and the companies associated with these programs. Controversies False imprisonment 19-year-old Fred Collins Jr. found himself falsely imprisoned by Straight Inc., after initially visiting a family member who was enrolled in the program by his parents. Upon arrival, he was kept in a windowless room for six-and-a-half hours, and the staff refused to let him leave until he agreed to enroll into the program. At one New Mexico program, Tierra Blanca Ranch, the authorities found that the adolescent clients had been shackled and handcuffed. 
Forced labor Numerous troubled teen programs have been reported to engage in the practice of compelled labor, wherein program participants are required to perform physically demanding tasks such as wood chopping and horse manure shoveling. Kidnapping Elizabeth Zasso was an emancipated minor living in the state of New York who was illegally kidnapped by a teen escort company hired by her parents and taken to the state of Utah, where she was enrolled in a wilderness therapy program called the Challenger Foundation. It was ruled that the Challenger Foundation had violated her constitutional rights. Stress positions In certain instances, troubled teen programs have employed a torture technique known as "stress positions" as a form of discipline against their clients. Strip searches Many troubled teen programs conduct forced strip searches against the will of adolescent clients. Solitary confinement Some troubled teen programs, including the well-known Provo Canyon School, have faced allegations of employing solitary confinement as a disciplinary measure. Solitary confinement is a controversial practice that involves isolating individuals from social contact and is the subject of extensive debate regarding its ethical and psychological implications. Additionally, the now-defunct program known as Tranquility Bay, located in Jamaica, has also been reported to have utilized solitary confinement as part of its disciplinary methods. This practice has garnered considerable attention and criticism from various quarters. Psychological abuse Numerous reports have surfaced documenting instances of psychological abuse inflicted upon clients within troubled teen programs. One particularly disturbing example of such abuse involves mock executions, wherein students were coerced into digging their own graves as part of a psychologically distressing exercise. These allegations highlight the gravity of ethical concerns within these programs and have sparked significant scrutiny and criticism from various outlets. Regulatory laws The Stop Child Abuse in Residential Programs for Teens Act was first introduced on June 28, 2007, by Congressman George Miller. The act passed the House of Representatives on June 25, 2008, but failed to progress further in the legislative process and was not enacted into law. Utah, California, Oregon, Montana, and Missouri have all enacted laws aimed at increasing oversight of troubled teen facilities. Utah's law was proposed in 2021 after noted celebrity Paris Hilton came out with her story about her experience at Provo Canyon School. Hilton's testimony triggered a state investigation into the facility, and she later advocated for the law when it was in the process of being passed. In the United States Congress, bills were proposed to regulate troubled teen facilities every year from 2007 to 2018. On April 4, 2023, the Stop Institutionalized Child Abuse Act was introduced in the House of Representatives and the Senate; it was subsequently passed by the Senate and signed into law by President Biden on December 24, 2024. Legal history On June 27, 1990, Kristen Chase died from heatstroke whilst enrolled at the Challenger Foundation, a wilderness therapy program located in Kane County, Utah. The county's district attorney charged the owner of the program, Steve Cartisano, with nine counts of child abuse and one count of negligent homicide. Lance Jagger was also charged with negligent homicide and child abuse, but the charges were dropped after he agreed to testify against Cartisano. 
A jury acquitted Steve Cartisano on all charges. On January 15, 1995, Aaron Bacon died from acute peritonitis while attending the North Star Wilderness Program in Utah. Nine staff members, including company co-founder Lance Jagger, were charged with abuse and neglect. Lance Jagger, William Henry, and Georgette Costigan pleaded guilty to negligent homicide. Craig Fisher was found guilty of third-degree felony abuse or neglect of a disabled child. On March 2, 1998, Nicholaus Contreraz died from complications due to an infection. Among his symptoms were chronic urinary and fecal incontinence, for which staff would force him to eat meals on the toilet and sleep in his soiled underwear as punishment. The autopsy revealed Contreraz had died from empyema with a partial collapse of his left lung. He had also contracted strep and staph infections with pneumonia and chronic bronchitis, and the coroner also discovered 71 cuts and bruises. During the investigation by the Pinal County Sheriff's Office, it was found that Nicholaus had been cleared for physical training activities by staff. The Federal Bureau of Investigation opened a broader investigation into civil rights violations at the location. The California Social Services Department investigation found widespread excessive use of physical restraint and hands-on confrontations by staff members. Trails Carolina homicide investigation On the morning of February 3, 2024, a 12-year-old boy died after one night at the Trails Carolina wilderness program. The Transylvania County Sheriff's Office launched an investigation into the boy's death. On February 6, the investigators executed a search warrant on Trails Carolina. Trails Carolina refused to cooperate with the investigation. On February 13, 2024, the North Carolina Department of Health and Human Services told Trails Carolina to stop new admissions during the investigation, and on February 18, 2024, all children were removed from Trails Carolina. On May 17, 2024, the North Carolina Department of Health and Human Services permanently revoked Trails Carolina's license. On June 25, 2024, the medical examiner's report was released; the cause of death was determined to be asphyxia, and the death was ruled a homicide. Timeline 1967: CEDU High School is founded by Mel Wasserman, a former Synanon member, in Running Springs, California. May 30, 1970: The Élan School is founded by Joe Ricci, a former resident of Daytop Village, in Naples, Maine. February 16, 1982: Nancy Reagan visits Straight, Inc. in Florida. December 27, 1982: Philip Williams Jr. dies in the Élan School boxing ring. May 26, 1983: A federal jury awards a Straight, Inc. patient $220,000 after finding that the patient had been falsely imprisoned by the foundation. November 11, 1985: Princess Diana and Nancy Reagan visit Straight, Inc. 1987: Scientology's troubled teen program Mace-Kingsley Ranch School opens in California. January 15, 1995: Aaron Bacon dies from acute peritonitis while attending the North Star Wilderness Program in Utah. December 21, 1996: Craig Fisher is sentenced over his role in Aaron Bacon's death. 1998: Robert Lichfield creates the World Wide Association of Specialty Programs and Schools. 1999: National Association of Therapeutic Schools and Programs is founded. February 2001: 14-year-old Ryan Lewis dies by suicide while enrolled at Alldredge Academy in West Virginia. July 2001: 14-year-old Tony Haynes is forced to eat dirt and dies at a desert boot camp for teenagers. 
July 15, 2002: Ian August dies from heat exhaustion while attending the Skyline Journey Wilderness Program in Utah. The Utah Department of Human Service revoked Skyline Journey's state license on the 25 October 2002. December 25, 2002: 17-year-old Kiley Jaquays falls to her death while visiting the Bloomington Caves in Utah with her residential treatment center, Integrity House. May 23, 2003: Costa Rican government officials shut down the Academy at Dundee Ranch, a behavior modification program run by the US-based company World Wide Association of Specialty Programs and Schools. February 8, 2004: 16-year-old Daniel Yuen goes missing from CEDU High School in California. October 2004: Karlye Newman dies by suicide at Spring Creek Lodge Academy. 2006: Yang Yongxin establishes an "Internet-addiction camp" inside the Fourth Hospital of Linyi in China and begins practicing electroconvulsive therapy. August 28, 2009: Sergey Blashchishen dies from heat exhaustion during a hike whilst attending Sage Walk, a wilderness therapy program operated by Aspen Education Group. February 8, 2013: The hacking collective group Anonymous launches #OpTTIabuse, a campaign against the troubled teen industry. November 2015: Ten teenagers are arrested after a riot at Copper Hills Youth Center in Utah. February 2017: 16-year-old Ben Jackson dies by suicide at Montana Academy. July 10, 2019: Red Rock Canyon School in Utah closes after a riot breaks out in April 2019. April 2020: 16-year-old Cornelius Fredericks dies while being restrained at youth program in Michigan. October 9, 2020: American socialite Paris Hilton and other former residents of Provo Canyon School lead a silent protest against the school in Provo, Utah. January 16, 2022: A 14-year-old girl dies from medical concerns at Maple Lake Academy, a residential treatment center in Utah. August 31, 2022: Agape Baptist Academy is served an indictment for transporting a California teenager and violating a protection order. January 11, 2023: Agape Baptist Academy announces plans for permanent closure. February 15, 2024: Open Sky Wilderness closes after years of controversy surrounding the effectiveness of wilderness therapy programs. August 22, 2024: Evoke Therapy, a wilderness program located out of Santa Clara, UT, announces their intention to close down after over 20 years of operations. Media Children of Darkness, a 1983 documentary on the Élan School Not My Kid, a 1985 TV movie based on the Straight, Inc. program Locked in Paradise, a television program on the troubled teen program called Tranquility Bay, aired in December 2004. Boot Camp, a 2008 film based on the WWASP program Paradise Cove, located in Samoa. Kidnapped for Christ, a documentary released in 2014 about a Christian behavior modification program. The Last Stop, a documentary on the Élan School released in 2017. This Is Paris, a documentary on Paris Hilton's experience in various troubled-teen programs, released in 2020. Hell Camp: Teen Nightmare, a documentary released in December 2023. It is about a wilderness therapy program called the Challenger Foundation in Utah, and covers the controversial conditions of the program as well as the death of Kristen Chase. Joe versus Elan School, an autobiographical, web-based graphic novel. The Program: Cons, Cults, and Kidnapping is a 2024 American true-crime documentary series, directed by Katherine Kubler. 
It follows Kubler and former classmates of hers from the Academy at Ivy Ridge, a behavior modification facility that was marketed as a boarding school, as they reflect on the abusive conditions they experienced in the program and the lasting trauma. The Sunshine Place, a podcast whose second season covers the stories of students placed within Straight, Inc. References Further reading Strangeways, Sam. (11 December 2019) "Sending troubled children to US cost $33m" Juvenile delinquency Human rights abuses Conversion therapy Religion and mental health Youth rights Behavior modification
Troubled teen industry
Biology
3,444
5,882,899
https://en.wikipedia.org/wiki/Monokine
A monokine is a type of cytokine produced primarily by monocytes and macrophages. Some monokines are: interleukin 1 tumor necrosis factor-alpha alpha and beta interferon colony stimulating factors Functions Monokines released from macrophages can attract neutrophils via the process of chemotaxis. The monokine induced by interferon gamma acts on receptors found on immune cells such as T cells, hindering their ability to function as regulators of the body and thereby promoting tumor progression in the cancer state. Its activity is also important in other diseases, such as pulmonary tuberculosis, where researchers have identified it as a biomarker. See also Lymphokine References External links Cytokines
Monokine
Chemistry
170
25,669,698
https://en.wikipedia.org/wiki/Dunstable%20Swan%20Jewel
The Dunstable Swan Jewel is a gold and enamel brooch in the form of a swan made in England or France in about 1400 and now in the British Museum, where it is on display in Room 40. The jewel was excavated in 1965 on the site of Dunstable Friary in Bedfordshire, and is presumed to have been intended as a livery badge given by an important figure to his supporters; the most likely candidate was probably the future Henry V of England, who was Prince of Wales from 1399. The jewel is a rare medieval example of the then recently developed and fashionable white opaque enamel used in to almost totally encase an underlying gold form. It is invariably compared to the white hart badges worn by King Richard II and by the angels surrounding the Virgin Mary in the painted Wilton Diptych of around the same date, where the chains hang freely down. The jewel is formed as a standing or walking mute swan gorged (collared) by a gold royal crown with six fleur-de-lys tines. There is a gold chain terminating in a ring attached to the crown, and the swan has a pin and catch on its right side for fastening the brooch to clothes or a hat. The swan is high and wide, and the length of the chain is . The swan's body is in white enamel, its eyes are of black enamel, which also once covered the legs and feet, where only traces now remain. Tiny fragments of pink or red enamel remain on the beak. Livery badges The jewel is a unique survival of the most expensive form of livery badge, otherwise only known from inventories and representations in paintings. These were badges in various forms made for a leading figure bearing his personal device, and given to others who would demonstrate by wearing them that they were in some way his employees, retainers, allies or supporters. They were especially common in England in the age of "bastard feudalism" from the mid-fourteenth century until about the end of the fifteenth century, a period of intense factional conflict which saw the deposition of Richard II and the Wars of the Roses. A lavish badge like the jewel would only have been worn by the person whose device was represented, members of his family or important supporters, and possibly servants who were in regular very close contact with him. However, the jewel lacks the ultimate luxury of being set with gems, for example ruby eyes, like the gems on the lion pendants worn by Sir John Donne and his wife in their portraits by Hans Memling, now in the National Gallery, London, and several examples listed on the 1397 treasure roll of Richard II. In the Wilton Diptych, Richard's own badge has pearls on the antler tips, which the angels' badges lack. The white hart in the badge on the Treasury Roll, which the painted one may have copied, had pearls and sat on a grass bed made of emeralds, and a hart badge of Richard's inventoried in the possession of Duke Philip the Good of Burgundy in 1435 was set with 22 pearls, two spinels, two sapphires, a ruby and a huge diamond. Cheaper forms of badge were more widely distributed, sometimes very freely, similarly to modern political campaign badges and t-shirts. However, wearing the wrong badge in the wrong place could lead to personal danger. In 1377, when the young Richard II's highly unpopular uncle, John of Gaunt, was Regent, one of his more than 200 retainers, Sir John Swinton, unwisely rode through London wearing Gaunt's badge on a livery collar (an innovation of Gaunt's, probably the Collar of Esses). 
The mob attacked him, pulling him off his horse and the badge off him, and he had to be rescued by the mayor from suffering serious harm. Over twenty years later, after Gaunt's son Henry IV had deposed Richard, one of Richard's servants was imprisoned by Henry for continuing to wear Richard's livery badge. Many of the large number of badges of various liveries recovered from the Thames in London were perhaps discarded hurriedly by retainers who found themselves unwisely dressed at various times. In 1483 King Richard III ordered 13,000 fustian (cloth) badges with his emblem of a boar for the investiture of his son Edward as Prince of Wales, a huge number given the population at the time. Other grades of boar badges that have survived are in lead, silver, and gilded copper high relief, the last found at Richard's home of Middleham Castle in Yorkshire, and very likely worn by one of his household when he was Duke of Gloucester. The British Museum also has a flat lead swan badge with low relief, typical of the cheap metal badges which were similar to the pilgrim badges that were also common in the period. Apparently beginning relatively harmlessly under Edward III in a context of tournaments and courtly celebrations, by the reign of his grandson, Richard II, the badges had become seen as a social menace, and were "one of the most protracted controversies of Richard's reign", as they were used to denote the small private armies of retainers kept by lords, largely for the purpose of enforcing their lord's will on the less powerful in his area. Though they were surely a symptom rather than a cause of both local baronial bullying and the disputes between the king and his uncles and other lords, Parliament repeatedly tried to curb the use of livery badges. The issuing of badges by lords was attacked in the Parliament of 1384, and in 1388 they made the startling request that "all liveries called badges [signes], as well of our lord the king as of other lords[...] shall be abolished", because "those who wear them are flown with such insolent arrogance that they do not shrink from practising with reckless effrontery various kinds of extortion in the surrounding countryside[...] and it is certainly the boldness inspired by these badges that makes them unafraid to do these things". Richard offered to give up his own badges, to the delight of the House of Commons of England, but the House of Lords refused to give up theirs, and the matter was put off. In 1390 it was ordered that no-one below the rank of banneret should issue badges, and no one below the rank of esquire wear them. The issue was apparently quiet for a few years, but from 1397 Richard issued increasingly large numbers of badges to retainers who misbehaved (his "Cheshire archers" being especially notorious), and in the Parliament of 1399, after his deposition, several of his leading supporters were forbidden from issuing "badges of signs" again, and a statute was passed allowing only the king (now Henry IV) to issue badges, and only to those ranking as esquires and above, who were only to wear them in his presence. In the end it took a determined campaign by Henry VII to largely stamp out the use of livery badges by others than the king, and reduce them to things normally worn only by household servants. The swan as a badge The widespread use of the swan as a badge largely derives from the legend of the Swan Knight, today most familiar from Richard Wagner's opera Lohengrin. 
A group of Old French called the Crusade cycle had associated the legend with the ancestors of Godfrey of Bouillon ( 1100), the hero of the First Crusade. Although Godfrey had no legitimate issue, his family had many descendants among the aristocracy of Europe, many of whom made use of the swan in their heraldry or as a para-heraldic emblem. In England these included the important de Bohun family, which used the so-called Bohun swan as its heraldic badge; after the marriage in 1380 of Mary de Bohun ( 1394) to the future King Henry IV of England, the swan became adopted by the House of Lancaster, who continued to use it for over a century. The swan with the crown and chain is especially associated with Lancastrian use; it echoes the crown and chain of Richard II's white hart, which he began to use as a livery badge from 1390. As well as several of his own white hart badges, Richard's treasure roll of 1397 also includes a swan badge with a gold chain, perhaps presented by one of his enemies mentioned above: "Item, a gold swan enamelled white with a little gold chain hanging around the neck, weighing 2 oz., value, 46s. 8d". He declared to Parliament that he had exchanged liveries with his uncles as a sign of amity at various moments of reconciliation. After Henry seized the throne in 1399, the use of the swan emblem was transferred to his son, the future Henry V, who was made Prince of Wales at his father's coronation, and whose tomb in Westminster Abbey includes swans. It was also used by his grandson Edward of Westminster, Prince of Wales before his death in the Battle of Tewkesbury in 1471. In 1459 Edward's mother Margaret of Anjou insisted that he give swan livery badges to "all the gentlemen of Cheshire"; the type and number are unknown. The badge was also used by other families; the swan was the crest of the Beauchamp Earls of Warwick, leading supporters of the Lancastrian faction under Thomas de Beauchamp, 12th Earl of Warwick ( 1401). Eleanor de Bohun, Mary's sister, had in 1376 also married into the Plantagenet royal family, in the person of King Edward III of England's youngest son, Thomas of Woodstock, 1st Duke of Gloucester ( 1397), another prominent Lancastrian supporter, and the swan badge was used by his Stafford descendants. Mary and Eleanor were co-heiresses to huge Bohun estates, and disputes over the settlement of these continued until late into the next century, when most of their descendants had been killed in the Wars of the Roses, perhaps encouraging the continued assertion of Bohun ancestry. Henry Stafford, 2nd Duke of Buckingham, a descendant of the Beauchamps, Eleanor de Bohun and Thomas of Woodstock, and John of Gaunt, used the swan with crown and chain as his own badge. He was certainly active in trying to get the Bohun lands, and may well have also plotted to seize the throne, for which he was executed in 1483 by Richard III. Place and date of manufacture Another user of swan insignia around 1400 was John, Duke of Berry, the Valois prince who commissioned two of the most spectacular medieval works featuring white enamel , the Holy Thorn Reliquary, also in the British Museum, and the Goldenes Rössl. He has been considered as a possible commissioner of the jewel, in which case it would almost certainly have been made in Paris, and might have made its way to England after being presented. This might also have been the case if it was commissioned by an English person, especially a royal one. 
However, there are records of London goldsmiths producing white enamel works for the court, and a reliquary with many figures in white enamel and now in the Louvre may have been made in London. Other small jewels have survived in England which may have been made in London, either by native goldsmiths or the foreign ones known to have worked there. No more precise date for the jewel than "around 1400" is given by experts; this might have a wider range than many works as style is not much help in dating here. Given the royal collar of the swan, the marriage of the future Henry IV to Mary de Bohun probably provides the earliest possible date. A date Henry IV seized the throne in 1399—when his son would have been using the badge—is perhaps more likely. The difficult technique of adding elements in further colours was not perfected until about 1400, in Paris. Fixing a is more difficult, but white enamel became less fashionable after about the 1430s. Moreover, there was no Prince of Wales between 1413, when Henry V succeeded to the throne, and 1454. History Dunstable, where the ancient roads of Watling Street and the Icknield Way cross some thirty miles north of London, was frequently visited by the medieval elite. Apart from travellers passing through, tournaments were held there at least until the 1340s, and Lancastrian armies used it as a base in 1459 and 1461. The jewel was found in an excavation of the friary, in what seemed to be a deposit of rubble dating from the destruction of the buildings after the Dissolution of the Monasteries. It would appear to have been above ground until that point. However, it must have been overlooked—the scrap value of the gold itself would have prevented it from being merely discarded. After its excavation, the jewel was bought by the British Museum in 1966 for £5,000, of which £666 was a grant from the Art Fund (then NACF); other contributions were made by the Pilgrim Trust and the Worshipful Company of Goldsmiths. It is on display in Room 40. Notes References On the jewel "BM database": British Museum collection database The Dunstable Swan Jewel "BM Highlights": British Museum Highlights The Dunstable Swan Jewel Campbell, Marian. An Introduction to Medieval Enamels, 1983, HMSO for V&A Museum, Cherry, John (1987), in Jonathan Alexander & Paul Binski (eds), Age of Chivalry, Art in Plantagenet England, 1200-1400, Royal Academy/Weidenfeld & Nicolson, London 1987 Cherry, John (2003), in Marks, Richard and Williamson, Paul, eds. Gothic: Art for England 1400-1547, 2003, V&A Publications, London, (part of text given on BM database) Cherry, John (2010), The Holy Thorn Reliquary, The British Museum Press, 2010, Matthews, C. L. "Excavations on the site of the Dominican Friary, Dunstable 1965", Mansshead Magazine, 16, 1966 Platt, Colin, Medieval England: A Social History and Archaeology from the Conquest to 1600 A.D., Routledge, 1994 Robinson, James. Masterpieces of Medieval Art, 2008, British Museum Press, Stratford, Jenny, The swan badge and the Dunstable Swan, and other pages as specified, in Richard II's Treasure; the riches of a medieval king, website by The Institute of Historical Research and Royal Holloway, University of London, 2007 Tait, Hugh. Catalogue of the Waddesdon Bequest in the British Museum, 1986, British Museum Press, On livery badges Brown, Peter. A Companion to Chaucer, Wiley-Blackwell, 2002, , Google books Campbell, Lorne. 
National Gallery Catalogues: The Fifteenth Century Netherlandish Paintings, National Gallery Publications, London, 1998, Castor, Helen. The king, the crown, and the Duchy of Lancaster: public authority and private power, 1399-1461, Oxford University Press, 2000, , . Google books Fox-Davies, Arthur Charles. Heraldic Badges, J. Lane, 1907. Reprint by BiblioBazaar, LLC, 2009, , Google books Given-Wilson, Chris, Richard II and the Higher Nobility, in Goodman, Anthony and Gillespie, James (eds): Richard II: The Art of Kingship, Oxford University Press, 2003, , Google books Siddons, Michael Powell. Heraldic Badges in England and Wales (partial pdf), 4 vols, Boydell & Brewer, 2009, Steane, John. The Archaeology of the Medieval English Monarchy. Routledge, 1999. Google books Further reading Cherry, John. "The Dunstable Swan Jewel", Journal of the British Archaeological Association, XXXII, 1969 Evans, Vivienne (ed). The Dunstable Swan Jewel, Dunstable Museum Trust, 1982 Gordon, D. Making and meaning: The Wilton Diptych, London: National Gallery, 1993 Wagner, A. R. "The Swan Badge and the Swan Knight", Archaeologia, 97, 1959 Works in vitreous enamel Medieval European metalwork objects Medieval European objects in the British Museum Gold objects Swans in art Badges Individual brooches Heraldic charges
Dunstable Swan Jewel
Mathematics
3,323
78,643,306
https://en.wikipedia.org/wiki/HD%2077191
HD 77191 is a spectroscopic binary composed of a Sun-like variable star and a probable red dwarf, located in the zodiac constellation of Cancer. It has the variable-star designation HL Cancri (abbreviated to HL Cnc). With an apparent magnitude of 8.88, it is too faint to be seen by the naked eye but observable using binoculars as a yellow-hued dot of light. It is located at a distance of according to Gaia EDR3 parallax measurements, and is receding farther away from the Sun at a heliocentric radial velocity of 7.10 km/s. The star is part of the Castor stream, a moving group of young stars that includes some of the brightest stars in the night sky, such as Castor, Fomalhaut, and Vega. Stellar properties The primary star is a G-type main-sequence star with the spectral type G0V, almost identical to the Sun in mass, effective temperature, and metallicity, but approximately 7% smaller in radius. Its spectrum shows clear signs of high stellar activity and a strong lithium doublet spectral line at wavelength 6707.8 Å, indicative of its youth, with an estimated age of . Accordingly, the star displays large starspots, which are responsible for slight variations in its brightness, first discovered in 2000 with a mean amplitude of about 0.025 mag and a period of (which is also the star's rotation period). Hence, the star is classified as a BY Draconis variable. Data collected by Hipparcos suggested that the star was single, but radial velocity observations via the Coravel spectrograph at the University of Cambridge yielded a 44-day period orbit for a binary companion. By matching the primary's rotational velocities measured through Doppler broadening and its photometric period, the mass of the unseen secondary star is placed at roughly 0.38 , making it likely a red dwarf. References G-type main-sequence stars M-type main-sequence stars Binary stars Spectroscopic binaries Cancer (constellation) Cancri, HL 077191 BD+11 01961 J09012277+1043585 044303
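The mass constraint described above rests on comparing the spectroscopic rotation (Doppler broadening) with the photometric rotation period. The following is a minimal illustrative sketch of that comparison, not a reproduction of the published analysis: the radius follows the "7% smaller than the Sun" figure quoted above, while the rotation period and v sin i values are placeholder assumptions (the period value is elided in the text).

```python
import math

R_SUN_KM = 695_700.0   # solar radius in km (IAU nominal value)
DAY_S = 86_400.0       # seconds per day

def equatorial_velocity(radius_rsun: float, period_days: float) -> float:
    """Equatorial rotation velocity (km/s) from stellar radius and rotation period."""
    circumference_km = 2.0 * math.pi * radius_rsun * R_SUN_KM
    return circumference_km / (period_days * DAY_S)

# Placeholder inputs: radius ~7% smaller than the Sun (as stated above);
# the rotation period and measured v sin i are illustrative assumptions only.
v_eq = equatorial_velocity(radius_rsun=0.93, period_days=9.0)
v_sini_measured = 4.5  # km/s, hypothetical Doppler-broadening value

# Comparing the two yields sin(i); the inclination, combined with the
# spectroscopic orbit, is what constrains the unseen companion's mass.
sin_i = min(v_sini_measured / v_eq, 1.0)
print(f"v_eq = {v_eq:.2f} km/s, implied sin(i) = {sin_i:.2f}")
```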
HD 77191
Astronomy
467
65,396,148
https://en.wikipedia.org/wiki/Backusella%20luteola
Backusella luteola is a species of zygote fungus in the order Mucorales. It was described by Andrew S. Urquhart and James K. Douch in 2020. The specific epithet refers to the yellow colony colouration. The type locality is Jack Cann Reserve, Australia. See also Fungi of Australia References External links Zygomycota Fungi described in 2020 Fungus species
Backusella luteola
Biology
85
22,103,504
https://en.wikipedia.org/wiki/Goss%20zeta%20function
In the field of mathematics, the Goss zeta function, named after David Goss, is an analogue of the Riemann zeta function for function fields. proved that it satisfies an analogue of the Riemann hypothesis. proved results for a higher-dimensional generalization of the Goss zeta function. References Zeta and L-functions
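As a concrete sketch of what such a function-field analogue looks like (an illustration under standard conventions, not a statement drawn from the cited papers), the simplest special case, for the rational function field and a positive integer argument n, sums over monic polynomials:

```latex
% Sketch only: the simplest instance of the Goss zeta function, for the
% rational function field \mathbb{F}_q(T), evaluated at a positive integer n.
% The sum runs over monic polynomials and converges in the completion
% \mathbb{F}_q((1/T)).
\[
  \zeta(n) \;=\; \sum_{\substack{a \in \mathbb{F}_q[T] \\ a \text{ monic}}} \frac{1}{a^{\,n}},
  \qquad n \in \mathbb{Z}_{>0}.
\]
```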
Goss zeta function
Mathematics
69
22,132,813
https://en.wikipedia.org/wiki/Surgical%20lighting
A surgical light – also referred to as an operating light or surgical lighthead – is a medical device intended to assist medical personnel during a surgical procedure by illuminating a local area or cavity of the patient. A combination of several surgical lights is often referred to as a "surgical light system". History Technical development In the mid-1850s, operating rooms were built towards the southeast with windows in the ceiling to benefit from natural sunlight as much as possible. The biggest problem was that the lighting, and thus whether a surgical procedure could be performed at all, depended on the time of day and weather conditions. Furthermore, a doctor, nurse or medical equipment easily blocked the light. The use of mirrors on the four corners of the ceiling to reflect sunlight towards the operating table only slightly alleviated these problems. Attempts were made to use an optical condenser in an indirect light to reduce the heating, but without success. The entrance of electric lights into the operating room in the 1880s was accompanied by problems. With early electrical technology, control of the light emitted was poor. Electric light was still moving and diffuse, with great heat radiation. Many operating room lights used halogen or xenon lamps, some with backup lamps that operated in case of lamp failure, until the advent of light-emitting diodes (LEDs) as light sources from 2007 onwards, which removed the problem of heat radiation and reduced energy requirements. Early LED surgical lamps suffered from color shadows, since several LEDs with distinct colors and their own reflectors were used, but modern LED surgical lamps do not have this problem. Surgical lights can have cameras that are pointed at the surgical field, and many surgical lights can be used with disposable handles. Terminology and measurements Lux Unit for the amount of visible light measured by a luxmeter at a certain point. Central illuminance (Ec) Illuminance (measured in lux) at 1 m distance from the light-emitting surface in the light field centre. Light field centre Point in the light field (lighted area) where illuminance reaches maximum lux intensity. It is the reference point for most measurements. Depth of illumination The distance between the points of 20% illumination intensity above and below the center point. From the point of maximum illumination, which is the center of the light field 1 meter from the light-emitting surface, the photometer is moved toward the light until the light intensity measured falls to 20% of the maximum value. The distance between the center and this point is defined as L1. The similarly measured distance in the direction away from the light is L2. The depth of illumination without needing to refocus is the sum of the two distances L1 and L2. In the second edition of the IEC standard, published in 2009, the threshold value was revised from 20% to 60%. Shadow dilution The light's ability to minimize the effect of obstructions. Light field diameter (D10) Diameter of the light field around the light field centre, ending where the illuminance reaches 10% of Ec. The value reported is the average of four different cross sections through the light field centre. D50 Diameter of the light field around the light field centre, ending where the illuminance reaches 50% of Ec. 
The value reported is the average of four different cross sections through the light field centre Norms and requirements for surgical light The International Electrotechnical Commission (IEC) created the document IEC 60601-2-41 – Particular requirements for the basic safety and essential performance of surgical luminaires and luminaires for diagnosis, 2009 to establish norms and guidelines for the characteristics of a surgical and examination light to secure safety for the patient as well as lower the risk to a reasonable level when the light is used according to the user manual. Some of the standards for surgical lightheads are the following: Homogeneous light: The light should offer good illumination on a flat, narrow or deep surface in a cavity, despite obstacles such as surgeons' heads or hands. Lux: The central illuminance should be between 160,000 and 40,000 lux. Light field diameter: The D50 diameter should be at least 50% of D10. Colour rendition: For the purpose of distinguishing true tissue colour in a cavity, the colour rendering index (Ra) should be between 85 and 100. Backup possibility: In case of interruption of the power supply, the light should be restored within 5 seconds with at least 50% of the previous lux intensity, but not less than 40,000 lux. Within 40 seconds the light should be completely restored to the original brightness. Announcement: The IEC document also mentions what needs to be notified to the user. For example, the voltage and power consumption should be marked on or near the lampholder as well as on the lighthead. In the instructions for use the following should be announced. Cleaning and decontamination of the surgical light Safety aspects of the optical filter (purpose and warning to prevent removal) Central illuminance Light field diameter Depth of illumination Shadow dilution Correlated colour temperature and colour rendering index Total irradiance Cleaning and disinfecting Handling of the lighthead in case of failure How the user should respect the national rules for hygiene and disinfecting References Extrait de la revue Techniques Hospitalières noo 400 Janvier/1979 "L’éclairage en salle d’opération" by M. Hainault p. 47 IEC International, 60601-2-41 Medical electrical equipment - Part 2-41: Particular requirements for the basic safety and essential performance of surgical luminaires and luminaires for diagnosis Medical equipment Types of lamp Light fixtures
Surgical lighting
Biology
1,137
76,362,010
https://en.wikipedia.org/wiki/Buellia%20nashii
Buellia nashii is a species of lichen characterized by its crustose thallus, typically found in the Sonoran Desert Region and adjacent areas. It was first described by Bungartz et al. The species is named in honor of Dr. Thomas H. Nash III, a notable lichenologist and the Ph.D. supervisor of the author. Morphology The thallus appears as a crust, dense in texture, showcasing a spectrum of hues ranging from ivory to deep brown or gray. Its surface varies from smooth to deeply fissured, sometimes adorned with fine or coarse pruina. Apothecia are lecideine in nature, meaning they are sessile and predominantly black, often with thin to thick margins. As they mature, the disc typically darkens and becomes convex. Ascospores are brown, with a single septum, and are shaped either oblong or ellipsoid. Pycnidia are infrequent and take on an urceolate to globose form, housing bacilliform conidia within. Chemistry Typically, Buellia nashii contains the depside atranorin and the depsidones norstictic and connorstictic acid. However, some specimens may lack norstictic acid and instead contain stictic and hypositictic acids. Spot tests usually result in K+ yellow to red, P+ yellow reactions, and negative reactions for C, KC, and CK. The thallus is not amyloid, but apothecia react amyloid in Lugol's solution. Ecology Buellia nashii is commonly found on a variety of siliceous rock substrates, occasionally on sandstones with small amounts of carbonates. It thrives in arid environments, particularly in the Sonoran Desert Region. Distribution The species has a wide distribution throughout the Sonoran Desert Region and adjacent areas, such as Arizona, southern California, Baja California, Baja California Sur, and Chihuahua. Identification Buellia nashii closely resembles B. dispersa but can be distinguished by its chemistry and exciple pigmentation. While both species have similar thalli, B. nashii contains norstictic acid and exhibits a characteristic aeruginose pigment in the outer exciple cells. References nashii Fungus species Fungi described in 2004
Buellia nashii
Biology
477
20,182,719
https://en.wikipedia.org/wiki/Cartoning%20machine
A cartoning machine, or cartoner, is a packaging machine that erects, closes, folds, side-seams and seals cartons. Such machines form a cartonboard blank into a carton and fill it with a product, a bag of products, or a number of products; after filling, the machine engages the carton's tabs and slots or applies adhesive to close both ends, completely sealing the carton. Types Cartoning machines can be divided into two types: Horizontal cartoning machines Vertical cartoning machines A cartoning machine which picks a single blank from a stack of folded cartons and erects it, fills it with a product, a bag of products, or a number of products horizontally through an open end, and closes it by tucking the end flaps of the carton or applying glue or adhesive. The product may be pushed into the carton either by a mechanical sleeve or by pressurized air. For many applications, however, the products are inserted into the carton manually. This type of cartoning machine is widely used for packaging foodstuffs, confectionery, medicine, cosmetics, sundry goods, etc. A cartoning machine which erects a folded carton, fills it with a product or a number of products vertically through an open end, and closes it by either tucking the end flaps of the carton or applying glue or adhesive is called an end-load cartoning machine. Cartoning machines are widely used for packaging bottled foodstuffs, confectionery, medicine, cosmetics, etc., and can vary based on the scale of business. Carton interface The machinery needs to be able to consistently form, fill, and seal the specific carton of interest. Several factors might come into play: carton design, size, surface of the carton, humidity, type of adhesive, stiffness, type of score, etc. Some board factors can be simulated; others might need a finite element analysis or an experimental process capability study. See also Folding carton References Books Hanlon, Kelsey, and Forcinio; Handbook of Package Engineering (CRC Press, 1998) Soroka, W, "Fundamentals of Packaging Technology", IoPP, 2002, Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, Secondary sector of the economy Packaging machinery
Cartoning machine
Engineering
477
60,463,888
https://en.wikipedia.org/wiki/Alterite
Alterite (IMA symbol: Atr) is a yellow-green mineral with the chemical formula ZnFe(SO)(CO)(OH)·17HO. Its type locality is Coconino County, Arizona. It is found exclusively in logs that have mineralized. References Zinc minerals Iron(III) minerals Sulfate minerals Oxalate minerals Hydroxide minerals 17 Minerals described in 2018
Alterite
Chemistry
82
17,382,620
https://en.wikipedia.org/wiki/Supercritical%20angle%20fluorescence%20microscopy
Supercritical angle fluorescence microscopy (SAF) is a technique to detect and characterize fluorescent species (proteins, biomolecules, pharmaceuticals, etc.) and their behaviour close or even adsorbed or linked at surfaces. The method is able to observe molecules in a distance of less than 100 to 0 nanometer from the surface even in presence of high concentrations of fluorescent species around. Using an aspheric lens for excitation of a sample with laser light, fluorescence emitted by the specimen is collected above the critical angle of total internal reflection selectively and directed by a parabolic optics onto a detector. The method was invented in 1998 in the laboratories of Stefan Seeger at University of Regensburg/Germany and later at University of Zurich/Switzerland. SAF microscopy principle The principle how SAF Microscopy works is as follows: A fluorescent specimen does not emit fluorescence isotropically when it comes close to a surface, but approximately 70% of the fluorescence emitted is directed into the solid phase. Here, the main part enters the solid body above the critical angle. When the emitter is located just 200 nm above the surface, fluorescent light entering the solid body above the critical angle is decreased dramatically. Hence, SAF Microscopy is ideally suited to discriminate between molecules and particles at or close to surfaces and all other specimen present in the bulk. Typical SAF-setup The typical SAF setup consists of a laser line (typically 450-633 nm), which is reflected into the aspheric lens by a dichroic mirror. The lens focuses the laser beam in the sample, causing the particles to fluoresce. The fluorescent light then passes through a parabolic lens before reaching a detector, typically a photomultiplier tube or avalanche photodiode detector. It is also possible to arrange SAF elements as arrays, and image the output onto a CCD, allowing the detection of multiple analytes. Selected publications Fluorescence techniques Microscopy Laser applications
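Since the method hinges on collecting only the light emitted above the critical angle of total internal reflection, a small numerical sketch may help. The refractive indices below are typical assumed values for an aqueous sample on glass, not figures taken from the article.

```python
import math

def critical_angle_deg(n_sample: float, n_substrate: float) -> float:
    """Critical angle of total internal reflection at the sample/substrate
    interface, measured inside the substrate (degrees)."""
    if n_sample >= n_substrate:
        raise ValueError("requires n_substrate > n_sample")
    return math.degrees(math.asin(n_sample / n_substrate))

# Typical assumed values: aqueous sample (n ~ 1.33) on a glass substrate (n ~ 1.52).
theta_c = critical_angle_deg(n_sample=1.33, n_substrate=1.52)
print(f"Critical angle ≈ {theta_c:.1f} degrees")
# Fluorescence collected above this angle (the supercritical-angle component)
# originates predominantly from emitters very close to the interface.
```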
Supercritical angle fluorescence microscopy
Chemistry,Biology
407
29,655,328
https://en.wikipedia.org/wiki/Fonseca%20Prize
The Fonseca Prize of science communication () is an annual award created by the University of Santiago de Compostela and the Consortium of Santiago under the auspices of the Program ConCiencia. The award is named after Alonso III Fonseca, one of the earliest patrons of the university. Recipients References https://www.abc.es/espana/galicia/abci-convocan-o-premio-fonseca-divulgacion-cientifica-200803010300-1641688831292_noticia.html https://www.lavozdegalicia.es/noticia/sociedad/2019/07/20/dia-armstrong-dijo/00031563654757940780309.htm Academic awards Spanish awards Awards established in 2008 Science communication awards
Fonseca Prize
Technology
184
52,464,221
https://en.wikipedia.org/wiki/YK-11
YK-11 is a synthetic steroidal selective androgen receptor modulator (SARM). It is a gene-selective partial agonist of the androgen receptor (AR) and does not induce the physical interaction between the NTD/AF1 and LBD/AF2 (known as the N/C interaction), which is required for full transactivation of the AR. The drug has anabolic activity in vitro in C2C12 myoblasts and shows greater potency than dihydrotestosterone (DHT) in this regard. It has been investigated as a potential treatment for sepsis-induced muscle wasting in animal studies. See also Cl-4AS-1 MK-0773 MK-4541 TFM-4AS-1 References Estranes Ethers Ketones Selective androgen receptor modulators
YK-11
Chemistry
177
524,981
https://en.wikipedia.org/wiki/Ernst%20Abbe
Ernst Karl Abbe (23 January 1840 – 14 January 1905) was a German businessman, optical engineer, physicist, and social reformer. Together with Otto Schott and Carl Zeiss, he developed numerous optical instruments. He was also a co-owner of Carl Zeiss AG, a German manufacturer of scientific microscopes, astronomical telescopes, planetariums, and other advanced optical systems. Personal life Abbe was born 23 January 1840 in Eisenach, Saxe-Weimar-Eisenach, to Georg Adam Abbe and Elisabeth Christina Barchfeldt. He came from a humble home – his father was a foreman in a spinnery. Supported by his father's employer, Abbe was able to attend secondary school and to obtain the general qualification for university entrance with fairly good grades, at the Eisenach Gymnasium, which he graduated from in 1857. By the time he left school, his scientific talent and his strong will had already become obvious. Thus, in spite of the family's strained financial situation, his father decided to support Abbe's studies at the Universities of Jena (1857–1859) and Göttingen (1859–1861). During his time as a student, Abbe gave private lessons to improve his income. His father's employer continued to fund him. Abbe was awarded his PhD in Göttingen on 23 March 1861. While at school, he was influenced by Bernhard Riemann and Wilhelm Eduard Weber, who also happened to be one of the Göttingen Seven. This was followed by two short assignments at the Göttingen observatory and at Physikalischer Verein in Frankfurt (an association of citizens interested in physics and chemistry that was founded by Johann Wolfgang von Goethe in 1824 and still exists today). On 8 August 1863 he qualified as a university lecturer at the University of Jena. In 1870, he accepted a contract as an associate professor of experimental physics, mechanics and mathematics in Jena. In 1871, he married Else Snell, daughter of the mathematician and physicist Karl Snell, one of Abbe's teachers, with whom he had two daughters. He attained full professor status by 1879. He became director of the Jena astronomical and meteorological observatory in 1878. In 1889, he became a member of the Bavarian Academy of Sciences and Humanities. He also was a member of the Saxon Academy of Sciences. He was relieved of his teaching duties at the University of Jena in 1891. Abbe died 14 January 1905 in Jena. He was an atheist. Life work In 1866, he became a research director at the Zeiss Optical Works, and in 1868 he invented the apochromatic lens, a microscope lens which eliminates both the primary and secondary color distortion. By 1870, Abbe invented the Abbe condenser, used for microscope illumination. In 1871, he designed the first refractometer, which he described in a booklet published in 1874. He developed the laws of image of non-luminous objects by 1872. Zeiss Optical Works began selling his improved microscopes in 1872, by 1877 they were selling microscopes with homogenous immersion objective, and in 1886 his apochromatic objective microscopes were being sold. He created the Abbe number, a measure of any transparent material's variation of refractive index with wavelength and Abbe's criterion, which tests the hypothesis, that a systematic trend exists in a set of observations (in terms of resolving power this criterion stipulates that an angular separation cannot be less than the ratio of the wavelength to the aperture diameter, see angular resolution). 
Already a professor in Jena, he was hired by Carl Zeiss to improve the manufacturing process of optical instruments, which back then was largely based on trial and error. Abbe was the first to define the term numerical aperture, as the sine of the half angle multiplied by the refractive index of the medium filling the space between the cover glass and front lens. Abbe is credited by many for discovering the resolution limit of the microscope and the formula d = λ/(2NA) (published in 1873), although in a publication in 1874 Helmholtz states this formula was first derived by Joseph Louis Lagrange, who had died 61 years prior. Helmholtz was so impressed as to offer a professorship at the University of Berlin, which Abbe refused due to his ties to Zeiss. Abbe was in the camp of the wide aperturists, arguing that microscopic resolution is ultimately limited by the aperture of the optics, but also argued that, depending on the application, there are other parameters that should be weighted over the aperture in the design of objectives. In Abbe's 1874 paper, titled "A Contribution to the Theory of the Microscope and the nature of Microscopic Vision", Abbe states that the resolution of a microscope is inversely dependent on its aperture, but without proposing a formula for the resolution limit of a microscope. In 1876, Abbe was offered a partnership by Zeiss and began to share in the considerable profits. Although the first theoretical derivations of d = λ/(2NA) were published by others, it is fair to say that Abbe was the first to reach this conclusion experimentally. In 1878, he built the first homogeneous immersion system for the microscope. The objectives that the Abbe–Zeiss collaboration was producing were of ideal ray geometry, allowing Abbe to find that the aperture sets the upper limit of microscopic resolution, not the curvature and placement of the lenses. Abbe's first publication of d = λ/(2NA) occurred in 1882. In this publication, Abbe states that both his theoretical and experimental investigations confirmed d = λ/(2NA). Abbe's contemporary Henry Edward Fripp, English translator of Abbe's and Helmholtz's papers, puts their contributions on equal footing. He also perfected the interference method by Fizeau in 1884. Abbe, Zeiss, Zeiss' son, Roderich Zeiss, and Otto Schott formed, in 1884, the Jenaer Glaswerk Schott & Genossen. This company, which in time would in essence merge with Zeiss Optical Works, was responsible for research and production of 44 initial types of optical glass. Working with telescopes, he built an image reversal system in 1895. In order to produce high quality objectives, Abbe made significant contributions to the diagnosis and correction of optical aberrations, both spherical aberration and coma aberration, which is required for an objective to reach the resolution limit of d = λ/(2NA). In addition to spherical aberration, Abbe discovered that the rays in optical systems must have constant angular magnification over their angular distribution to produce a diffraction-limited spot, a principle known as the Abbe sine condition. So monumental and advanced were Abbe's calculations and achievements that Frits Zernike based his phase contrast work on them, for which he was awarded the Nobel Prize in 1953, and Hans Busch used them to work on the development of the electron microscope. During his association with Carl Zeiss' microscope works, he was at the forefront not only of the field of optics but also of labor reform. 
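As a numerical illustration of the numerical aperture definition and the resolution limit discussed above (a minimal sketch with assumed example values, not a reproduction of Abbe's own calculations):

```python
import math

def numerical_aperture(n_medium: float, half_angle_deg: float) -> float:
    """Abbe's definition: NA = n * sin(theta), with theta the half angle of the
    cone of light accepted by the objective."""
    return n_medium * math.sin(math.radians(half_angle_deg))

def abbe_limit_nm(wavelength_nm: float, na: float) -> float:
    """Abbe diffraction limit d = wavelength / (2 * NA)."""
    return wavelength_nm / (2.0 * na)

# Example values (assumptions chosen only for illustration): green light and
# an oil-immersion objective in the spirit of Abbe's homogeneous immersion systems.
na = numerical_aperture(n_medium=1.515, half_angle_deg=67.5)
print(f"NA ≈ {na:.2f}")                          # ≈ 1.40
print(f"d ≈ {abbe_limit_nm(550.0, na):.0f} nm")  # ≈ 196 nm
```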
He founded the social democratic Jenaische Zeitung (newspaper) in 1890 and in 1900, introduced the eight-hour workday, in remembrance of the 14-hour workday of his own father. In addition, he created a pension fund and a discharge compensation fund. In 1889, Ernst Abbe set up and endowed the Carl Zeiss Foundation for research in science. The aim of the foundation was "to secure the economic, scientific, and technological future and in this way to improve the job security of their employees." He made it a point that the success of an employee was based solely on their ability and performance, not on their origin, religion, or political views. In 1896, he reorganized the Zeiss optical works into a cooperative with profit-sharing. His social views were so respected as to be used by the Prussian state as a model and idealized by Alfred Weber in the 1947 book Schriften der Heidelberger Aktionsgruppe zur Demokratie und Zum Freien Sozialismus (Writings of the Heidelberg Action Group on Democracy and Free Socialism). The crater Abbe on the Moon was named in his honour. Bibliography Abbe was a pioneer in optics, lens design, and microscopy, and an authority of his time. He left us with numerous publications of his findings, inventions, and discoveries. Below is a list of publications he authored including many links to the scanned Google Books pages. See also Abbe condenser Abbe diffraction limit Abbe error Abbe eyepiece Abbe number Abbe prism Abbe refractometer Abbe sine condition Abbe–Koenig prism Abbe–Porro prism Aberration in optical systems Crown glass (optics) Dermatoscopy Diaphragm (optics) Calculation of glass properties Optical aberration Optical dilatometer German inventors and discoverers Notes References Sources Further reading Volkmann, Harald. "Ernst Abbe and his work." Applied Optics 5.11 (1966): 1720–1731. External links Abbe's contributions Basic Principles of Refractometers (and Polarimeters) Molecular Expressions's biography Abbe Refractometer by Carl Zeiss made in 1904 1840 births 1905 deaths 19th-century German businesspeople 19th-century German inventors 19th-century German physicists Academic staff of the University of Jena Carl Zeiss AG people Fellows of the Royal Microscopical Society German atheists German manufacturing businesspeople German scientific instrument makers History of glass Glass engineering and science Glass physics Lens designers Members of the Göttingen Academy of Sciences and Humanities Members of the Prussian Academy of Sciences Microscopists Optical engineers People from Eisenach People from Saxe-Weimar-Eisenach University of Göttingen alumni University of Jena alumni
Ernst Abbe
Physics,Chemistry,Materials_science,Engineering
1,980
11,570,913
https://en.wikipedia.org/wiki/TFI-5
TFI-5 in computer networking is a standardized TDM Fabric to Framer Interface by the Optical Internetworking Forum (OIF) that allows both framer components and switch components from multiple vendors to interoperate, facilitating the development of add/drop multiplexers, TDM cross-connects and grooming switches. The TFI-5 standard includes link integrity monitoring, connection management and mapping mechanisms for both SONET/SDH and non-SONET/SDH clients such as Ethernet and Fibre Channel. The main application of TFI-5 is time-division multiplexing (TDM). This contrasts with other OIF standards such as SPI-5, which target packet/cell applications. OIF level 5 standards cover interfaces of 40 Gbit/s. See also TDM SFI-5 Optical Internetworking Forum Framer References Multiplexing
TFI-5
Technology
175
35,322,432
https://en.wikipedia.org/wiki/C27H22O18
The molecular formula C27H22O18 (molar mass: 634.43 g/mol, exact mass: 634.08062 u) may refer to: Corilagin, an ellagitannin Punicacortein A, an ellagitannin Punicacortein B, an ellagitannin Strictinin, an ellagitannin
C27H22O18
Chemistry
96
2,380,176
https://en.wikipedia.org/wiki/ActiMates
ActiMates are a short-lived and discontinued series of interactive toys released by Microsoft Kids in September 1997. The toys are in the form of licensed dolls which can interact with episodes of their respective television series from 1997 to 2000 or on special ActiMates-compatible VHS tapes and computer games. The toys were marketed as educational tools and gave positive affirmations for correct answers from the user. Characters Microsoft released seven characters based on their three respective television series: Barney in 1997, Arthur, with his sister D.W. from the 1996 series in 1998 and the Teletubbies in 1999. Barney was the first to be released, was first displayed at the New York Toy Fair that year and became a success during the holiday season. Television and computer interaction The dolls can interact with a television set and computer (the Teletubbies can't interact with the computer) using TV and PC packs. They can also be played standalone without the VCR, even with taped recordings on a blank VHS and computer packs. The barcode on the left side of the video frame and screen indicates that the show is ActiMates-compatible. Three ActiMates Barney PC games released at launch, with more additional software to be released for Barney. Discontinuation Microsoft discontinued the dolls in 2000 and lost the patent rights to the toys five years later. However, despite the dolls and technology being discontinued in 2000, Teletubbies episodes were ActiMates-compatible up until 2001, the toys still interacted with reruns of their respective shows (from ActiMates-compatible years) during that time for a few more years, and Arthur and D.W. could still interact with Arthur VHS releases from 2000 to 2005 (releases that feature episodes from seasons 1-4). ActiMates Barney could also interact with airings of seasons 4-6 of Barney & Friends on PBS Kids Sprout. References 1997 software Discontinued Microsoft products Toy brands Barney (franchise) Teletubbies Arthur (TV series)
ActiMates
Technology
409
23,294,909
https://en.wikipedia.org/wiki/Bamifylline
Bamifylline is a drug of the xanthine chemical class which acts as a selective adenosine A1 receptor antagonist. See also Theophylline Caffeine References Adenosine receptor antagonists Enones Primary alcohols Xanthines
Bamifylline
Chemistry
53
22,703,560
https://en.wikipedia.org/wiki/Poisson%20game
In game theory and political science, Poisson games are a class of games often used to model the behavior of large populations. One common application is determining the strategic behavior of voters with imperfect information about each other's preferences. Poisson games are most often used to model strategic voting in large electorates with secret and simultaneous voting. A Poisson game consists of a random population of players of various types, the sizes of which follow a Poisson distribution. This can occur when voters are not sure what the relative turnout of each party will be, or when they have imperfect polling information. For example, a model of the 1992 United States presidential election might include 4 types of voters: Democrats, Republicans, and two classes of Reform voters (those with second preferences of either Bill Clinton or George H.W. Bush). Main assumptions The first assumption of the model is that the total number of players of each type follows a Poisson distribution. In other words, the probability that exactly n voters of a given type turn out is P(n) = e^(−λ) λ^n / n!, where λ is the expected number of voters of that type. More important is the assumption that voters are only interested in securing the best possible election outcome for themselves, and are motivated only by the possibility of casting the deciding vote. In other words, voters are assumed not to care about expressing their true opinions; about showing support for a minor party, even if they do not win; or about allowing other voters' voices to be heard. All of these effects tend to produce more honest voting in real elections than would be found in the Poisson model. In the model, all information is publicly available, meaning that every voter can estimate the probability that each pair of candidates will be tied. An example of this would be an election with public opinion polling. Results The Poisson voting model generates several key results. Approval and score Under the Poisson model, approval voting and score voting behave identically, as each voter's best strategy involves casting a ballot that assigns every candidate either the maximum or minimum score. Plurality Under plurality, sincere voting is never a stable equilibrium with more than two candidates, i.e. many voters are incentivized to lie about their favorite candidate and vote for the lesser of two evils. For example, in the 2016 United States presidential election, some polls suggested that Gary Johnson was the majority-preferred winner. However, Johnson ultimately received only a small fraction of the vote because voters expected him to lose, creating a self-fulfilling prophecy. See also Poisson distribution Strategic voting References Game theory game classes
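The Poisson turnout assumption and the emphasis on pivotal (tied) outcomes described above can be illustrated with a short calculation. This is an illustrative sketch only; the expected vote totals below are assumptions, and the exact-tie probability is just one simple proxy for the pivot probabilities used in the literature.

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(N = k) for a Poisson random variable with mean lam (computed in log space)."""
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def exact_tie_probability(lam_a: float, lam_b: float, cutoff: int = 500) -> float:
    """Probability that two candidates with independent Poisson vote totals
    receive exactly the same number of votes."""
    return sum(poisson_pmf(k, lam_a) * poisson_pmf(k, lam_b) for k in range(cutoff))

# Assumed expected vote totals for two candidates (illustrative values only).
lam_a, lam_b = 100.0, 95.0
print(f"P(exact tie) ≈ {exact_tie_probability(lam_a, lam_b):.4f}")
```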
Poisson game
Mathematics
505
5,063,146
https://en.wikipedia.org/wiki/Applied%20general%20equilibrium
In mathematical economics, applied general equilibrium (AGE) models were pioneered by Herbert Scarf at Yale University in 1967, in two papers, and a follow-up book with Terje Hansen in 1973, with the aim of estimating the Arrow–Debreu model of general equilibrium theory with empirical data, to provide “a general method for the explicit numerical solution of the neoclassical model” (Scarf with Hansen 1973: 1). Scarf's method iterated a sequence of simplicial subdivisions which would generate a decreasing sequence of simplices around any solution of the general equilibrium problem. With sufficiently many steps, the sequence would produce a price vector that clears the market. As Scarf put it: “Brouwer's fixed point theorem states that a continuous mapping of a simplex into itself has at least one fixed point. This paper describes a numerical algorithm for approximating, in a sense to be explained below, a fixed point of such a mapping” (Scarf 1967a: 1326). Scarf never built an AGE model, but hinted that “these novel numerical techniques might be useful in assessing consequences for the economy of a change in the economic environment” (Kehoe et al. 2005, citing Scarf 1967b). His students elaborated the Scarf algorithm into a toolbox in which the price vector could be solved for any change in policies (or exogenous shocks), giving the equilibrium ‘adjustments’ needed for the prices. This method was first used by Shoven and Whalley (1972 and 1973), and was then developed through the 1970s by Scarf's students and others. Most contemporary applied general equilibrium models are numerical analogs of traditional two-sector general equilibrium models popularized by James Meade, Harry Johnson, Arnold Harberger, and others in the 1950s and 1960s. Earlier analytic work with these models examined the distortionary effects of taxes, tariffs, and other policies, along with functional incidence questions. More recent applied models, including those discussed here, provide numerical estimates of efficiency and distributional effects within the same framework. Scarf's fixed-point method was a breakthrough in the mathematics of computation generally, and specifically in optimization and computational economics. Later researchers continued to develop iterative methods for computing fixed points, both for topological models like Scarf's and for models described by functions with continuous second derivatives or convexity or both. Of course, “global Newton methods” for essentially convex and smooth functions and path-following methods for diffeomorphisms converged faster than did robust algorithms for continuous functions, when the smooth methods are applicable. AGE and CGE models AGE models, being based on Arrow–Debreu general equilibrium theory, work in a different manner than CGE models. The model first establishes the existence of equilibrium through the standard Arrow–Debreu exposition, then inputs data into the various sectors, and then applies Scarf's algorithm (Scarf 1967a, 1967b and Scarf with Hansen 1973) to solve for a price vector that would clear all markets. This algorithm narrows down the possible relative prices through a simplicial search, which keeps reducing the size of the ‘net’ within which possible solutions are found. Because the net never closes on a unique point through the iteration process, AGE modelers consciously choose a cutoff and report an approximate solution.
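To make concrete what “solving for a price vector that clears the market” means, here is a minimal sketch of a toy two-good exchange economy. It deliberately does not implement Scarf's simplicial-subdivision algorithm; it uses a much simpler price-adjustment loop, and all endowments and preference parameters are invented for the example.

```python
import numpy as np

# Toy two-good, two-consumer exchange economy with Cobb-Douglas preferences.
# All numbers below are invented for illustration only.
alpha = np.array([0.3, 0.7])      # share of income each consumer spends on good 1
endow = np.array([[1.0, 2.0],     # consumer 1's endowment of goods 1 and 2
                  [3.0, 1.0]])    # consumer 2's endowment of goods 1 and 2

def excess_demand_good1(p):
    """Aggregate excess demand for good 1 at the price vector p = (p1, p2)."""
    income = endow @ p                      # each consumer's income at prices p
    demand = alpha * income / p[0]          # Cobb-Douglas demand for good 1
    return demand.sum() - endow[:, 0].sum()

p = np.array([0.5, 0.5])                    # start in the middle of the price simplex
for _ in range(10_000):                     # simple adjustment, not Scarf's algorithm
    z = excess_demand_good1(p)
    p[0] += 0.01 * z                        # raise p1 when good 1 is over-demanded
    p /= p.sum()                            # renormalise prices onto the simplex

# By Walras's law, clearing the market for good 1 clears good 2 as well.
print(p, excess_demand_good1(p))            # excess demand is ~0 at the solution
```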
CGE models, by contrast, are based on macro balancing equations, and use an equal number of equations and unknowns solvable as simultaneous equations; exogenous variables are changed outside the model to give the endogenous results. References Bibliography Cardenete, M. Alejandro, Guerra, Ana-Isabel and Sancho, Ferran (2012). Applied General Equilibrium: An Introduction. Springer. Scarf, H.E., 1967a, “The Approximation of Fixed Points of a Continuous Mapping”, SIAM Journal on Applied Mathematics 15: 1328–43 Scarf, H.E., 1967b, “On the Computation of Equilibrium Prices”, in Fellner, W.J. (ed.), Ten Economic Studies in the Tradition of Irving Fisher, New York, NY: Wiley Scarf, H.E. with Hansen, T., 1973, The Computation of Economic Equilibria, Cowles Foundation for Research in Economics at Yale University, Monograph No. 24, New Haven, CT and London, UK: Yale University Press Kehoe, T.J., Srinivasan, T.N. and Whalley, J., 2005, Frontiers in Applied General Equilibrium Modeling: In Honor of Herbert Scarf, Cambridge, UK: Cambridge University Press Shoven, J.B. and Whalley, J., 1972, “A General Equilibrium Calculation of the Effects of Differential Taxation of Income from Capital in the U.S.”, Journal of Public Economics 1 (3–4), November, pp. 281–321 Shoven, J.B. and Whalley, J., 1973, “General Equilibrium with Taxes: A Computational Procedure and an Existence Proof”, The Review of Economic Studies 40 (4), October, pp. 475–89 Velupillai, K.V., 2006, “Algorithmic Foundations of Computable General Equilibrium Theory”, Applied Mathematics and Computation 179, pp. 360–69 General equilibrium theory Fixed points (mathematics)
Applied general equilibrium
Mathematics
1,086
43,744,246
https://en.wikipedia.org/wiki/The%20Machine%20%28computer%20architecture%29
The Machine is an experimental computer made by Hewlett Packard Enterprise. It was created as part of a research project to develop a new type of computer architecture for servers. The design focused on a “memory-centric computing” architecture, in which NVRAM replaced traditional DRAM and disks in the memory hierarchy. The NVRAM was byte-addressable and could be accessed from any CPU via a photonic interconnect. The aim of the project was to build and evaluate this new design. Hardware overview The Machine was a computer cluster with many individual nodes connected over a memory fabric. The fabric interconnect used VCSEL-based silicon photonics with a custom chip called the X1. Access to memory was non-uniform and could involve multiple hops. The Machine was envisioned as a rack-scale computer, initially with 80 processors and 320 TB of fabric-attached memory and with the potential to scale across more enclosures up to 32 ZB. The fabric-attached memory was not cache-coherent, and software had to be aware of this property. Since traditional locks need cache coherency, hardware was added to the bridges to perform atomic operations at that level. Each node also had a limited amount of local, private, cache-coherent memory (256 GB). Storage and compute on each node had completely separate power domains. The whole fabric-attached memory of The Machine was too large to be mapped into a processor's virtual address space (which was 48 bits wide), so a way was needed to map windows of the fabric-attached memory into processor memory. Communication between each node SoC and the memory pool therefore goes through an FPGA-based “Z-bridge” component that manages memory mapping of the local SoC to the fabric-attached memory. The Z-bridge deals with two different kinds of addresses: 53-bit logical Z addresses and 75-bit Z addresses, which allow addressing 8 PB and 32 ZB respectively. Each Z-bridge also contained a firewall to enforce access control. The interconnect protocol was developed in-house and known as Next Generation Memory Interconnect (NGMI); this protocol evolved into the open Gen-Z standard. The Z-bridge connects to the SoC using PCIe, avoiding major software changes. A half-rack prototype of The Machine was unveiled at HPE Discover in London in 2016. Each node contained ARMv8-A based Broadcom/Cavium ThunderX2 SoCs; in total there were 40 32-core SoCs. Due to the unavailability of adequate memristor-based NVRAM or phase-change memory, the prototype used 160 TB of battery-backed DRAM. Despite this setback, software architect Keith Packard said this "can be used to prove the other parts of the design before switching". According to The Register, HPE's partnership with SK Hynix to develop memristor-based NVRAM ran into funding and directional problems, and HPE was instead working with SanDisk on Resistive RAM (ReRAM) for The Machine. According to The Next Platform, HPE considered switching to Intel Optane DIMMs once production quantities became available on the market. The Next Platform estimated the rack prototype to consume 24 kW to 36 kW of power. Software overview Two major software projects were created for The Machine. The first was an experimental version of Linux called Linux++, with all the necessary enhancements to configure the hardware and work with traditional programming models. This included bridge configuration, access control, and mapping using the DAX subsystem.
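The capacity figures quoted for the Z-bridge address widths follow directly from the bit counts; the snippet below simply checks that arithmetic, assuming the binary interpretation of PB and ZB (2^50 and 2^70 bytes) used in this context.

```python
# Capacity implied by the Z-bridge address widths described above,
# assuming binary prefixes (1 PB = 2**50 bytes, 1 ZB = 2**70 bytes).
logical_bits = 53   # logical Z addresses
full_bits = 75      # full Z addresses

print(2 ** logical_bits // 2 ** 50)  # 8  -> 53-bit addresses span 8 PB
print(2 ** full_bits // 2 ** 70)     # 32 -> 75-bit addresses span 32 ZB
```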
In parallel, a new operating system (OS) called Carbon was announced that would be designed from first principles to take full advantage of an NVRAM-based computer. Primary workloads for The Machine included in-memory databases, Hadoop-style software, and real-time big data analytics. HPE claimed that a memory-driven computing design like The Machine could "improve speeds by up to 8000x compared to conventional systems". In the prototype system, the fabric-attached memory was organised by a "top of rack" management server component called The Librarian. The Librarian divided the memory into "shelves" of 8 GB "books", and hardware protections could be configured on book boundaries. A finer-grained 64 KB "booklet" was also supported. The mapping of memory is handled by the OS, while the access controls for the memory are configured by the management infrastructure of The Machine system as a whole. Software needs to be aware that fabric-attached memory reads can have synchronous errors whilst writes can have asynchronous errors. On the Linux system, when a memory error occurs the SIGBUS operating system signal is used. Programming model and data structure changes were also explored, including changes to thread libraries and heap data structures to make them resilient to non-volatile memory failure modes. History A few years after HP's re-discovery of the memristor, the newly appointed CTO of HP, Martin Fink, created an HP Labs project to build a computer system based on the memristor to tackle the slowing of Moore's law. He announced the project at HP's Discover event in the summer of 2014. Some of the ideas of The Machine also came from Dragonhawk system designs. Three-quarters of HP Labs's 200 staff were focused on the hardware and software of The Machine. Speaking to Bloomberg, HP said it would commercialize The Machine within a few years, "or fall on its face trying." Kirk Bresniker served as Chief Architect, and Keith Packard was hired to work on the Linux enhancements. Bdale Garbee was hired to manage open source development. In 2015, Hewlett-Packard separated into two companies, HP Inc. and Hewlett Packard Enterprise (HPE), with The Machine project assigned to the latter. In late 2016, Martin Fink retired as HPE CTO. Fink's retirement announcement also said that Hewlett Packard Labs staff would be moved into the Enterprise product group to "align our R&D work on The Machine with the business". By early 2017, Hewlett Packard Labs had a slide saying that the project's aim was "to demonstrate progress, not develop products" and that it would "collaborate to deliver differentiating Machine value into existing architectures as well as disruptive architectures". BleepingComputer said: "In other words, The Machine is no longer a product in its own right. Instead it will provide technologies that will be used in other HPE products going forward." HPE restructured its pure R&D organization and placed it in the products group. Yahoo! Finance reported that The Machine prototype "remains years away from being commercially available". In 2018, HPE stated that the project had reached the stage where it needed commercial applications from customers in the next step of its evolution. References Computer architecture Supercomputers Non-volatile memory Silicon photonics devices
The Machine (computer architecture)
Technology,Engineering
1,426
63,534,082
https://en.wikipedia.org/wiki/Body-part%20counting%20system
Some languages of the world have numeral systems that do not make use of an arithmetic base. One such system is the body-part counting system, which makes use of further body parts to extend the system beyond the ten fingers. Counting typically begins by touching (and usually bending) the fingers of one hand, moves up the arm to the shoulders and neck, and in some systems, to other parts of the upper body or the head. A central point serves as the half-way point. Once this is reached, the counter continues, touching and bending the corresponding points on the other side until the fingers are reached. Use The body-part counting system is quite typical of a number of languages within the New Guinea Highlands. Oceania Foi, an East Kutubuan language, features a body-part numeral system that counts up to 37. Oksapmin, a Trans–New Guinea language spoken in Sandaun Province, features a body-part counting system that goes up to 27. Geoffrey Saxe, in ethnographic work, has described shifts in the form and function of the system in recent history, linked to shifts towards a cash economy and the introduction of Western-styled schooling. Kobon, a Papuan language spoken in the Madang Province, counts up to 23. The count can then be reversed for larger numbers. Reverse counting For example, in Kobon, the body parts on the left-hand side of the body are used in order to count from 1 to 12. The count can then continue down the right-hand side of the body up to 23. It is then possible to reverse the count, starting from the end point on the right as 24, back up to the 12th position on the left as 35, then down again to the end point on the left as 46. One effect of this is that the names of particular body parts, when used as numerals, are multiply ambiguous: the same body part can represent multiple numbers depending on how many passes across the body were made. There are usually means, optional or obligatory depending on the language, to distinguish the second side of the body used in a count from the first, as well as to indicate which pass across the body is being used, but there is no productive means to identify more than a small number of passes across the body. References Numbers
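The Kobon-style reverse count described above is essentially an algorithm, so it can be expressed as a short function. The body-part names themselves are not listed in the text, so the sketch below numbers the 23 counting points abstractly (1-12 on the left side, 13-23 on the right) and only reproduces the worked values given: 24 as the right end point, 35 as the 12th position on the left, and 46 as the left end point.

```python
def kobon_style_position(n):
    """Map a number 1-46 to (pass, side, point) in the 23-point scheme
    described above. Points 1-12 lie on the left side of the body and
    13-23 on the right; the second pass retraces the same points in
    reverse, starting again from the right end point."""
    if not 1 <= n <= 46:
        raise ValueError("the scheme described covers 1 to 46")
    if n <= 23:                    # first pass across the body
        passno, point = 1, n
    else:                          # reversed second pass
        passno, point = 2, 23 - (n - 24)
    side = "left" if point <= 12 else "right"
    return passno, side, point

print(kobon_style_position(24))  # (2, 'right', 23): right end point, counted again
print(kobon_style_position(35))  # (2, 'left', 12): back up to the 12th position
print(kobon_style_position(46))  # (2, 'left', 1): the end point on the left
```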
Body-part counting system
Mathematics
470
4,666,045
https://en.wikipedia.org/wiki/Silver%20chromate
Silver chromate is an inorganic compound with formula Ag2CrO4 which appears as distinctively coloured brown-red crystals. The compound is insoluble, and its precipitation is indicative of the reaction between soluble chromate and silver precursor salts (commonly potassium/sodium chromate with silver nitrate). This reaction is important for two uses in the laboratory: in analytical chemistry it constitutes the basis for the Mohr method of argentometry, whereas in neuroscience it is used in the Golgi method of staining neurons for microscopy. In addition to the above, the compound has been tested as a photocatalyst for wastewater treatment. The most important practical and commercial application for silver chromate, however, is its use in Li-Ag2CrO4 batteries, a type of lithium battery mainly found in artificial pacemaker devices. As for all chromates, which are chromium(VI) species, the compound poses a hazard of toxicity, carcinogenicity and genotoxicity, as well as great environmental harm. Preparation Silver chromate is usually produced by the salt metathesis reaction of potassium chromate (K2CrO4) and silver nitrate (AgNO3) in purified water – the silver chromate will precipitate out of the aqueous reaction mixture: 2 AgNO3 + K2CrO4 → Ag2CrO4↓ + 2 KNO3 This occurs because the solubility of silver chromate is very low (Ksp = 1.12 × 10⁻¹², corresponding to a molar solubility of about 6.5 × 10⁻⁵ mol/L). The formation of insoluble Ag2CrO4 nanostructures via the above reaction with good control over particle size and shape has been achieved through sonochemistry, template-assisted synthesis or hydrothermal methods. Structure and properties Crystal structure The compound is polymorphic and can exhibit two crystal structures depending on temperature: hexagonal at higher and orthorhombic at lower temperatures. The hexagonal phase transforms to the orthorhombic upon cooling below the crystal structure transition temperature T = 482 °C. The orthorhombic polymorph is the commonly encountered one and it crystallizes in the space group Pnma, with two distinct coordination environments for the silver ions (one tetragonal bipyramidal and the other distorted tetrahedral). Colour The characteristic brick-red/acajou colour (absorption λmax = 450 nm) of silver chromate is rather unlike other chromates, which are typically yellow to yellowish orange in appearance. This difference in absorption has been hypothesised to be due to a charge-transfer transition between the silver 4d orbital and chromate e* orbitals, although this seems not to be the case based on careful analysis of UV/Vis spectroscopic data. Instead, the shift in λmax is more likely attributed to the Davydov splitting effect. Applications Argentometry The precipitation of the strongly coloured silver chromate is used to indicate the endpoint in the titration of chloride with silver nitrate in the Mohr method of argentometry. The reactivity of the chromate anion with silver is lower than that of halides (e.g. chlorides), so that in a mixture of both ions only the silver chloride precipitate will form: 2 Ag+ + 2 Cl− + CrO4²− → 2 AgCl↓ + CrO4²− Only when no chloride (or any other halide) is left will silver chromate form and precipitate out. Prior to the endpoint the solution has a milky lemon-yellow appearance, due to the suspension of the AgCl precipitate already formed and the yellow colour of the chromate ion in solution. Approaching the endpoint, additions of AgNO3 lead to steadily more slowly disappearing red colouration. When the red-brownish colour persists (with some greyish spots of silver chloride in it), the endpoint of the titration is reached.
This method is only suitable for near-neutral pH: at very low (acidic) pH, the silver chromate is soluble (due to the formation of H2CrO4), while at alkaline pH the silver precipitates as the hydroxide. The titration was introduced by Mohr in the mid-19th century and, despite its limitations on pH conditions, it has not completely fallen out of use since. An example of a practical application of Mohr's method is in determining the chloride level of salt water pools. Golgi method A very different application of the same reaction is the staining of neurons so that their morphology becomes visible under a microscope. The technique involves first impregnating aldehyde-fixed brain tissue with a 2% aqueous potassium dichromate solution. This is followed by drying and immersion in a 2% aqueous silver nitrate solution. By the same reaction as above, silver chromate forms and, by a mechanism not entirely understood, the precipitation occurs inside some of the neurons, allowing detailed observation of morphological details too fine for common staining techniques. Several variations on the method exist to increase contrast or selectivity in the type of neuron stained, and include additional impregnation in mercuric chloride solution (Golgi-Cox) or post-treatment with osmium tetroxide (Cajal or rapid Golgi). The previously infeasible observations enabled by the silver chromate staining technique led to the eventual award of the 1906 Nobel Prize in Physiology or Medicine to its discoverer Golgi and to Ramón y Cajal, who pioneered its use and improvement. Photocatalyst Silver chromate has been investigated for possible use as a catalyst for the photocatalytic degradation of organic pollutants in wastewater. Although Ag2CrO4 nanoparticles are somewhat effective for this purpose, the high toxicity of chromium(VI) to humans and the environment requires additional complex procedures to contain any chromium from the catalyst, which must be prevented from leaching into the treated wastewater. Li-batteries Li-Ag2CrO4 batteries are a type of lithium-metal battery developed in the early 1970s by Saft, in which silver chromate serves as the cathode, metallic lithium as the anode, and a lithium perchlorate solution as the electrolyte. The battery was intended for biomedical applications and offered high reliability and long shelf life for its time. Lithium-silver chromate batteries have therefore found wide application in implanted pacemaker devices. References Cited sources Chromates Silver compounds Photographic chemicals Oxidizing agents
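The molar solubility quoted next to the Ksp value in the Preparation section follows from the 2:1 dissolution stoichiometry of Ag2CrO4; the short LaTeX check below uses only the Ksp figure already given in the article.

```latex
% Ag2CrO4(s) <=> 2 Ag+ + CrO4^2-, so with molar solubility s: [Ag+] = 2s, [CrO4^2-] = s
\begin{aligned}
K_{sp} &= [\mathrm{Ag^{+}}]^{2}\,[\mathrm{CrO_{4}^{2-}}] = (2s)^{2}(s) = 4s^{3} \\
s &= \left(\tfrac{K_{sp}}{4}\right)^{1/3}
   = \left(\tfrac{1.12\times 10^{-12}}{4}\right)^{1/3}
   \approx 6.5\times 10^{-5}\ \mathrm{mol\,L^{-1}}
\end{aligned}
```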
Silver chromate
Chemistry
1,345
2,214,575
https://en.wikipedia.org/wiki/10%20Gigabit%20Ethernet%20Alliance
The 10 Gigabit Ethernet Alliance (10GEA) was an independent organization (not directly related to the Institute of Electrical and Electronics Engineers (IEEE), although working in collaboration with it) which aimed to further 10 Gigabit Ethernet development and market acceptance. Founded in February 2000 by a consortium of companies, the organization provided IEEE with technology demonstrations and specifications; these included, for instance, a May 7, 2002 demonstration in Las Vegas in which a 10 Gigabit Ethernet network spanning more than 200 kilometres was deployed, using 10GBASE-LR, 10GBASE-ER, 10GBASE-SR and 10GBASE-LW ports, as well as demonstrating communication over the IEEE 802.3ae XAUI interface. Its efforts bore fruit with the IEEE Standards Association (IEEE-SA) Standards Board's approval in June 2002 of the IEEE 802.3ae standard (formulated by the IEEE P802.3ae 10 Gbit/s Ethernet Task Force). The 10GEA was founded by 3Com, Cisco Systems, Extreme Networks, Intel Corporation, Nortel Networks, Sun Microsystems, and World Wide Packets. Other companies at various times supporting the consortium included: Agilent Technologies Inc., Blaze Network Products, Cable Design Technologies, Corning Inc., Enterasys Networks, Force10 Networks Inc., Foundry Networks Inc., Hitachi Cable Ltd, Infineon Technologies, Ixia, JDS Uniphase, Marvell Technology Group Ltd., Mindspeed, Molex Inc., OFS (part of Lucent), ONI Systems/CIENA, Optillion, PMC-Sierra, Primarion, Quake Technologies (acquired by Applied Micro Circuits Corporation), Spirent Communications, and Velio Communications (later acquired by LSI Corporation). See also Ethernet Alliance References External links IEEE P802.3ae 10Gb/s Ethernet Task Force Collection of archived 10GEA whitepapers Ethernet
10 Gigabit Ethernet Alliance
Technology
397
27,625,889
https://en.wikipedia.org/wiki/Crizotinib
Crizotinib, sold under the brand name Xalkori among others, is an anti-cancer medication used for the treatment of non-small cell lung carcinoma (NSCLC). Crizotinib inhibits the c-Met/hepatocyte growth factor receptor (HGFR) tyrosine kinase, which is involved in the oncogenesis of a number of other histological forms of malignant neoplasms. It also acts as an ALK (anaplastic lymphoma kinase) and ROS1 (c-ros oncogene 1) inhibitor. Medical uses Crizotinib is indicated for the treatment of ALK-positive metastatic non-small cell lung cancer (NSCLC) and of relapsed or refractory, systemic, ALK-positive anaplastic large cell lymphoma (ALCL). It is also indicated for the treatment of unresectable, recurrent, or refractory anaplastic lymphoma kinase (ALK)-positive inflammatory myofibroblastic tumors (IMT). Mechanism of action Crizotinib has an aminopyridine structure, and functions as a protein kinase inhibitor by competitive binding within the ATP-binding pocket of target kinases. About 4% of patients with non-small cell lung carcinoma have a chromosomal rearrangement that generates a fusion gene between EML4 ('echinoderm microtubule-associated protein-like 4') and ALK ('anaplastic lymphoma kinase'), which results in constitutive kinase activity that contributes to carcinogenesis and seems to drive the malignant phenotype. The kinase activity of the fusion protein is inhibited by crizotinib. Patients with this gene fusion are typically younger non-smokers who do not have mutations in either the epidermal growth factor receptor gene (EGFR) or in the K-Ras gene. The number of new cases of ALK-fusion NSCLC is about 9,000 per year in the U.S. and about 45,000 worldwide. ALK mutations are thought to be important in driving the malignant phenotype in about 15% of cases of neuroblastoma, a rare form of peripheral nervous system cancer that occurs almost exclusively in very young children. Crizotinib is thought to exert its effects through modulation of the growth, migration, and invasion of malignant cells. Other studies suggest that crizotinib might also act via inhibition of angiogenesis in malignant tumors. Society and culture Legal status In August 2011, the US Food and Drug Administration (FDA) approved crizotinib to treat certain late-stage (locally advanced or metastatic) non-small cell lung cancers that express the abnormal anaplastic lymphoma kinase (ALK) gene. Approval required a companion molecular test for the EML4-ALK fusion. In March 2016, the FDA approved crizotinib for ROS1-positive non-small cell lung cancer. In October 2012, the European Medicines Agency (EMA) approved the use of crizotinib to treat non-small cell lung cancers that express the abnormal anaplastic lymphoma kinase (ALK) gene. Research Lung cancer Crizotinib caused tumors to shrink or stabilize in 90% of 82 patients carrying the ALK fusion gene. Tumors shrank at least 30% in 57% of people treated. Most had adenocarcinoma, and had never smoked or were former smokers. They had undergone treatment with an average of three other drugs prior to receiving crizotinib, and only 10% were expected to respond to standard therapy. They were given 250 mg crizotinib twice daily for a median duration of six months. Approximately 50% of these patients had at least one side effect, such as nausea, vomiting, or diarrhea. Some responses to crizotinib have lasted up to 15 months. A Phase III trial, PROFILE 1007, compares crizotinib to standard second-line chemotherapy (pemetrexed or taxotere) in the treatment of ALK-positive NSCLC.
Additionally, a phase 2 trial, PROFILE 1005, studies patients meeting similar criteria who have received more than one line of prior chemotherapy. In February 2016, the J-ALEX phase III study comparing alectinib with crizotinib in ALK-positive metastatic NSCLC was terminated early because an interim analysis showed that progression-free survival was longer with alectinib. These results were confirmed in a 2017 analysis. Lymphomas In people affected by relapsed or refractory ALK+ anaplastic large cell lymphoma, crizotinib produced objective response rates ranging from 65% to 90% and 3-year progression-free survival rates of 60–75%. No relapse of the lymphoma was ever observed after the initial 100 days of treatment. Treatment must be continued indefinitely at present. Other cancers Crizotinib is also being tested in clinical trials of advanced disseminated neuroblastoma. References External links Chlorobenzene derivatives CYP3A4 inhibitors Fluorobenzene derivatives 2-Aminopyridines Drugs developed by Pfizer Drugs developed by Merck Pyrazoles 4-Piperidinyl compounds Receptor tyrosine kinase inhibitors Ethers
Crizotinib
Chemistry
1,147