Dataset columns: id (int64, values 580 to 79M), url (string, lengths 31–175), text (string, lengths 9–245k), source (string, lengths 1–109), categories (string, 160 classes), token_count (int64, values 3 to 51.8k).
214,241
https://en.wikipedia.org/wiki/Terpenoid
The terpenoids, also known as isoprenoids, are a class of naturally occurring organic chemicals derived from the 5-carbon compound isoprene and its derivatives, the terpenes, diterpenes, etc. While sometimes used interchangeably with "terpenes", terpenoids contain additional functional groups, usually oxygen-containing. When combined with the hydrocarbon terpenes, terpenoids comprise about 80,000 compounds. They are the largest class of plant secondary metabolites, representing about 60% of known natural products. Many terpenoids have substantial pharmacological bioactivity and are therefore of interest to medicinal chemists. Plant terpenoids are used for their aromatic qualities and play a role in traditional herbal remedies. Terpenoids contribute to the scent of eucalyptus, the flavors of cinnamon, cloves, and ginger, the yellow color in sunflowers, and the red color in tomatoes. Well-known terpenoids include citral, menthol, camphor, salvinorin A in the plant Salvia divinorum, ginkgolide and bilobalide found in Ginkgo biloba, and the cannabinoids found in cannabis. The provitamin beta carotene is a terpene derivative called a carotenoid. The steroids and sterols in animals are biologically produced from terpenoid precursors. Sometimes terpenoids are added to proteins, e.g., to enhance their attachment to the cell membrane; this is known as isoprenylation. Terpenoids play a role in plant defense, as prophylaxis against pathogens and as attractants for the predators of herbivores. Structure and classification Terpenoids are modified terpenes, wherein methyl groups have been moved or removed, or oxygen atoms added. Some authors use the term "terpene" more broadly, to include the terpenoids. Just like terpenes, the terpenoids can be classified according to the number of isoprene units that comprise the parent terpene (see the worked note at the end of this entry). Terpenoids can also be classified according to the type and number of cyclic structures they contain: linear, acyclic, monocyclic, bicyclic, tricyclic, tetracyclic, pentacyclic, or macrocyclic. The Salkowski test can be used to identify the presence of terpenoids. Biosynthesis Terpenoids, at least those containing an alcohol functional group, often arise by hydrolysis of carbocationic intermediates produced from geranyl pyrophosphate. Analogously, hydrolysis of intermediates from farnesyl pyrophosphate gives sesquiterpenoids, and hydrolysis of intermediates from geranylgeranyl pyrophosphate gives diterpenoids, etc. Impact on aerosols In air, terpenoids are converted into various species, such as aldehydes, hydroperoxides, organic nitrates, and epoxides, by short-lived free radicals (like the hydroxyl radical) and, to a lesser extent, by ozone. These new species can dissolve into water droplets and contribute to aerosol and haze formation. Secondary organic aerosols formed by this pathway may have atmospheric impacts. As an example, the Blue Ridge Mountains in the U.S. and the Blue Mountains of New South Wales in Australia are noted for having a bluish color when seen from a distance; terpenoids released by trees into the atmosphere put the "blue" in Blue Ridge. See also List of antioxidants in food List of phytochemicals in food Nutrition Phytochemistry Secondary metabolites References External links IUPAC definition of terpenoids Plant communication
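The isoprene-unit classification mentioned above can be made concrete; as an illustrative note added here (standard chemistry, not text from the article), the parent terpene hydrocarbons follow the isoprene rule
$$(\mathrm{C_5H_8})_n$$
with $n=2$ for monoterpenes ($\mathrm{C_{10}}$), $n=3$ for sesquiterpenes ($\mathrm{C_{15}}$), $n=4$ for diterpenes ($\mathrm{C_{20}}$), $n=6$ for triterpenes ($\mathrm{C_{30}}$) and $n=8$ for tetraterpenes ($\mathrm{C_{40}}$, e.g. the carotenoids); the corresponding terpenoids carry additional functional groups on these skeletons.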
Terpenoid
Chemistry,Biology
799
3,469,770
https://en.wikipedia.org/wiki/Molecular%20anatomy
Molecular anatomy is the subspecialty of microscopic anatomy concerned with the identification and description of molecular structures of cells, tissues, and organs in an organism. References Anatomy
Molecular anatomy
Biology
35
47,767,987
https://en.wikipedia.org/wiki/Phellodon%20nothofagi
Phellodon nothofagi is a species of tooth fungus in the family Bankeraceae. Found in New Zealand, it was described as new to science in 1971 by mycologist Robert Francis Ross McNabb. References External links Fungi described in 1971 Fungi of New Zealand Inedible fungi nothofagi Fungus species
Phellodon nothofagi
Biology
65
29,107,838
https://en.wikipedia.org/wiki/Mortality%20forecasting
Mortality forecasting refers to the art and science of determining likely future mortality rates. It is especially important in rich countries with a high proportion of aged people, since lower mortality means pensions must be paid out for longer. See also Lee-Carter model Life expectancy Actuarial science References Actuarial science Death Forecasting
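The Lee-Carter model listed above is the benchmark method in this field; its standard form (quoted from the general literature as an added note, not from this article) is
$$\ln m_{x,t} = a_x + b_x k_t + \varepsilon_{x,t},$$
where $m_{x,t}$ is the central death rate at age $x$ in year $t$, $a_x$ is the average age profile of log mortality, $k_t$ is a period index of the overall level of mortality (typically forecast as a random walk with drift), and $b_x$ is the age-specific sensitivity to changes in $k_t$.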
Mortality forecasting
Mathematics
64
40,543,246
https://en.wikipedia.org/wiki/SUN%20Innovations
SUN Innovations is a holding company, developer and manufacturer of wide-format printing equipment. Its main products are a UV-LED printer (NEO Evolution) and UV and solvent inks for wide-format printing ("Sunflower" and "NANOiNK" respectively). SUN Innovations is a pioneer in UV-LED printing technologies. It launched wide-format printers based on LED ink-curing technology, which allows printing on a wide range of materials including wood, metal, plastic, glass, mirrors, banner and fabric. History SUN Innovations was established in 1998 as a privately held company; today, SUN has partners and sells its digital printing solutions in over 70 countries. In 2010, Rusnano, the governmental organization established to foster the growth of nanotechnology industries in Russia, became an investor in SUN Innovations. On March 1, 1998, the company was founded in Novosibirsk. Initially, the company's main activity was the sale of consumables for the production of advertising and lighting equipment. 2003 – started selling Chinese solvent printers and inks. 2005 – established what was then the largest service center in Russia for printing products and research; the company produced the first solvent ink in Russia and moved from reselling equipment to production. 2006 – opened its first ink factory in Russia under the brand SunFlower. 2007 – entered the international market with the UV printer NEO Evolution. 2010 – accepted investment from Rusnano and became its portfolio company. 2010 – invented full-color printing on a water surface. 2011 – created a new UV ink formula, in line with its "eco" and "nano" mission, with an antibacterial effect based on silver nanoparticles. 2012 – introduced Evolution printers with higher printing speed and new fast-drying solvent inks, Turbo-S. 2013 – released a new version of its UV printer, Sun Universal. Features included: a web-based interface; ink measurement; an easy-to-control zoned vacuum table; automatic parking, conservation and cleaning of the printhead; a material-height metering sensor; and automatic material positioning. Directors The founder is Solovyova Svetlana Vladimirovna (share in the authorized capital: 100%). The bankruptcy trustee is Petrakov Pavel Vladimirovich. References External links SUN Innovation company Manufacturing companies based in Novosibirsk Computer companies of Russia Computer hardware companies Computer printer companies
SUN Innovations
Technology
486
28,549,106
https://en.wikipedia.org/wiki/Kaluli%20creation%20myth
The Kaluli creation myth is a traditional creation myth of the Kaluli people of Papua New Guinea. The version given here was recorded by the anthropologist and ethnographer Edward L. Schieffelin, whose first contact with the Kaluli took place in the late 1960s. The story begins in a time the Kaluli call hena madaliaki, which translates as "when the land came into form". During the time of hena madaliaki, people covered the earth but there was nothing else: no trees or plants, no animals, and no streams. With nothing to use for food or shelter, the people became cold and hungry. Then one man among them (alternative accounts give two) gathered everyone together and delegated different tasks. He directed one group to become trees, and they did. He directed another to become sago, yet another to be fish, another banana, and so forth until the world was brimming with animals, food, streams, mountains and all other natural features. There were only a few people left, and they became the ancestors of present-day human beings. The Kaluli describe this story as "the time when everything alə bano ane", which means roughly "the time when everything divided". This concept of all world phenomena as the result of a "splitting" has many echoes in Kaluli thought and cultural practices. In the Kaluli world view, all of existence is made from people who differentiated into different forms. Animals, plants, streams and people are all the same except in the form they have assumed following this great split. Death is another splitting. The Kaluli have no concept of a transcendent, sacred domain that is spiritual or in any fundamental way distinct from the natural, material world; instead, death is another event that divides beings through the acquisition of new forms which are unrecognizable to the living. The Kaluli are an indigenous people whose first contact with contemporary western civilization began in the 1940s. Following extensive Christian missionary efforts in the region, variants of the traditional creation story have adopted a few Christian elements. Prior to contact, the Kaluli story described creation as a pragmatic solution to problems of cold and hunger, and the efforts were initiated by one or two ordinary and unnamed men rather than any deity or deities. The Kaluli have since tended to identify one or both of them as "Godeyo" (God) and "Yesu" (Jesus Christ). References Creation myths Papua New Guinean mythology
Kaluli creation myth
Astronomy
513
55,740,699
https://en.wikipedia.org/wiki/BV%20Centauri
BV Centauri is a cataclysmic variable binary star in the constellation Centaurus. It is a dwarf nova, and undergoes rapid increases in brightness that recur with a mean period of 150 days. This period seems to have increased in the last few decades. During quiescence, its visual apparent magnitude is about 13, with variations of a few tenths of a magnitude over an orbit due to differences in the star's visible surface area (ellipsoidal variability), brightening to a maximum magnitude of 10.7 during outbursts. Its distance from Earth has also been estimated from its luminosity; a Gaia parallax of 2.81 mas has been measured, corresponding to about 360 pc. William Francis Herschel Waterfield discovered the star's variability in 1929. Cataclysmic variables are short-period binary systems in which a white dwarf primary accretes matter from a secondary star. For BV Centauri, the white dwarf and its companion have estimated masses of 1.18 and 1.05 times the mass of the Sun respectively, although alternative, conflicting mass measurements have also been reported. The secondary is a conventional star with a spectral type of G5-G8IV-V, and it is assumed to contribute about half of the visual luminosity of the system. Its estimated radius implies that it is significantly evolved away from the zero-age main sequence. The reconstruction of its surface by Doppler imaging revealed it to be a highly magnetically active star, with about 25% of its surface covered in starspots, which are much more abundant on the hemisphere facing the white dwarf. Furthermore, a prominence was detected above the secondary star's surface, also on the side facing the white dwarf. The white dwarf primary can be observed clearly at ultraviolet wavelengths, where it is the strongest source. Any accretion disk in the system appears relatively faint. The system has a period of 0.611179 days (16.7 hours), one of the longest periods for a dwarf nova, and is inclined by 53 ± 4° in relation to the plane of the sky. It has been noted that BV Centauri's light curve during outbursts shows anomalous behavior for a dwarf nova, with a long interval of up to 15 days before reaching peak brightness and no plateau at maximum brightness, and it has been compared to the classic nova GK Persei. Based on this, it has been proposed that BV Centauri could have generated an unobserved nova outburst in the 19th century, which was missed by the observers at the time. References Centaurus Dwarf novae Centauri, BV G-type subgiants J13311951-5458335
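As a worked check on the quoted parallax (a note added here): a parallax $p$ expressed in arcseconds gives a distance in parsecs of $d = 1/p$, so
$$d \approx \frac{1}{2.81\times 10^{-3}\ \text{arcsec}} \approx 356\ \text{pc} \approx 1160\ \text{light-years},$$
consistent with the "about 360 pc" stated above.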
BV Centauri
Astronomy
574
33,203,965
https://en.wikipedia.org/wiki/Additive%20Architecture
Additive Architecture is an approach used by the Danish architect Jørn Utzon to describe his development of architectural projects on the basis of growth patterns in nature. Mogens Prip-Buus, one of Utzon's closest colleagues, reports that the term was coined in 1965 in Utzon's Sydney office when, after a discussion of the social structures in Britain and Denmark, Utzon suddenly jumped up and wrote "Additive Architecture" on the wall. He saw it as part of an additive world where both natural and cultural forms contributed to additive systems and hierarchies. He realized that his own architecture reflected the same principle, just as the transitions in primitive societies between family, village and the surrounding world have visible links revealing differences, relations and distances. Utzon observed the additive approach in Chinese temples, whose stacked timber structures are basically identical, differing only in size. In his 1970 manifesto "Additive Architecture", he tells how he saw the phenomenon reflected in a group of deer at the edge of a forest or in the pebbles on a beach, convincing him that buildings should be designed more freely rather than in identical box shapes. Earlier, in 1948, he had expressed the same ideas in an essay titled "The Innermost Being of Architecture", stating: "Something of the naturalness found in the growth principle in nature ought to be a fundamental idea in works of architecture." The application of the additive approach can be seen in many of Utzon's works, including the courtyard housing schemes which began with the Kingo Houses, the tiling of the Sydney Opera House and his designs for a sports complex in Jeddah. Utzon's early competition project for a crematorium in 1945 also exemplifies his approach: the building's free-standing walls could be extended over time, a new brick being added for each cremation. Examples of the Additive Architecture approach in Utzon's work can also be seen in his designs for the unbuilt Silkeborg Museum, the Farum Town Centre proposal, the Herning expansion plan including a "school town", and the flexible Espansiva approach to low-cost housing, which only resulted in a prototype. Perhaps the best example of all is the proposal for a major sports centre in Jeddah, Saudi Arabia, based on the use of a limited number of repeating elements. References Literature Jørn Utzon, Additive Architecture: Logbook Vol. V, Copenhagen, Edition Bløndal, 2009, 312 pages. Richard Weston: Utzon — Inspiration, Vision, Architecture. Denmark: Edition Bløndal, 2002. Modernist architecture Architectural design Jørn Utzon buildings
Additive Architecture
Engineering
544
54,246,592
https://en.wikipedia.org/wiki/Osilodrostat
Osilodrostat, sold under the brand name Isturisa, is a medication for the treatment of adults with Cushing's disease who either cannot undergo pituitary gland surgery or have undergone the surgery but still have the disease. Osilodrostat is an orally active (taken by mouth), nonsteroidal corticosteroid biosynthesis inhibitor which was developed by Novartis for the treatment of Cushing's syndrome and pituitary hypersecretion (a specific subtype of Cushing's syndrome). It specifically acts as a potent and selective inhibitor of aldosterone synthase (CYP11B2) and, at higher dosages, of 11β-hydroxylase (CYP11B1). The most common side effects are adrenal insufficiency, headache, vomiting, nausea, fatigue, and edema (swelling caused by fluid retention). Hypocortisolism (low cortisol levels), QTc prolongation (a heart rhythm condition) and elevations in adrenal hormone precursors (inactive substances converted into hormones) and androgens (hormones that regulate male characteristics) may also occur in people taking osilodrostat. Osilodrostat was approved for medical use in the European Union in January 2020, and for medical use in the United States in March 2020. The U.S. Food and Drug Administration (FDA) considers it to be a first-in-class medication. History In October 2014, an orphan designation was granted by the European Commission for osilodrostat for the treatment of Cushing's syndrome. Osilodrostat was approved for medical use in the European Union in January 2020, and for medical use in the United States in March 2020. Osilodrostat's safety and effectiveness for treating Cushing's disease among adults were evaluated in a study of 137 adult subjects (about three-quarters women) with a mean age of 41 years. The majority of subjects either had undergone pituitary surgery that did not cure Cushing's disease or were not surgical candidates. In the 24-week, single-arm, open-label period, all subjects received a starting dose of 2 milligrams (mg) of osilodrostat twice a day that could be increased every two weeks up to 30 mg twice a day. At the end of this 24-week period, about half of the subjects had cortisol levels within normal limits. After this point, 71 subjects who did not need further dose increases and had tolerated the drug for the last 12 weeks entered an eight-week, double-blind, randomized withdrawal study in which they received either osilodrostat or a placebo (inactive treatment). At the end of this withdrawal period, 86% of subjects receiving osilodrostat maintained cortisol levels within normal limits, compared to 30% of subjects taking the placebo. The US Food and Drug Administration (FDA) approved osilodrostat based on the evidence from one clinical trial (NCT02180217) of 137 subjects with Cushing's disease. The trial was conducted at 66 sites across 19 countries (United States, Argentina, Austria, Bulgaria, Canada, China, Colombia, Germany, Spain, France, Great Britain, India, Italy, Japan, Korea, Netherlands, Russia, Thailand, and Turkey). The trial, of 48 weeks' duration, assessed the benefits and side effects of osilodrostat. It enrolled subjects with Cushing's disease for whom pituitary gland surgery was not an option or had not worked. The trial was divided into four periods, and subjects received osilodrostat twice a day in all four. After the first two periods (24 weeks), the benefit of osilodrostat was assessed by the percentage of subjects who had 24-hour urinary free cortisol levels within normal limits.
In the third period (which lasted eight weeks), half of the subjects who had normal urinary free cortisol levels after 24 weeks of treatment continued taking osilodrostat and the other half were switched to placebo. Neither the subjects nor the healthcare providers knew which treatment was given during this period. The benefit of osilodrostat was assessed by the percentage of subjects who had normal cortisol levels at the end of this period, compared with the subjects who received placebo. The FDA granted osilodrostat an orphan drug designation and granted the approval of Isturisa to Novartis. Society and culture Economics A year's supply at the recommended starting dose of 2 mg was priced in the United States as of 2021. Research A systematic review and meta-analysis of osilodrostat, published in 2024, found it to be effective and safe in normalizing serum cortisol levels in people with Cushing's syndrome. References Further reading External links 11β-Hydroxylase inhibitors Aldosterone synthase inhibitors Antiglucocorticoids Fluoroarenes Imidazoles Nitriles Drugs developed by Novartis Orphan drugs Pyrroles
Osilodrostat
Chemistry
1,063
5,692,791
https://en.wikipedia.org/wiki/Neuropeptide%20Y%20receptor
Neuropeptide Y receptors are a family of receptors belonging to the class A G-protein coupled receptors, and they are activated by the closely related peptide hormones neuropeptide Y, peptide YY and pancreatic polypeptide. These receptors are involved in the control of a diverse set of behavioral processes including appetite, circadian rhythm, and anxiety. Activated neuropeptide Y receptors release the Gi subunit from the heterotrimeric G protein complex. The Gi subunit in turn inhibits the production of the second messenger cAMP from ATP (by inhibiting adenylyl cyclase). Only the crystal structure of Y1, in complex with two antagonists, is available. Types There are five known mammalian neuropeptide Y receptors, designated Y1 through Y5. Four neuropeptide Y receptors, each encoded by a different gene, have been identified in humans, all of which may represent therapeutic targets for obesity and other disorders: Y1, Y2, Y4 and Y5. Antagonists BIBP-3226 Lu AA-33810 BIIE-0246 UR-AK49 References External links G protein-coupled receptors
Neuropeptide Y receptor
Chemistry
224
18,198,410
https://en.wikipedia.org/wiki/Modern%20Greek%20architecture
After the Fall of Constantinople to the Ottomans and the subsequent waves of Greek migration to the Diaspora, Greek architecture was concentrated mainly on the Greek Orthodox churches of the Diaspora. These churches, like other intellectual centres built by Greeks (foundations, schools, etc.), were also used as meeting-places. The architectural style of these buildings was heavily influenced by western European architecture. After the Greek War of Independence and the creation of the modern Greek state, modern Greek architecture tried to combine traditional Greek architecture and Greek elements and motifs with western European movements and styles. The 19th-century architecture of Athens and other cities of the Greek Kingdom is mostly influenced by Neoclassical architecture, with architects such as Theophil Hansen, Ernst Ziller, Panagis Kalkos, Lysandros Kaftanzoglou and Stamatios Kleanthis. History The architecture of the modern Greek cities, especially the old centres ("old towns"), is mostly influenced either by Ottoman or by Venetian architecture, the two powers that dominated the Greek space from the early modern period. After Greek independence, modern Greek architects tried to combine traditional Greek and Byzantine elements and motifs with western European movements and styles. Patras was the first city of the modern Greek state to develop a city plan. In January 1829, Stamatis Voulgaris, a Greek engineer of the French army, presented the plan of the new city to the Governor Kapodistrias, who approved it. Voulgaris applied the orthogonal rule to the urban complex of Patras; however, his initial plan was modified in 1830. Two special genres are Cycladic architecture, featuring whitewashed houses, in the Cyclades, and Epirotic architecture, featuring stone houses, in the region of Epirus. After the establishment of the Greek Kingdom, the architecture of Athens and other cities was mostly influenced by Neoclassical architecture. For Athens, the first King of Greece, Otto of Greece, commissioned the architects Stamatios Kleanthis and Eduard Schaubert to design a modern city plan fit for the capital of a state. Neoclassical examples In 1917, most of Thessaloniki's old city centre was destroyed by the Great Thessaloniki Fire of 1917. Following the fire, the government prohibited quick rebuilding so that it could implement a new redesign of the city according to the European-style urban plan prepared by a commission of architects headed by the French architect Ernest Hébrard. The Athens Charter, a manifesto of the modernist movement later published by Le Corbusier, was signed in 1933. Architects of this movement included the Bauhaus architect Ioannis Despotopoulos, Dimitris Pikionis, Patroklos Karantinos and Takis Zenetos. Antiparochi laws In 1929, two important laws concerning apartment buildings took effect. The law on "horizontal property" made it possible for many different owners to own one apartment building, each owning one or more apartment units. Theoretically, each apartment corresponds to a percentage of the original plot. The most important effect of this law was the practice of "αντιπαροχή" (antiparochí, literally "a supply in exchange").
With antiparochí, the owner of a plot who cannot afford to build an apartment building himself makes a contract with a construction company, under which the company builds the apartment building but keeps ownership of as many apartments as the contract states. Although during the inter-war period the practice of antiparochí was limited, as the construction of most apartment buildings was financed solely by the original owners of the plot, antiparochí became the most common method for financing the construction of condominiums (polykatoikíes) from the 1950s onwards. However, this practice had the negative effect of destroying many older (mostly 19th-century) buildings and mansions in the major Greek cities, which were replaced by ordinary apartment buildings. Even today (2019), the owners of old buildings or mansions listed as architecturally preserved prefer to let them collapse, to avoid the preservation cost and exploit the plot. After World War II and the Greek Civil War, this practice (the massive construction of condominiums in the major Greek city centres) was a major contributory factor in the Greek economy's post-war recovery. The first skyscrapers were also constructed during the 1960s and 1970s, such as the OTE Tower and the Athens Tower Complex. During the 1960s and 1970s, Xenia was a nationwide hotel construction program initiated by the Hellenic Tourism Organisation (Ελληνικός Οργανισμός Τουρισμού, EOT) to improve the country's tourism infrastructure. It constitutes one of the largest infrastructure projects in modern Greek history. The first manager of the project was the architect Charalambos Sfaellos (from 1950 to 1958), and from 1957 the buildings were designed by a team under Aris Konstantinidis. Famous foreign architects who have designed buildings in Greece during the 20th and 21st centuries include Walter Gropius, Eero Saarinen and Mario Botta. Several new buildings were constructed by Santiago Calatrava for the 2004 Athens Olympics, while Bernard Tschumi designed the New Acropolis Museum. More recently, in 2012 Renzo Piano designed the Stavros Niarchos Foundation Cultural Center (completed in 2016). Gallery See also Ancient Greek architecture Greek Revival architecture Modern architecture in Athens Mycenaean Revival architecture References External links Architectural history Architecture in Greece
Modern Greek architecture
Engineering
1,176
24,519,420
https://en.wikipedia.org/wiki/Graded%20manifold
In algebraic geometry, graded manifolds are extensions of the concept of manifolds based on ideas coming from supersymmetry and supercommutative algebra. Both graded manifolds and supermanifolds are phrased in terms of sheaves of graded commutative algebras. However, graded manifolds are characterized by sheaves on smooth manifolds, while supermanifolds are constructed by gluing of sheaves of supervector spaces. Graded manifolds A graded manifold of dimension $(n,m)$ is defined as a locally ringed space $(Z,A)$, where $Z$ is an $n$-dimensional smooth manifold and $A$ is a $C^\infty_Z$-sheaf of Grassmann algebras $\Lambda$ of rank $m$, where $C^\infty_Z$ is the sheaf of smooth real functions on $Z$. The sheaf $A$ is called the structure sheaf of the graded manifold $(Z,A)$, and the manifold $Z$ is said to be the body of $(Z,A)$. Sections of the sheaf $A$ are called graded functions on the graded manifold $(Z,A)$. They make up a graded commutative $C^\infty(Z)$-ring $A(Z)$ called the structure ring of $(Z,A)$. The well-known Batchelor theorem and Serre–Swan theorem characterize graded manifolds as follows. Serre–Swan theorem for graded manifolds Let $(Z,A)$ be a graded manifold. There exists a vector bundle $E\to Z$ with an $m$-dimensional typical fiber $V$ such that the structure sheaf of $(Z,A)$ is isomorphic to the sheaf of sections of the exterior product $\wedge E^*$, whose typical fibre is the Grassmann algebra $\wedge V^*$. Let $Z$ be a smooth manifold. A graded commutative $C^\infty(Z)$-algebra is isomorphic to the structure ring of a graded manifold with body $Z$ if and only if it is the exterior algebra of some projective $C^\infty(Z)$-module of finite rank. Graded functions Note that the above-mentioned Batchelor isomorphism fails to be canonical, but it often is fixed from the beginning. In this case, every trivialization chart $(U;\,z^A,\,y^a)$ of the vector bundle $E\to Z$ yields a splitting domain of the graded manifold $(Z,A)$, where $\{y^a\}$ is the fiber basis for $E^*$. Graded functions on such a chart are $\Lambda$-valued functions $f=\sum_{k=0}^{m}\frac{1}{k!}\,f_{a_1\ldots a_k}(z)\,y^{a_1}\cdots y^{a_k}$, where the $f_{a_1\ldots a_k}(z)$ are smooth real functions on $U$ and the $y^a$ are odd generating elements of the Grassmann algebra $\Lambda$ (a minimal worked example follows at the end of this entry). Graded vector fields Given a graded manifold $(Z,A)$, graded derivations of the structure ring of graded functions are called graded vector fields on $(Z,A)$. They constitute a real Lie superalgebra with respect to the superbracket $[u,u']=u\circ u'-(-1)^{[u][u']}\,u'\circ u$, where $[u]$ denotes the Grassmann parity of $u$. Graded vector fields locally read $u=u^A\partial_A+u^a\partial_a$, and they act on graded functions as graded derivations of the expansion above. Graded exterior forms The $A(Z)$-dual of the module of graded vector fields is called the module of graded exterior one-forms. Graded exterior one-forms locally read $\phi=\phi_A\,dz^A+\phi_a\,dy^a$, so that the duality (interior) product between a graded vector field $u$ and a one-form $\phi$ takes the form $u\rfloor\phi=u^A\phi_A+u^a\phi_a$ (up to sign conventions depending on parity). Provided with the graded exterior product $\wedge$, graded one-forms generate the graded exterior algebra of graded exterior forms on a graded manifold. They obey the relation $\phi\wedge\sigma=(-1)^{|\phi||\sigma|+[\phi][\sigma]}\,\sigma\wedge\phi$, where $|\phi|$ denotes the form degree of $\phi$ and $[\phi]$ its Grassmann parity. The graded exterior algebra is a graded differential algebra with respect to the graded exterior differential $d\phi=dz^A\wedge\partial_A\phi+dy^a\wedge\partial_a\phi$, where the graded derivations $\partial_A$, $\partial_a$ are graded commutative with the graded forms $dz^A$ and $dy^a$. There are the familiar relations $d(\phi\wedge\sigma)=d\phi\wedge\sigma+(-1)^{|\phi|}\,\phi\wedge d\sigma$ and $d\circ d=0$. Graded differential geometry In the category of graded manifolds, one considers graded Lie groups, graded bundles and graded principal bundles. One also introduces the notion of jets of graded manifolds, but they differ from jets of graded bundles. Graded differential calculus The differential calculus on graded manifolds is formulated as the differential calculus over graded commutative algebras, similarly to the differential calculus over commutative algebras. Physical outcome Due to the above-mentioned Serre–Swan theorem, odd classical fields on a smooth manifold are described in terms of graded manifolds.
Extended to graded manifolds, the variational bicomplex provides the strict mathematical formulation of Lagrangian classical field theory and Lagrangian BRST theory. See also Connection (algebraic framework) Graded (mathematics) Serre–Swan theorem Supergeometry Supermanifold Supersymmetry References C. Bartocci, U. Bruzzo, D. Hernandez Ruiperez, The Geometry of Supermanifolds (Kluwer, 1991). T. Stavracou, Theory of connections on graded principal bundles, Rev. Math. Phys. 10 (1998) 47. B. Kostant, Graded manifolds, graded Lie theory, and prequantization, in Differential Geometric Methods in Mathematical Physics, Lecture Notes in Mathematics 570 (Springer, 1977) p. 177. A. Almorox, Supergauge theories in graded manifolds, in Differential Geometric Methods in Mathematical Physics, Lecture Notes in Mathematics 1251 (Springer, 1987) p. 114. D. Hernandez Ruiperez, J. Munoz Masque, Global variational calculus on graded manifolds, J. Math. Pures Appl. 63 (1984) 283. G. Giachetta, L. Mangiarotti, G. Sardanashvily, Advanced Classical Field Theory (World Scientific, 2009). External links G. Sardanashvily, Lectures on supergeometry. Supersymmetry Generalized manifolds
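A minimal worked example of the notation above (an added illustration, not from the article): take the body $Z=\mathbb{R}$ with even coordinate $z$ and a single odd generator $y$, i.e. a graded manifold of dimension $(1,1)$. Every graded function then truncates as
$$f = f_0(z) + f_1(z)\,y, \qquad y\,y = 0,$$
so products multiply as $fg = f_0 g_0 + (f_0 g_1 + f_1 g_0)\,y$, and setting $y=0$ recovers the underlying smooth function $f_0$ on the body $Z$.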
Graded manifold
Physics
1,000
1,967,067
https://en.wikipedia.org/wiki/5-Nitro-2-propoxyaniline
5-Nitro-2-propoxyaniline, also known as P-4000 and Ultrasüss, is a sweet-tasting compound about 4,000 times as sweet as sucrose (hence its alternate name, P-4000). It is an orange solid that is only slightly soluble in water. It is stable in boiling water and dilute acids. 5-Nitro-2-propoxyaniline was once used as an artificial sweetener but has been banned in the United States because of its possible toxicity. In the US, food containing any added or detectable level of 5-nitro-2-propoxyaniline is deemed to be adulterated in violation of the act, based upon an order published in the Federal Register of January 19, 1950 (15 FR 321). References External links Aromatic amines Food additives Sugar substitutes Nitrobenzene derivatives Ethers
5-Nitro-2-propoxyaniline
Chemistry
182
51,646,890
https://en.wikipedia.org/wiki/Haplarithm
Parental (paternal and maternal) haplarithms are the outputs of the haplarithmisis process. For instance, a paternal haplarithm is a chromosome-specific profile that reveals the paternal haplotype of that chromosome (including homologous recombination between the two paternal homologous chromosomes) and the copy number of those haplotypes. Importantly, haplarithm signatures allow a genomic aberration to be traced back to meiosis and/or mitosis. References Genomics Human genetics Molecular biology techniques
Haplarithm
Chemistry,Biology
109
26,205,474
https://en.wikipedia.org/wiki/Heat%20trap
Heat traps are valves or loops of pipe on the cold water inlet and hot water outlet of water heaters. The heat traps allow cold water to flow into the water heater tank, but prevent unwanted natural convection from carrying heated water out of the tank. Newer water heaters have built-in heat traps. About Much water-heating equipment has integral heat traps installed at the factory. For water-heating equipment that does not already have factory-installed heat traps, they must be purchased and then installed in the inlet and outlet connections. Heat traps are very simple and inexpensive. They are an effective way to prevent the cooling of hot water in water heaters that occurs when hot water thermosyphons into higher portions of the piping system. Thermosyphoning is based on natural convection: hot water rises and is then displaced by cold water beneath it. The heat trap stops this process, thus keeping the hot water inside the insulated storage tank. Literature Have we forgotten to make heat traps?, Esbe AB, May 2012 References Plumbing
Heat trap
Engineering
218
30,823,935
https://en.wikipedia.org/wiki/System%20of%20parameters
In mathematics, a system of parameters for a local Noetherian ring of Krull dimension d with maximal ideal m is a set of elements x1, ..., xd that satisfies any of the following equivalent conditions: m is a minimal prime over (x1, ..., xd). The radical of (x1, ..., xd) is m. Some power of m is contained in (x1, ..., xd). (x1, ..., xd) is m-primary. Every local Noetherian ring admits a system of parameters. It is not possible for fewer than d elements to generate an ideal whose radical is m, because then the dimension of R would be less than d. If M is a k-dimensional module over a local ring, then x1, ..., xk is a system of parameters for M if the length of M/(x1, ..., xk)M is finite. References Commutative algebra Ideals (ring theory)
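A standard illustration (added here, not part of the original text): in the formal power series ring $R=k[[x_1,\dots,x_d]]$, which has Krull dimension $d$ and maximal ideal $\mathfrak m=(x_1,\dots,x_d)$, the elements $x_1,\dots,x_d$ themselves form a system of parameters, since $\mathfrak m$ is trivially minimal over $(x_1,\dots,x_d)$. A less trivial example: $R=k[[x,y]]/(xy)$ has Krull dimension 1, and the single element $x+y$ is a system of parameters, since $x^2=x(x+y)$ and $y^2=y(x+y)$ in $R$, so $\mathfrak m^2\subseteq(x+y)$ and the radical of $(x+y)$ is $\mathfrak m$.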
System of parameters
Mathematics
211
6,840,173
https://en.wikipedia.org/wiki/Steven%20A.%20Leadon
Steven A. (Tony) Leadon is a former professor of radiation oncology at the University of North Carolina. In 2003, a university investigation found that Leadon had fabricated and falsified data in his research on DNA repair. In 2006, the United States Office of Research Integrity came to the same conclusion, saying that "Leadon engaged in scientific misconduct by falsifying DNA samples and constructing falsified figures for experiments done in his laboratory to support claimed findings of defects in a DNA repair process that involved rapid repair of DNA damage in the transcribed strand of active genes, included in four grant applications and in eight publications and one published manuscript". In the wake of the investigations, papers have been retracted from several journals including Science and Mutation Research, while further articles were partially retracted from journals including Proceedings of the National Academy of Sciences and Molecular and Cellular Biology. See also List of scientific misconduct incidents External links Living people American cancer researchers DNA repair People involved in scientific misconduct incidents Place of birth missing (living people) University of North Carolina at Chapel Hill faculty Year of birth missing (living people)
Steven A. Leadon
Biology
220
11,421,806
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Z196/R39/R59%20family
In molecular biology, Small nucleolar RNA Z196/R39/R59 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. snoRNA Z196/R39/R59 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. Plant snoRNA Z196 was identified in a screen of Arabidopsis thaliana. References External links Small nuclear RNA
Small nucleolar RNA Z196/R39/R59 family
Chemistry
210
101,352
https://en.wikipedia.org/wiki/Amiga%20Advanced%20Graphics%20Architecture
Amiga Advanced Graphics Architecture (AGA) is the third-generation Amiga graphics chipset, first used in the Amiga 4000 in 1992. Before release, AGA was codenamed Pandora by Commodore International. AGA was originally called AA, for Advanced Architecture, in the United States. The name was later changed to AGA for the European market to reflect that it largely improved the graphical subsystem, and to avoid trademark issues. AGA is able to display graphics modes with a depth of up to 8 bits per pixel. This allows for 256 colors in indexed display modes and 262,144 colors (18-bit) in Hold-And-Modify (HAM-8) modes. The palette for the AGA chipset has 256 entries chosen from 16,777,216 colors (24-bit), whereas the previous chipsets, the Original Chip Set (OCS) and Enhanced Chip Set (ECS), only allow 32 colors out of 4096, or 64 colors in Amiga Extra Half-Brite (EHB) mode (see the arithmetic sketch after this entry). Other features added to AGA over ECS are super-hi-res smooth scrolling and 32-bit fast page memory fetches to supply the graphics data bandwidth for 8-bitplane graphics modes and wider sprites. AGA is an incremental upgrade, rather than the dramatic upgrade of the other chipset that Commodore had begun in 1988, the Amiga Advanced Architecture chipset (AAA), and it lacks many features that would have made it competitive with other graphics chipsets of its time. Apart from the graphics data fetches, AGA still operates on 16-bit data only, meaning that significant bandwidth is wasted during register accesses and copper and blitter operations. Also, the lack of a chunky graphics mode is a speed impediment for graphics operations not tailored to planar modes, resulting in ghost artifacts during the common productivity task of scrolling. In practice, the AGA HAM mode is mainly useful in paint programs, picture viewers, and for video playback. Workbench in 256 colors is much slower than in ECS operation modes for normal application use; a workaround is to use multiple screens with different color depths. AGA also lacks flicker-free higher-resolution modes: only its lower-resolution modes operate flicker-free, while its higher-resolution modes can only operate in flickering interlace. In contrast, higher-end PC systems of this era could operate at higher resolutions with a full, flicker-free 256-color display. AGA's highest resolutions are interlaced modes, extended further when overscan is used. These missed opportunities in the AGA upgrade contributed to the Amiga ultimately losing technical leadership in the area of multimedia. After the long-delayed AAA was finally suspended, AGA was to be succeeded by the Hombre chipset, but this was ultimately cancelled due to Commodore's bankruptcy. AGA is present in the CD32, Amiga 1200, and Amiga 4000. Technical details In order to increase memory bandwidth, the Chip RAM data bus was extended to 32-bit width as in the A3000 (unlike AGA, the A3000's Chip RAM is 32-bit for CPU access only), and the Alice chip (replacing the OCS/ECS Agnus) was improved to support full-width access for bitplane DMA. Bandwidth was doubled again (to 4x) by using Fast Page Mode RAM. Lisa (replacing the former Denise) adds support for 8-bit bitplane data fetches, 256 24-bit palette registers, and 32-bit data transfer for bitplane graphics and sprites. The rest of the chipset remains unchanged, as do the Blitter and Copper coprocessors in Alice, which still work on 16-bit data.
See also Amiga Advanced Architecture chipset (AAA chipset) Amiga Ranger Chipset Amiga Enhanced Chip Set (ECS) Commodore AA+ Chipset (AA+) Amiga Hombre chipset List of home computers by video hardware Original Amiga chipset (OCS) References External links mways.co.uk - How to Code the Amiga - AGA Chipset The AGA Chip Set Functional Specification Amiga chipsets Graphics chips AmigaOS
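A short arithmetic sketch of the color counts quoted above (an illustration added by the editor, in Python; the function name is ours, not from any Amiga source):

    # Colors available in an indexed (palette-based) mode = 2 ** bitplanes.
    def indexed_colors(bitplanes: int) -> int:
        return 2 ** bitplanes

    print(indexed_colors(8))   # AGA: 256 on-screen colors
    print(indexed_colors(5))   # OCS/ECS: 32 colors (64 in Extra Half-Brite)
    print(2 ** 24)             # 16,777,216: the 24-bit space each AGA palette entry selects from
    print(2 ** 18)             # 262,144: the 18-bit color count approximated by HAM-8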
Amiga Advanced Graphics Architecture
Technology
828
1,831,103
https://en.wikipedia.org/wiki/Foamcore
Foamcore, foam board, or paper-faced foam board is a lightweight and easily cut material used for mounting photographic prints, as backing for picture framing, for making scale models, and in painting. It consists of a board of polystyrene foam clad with an outer facing of paper on either side, typically white clay-coated paper or brown kraft paper. History The original white foamcore board was made in various thicknesses for the graphic arts industry by Monsanto Company under the trade name "Fome-Cor®" starting in 1961. Construction, variants and composition The surface of the regular board, like many other types of paper, is slightly acidic. However, for modern archival picture framing and art mounting purposes it can be produced in a neutral, acid-free version with a buffered surface paper, in a wide range of sizes and thicknesses. Foam-cored materials are also now available with a cladding of solid (non-foamed) polystyrene and other rigid plastic sheeting, some with a textured finish. Foamcore does not adhere well to some glues, such as superglue, and certain types of paint: the foam tends to melt away and dissolve. Some glues work well in casual settings; however, the water in a water-based glue can warp the fibers in the outer layers. Best results are typically obtained from higher-end spray adhesives. A hot glue gun can be used as a substitute, although the high viscosity of hot glues can affect finished projects in the form of board warping, bubbles, or other unsightly blemishes. Self-adhesive foam boards, intended for art and document mounting, are also available, though these can be very tricky to use properly because the glue sets very fast; it is considered cheaper to buy plain foam board and use re-positionable spray-mount adhesive. Specialty constructions have been developed for engineering uses. Uses Foamcore is commonly used to produce architectural models, to prototype small objects and to produce patterns for casting. Scenery for scale model displays, dioramas, and computer games is often produced by hobbyists from foamcore. Foamcore is also often used by photographers as a reflector to bounce light, in the design industry to mount presentations of new products, and in picture framing as a backing material; the latter use includes some archival picture framing methods, which utilize the acid-free versions of the material. Another use is by aero-modellers for building radio-controlled aircraft. Researchers at the University of Manchester created their Giant Foamboard Quadcopter (GFQ), claimed to be the largest possible Civil Aviation Authority licensed drone, with an all-up weight just below the maximum of 25 kg (c. 55 lb). See also Corrugated fiberboard (Cardboard) Closed-cell PVC foamboard Arts and crafts Mat (picture framing) References Visual arts materials Composite materials
Foamcore
Physics
595
32,122,188
https://en.wikipedia.org/wiki/Fluorescein-labeled%20proaerolysin
Fluorescein-labeled proaerolysin (FLAER) is used in a flow cytometric assay to diagnose paroxysmal nocturnal hemoglobinuria (PNH). The assay takes advantage of the action of proaerolysin, the prototoxin of aerolysin, a virulence factor of the bacterium Aeromonas hydrophila. Proaerolysin binds to the glycosylphosphatidylinositol (GPI) anchor in the plasma membrane of cells. Cells affected by PNH lack GPI anchors, and thus are not bound by proaerolysin. Of note, the FLAER-based assay is not suitable for evaluation of erythrocytes and platelets in PNH, but flow cytometry assays based on CD55, CD59 and other markers are suitable. References Blood tests
Fluorescein-labeled proaerolysin
Chemistry
189
11,471,865
https://en.wikipedia.org/wiki/Venturia%20pyrina
Venturia pyrina is a species of fungus in the family Venturiaceae. A plant pathogen, it causes scab or black spot of pear. It has a widespread distribution in temperate and subtropical regions wherever pears are grown. References Fungi described in 1896 Fungal tree pathogens and diseases Pear tree diseases Venturiaceae Fungus species
Venturia pyrina
Biology
68
11,996,219
https://en.wikipedia.org/wiki/Gooseneck%20%28piping%29
A gooseneck (or goose neck) is a 180° pipe fitting at the top of a vertical pipe that prevents entry of water. Common implementations of goosenecks are ventilator piping or ducting for bathroom and kitchen exhaust fans, ship holds, landfill methane vent pipes, or any other piping exposed to the weather where water ingress would be undesirable. It is so named because of the pipe fitting's similarity to the bend in a goose's neck. Gooseneck may also refer to a style of kitchen or bathroom faucet with a long vertical pipe terminating in a 180° bend. To avoid hydrocarbon accumulation, a thermosiphon should be installed at the low point of the gooseneck. Lead gooseneck (pigtail) Leaded goosenecks are short sections of lead pipe (1' to 2' long), used from the early 1900s up to World War Two, for supplying water to a customer. These lead tubes could be easily bent, and allowed for a flexible connection between rigid service piping. The bent segments of pipe often took the shape of a goose's neck, and are referred to as "lead goosenecks." Lead is no longer permitted in new water systems or new building construction. Goosenecks (also referred to as pigtails) are in-line components of a water service (i.e. piping, valves, fittings, tubing, and accessories) running from the distribution system water main to a meter or building inlet. The valve used to connect a small-diameter service line to a water main is called a corporation stop (also called a tap, or corp stop). One gooseneck joins the corporation stop to the water service pipework. A second gooseneck links the supply pipeline to a water meter located outside the building. See also Swan neck duct Swan neck flask Trap (plumbing) References Piping
Gooseneck (piping)
Chemistry,Engineering
390
24,896,405
https://en.wikipedia.org/wiki/ILCD
iLCD (Lighting Cell Display) is a device developed by a research team comprising an MIT-educated bioengineer, undergraduate students of the Universidad Politécnica de Valencia (UPV) and the Universitat de València, and several members of the faculty and research staff, including Manuel Porcar (Universitat de València), Pedro De Cordoba (UPV) and Emilio Navarro (University of Malaga). It is based on yeast cells expressing the aequorin protein, which is sensitive to changes in intracellular calcium. Upon electrical stimulation, a transient calcium wave emerges inside the yeast cells and translates into a measurable light signal. Assembling multiple electrodes over a lawn of yeast cells yields a rudimentary display in which each electrode addresses a light-emitting spot. Thanks to electronic control and a sub-second timescale, it is one of the first examples of bioelectronic devices capable of bi-directional communication between a computer and a living system. It is also one of the first examples of the design of simple synthetic biology circuits operating on a timescale orders of magnitude faster than those based on gene expression. A fast response to a stimulus is essential in a variety of applications such as biosensing, medical technology or, as stated before, bioelectronics. The project was awarded third place in the 2009 iGEM competition. References Vilanova C, Hueso A, Palanca C, Marco G, Pitarch M, Otero E, Crespo J, Szablowski J, Rivera S, Domínguez-Escribà L, Navarro E, Montagud A, de Córdoba PF, González A, Ariño J, Moya A, Urchueguía J, Porcar M. Aequorin-expressing yeast emits light under electric control. J Biotechnol. 2011 Mar 20;152(3):93-5. External links Official website Biotechnology
ILCD
Biology
398
33,747,254
https://en.wikipedia.org/wiki/Fritz%20Gassmann
Fritz Gassmann (1899–1990) was a Swiss mathematician and geophysicist. Life His Ph.D. advisors at ETH Zurich were George Pólya and Hermann Weyl. He was a professor of geophysics at ETH Zurich. Legacy Gassmann is the eponym of the Gassmann triple and of Gassmann's equation. Selected publications Gassmann, Fritz (1951). Über die Elastizität poröser Medien [On the elasticity of porous media]. Viertel. Naturforsch. Ges. Zürich, 96, 1–23 (an English translation is available as a PDF). References Gerald L. Alexanderson, "The Random Walks of George Pólya". Mathematical Association of America, 1999. 303 pp. External links ETH Zurich webpage 1951 photo 1899 births 1990 deaths 20th-century Swiss mathematicians Number theorists Swiss geophysicists 20th-century Swiss physicists 20th-century Swiss geologists Academic staff of ETH Zurich ETH Zurich alumni
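Gassmann's equation, named above, is the standard fluid-substitution relation in rock physics; one common statement of it (given here for illustration, following the usual textbook form of the 1951 result) is
$$\frac{K_{\mathrm{sat}}}{K_{\mathrm{min}}-K_{\mathrm{sat}}}=\frac{K_{\mathrm{dry}}}{K_{\mathrm{min}}-K_{\mathrm{dry}}}+\frac{K_{\mathrm{fl}}}{\phi\,(K_{\mathrm{min}}-K_{\mathrm{fl}})},$$
where $K_{\mathrm{sat}}$, $K_{\mathrm{dry}}$, $K_{\mathrm{min}}$ and $K_{\mathrm{fl}}$ are the bulk moduli of the fluid-saturated rock, the dry frame, the mineral grains and the pore fluid, and $\phi$ is the porosity; the shear modulus is predicted to be unaffected by the pore fluid.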
Fritz Gassmann
Mathematics
204
3,890,383
https://en.wikipedia.org/wiki/Caries%20vaccine
A caries vaccine is a vaccine to prevent and protect against tooth decay. Streptococcus mutans (S. mutans) has been identified as the major etiological agent of human dental caries. The development of a vaccine for tooth decay has been under investigation since the 1970s. In 1972, a caries vaccine was said to be in animal testing in England, and it was expected to begin human testing soon afterwards. However, intrinsic difficulties in developing it, coupled with a lack of strong economic interest, are the reasons why no such vaccine is commercially available today. Several types of vaccines are being developed at research centres, with some kinds of caries vaccines being considered to diminish or prevent dental caries' impact on young people. Attempts using antibodies Early attempts followed a traditional approach to vaccination, in which normal S. mutans was introduced to promote a reaction from the immune system, stimulating antibody production. Planet Biotechnology developed a monoclonal antibody against S. mutans, branded CaroRx, produced with transgenic tobacco plants. It is a therapeutic vaccine, applied once every several months. Phase II clinical trials were discontinued in 2016. The International Association for Dental Research and the American Association for Dental Research announced a study performed by the Chinese Academy of Sciences which looked at using an inhaled vaccine that uses a protein filament as a delivery vehicle. Trials performed in rats showed an increase in antibody response along with a decrease in the amount of Streptococcus mutans adhering to teeth, leading to significantly fewer cavities observed among the test population. Attempts using replacement therapy On a different line of research, Jeffrey Hillman of the University of Florida developed a genetically modified strain of Streptococcus mutans called BCS3-L1 that is incapable of producing lactic acid (the acid that dissolves tooth enamel) and aggressively replaces the native flora. In laboratory tests, rats who were given BCS3-L1 were conferred a lifetime of protection against S. mutans. BCS3-L1 colonizes the mouth and produces a small amount of a lantibiotic, called MU1140, which allows it to out-compete S. mutans. Hillman suggested that treatment with BCS3-L1 in humans could also provide a lifetime of protection or, at worst, require occasional re-applications. He stated that the treatment would be available in dentists' offices and "will probably cost less than $100." The product was being developed by Oragenics, but was shelved in 2014 amid regulatory concerns and patent issues. In 2016, Oragenics received a 17-year patent for the product. In 2023, the startup Lumina Probiotic began developing a BCS3-L1 application in Próspera, Honduras. On rare occasions the native S. mutans strain escapes into the blood, potentially causing dangerous heart infections. It is unclear how likely BCS3-L1 is to do the same. Another approach is being pursued by BASF, focused on replacing the native lactobacillus flora with a variety dubbed L. anti-caries, which prevents S. mutans from binding to enamel. However, it is not a long-term vaccination, in that no attempt is being made to establish a self-sustaining population of L. anti-caries; the intent is that the L. anti-caries population would be frequently replenished through use of a chewing gum containing the organism.
The University of Leeds has also begun researching a recently discovered peptide known as P11-4. When applied to a cavity and coming into contact with saliva, this peptide assembles itself into a fibrous matrix or scaffold, attracting calcium and thereby allowing the tooth to regenerate. The Swiss-based company Credentis licensed the peptide and launched a product called Curodont Repair in 2013. Recent studies show a positive clinical effect. DNA vaccines DNA vaccine approaches for dental cavities have had a history of success in animal models. Dental cavity vaccines directed at key components of S. mutans colonization, enhanced by safe and effective adjuvants and optimal delivery vehicles, are likely to be forthcoming. Some believe that the rational target for developing an anti-caries vaccine is a protein antigen that has both adhesion-related functional regions and important immunogenic regions. Bacteriophage treatment The use of Enterococcus faecalis bacteriophages as a form of treatment for caries has been considered, as they are capable of maintaining persistent stability in human saliva. References Vaccines Dentistry Tooth decay
Caries vaccine
Biology
998
3,067,278
https://en.wikipedia.org/wiki/Heisenberg%27s%20microscope
Heisenberg's microscope is a thought experiment proposed by Werner Heisenberg that has served as the nucleus of some commonly held ideas about quantum mechanics. In particular, it provides an argument for the uncertainty principle on the basis of the principles of classical optics. The concept was criticized by Heisenberg's mentor Niels Bohr, and theoretical and experimental developments have suggested that Heisenberg's intuitive explanation of his mathematical result might be misleading. While the act of measurement does lead to uncertainty, the loss of precision is less than that predicted by Heisenberg's argument when measured at the level of an individual state. The formal mathematical result remains valid, however, and the original intuitive argument has also been vindicated mathematically when the notion of disturbance is expanded to be independent of any specific state. Heisenberg's argument Heisenberg supposes that an electron is like a classical particle, moving in the $x$ direction along a line below the microscope. Let the cone of light rays leaving the microscope lens and focusing on the electron make an angle $\varepsilon$ with the electron. Let $\lambda$ be the wavelength of the light rays. Then, according to the laws of classical optics, the microscope can only resolve the position of the electron up to an accuracy of $\Delta x \approx \lambda/\sin\varepsilon$. An observer perceives an image of the particle because the light rays strike the particle and bounce back through the microscope to the observer's eye. We know from experimental evidence that when a photon strikes an electron, the latter has a Compton recoil with momentum proportional to $h/\lambda$, where $h$ is the Planck constant. However, the extent of "recoil cannot be exactly known, since the direction of the scattered photon is undetermined within the bundle of rays entering the microscope." In particular, the electron's momentum in the $x$ direction is only determined up to $\Delta p_x \approx (h/\lambda)\sin\varepsilon$. Combining the relations for $\Delta x$ and $\Delta p_x$, we thus have $\Delta x\,\Delta p_x \approx (\lambda/\sin\varepsilon)(h/\lambda)\sin\varepsilon = h$, which is an approximate expression of Heisenberg's uncertainty principle. Analysis of argument Although the thought experiment was formulated as an introduction to Heisenberg's uncertainty principle, one of the pillars of modern physics, it attacks the very premises under which it was constructed, thereby contributing to the development of an area of physics—namely, quantum mechanics—that redefined the terms under which the original thought experiment was conceived. Some interpretations of quantum mechanics question whether an electron actually has a determinate position before it is disturbed by the measurement used to establish said determinate position. Under the Copenhagen interpretation, an electron has some probability of showing up at any point in the universe, though the probability that it will be far from where one expects becomes very low at great distances from the neighborhood in which it is originally found. In other words, the "position" of an electron can only be stated in terms of a probability distribution, as can predictions of where it may move. See also Atom localization Quantum mechanics Basics of quantum mechanics Interpretation of quantum mechanics Philosophical interpretation of classical physics Schrödinger's cat Uncertainty principle Quantum field theory Electromagnetic radiation References Sources External links History of Heisenberg's Microscope Lectures on Heisenberg's Microscope Thought experiments in quantum mechanics Werner Heisenberg
Heisenberg's microscope
Physics
626
5,933,061
https://en.wikipedia.org/wiki/Transfer%20%28public%20transit%29
A transfer allows the rider of a public transportation vehicle who pays for a single-trip fare to continue the trip on another bus or train. Depending on the network, there may or may not be an additional fee for the transfer. Historically, transfers may have been stamped or hole-punched with the time, date, and direction of travel to prevent their use for a return trip. More recently, magnetic or barcoded tickets may be recorded (as on international flights), or ticket barriers may only charge on entry and exit to a larger system (as on modern underground rail networks). Some public transportation systems allow a rider to switch from one vehicle to another without paying an additional fare. A free transfer can be implemented by having both vehicles stop within the same fare control area, by issuing the rider a special ticket (also called a "free transfer"), or by using an electronic smartcard system programmed to allow such transfers. Fare cards vastly simplify transfers, especially between different operators, since the transfer and payment (if any) are handled automatically by the card. Since transfers between services can significantly expand the effective range and coverage of each service, fare cards are often implemented specifically to improve a transit network's quality. References Public transport fare collection
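As a minimal sketch of how a smartcard system can implement free transfers, the following hypothetical Python fare engine charges a flat fare on the first tap and grants a free transfer for taps within a fixed window. The fare amount, the two-hour window, and the one-paid-leg-per-window rule are illustrative assumptions, not the policy of any particular transit agency.

from dataclasses import dataclass

FARE_CENTS = 250          # hypothetical flat fare, in cents
TRANSFER_WINDOW_S = 7200  # hypothetical 2-hour free-transfer window, in seconds

@dataclass
class Card:
    balance_cents: int
    last_paid_tap: float | None = None  # time of the last fare-charging tap

def tap(card: Card, now: float) -> int:
    """Charge a fare, or grant a free transfer inside the window.
    Returns the amount charged in cents."""
    if card.last_paid_tap is not None and now - card.last_paid_tap <= TRANSFER_WINDOW_S:
        return 0  # free transfer: this trip was already paid for
    if card.balance_cents < FARE_CENTS:
        raise ValueError("insufficient balance")
    card.balance_cents -= FARE_CENTS
    card.last_paid_tap = now
    return FARE_CENTS

card = Card(balance_cents=1000)
print(tap(card, now=0.0))     # 250: first boarding charges the fare
print(tap(card, now=1800.0))  # 0: bus-to-train transfer 30 minutes later
print(tap(card, now=9000.0))  # 250: outside the window, a new fare is charged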
Transfer (public transit)
Physics
253
75,077,678
https://en.wikipedia.org/wiki/Overhaul%20hook%20ball
An overhaul hook ball, also known as an overhaul ball or headache ball, is a heavy weight that is attached to the end of a crane's cable, above the lifting hook. It is used to keep the cable under sufficient tension even when no load is attached. Although commonly spherical, as the name suggests, overhaul balls may also be ellipsoidal or cylindrical. Overhaul balls should be distinguished from wrecking balls, which, although superficially similar in appearance, serve a different purpose. References Lifting equipment
Overhaul hook ball
Physics,Technology
104
28,321,462
https://en.wikipedia.org/wiki/Gravity%20Discovery%20Centre
The Gravity Discovery Centre and Observatory is a "hands-on" science education, astronomy, Aboriginal culture and tourist centre, situated on the site of the Gravity Precinct in bushland near Gingin, north of Perth, Western Australia. It is a not-for-profit interactive science education centre, operated by The Gravity Discovery Centre Foundation Board Inc. It received government funding of $300,000 to cover the period 2021-2023. The Department of Biodiversity Conservation and Attractions manages the bushland surrounding the Discovery Centre and the observatory. In 2005, Emeritus Professor John de Laeter was awarded the Eureka Prize for "promoting [the] understanding of science" in recognition of his creation of the Gravity Discovery Centre. Exhibits The Discovery Centre Magnetic Cart Visitors can roll this cart, which has strong magnets attached to it, down a ramp. They are invited to notice how it slows down as it passes over the metal plates, which are made of copper or aluminium: both good electrical conductors. The moving magnets induce eddy currents in the metal plates – the kinetic energy of the cart is converted to electrical energy, slowing it down. OzGrav Model OzGrav is the abbreviation for the ARC Centre of Excellence for Gravitational Wave Discovery. Bernoulli Ball Bernoulli's principle explains that an increase in the speed of air produces a decrease in static pressure. The principle is named after Daniel Bernoulli, who published it in his book Hydrodynamica in 1738. In this interactive display, when the ball moves to the side it is pushed back toward the centre of the air flow. The upward flow of air provides an upward force on the ball, keeping it suspended – apparently defying gravity. Space Capsule This spinning display demonstrates gravitational forces. The Cosmology Gallery This gallery is topped with a geodesic dome. NIOBE, the first southern hemisphere gravitational wave detector The search for gravitational waves began in the 1990s, and this detector, called NIOBE, was one of five set up around the world as part of that search. At its heart is a niobium bar. This niobium bar gravitational wave detector and its associated superconducting electromechanical sensors were developed by Professor David Blair of UWA. It came into operation in 1993, after 16 years of work, 12 PhD projects and several million dollars of construction. This worldwide experiment set limits to the strength of gravitational waves and paved the way for the next generation of detectors. The detectors achieved world-record sensitivity and opened a new area of research into quantum measurement and optomechanics. Note: Gravitational waves are ripples in the curvature of spacetime caused by huge cosmic events like the Big Bang and the collision of black holes. Timeline from the Big Bang to the present The Timeline of the Universe in the Cosmology Gallery tells the story of the creation of the universe – from the Big Bang right through to the present. It shows all the different stages of development and evolution of planet Earth. There are stories, as well as real fossils to look at on the Timeline. This display asks visitors to consider questions regarding themselves and the Universe they live in. Penrose Floor Attempting to tile a plane with regular pentagons must necessarily leave gaps. Mathematician Roger Penrose found a particular tiling in which the gaps may be filled with three other shapes: a star, a boat, and a diamond.
In addition to the tiles, Penrose stated rules, usually called matching rules, that specify how tiles must be attached to one another; these rules are needed to ensure that the tilings are non-periodic. There are three distinct sets of matching rules for pentagonal tiles, shown in different colours in the illustration. This leads to a set of six tiles: a thin rhombus or "diamond", a five-pointed star, a "boat" and three pentagons. Astrophotography The Australian Shaman Exhibition – Indigenous Australians' connection to and understanding of the night sky, including many artworks depicting the stories told to explain what they observed. Coherence to Chaos Exhibition Multicultural artwork Southern Cross Cosmos Centre, home to the GDC Observatory The Zadko Telescope, a robotic optical telescope. The Solar System Walk The Solar System Walk is an educational 1 km scale model of the Solar System. The walk begins at the Sun and disappears along a track through native bush. Alongside the track, model planets and their moons are located at the correct scaled distances from the Sun. Information plaques are located at each planet. The walk finishes at Pluto, although Pluto is now defined as a dwarf planet, rather than a planet. On the 1 km scale model, Earth should be about the size of a peppercorn, Saturn only the size of a peanut or a coffee bean, Mercury the size of a cake freckle, Jupiter almost the size of a golf ball, and Pluto the size of a pin head. However, the centre staff have multiplied the size scale of the planets and moons by a factor of 200, to allow visitors to view more practically sized model planets and moons. The Solar System Walk is designed to give an understanding of true sizes and distances in the Solar System, and the vastness of the Universe. During the walk visitors might spot one of the centre's resident kangaroos. Wildflowers are abundant in this area during late winter and spring. As noted above, because Pluto is now considered a "dwarf planet", visitors will find a replica of Pluto at its final resting place: a satin-lined coffin in the main exhibition area of the GDC. Biodiversity Gallery In the Biodiversity Gallery, visitors can view displays about some of the local flora and fauna. The south-west of Australia is regarded as one of the world's "biodiversity hotspots", with many endemic species that are under threat. Local bushland surrounding the centre is inhabited by some rare and endangered species of plants and animals. The gallery aims to celebrate the rich diversity of plant and animal species in the area. Insect specimens, such as a native bee, have been embedded in resin for visitors to view under the microscope. Samples of local wildflowers are also presented for examination. Biodiversity Walks around the site (often with a guide) enable visitors to view a great diversity of plant species in a short walk. Ancient paperbark trees (Melaleuca sp.), between 800 and 1000 years old, exist in the area beside the Leaning Tower. The Gravity Discovery Centre is located on state government managed land and the surrounding bushland is in its original state, unchanged for thousands of years. The wildflowers provide a colourful display every spring. The bright orange-yellow Morrison shrubs (Verticordia nitens) begin to bloom in November, with the display lasting through to mid-January. However, many other plants and animals can be found all year round.
The Leaning Tower of Gingin The Leaning Tower of Gingin is a purpose-built inclined steel tower, designed so that visitors can recreate the experiments of Galileo Galilei. There are 222 steps to the top, from where balloons filled with water can be dropped through chutes. The tower leans at an angle of 15 degrees and is held in place by 180 tons of concrete. The drop tower is also used by the YouTube channel "How Ridiculous" for various drop tests. Reception Tripadvisor lists 121 reviews of the centre, with an average score of four stars out of a possible five. References External links Official site Science museums in Australia Museums in Western Australia Astronomy museums Inclined towers Shire of Gingin Science and technology in Western Australia Solar System models
Gravity Discovery Centre
Astronomy
1,537
19,484,497
https://en.wikipedia.org/wiki/HEC-HMS
The Hydrologic Modeling System (HEC-HMS) is designed to simulate the precipitation-runoff processes of dendritic drainage basins. It is designed to be applicable in a wide range of geographic areas for solving the widest possible range of problems. This includes large river basin water supply and flood hydrology, and small urban or natural watershed runoff. Hydrographs produced by the program are used directly or in conjunction with other software for studies of water availability, urban drainage, flow forecasting, future urbanization impact, reservoir spillway design, flood damage reduction, floodplain regulation, and systems operation. The program is a generalized modeling system capable of representing many different watersheds. A model of the watershed is constructed by separating the water cycle into manageable pieces and constructing boundaries around the watershed of interest. Any mass or energy flux in the cycle can then be represented with a mathematical model. In most cases, several model choices are available for representing each flux. Each mathematical model included in the program is suitable in different environments and under different conditions. Making the correct choice requires knowledge of the watershed, the goals of the hydrologic study, and engineering judgement. HEC-HMS is a product of the Hydrologic Engineering Center within the U.S. Army Corps of Engineers. The program was developed beginning in 1992 as a replacement for HEC-1, which has long been considered a standard for hydrologic simulation. The new HEC-HMS provides almost all of the same simulation capabilities, but has modernized them with advances in numerical analysis that take advantage of the significantly faster desktop computers available today. It also includes a number of features that were not included in HEC-1, such as continuous simulation and grid cell surface hydrology, and provides a graphical user interface that makes the software easier to use. The program is now widely used and accepted for many official purposes, such as floodway determinations for the Federal Emergency Management Agency in the United States. See also Hydrology References External links HEC-HMS Homepage at the Hydrologic Engineering Center Hydrology software
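To illustrate the kind of component model the program assembles, one loss method available in HEC-HMS is the SCS curve number method. The following standalone Python sketch applies the standard curve-number equations to a single storm; the rainfall depth and curve number are assumed example values, and this is only an illustration of one flux model, not a substitute for the program's full implementation.

def scs_runoff_depth(precip_in: float, curve_number: float) -> float:
    """SCS curve number method: direct runoff depth (inches) from storm
    rainfall depth (inches), using the standard initial abstraction
    Ia = 0.2 * S."""
    s = 1000.0 / curve_number - 10.0   # potential maximum retention, inches
    ia = 0.2 * s                       # initial abstraction, inches
    if precip_in <= ia:
        return 0.0                     # all rainfall lost to abstraction
    return (precip_in - ia) ** 2 / (precip_in - ia + s)

# Assumed example: 3 inches of rain on a watershed with CN = 80
print(round(scs_runoff_depth(3.0, 80.0), 2))  # 1.25 inches of direct runoff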
HEC-HMS
Environmental_science
416
68,063,156
https://en.wikipedia.org/wiki/Halococcaceae
Halococcaceae is a family of halophilic and mostly chemoorganotrophic archaea within the order Halobacteriales. The type genus of this family is Halococcus. Its biochemical characteristics are the same as the order Halobacteriales. The name Halococcaceae is derived from the Latin term Halococcus, referring to the type genus of the family and the suffix "-ceae", an ending used to denote a family. Together, Halococcaceae refers to a family whose nomenclatural type is the genus Halococcus. Current Taxonomy and Molecular Signatures As of 2021, Halococcaceae contains a single validly published genus, Halococcus. This family can be molecularly distinguished from other Halobacteria by the presence of 23 conserved signature proteins (CSPs) and nine conserved signature indels (CSIs) present in the following proteins: DNA gyrase subunit B, chaperone protein DnaK, HAD-superfamily hydrolase, glycosyltransferase, 2-Succinyl-6-hydroxy-2,4-cyclohexadiene-1-carboxylate synthase, iron-regulated ABC transporter, glycine dehydrogenase subunit 2, GMP synthase and a hypothetical protein. References Halobacteria Taxa described in 2016 Monotypic archaea taxa
Halococcaceae
Biology
290
2,702,790
https://en.wikipedia.org/wiki/Lambda1%20Tucanae
Lambda1 Tucanae is the Bayer designation for one member of a pair of stars sharing a common proper motion through space, which lie within the southern constellation of Tucana. As of 2013, the pair had an angular separation of 20.0 arc seconds along a position angle of 82°. Together, they are barely visible to the naked eye with a combined apparent visual magnitude of 6.21. Based upon an annual parallax shift for both stars of approximately 16.5 mas as seen from Earth, this system is located roughly 198 light years from the Sun. The brighter member, component A, is a magnitude 6.70 F-type star with a stellar classification of F7 IV-V. The luminosity class may indicate that, at the age of 2.6 billion years, it is beginning to evolve away from the main sequence. It has an estimated 1.55 times the mass of the Sun and is radiating 7 times the solar luminosity from its photosphere at an effective temperature of 6,325 K. The magnitude 7.35 companion, component B, has 1.38 times the mass of the Sun. If the pair are gravitationally bound, then their estimated orbital period is 27,000 years. References F-type subgiants Binary stars Tucanae, Lambda Tucana Durchmusterung objects 005190 208 004084 8 0252 Suspected variables G-type main-sequence stars
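The quoted distance follows directly from the parallax, since distance in parsecs is the reciprocal of the parallax in arcseconds. A quick check in Python, using 1 pc = 3.2616 ly:

parallax_mas = 16.5                  # annual parallax from the article
distance_pc = 1000.0 / parallax_mas  # parsecs = 1 / parallax in arcseconds
distance_ly = distance_pc * 3.2616   # parsecs to light years
print(round(distance_pc, 1), round(distance_ly))  # 60.6 pc, about 198 ly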
Lambda1 Tucanae
Astronomy
307
26,274,126
https://en.wikipedia.org/wiki/Statue%20of%20Hope
The Statue of Hope is an allegorical figure that is typically a private memorial or monumental sculpture displayed in a graveyard or cemetery, often a rural cemetery. Hope is one of the Seven Virtues of the Christian religion. History The figure was most commonly used in the Victorian era, and its popularity is believed to have been boosted by the Statue of Liberty's initial construction in 1875 and dedication in 1886, and by the installation of Civil War monuments during the same time period. Prior to this, other images, such as Saint Philomena (whose authorization of devotion began in 1837) and Danish sculptor Bertel Thorvaldsen's Goddess of Hope statue (sculpted in 1817), displayed similar characteristics. One of the earliest signed Statue of Hope memorials was carved by Odoardo Fantacchiotti in 1863 for the grave of Samuel Reginald Routh of England in the Protestant Cemetery of Florence, Italy. Another variation was completed in 1791: The Custom House in Dublin, Ireland, features a 16 foot (about 5 meter) tall statue of a female figure resting on an anchor atop the dome. This statue has been called both the Statue of Hope and the Statue of Commerce. Construction In the United States, these statues were commonly carved out of limestone, granite or marble and were usually unsigned and mounted on a tall pedestal. The Wilson memorial pictured on the right was sculpted from Carrara marble in Italy in the style of sculptor Pasquale Romanelli. Towards the latter part of the 19th and early 20th century, some were cast in bronze and zinc. Religious symbolism A female figure, typically shown wearing an under tunic, a Roman stola and palla garments, stands with one arm resting on or holding an anchor. The anchor is often an anchored cross, signifying hope, and is the primary symbol of the statue. Further, the New Testament (Hebrews 6:19) states: "Which hope we have as an anchor of the soul, both sure and steadfast, and which entereth into that within the veil." Often, the opposite arm is raised with the index finger of the hand pointing towards the sky. This symbolizes the pathway to heaven. A hand held over the heart symbolizes faith. Other key elements can be a broken chain attached to the anchor or sometimes hanging from the neck. This symbolizes the cessation of life. Many statues have a single five-pointed star rather than a circle of stars. The star is on the top of the forehead, usually on a Crown of Immortality or diadem, and represents the immortal soul. See also Sculpture Stone sculpture Stone carving References External links All known Statue of Hope memorials on findagrave Statue of Hope National Register Map Statue of Hope photo album A short video about a vandalized Statue of Hope Thorvaldsen Museum Copenhagen, Denmark St Philomena images Allegorical sculptures Star symbols Hope Christian symbols Burial monuments and structures
Statue of Hope
Mathematics
557
394,070
https://en.wikipedia.org/wiki/Congr%C3%A8s%20Internationaux%20d%27Architecture%20Moderne
The Congrès Internationaux d'Architecture Moderne (CIAM), or International Congresses of Modern Architecture, was an organization founded in 1928 and disbanded in 1959, responsible for a series of events and congresses arranged across Europe by the most prominent architects of the time, with the objective of spreading the principles of the Modern Movement, focusing on all the main domains of architecture (such as landscape, urbanism, industrial design, and many others). Formation and membership The International Congresses of Modern Architecture (CIAM) was founded in June 1928, at the Chateau de la Sarraz in Switzerland, by a group of 28 European architects organized by Le Corbusier, Hélène de Mandrot (owner of the castle), and Sigfried Giedion (the first secretary-general). CIAM was one of many 20th-century manifestos meant to advance the cause of architecture as a social art. Members Other founder members included Karl Moser (first president), Hendrik Berlage, Victor Bourgeois, Pierre Chareau, Sven Markelius, Josef Frank, Gabriel Guevrekian, Max Ernst Haefeli, Hugo Häring, Arnold Höchel, Huib Hoste, Pierre Jeanneret (cousin of Le Corbusier), André Lurçat, Ernst May, Max Cetto, Fernando García Mercadal, Hannes Meyer, Werner M. Moser, Carlo Enrico Rava, Gerrit Rietveld, Alberto Sartoris, Hans Schmidt, Mart Stam, Rudolf Steiger, Szymon Syrkus, Henri-Robert Von der Mühll, and Juan de Zavala. The Soviet delegates were El Lissitzky, Nikolai Kolli and Moisei Ginzburg, although for the Sarraz conference they were unable to obtain visas. Later members included Minnette de Silva, Walter Gropius, Alvar Aalto, Uno Åhrén, Louis Herman De Koninck (1929) and Fred Forbát. In 1941, Harwell Hamilton Harris was chosen as secretary of the American branch of CIAM, which was the Chapter for Relief and Post War Planning, founded in New York City. Josep Lluís Sert participated in the congresses as of 1929, and served as CIAM president from 1947 to 1956. He was co-founder of GATEPAC and GATCPAC (in Zaragoza and Barcelona, respectively) in 1930, as well as ADLAN (Friends of New Art) in Barcelona in 1932. CIRPAC The elected executive body of CIAM was CIRPAC, the Comité international pour la résolution des problèmes de l'architecture contemporaine (International Committee for the Resolution of Problems in Contemporary Architecture). Influence The organization was hugely influential. It was not only engaged in formalizing the architectural principles of the Modern Movement, but also saw architecture as an economic and political tool that could be used to improve the world through the design of buildings and through urban planning. The fourth CIAM meeting in 1933 was to have been held in Moscow. The rejection of Le Corbusier's competition entry for the Palace of the Soviets, a watershed moment and an indication that the Soviets had abandoned CIAM's principles, changed those plans. Instead it was held on board the ship SS Patris II, which sailed from Marseille to Athens. Here the group discussed the principles of "The Functional City", which broadened CIAM's scope from architecture into urban planning. Based on an analysis of thirty-three cities, CIAM proposed that the social problems faced by cities could be resolved by strict functional segregation, and the distribution of the population into tall apartment blocks at widely spaced intervals. These proceedings went unpublished from 1933 until 1943, when Le Corbusier, acting alone, published them in heavily edited form as the Athens Charter.
Separation As CIAM members travelled worldwide after the war, many of its ideas spread outside Europe, notably to the USA. The city planning ideas were adopted in the rebuilding of Europe following World War II, although by then some CIAM members had their doubts. Alison and Peter Smithson were chief among the dissenters. When implemented in the postwar period, many of these ideas were compromised by tight financial constraints, poor understanding of the concepts, or popular resistance. Mart Stam's replanning of postwar Dresden in the CIAM formula was rejected by its citizens as an "all-out attack on the city". The CIAM organization disbanded in 1959 as the views of the members diverged. Le Corbusier had left in 1955, objecting to the increasing use of English during meetings. The group Team 10, formed with the aim of reforming CIAM, was active from 1953 onwards, and two different movements emerged from it: the Brutalism of the English members (Alison and Peter Smithson) and the Structuralism of the Dutch members (Aldo van Eyck and Jacob B. Bakema). Conferences CIAM's conferences consisted of: 1928, CIAM I, La Sarraz, Switzerland, Foundation of CIAM 1929, CIAM II, Frankfurt am Main, Germany, on The Minimum Dwelling 1930, CIAM III, Brussels, Belgium, on Rational Land Development (Rationelle Bebauungsweisen) 1933, CIAM IV, Athens, Greece, on The Functional City (Die funktionelle Stadt) 1937, CIAM V, Paris, France, on Dwelling and Recovery 1947, CIAM VI, Bridgwater, England, Reaffirmation of the aims of CIAM 1949, CIAM VII, Bergamo, Italy, on The Athens Charter in Practice 1951, CIAM VIII, Hoddesdon, England, on The Heart of the City 1953, CIAM IX, Aix-en-Provence, France, on Habitat 1956, CIAM X, Dubrovnik, Yugoslavia (now Croatia), on Habitat 1959, CIAM XI, Otterlo, the Netherlands, organized dissolution of CIAM by Team 10 See also Modern architecture Bibliography Eric Mumford, The CIAM Discourse on Urbanism – 1928–1960, Cambridge Mass. and London 2000. (Foreword by Kenneth Frampton). Sigfried Giedion, Space, Time and Architecture – The Growth of a New Tradition, Cambridge Mass. 2009, 5th edition. (CIAM, summary in Part VI). Max Risselada and Dirk van den Heuvel (eds.), TEAM 10 – In Search of a Utopia of the Present – 1953–1981, Rotterdam 2005. (TEAM 10 out of CIAM). Lorenzo Mingardi, Reweaving the city: the CIAM summer schools from London to Venice (1949–57), in L. Ciccarelli and C. Melhuish (eds), Post-war Architecture between Italy and the UK: Exchanges and transcultural influences, London, UCL Press, 2021, 107-126. ISBN 9781800080836 Notes References Architecture groups Modernist architecture Modernist architects Urban planning organizations Architectural theory Arts organizations established in 1928 Organizations disestablished in 1959
Congrès Internationaux d'Architecture Moderne
Engineering
1,439
12,138,221
https://en.wikipedia.org/wiki/Molecular%20switch
A molecular switch is a molecule that can be reversibly shifted between two or more stable states. The molecules may be shifted between the states in response to environmental stimuli, such as changes in pH, light, temperature, an electric current, microenvironment, or the presence of ions and other ligands. In some cases, a combination of stimuli is required. The oldest forms of synthetic molecular switches are pH indicators, which display distinct colors as a function of pH. Currently, synthetic molecular switches are of interest in the field of nanotechnology for application in molecular computers or responsive drug delivery systems. Molecular switches are also important in biology because many biological functions are based on them, for instance allosteric regulation and vision. They are also one of the simplest examples of molecular machines. Biological molecular switches In cellular biology, proteins act as intracellular signaling molecules by activating another protein in a signaling pathway. In order to do this, proteins can switch between active and inactive states, thus acting as molecular switches in response to another signal. For example, phosphorylation of proteins can be used to activate or inactivate proteins. The external signal flipping the molecular switch could be a protein kinase, which adds a phosphate group to the protein, or a protein phosphatase, which removes phosphate groups. Acidochromic molecular switches The capacity of some compounds to change color as a function of pH has been known since the sixteenth century. This effect was known even before the development of acid-base theory. Such compounds are found in a wide range of plants, like roses, cornflowers, primroses and violets. Robert Boyle was the first person to describe this effect, employing plant juices (in the form of solutions and impregnated papers). Molecular switches are most commonly used as pH indicators, which are molecules with acidic or basic properties. Their acidic and basic forms have different colors. When an acid or a base is added, the equilibrium between the two forms is displaced. Photochromic molecular switches A widely studied class are photochromic compounds, which are able to switch between electronic configurations when irradiated by light of a specific wavelength. Each state has a specific absorption maximum, which can then be read out by UV-VIS spectroscopy. Members of this class include azobenzenes, diarylethenes, dithienylethenes, fulgides, stilbenes, spiropyrans and phenoxynaphthacene quinones. Chiroptical molecular switches are a specific subgroup with photochemical switching taking place between enantiomeric pairs. In these compounds the readout is by circular dichroism rather than by ordinary spectroscopy. Hindered alkenes such as the one depicted below change their helicity (see: planar chirality) in response to irradiation with right- or left-handed circularly polarized light. Chiroptical molecular switches that show directional motion are considered synthetic molecular motors: when attached to the end of a helical poly(isocyanate) polymer, they can switch the helical sense of the polymer. Host–guest molecular switches In host–guest chemistry, the bistable states of molecular switches differ in their affinity for guests. Many early examples of such systems are based on crown ether chemistry. The first switchable host was described in 1978 by Desvergne & Bouas-Laurent, who created a crown ether via photochemical anthracene dimerization.
Although not switchable in the strict sense, the compound is able to take up cations after a photochemical trigger, and exposure to acetonitrile gives back the open form. In 1980, Yamashita et al. constructed a crown ether already incorporating the anthracene units (an anthracenophane) and also studied ion uptake versus photochemistry. Also in 1980, Shinkai replaced the anthracene unit as photoantenna with an azobenzene moiety and for the first time envisioned the existence of molecules with an on-off switch. In this molecule, light triggers a trans-cis isomerization of the azo group, which results in ring expansion. Thus, in the trans form the crown binds preferentially to ammonium, lithium and sodium ions, while in the cis form the preference is for potassium and rubidium (both larger ions in the same alkali metal group). In the dark, the reverse isomerization takes place. Shinkai employed these devices in actual ion transport, mimicking the biochemical action of monensin and nigericin: in a biphasic system, ions are taken up in one phase when triggered by light and deposited in the other phase in the absence of light. Mechanically-interlocked molecular switches Some of the most advanced molecular switches are based on mechanically-interlocked molecular architectures, where the bistable states differ in the position of the macrocycle. In 1991, Stoddart devised a molecular shuttle based on a rotaxane, on which a molecular bead is able to shuttle between two docking stations situated on a molecular thread. Stoddart predicted that when the stations are dissimilar, with each of the stations addressed by a different external stimulus, the shuttle becomes a molecular machine. In 1993, Stoddart was scooped by supramolecular chemistry pioneer Fritz Vögtle, who delivered a switchable molecule based not on a rotaxane but on a related catenane. This compound is based on two ring systems: one ring holds the photoswitchable azobenzene unit and two paraquat docking stations, and the other ring is a polyether with two arene rings that have binding affinity for the paraquat units. In this system, NMR spectroscopy showed that in the azo trans form the polyether ring is free to rotate around its partner ring, but when a light trigger activates the cis azo form this rotation mode is stopped. In 1994, Kaifer and Stoddart modified their molecular shuttle in such a way that an electron-poor tetracationic cyclophane bead now has a choice between two docking stations: one biphenol and one benzidine unit. In solution at room temperature, NMR spectroscopy reveals that the bead shuttles at a rate comparable to the NMR timescale; reducing the temperature to 229 K resolves the signals, with 84% of the population favoring the benzidine station. However, on addition of trifluoroacetic acid, the benzidine nitrogen atoms are protonated and the bead is fixed permanently on the biphenol station. The same effect is obtained by electrochemical oxidation (forming the benzidine radical ion), and significantly, both processes are reversible. In 2007, molecular shuttles were utilized in an experimental DRAM circuit. The device consists of 400 bottom silicon nanowire electrodes (16 nanometers (nm) wide at 33 nm intervals) crossed by another 400 titanium top nanowires with similar dimensions, sandwiching a monolayer of a bistable rotaxane depicted below: Each bit in the device consists of a silicon and a titanium nanowire crossing at perpendicular angles, with around 100 rotaxane molecules filling the space between them.
The hydrophilic diethylene glycol stopper on the left (gray) is specifically designed to anchor to the silicon wire (made hydrophilic by phosphorus doping), while the hydrophobic tetraarylmethane stopper on the right does the same to the likewise hydrophobic titanium wire. In the ground state of the switch, the paraquat ring is located around a tetrathiafulvalene unit (in red), but it moves to the dioxynaphthyl unit (in green) when the fulvalene unit is oxidized by application of a current. When the fulvalene is reduced back, a metastable high-conductance '1' state is formed, which relaxes back to the ground state with a chemical half-life of around one hour. The problem of defects is circumvented by adopting a defect-tolerant architecture also found in the Teramac project. In this way a circuit is obtained consisting of 160,000 bits on an area the size of a white blood cell, translating into 10¹¹ bits per square centimeter. References Further reading Supramolecular chemistry Molecular machines
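The quoted bit density is consistent with the stated nanowire pitch: at 33 nm spacing each crossing occupies one pitch-squared cell. A quick back-of-the-envelope check in Python:

pitch_cm = 33e-7                  # 33 nm crossbar pitch, expressed in centimeters
bits_per_cm2 = 1.0 / pitch_cm**2  # one crossing (one bit) per pitch-squared cell
print(f"{bits_per_cm2:.1e}")      # 9.2e+10, i.e. roughly 10^11 bits/cm^2
print(400 * 400)                  # 160000 bits in the 400 x 400 crossbar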
Molecular switch
Physics,Chemistry,Materials_science,Technology
1,697
37,633,023
https://en.wikipedia.org/wiki/HD%20220074
HD 220074 is a star located in the northern constellation of Cassiopeia, near the western border with Cepheus. It has a reddish hue and is dimly visible to the naked eye, having an apparent visual magnitude of +6.39. The star is located at a distance of approximately 1,070 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −37 km/s. This star was assigned a stellar classification of K1V in the Bright Star Catalogue but is now known to be a red giant with a class of M2III, based on its radius and surface gravity. With the supply of hydrogen at its core exhausted, the star has expanded and cooled off the main sequence. It is around 4.5 billion years old, with an estimated mass equal to ~1.2 times the mass of the Sun but a radius 60 times that of the Sun. The star is radiating 783 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 3,935 K. Planetary system From September 2008 to June 2012, the team of B.-C. Lee, I. Han, and M.-G. Park observed HD 220074 with "the high-resolution spectroscopy of the fiber-fed Bohyunsan Observatory Echelle Spectrograph (BOES) at Bohyunsan Optical Astronomy Observatory (BOAO)". In 2012, a long-period, wide-orbiting eccentric planet was deduced from radial velocity changes. This finding was published in November, and the planet gained the designation HD 220074 b. Along with HD 208527 b, this is one of the first two candidate planets found orbiting red giants. References M-type giants Planetary systems with one confirmed planet Cassiopeia (constellation) Durchmusterung objects 220074 115218 8881
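The quoted luminosity is consistent with the radius and temperature via the Stefan–Boltzmann scaling L/L☉ = (R/R☉)² (T/T☉)⁴. A quick check in Python, taking the solar effective temperature to be the IAU nominal value of 5772 K (an assumption; the small difference from the quoted 783 L☉ comes from rounding in the inputs):

r_over_rsun = 60.0  # radius from the article, in solar radii
t_eff = 3935.0      # effective temperature from the article, in kelvin
t_sun = 5772.0      # assumed solar effective temperature (IAU nominal value)
luminosity = r_over_rsun**2 * (t_eff / t_sun)**4  # Stefan-Boltzmann scaling
print(round(luminosity))  # about 778, close to the quoted 783 L_sun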
HD 220074
Astronomy
388
38,496,244
https://en.wikipedia.org/wiki/Engineering%20disasters
Engineering disasters often arise from shortcuts in the design process. Engineering is the science and technology used to meet the needs and demands of society. These demands include buildings, aircraft, vessels, and computer software. In order to meet society's demands, new technology and infrastructure must be created efficiently and cost-effectively. To accomplish this, managers and engineers need a mutual approach to the specified demand at hand. This can lead to shortcuts in engineering design to reduce costs of construction and fabrication. Occasionally, these shortcuts can lead to unexpected design failures. Overview Failure occurs when a structure or device has been used past its design limits, inhibiting proper function. If a structure is designed to only support a certain amount of stress, strain, or loading and the user applies greater amounts, the structure will begin to deform and eventually fail. Several factors contribute to failure, including a flawed design, improper use, financial costs, and miscommunication. Safety In the field of engineering, the importance of safety is emphasized. Learning from past engineering failures and infamous disasters such as the Challenger explosion brings a sense of reality to what can happen when appropriate safety precautions are not taken. Safety tests such as tensile testing, finite element analysis (FEA), and failure theories help provide information to design engineers about what maximum forces and stresses can be applied to a certain region of a design. These precautionary measures help prevent failures due to overloading and deformation. Static loading Static loading occurs when a force is applied slowly to an object or structure. Static load tests such as tensile testing, bending tests, and torsion tests help determine the maximum loads that a design can withstand without permanent deformation or failure. Tensile testing is commonly used to determine a stress-strain curve, which gives the yield strength and ultimate strength of a specific test specimen. The specimen is stretched slowly in tension until it breaks, while the load and the distance across the gage length are continuously monitored. A sample subjected to a tensile test can typically withstand stresses higher than its yield stress without breaking. At a certain point, however, the sample will break into two pieces. This happens because the microscopic cracks that resulted from yielding will spread to large scales. The stress at the point of complete breakage is called a material's ultimate tensile strength. The result is a stress–strain curve of the material's behavior under static loading. Through this tensile testing, the yield strength is found at the point where the material begins to yield more readily to the applied stress, and its rate of deformation increases. (A small code sketch of estimating yield strength from such a curve appears near the end of this article.) Fatigue When a material undergoes permanent deformation from exposure to extreme temperatures or constant loading, the functionality of the material can become impaired. This time-dependent plastic distortion of material is known as creep. Stress and temperature are both major factors of the rate of creep. In order for a design to be considered safe, the deformation due to creep must be much less than the strain at which failure occurs. Once the static loading causes the specimen to surpass this point, the specimen will begin permanent, or plastic, deformation. In mechanical design, most failures are due to time-varying, or dynamic, loads that are applied to a system.
This phenomenon is known as fatigue failure. Fatigue is the weakening of a material due to variations of stress that are repeatedly applied to it. For example, when stretching a rubber band to a certain length without breaking it (i.e. not surpassing its yield stress), the rubber band will return to its original form after release; however, repeatedly stretching the rubber band with the same amount of force thousands of times would create micro-cracks in the band, which would eventually lead to the band snapping. The same principle applies to engineering materials such as metals. Fatigue failure always begins at a crack that may form over time or due to the manufacturing process used. The three stages of fatigue failure are: Crack initiation – when repeated stress creates a fracture in the material being used Crack propagation – when the initiated crack develops in the material to a larger scale due to tensile stress Sudden fracture failure – caused by unstable crack growth to the point where the material will fail Note that fatigue does not imply that the strength of the material is lessened after failure. The notion originally referred to a material becoming "tired" after cyclic loading. Miscommunication Engineering is a precise discipline, requiring communication among project developers. Several forms of miscommunication can lead to a flawed design. Various fields of engineering must intercommunicate, including civil, electrical, mechanical, industrial, chemical, biological, and environmental engineering. For example, a modern automobile design requires electrical engineers, mechanical engineers, and environmental engineers to work together to produce a fuel-efficient, durable product for consumers. If engineers do not adequately communicate among one another, a potential design could have flaws and be unsafe for consumer purchase. Engineering disasters can be a result of such miscommunication, including the 2005 levee failures in Greater New Orleans, Louisiana during Hurricane Katrina, the Space Shuttle Columbia disaster, and the Hyatt Regency walkway collapse. An exceptional example of this is the Mars Climate Orbiter. "The primary cause of the orbiter's violent demise was that one piece of ground software supplied by Lockheed Martin produced results in a United States customary unit, contrary to its Software Interface Specification (SIS), while a second system, supplied by NASA, expected those results to be in SI units, in accordance with the SIS." Lockheed Martin and NASA spectacularly failed to communicate (a small sketch of unit-safe code appears after this overview). Software Software has played a role in many high-profile disasters: Ariane flight V88 Mars Climate Orbiter TAURUS — UK share settlement system and dematerialized central share depository Therac-25 — A radiation therapy machine responsible for six overdoses due to faulty software Failure at Dhahran — Patriot Missile clock issue Systems engineering Lion Air Flight 610 and Ethiopian Airlines Flight 302 — Faulty "MCAS" system on the Boeing 737 MAX Examples When larger projects such as infrastructures and airplanes fail, multiple people can be affected, which leads to an engineering disaster. A disaster is defined as a calamity that results in significant damage, which may include the loss of life. In-depth observations and post-disaster analysis have been documented to a large extent to help prevent similar disasters from occurring.
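Before the examples, a short aside: the Mars Climate Orbiter mix-up above is a standard motivation for carrying units in software instead of passing bare numbers across interfaces. The following minimal Python sketch is hypothetical (the names are invented for illustration and this is not the actual flight software); it shows how a unit-tagged impulse type makes the pound-force-seconds versus newton-seconds conversion explicit rather than a silent factor-of-4.45 error.

from dataclasses import dataclass

@dataclass(frozen=True)
class Impulse:
    value: float
    unit: str  # "N*s" or "lbf*s"

    def to_newton_seconds(self) -> float:
        if self.unit == "N*s":
            return self.value
        if self.unit == "lbf*s":
            return self.value * 4.4482216  # pound-force seconds to newton seconds
        raise ValueError(f"unknown unit: {self.unit}")

def record_thruster_firing(impulse: Impulse) -> float:
    # Interface contract: downstream trajectory code works in SI internally.
    return impulse.to_newton_seconds()

# A bare float would be silently misinterpreted; a tagged quantity is converted.
print(record_thruster_firing(Impulse(100.0, "lbf*s")))  # 444.8..., not 100.0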
Infrastructure Ashtabula River Bridge Disaster (1876) The Ashtabula River railroad disaster occurred December 29, 1876, when a bridge over the Ashtabula River near Ashtabula, Ohio, failed as a Lake Shore and Michigan Southern Railway train passed over it, killing at least 92 people. Modern analyses blame failure of an angle block lug, thrust stress and low temperatures. Tay Bridge Disaster (1879) On December 28, 1879, the Tay Bridge Disaster occurred when the first Tay Rail Bridge collapsed as a North British Railway passenger train on the Edinburgh–Dundee line passed over it, killing at least 59 people. The major cause was failure to allow for wind loadings. Johnstown Flood (1889) The Johnstown Flood occurred on May 31, 1889, when the South Fork Dam located on the Little Conemaugh River upstream of the town of Johnstown, Pennsylvania, failed after days of heavy rainfall, killing at least 2,209 people. A 2016 hydraulic analysis confirmed that changes made to the dam severely reduced its ability to withstand major storms. Quebec Bridge collapse (1907) The road, rail and pedestrian Quebec Bridge in Quebec, Canada, failed twice during construction, in 1907 and 1916, at the cost of 88 lives. The first failure was caused by improper design of the chords. The second failure occurred when the central span was being raised into position and fell into the river. St. Francis Dam collapse (1928) The St. Francis Dam was a concrete gravity dam located in San Francisquito Canyon in Los Angeles County, California, built from 1924 to 1926 to serve Los Angeles's growing water needs. It failed in 1928 due to a defective soil foundation and design flaws, triggering a flood that claimed the lives of at least 431 people. Tacoma Narrows Bridge collapse (1940) The first Tacoma Narrows Bridge was a suspension bridge in Washington that spanned the Tacoma Narrows strait of Puget Sound. It dramatically collapsed on November 7, 1940. The proximate cause was moderate winds which produced aeroelastic flutter that was self-exciting and unbounded, the opposite of damping. Hyatt Regency Hotel walkway collapse (1981) On July 17, 1981, two overhead walkways loaded with partygoers at the Hyatt Regency Hotel in Kansas City, Missouri, collapsed. The concrete and glass platforms fell onto a tea dance in the lobby, killing 114 and injuring 216. Investigations concluded that, because of a revised design, the walkway would have failed even under one-third of the weight it held that night. Federal levee failures in New Orleans (2005) Levees and floodwalls protecting New Orleans, Louisiana, and its suburbs failed in 50 locations on August 29, 2005, following the passage of Hurricane Katrina, killing 1,577 people. Four major investigations all concurred that the primary cause of the flooding was inadequate design and construction by the Army Corps of Engineers. Ponte Morandi collapse (2018) Ponte Morandi was a road viaduct in Genoa, Liguria, Italy. On August 14, 2018, a section of the viaduct collapsed during a rainstorm, killing forty-three people. The remains of the original bridge were demolished in August 2019. Surfside condominium building collapse (2021) On June 24, 2021, at 1:22 a.m., Champlain Towers South, a 12-story beachfront condominium in the Miami suburb of Surfside, Florida, partially collapsed, killing ninety-eight people. The investigations are currently ongoing.
Aeronautics Space Shuttle Challenger disaster (1986) The Space Shuttle Challenger disaster occurred on January 28, 1986, when the NASA Space Shuttle orbiter Challenger (OV-099) (mission STS-51-L) broke apart 73 seconds into its flight, leading to the deaths of its seven crew members. Disintegration of the vehicle began after an O-ring seal in its right solid rocket booster (SRB) failed at liftoff. Space Shuttle Columbia disaster (2003) The Space Shuttle Columbia (OV-102) disaster occurred on February 1, 2003, during the final leg of STS-107. While re-entering Earth's atmosphere over Louisiana and Texas, the shuttle unexpectedly disintegrated, resulting in the deaths of all seven astronauts on board. The cause was damage to the thermal protection system on the left wing, struck by a piece of foam insulation that broke away from the external tank during the January 16 launch. Vessels Liberty ships in WWII Early Liberty ships suffered hull and deck cracks, and a few were lost to such structural defects. During World War II, there were nearly 1,500 instances of significant brittle fractures. Three of the 2,710 Liberties built broke in half without warning. In cold temperatures the steel hulls cracked, resulting in later ships being constructed using more suitable steel. Steamboat Sultana (1865) On the night of April 26, 1865, the passenger steamboat Sultana exploded on the Mississippi River north of Memphis, Tennessee. The explosion resulted in the loss of 1,547 lives. The cause was believed to be an incorrectly repaired boiler, whose explosion set off two of the three other boilers. Titan submersible On 18 June 2023, the submersible Titan imploded during an expedition to the wreck of the Titanic, killing all five persons on board. Flaws in the design of the submersible, and of the carbon fibre pressure hull in particular, were discussed as a possible cause of the implosion, with Titan's operator OceanGate having ignored multiple previous warnings about the potential for accidents. See also Lists of disasters List of engineering blunders Normalization of deviance Nuclear and radiation accidents and incidents Structural integrity and failure Engineering Failures in the U.S. References Man-made disasters
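Relating back to the tensile-testing discussion in the overview: the yield strength is commonly read off a measured stress–strain curve with the 0.2% offset method. A minimal Python sketch of that calculation, on synthetic data invented for illustration (it returns the first sampled point past the crossing; a real analysis would interpolate):

def offset_yield(strains, stresses, modulus, offset=0.002):
    """Estimate the 0.2% offset yield strength: the stress where the
    measured curve first falls below the elastic line shifted right
    by `offset` strain."""
    for strain, stress in zip(strains, stresses):
        if stress < modulus * (strain - offset):
            return stress
    return None  # curve never crossed the offset line

# Synthetic data (strain dimensionless, stress in MPa): roughly linear
# elasticity at E = 200 GPa, then yielding and a plateau.
strains  = [0.0, 0.001, 0.002, 0.003, 0.004, 0.006, 0.008]
stresses = [0.0, 200.0, 380.0, 420.0, 435.0, 445.0, 450.0]
print(offset_yield(strains, stresses, modulus=200_000.0))  # 445.0 MPa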
Engineering disasters
Technology,Engineering,Biology
2,431
29,127,124
https://en.wikipedia.org/wiki/Immunosurgery
Immunosurgery is a method of selectively removing the external cell layer (trophoblast) of a blastocyst through a cytotoxicity procedure. The protocol for immunosurgery includes preincubating the blastocyst with an antiserum, rinsing it with embryonic stem cell derivation media to remove the antibodies, exposing it to complement, and then removing the lysed trophoectoderm with a pipette. This technique is used to isolate the inner cell mass of the blastocyst. The trophoectoderm's cell junctions and tight epithelium "shield" the ICM from antibody binding by effectively making the blastocyst impermeable to macromolecules. Immunosurgery can be used to obtain large quantities of pure inner cell masses in a relatively short period of time. The ICM obtained can then be used for stem cell research and is preferable to adult or fetal stem cells because the ICM has not been affected by external factors, such as manual bisection of the embryo. However, if the structural integrity of the blastocyst is compromised prior to the experiment, the ICM is susceptible to the immunological reaction. Thus, the quality of the embryo used is imperative to the experiment's success. In addition, when using complement derived from animals, the source of the animals matters. They should be kept in a specific-pathogen-free environment to increase the likelihood that they have not developed natural antibodies against the bacterial carbohydrates present in the serum (which can be obtained from a different animal). Solter and Knowles developed the first method of immunosurgery with their 1975 paper "Immunosurgery of Mouse Blastocyst". They primarily used it for studying early embryonic development. Though immunosurgery is the most prevalent method of ICM isolation, various experiments have improved the process, such as through the use of lasers (performed by Tanaka, et al.) and micromanipulators (performed by Ding, et al.). These new methods reduce the risk of contamination with animal materials within the embryonic stem cells derived from the ICM, which can cause complications later on if the embryonic stem cells are transplanted into a human for cell therapy. References Wikipedia articles with sections published in WikiJournal of Medicine Immunology Surgery
Immunosurgery
Biology
495
24,175
https://en.wikipedia.org/wiki/Pattern%20welding
Pattern welding is a practice in sword and knife making in which a blade is formed from several metal pieces of differing composition that are forge-welded together and twisted and manipulated to form a pattern. Often called Damascus steel, blades forged in this manner often display bands of slightly different patterning along their entire length. These bands can be highlighted for cosmetic purposes by proper polishing or acid etching. Pattern welding was an outgrowth of laminated or piled steel, a similar technique used to combine steels of different carbon contents, providing a desired mix of hardness and toughness. Although modern steelmaking processes negate the need to blend different steels, pattern welded steel is still used by custom knifemakers for the cosmetic effects it produces. History Pattern welding developed out of the necessarily complex process of making blades that were both hard and tough from the erratic and unsuitable output from early iron smelting in bloomeries. The bloomery does not generate temperatures high enough to melt iron and steel, but instead reduces the iron oxide ore into particles of pure iron, which then weld into a mass of sponge iron, consisting of lumps of impurities in a matrix of relatively pure iron, which is too soft to make a good blade. Carburizing thin iron bars or plates forms a layer of harder, high carbon steel on the surface, and early bladesmiths would forge these bars or plates together to form relatively homogeneous bars of steel. This laminating process, in which the different types of steel forged together produce patterns that can be seen in the surface of the finished blade, forms the basis for pattern welding. Pattern welding in Europe Pattern welding dates to the first millennium BC, with Celtic, and later Germanic, swords exhibiting the technique, and the Romans describing the blade patterning. By the 2nd and 3rd century AD, the Celts commonly used pattern welding for decoration in addition to structural reasons. The technique involves folding and forging alternating layers of steel into rods, then twisting the steel to form complex patterns when forged into a blade. By the 6th and 7th centuries, pattern welding had reached a level where thin layers of patterned steel were being overlaid onto a soft iron core, making the swords far better, as the iron gave them a flexible and springy core that would absorb the shock of sword blows and keep the blade from bending or snapping. By the end of the Viking Age, pattern welding fell out of use in Europe. In medieval swords, pattern welding was more prevalent than commonly thought. However, the presence of rust makes detection difficult without repolishing. During the Middle Ages, Wootz steel was produced in India and exported globally, including to Europe. The similarities in the markings led many to believe it was the same process being used, and pattern welding was revived by European smiths who were attempting to duplicate the Damascus steel. While the methods used by Damascus smiths to produce their blades were lost over the centuries, recent efforts by metallurgists and bladesmiths (such as Verhoeven and Pendray) to reproduce steel with identical characteristics have yielded a process that does not involve pattern welding. The ancient swordmakers exploited the aesthetic qualities of pattern welded steel.
The Vikings, in particular, were fond of twisting bars of steel around each other, welding the bars together by hammering and then repeating the process with the resulting bars, to create complex patterns in the final steel bar. Two bars twisted in opposite directions created the common chevron pattern. Often, the center of the blade was a core of soft steel, and the edges were solid high carbon steel, similar to the laminates of the Japanese. Modern decorative use Pattern welding is still popular with contemporary bladesmiths, both for visual effect and for recreating historic patterns and swords. Modern steels and methods allow for patterns with a much higher number of visible layers compared to historical artifacts. Large numbers of layers can be produced either by folding, similar to historical processes, or by forge welding a small number of layers together, then cutting the billet in pieces to stack and forge-weld it again. This can be repeated until the desired number of layers have been achieved. A blade ground from such a blank can show a pattern similar to wood grain, with small random variations in pattern. Some manufactured objects can be re-purposed into pattern welded blanks. "Cable Damascus", forged from high carbon multi-strand cable, is a popular item for bladesmiths to produce, yielding a finely grained, twisted pattern, while chainsaw chains produce a pattern of randomly positioned blobs of color. Some modern bladesmiths have taken pattern welding to new heights, with elaborate applications of traditional pattern welding techniques, as well as with new technology. A layered billet of steel rods with the blade blank cut perpendicular to the layers can also produce some spectacular patterns, including mosaics or even writing. Powder metallurgy allows alloys that would not normally be compatible to be combined into solid bars. Different treatments of the steel after it is ground and polished, such as bluing, etching, or various other chemical surface treatments that react differently to the different metals used, can create bright, high-contrast finishes on the steel. Some master smiths go as far as to use techniques such as electrical discharge machining to cut interlocking patterns out of different steels, fit them together, then weld the resulting assembly into a solid block of steel. Blacksmiths will sometimes apply Wite-Out, Liquid Paper, or other types of correction fluid to metal that they do not want to weld together, as the titanium dioxide in the correction fluid forms a barrier between the metal it is applied to and any other pieces of metal. For example, when creating pattern-welded steel by filling a steel canister with pieces of metal and powdered steel and forging it together into a single mass ("canister Damascus steel"), smiths frequently coat the inside of the canister with correction fluid and let it dry before adding their materials. Thus, when the canister is heated and compressed using a hammer or pneumatic press, the material inside the correction-fluid barrier is forged together, but it does not forge to the canister, allowing the pattern created by forging the different materials together to be seen in the finished piece, because it is not covered by the homogeneous steel of the canister. Etymology The term 'pattern welding' was coined by English archaeologist Herbert Maryon in a 1948 paper: "The welding of these swords represents an excessively difficult operation. I do not know of finer smith's work... I have named the technique ‘pattern welding’...
Examples of pattern-welding range in date from the third century to the Viking Age." See also Bulat steel, a Russian crucible steel Damascus steel, a steel used in swordmaking during the medieval period Forged in Fire, a History channel competitive television show on forged knife and sword making Hamon (swordsmithing) Japanese sword construction includes a specific form of pattern welding. Mokume-gane, a similar technique, often for precious metals, used to produce decorative pieces Wootz steel, an Indian crucible steel References Sources External links Pattern Welding Explained Ancient carburisation of iron to steel: a comment Mediæval Sword Virtual Museum, which contains close-up images of Viking swords, showing the pattern welding structures. Welding Steelmaking Edged and bladed weapons
Pattern welding
Chemistry,Engineering
1,491
679,987
https://en.wikipedia.org/wiki/Euclidean%20division
In arithmetic, Euclidean division – or division with remainder – is the process of dividing one integer (the dividend) by another (the divisor), in a way that produces an integer quotient and a natural number remainder strictly smaller than the absolute value of the divisor. A fundamental property is that the quotient and the remainder exist and are unique, under some conditions. Because of this uniqueness, Euclidean division is often considered without referring to any method of computation, and without explicitly computing the quotient and the remainder. The methods of computation are called integer division algorithms, the best known of which is long division. Euclidean division, and algorithms to compute it, are fundamental for many questions concerning integers, such as the Euclidean algorithm for finding the greatest common divisor of two integers, and modular arithmetic, for which only remainders are considered. The operation consisting of computing only the remainder is called the modulo operation, and is used often in both mathematics and computer science. Division theorem Euclidean division is based on the following result, which is sometimes called Euclid's division lemma. Given two integers a and b, with b ≠ 0, there exist unique integers q and r such that a = bq + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b. In the above theorem, each of the four integers has a name of its own: a is called the dividend, b is called the divisor, q is called the quotient and r is called the remainder. The computation of the quotient and the remainder from the dividend and the divisor is called division, or in case of ambiguity, Euclidean division. The theorem is frequently referred to as the division algorithm (although it is a theorem and not an algorithm), because its proof as given below lends itself to a simple division algorithm for computing q and r (see the section Proof for more). Division is not defined in the case where b = 0; see division by zero. For the remainder and the modulo operation, there are conventions other than 0 ≤ r < |b|; see the variants described below. Generalization Although originally restricted to integers, Euclidean division and the division theorem can be generalized to univariate polynomials over a field and to Euclidean domains. In the case of univariate polynomials, the main difference is that the inequality 0 ≤ r < |b| is replaced with either r = 0 or deg r < deg b, where deg denotes the polynomial degree. In the generalization to Euclidean domains, the inequality becomes either r = 0 or f(r) < f(b), where f denotes a specific function from the domain to the natural numbers called a "Euclidean function". The uniqueness of the quotient and the remainder remains true for polynomials, but it is false in general. History Although "Euclidean division" is named after Euclid, it seems that he did not know the existence and uniqueness theorem, and that the only computation method that he knew was division by repeated subtraction. Before the discovery of the Hindu–Arabic numeral system, which was introduced in Europe during the 13th century by Fibonacci, division was extremely difficult, and only the best mathematicians were able to do it. Presently, most division algorithms, including long division, are based on this notation or its variants, such as binary numerals. A notable exception is Newton–Raphson division, which is independent from any numeral system. The term "Euclidean division" was introduced during the 20th century as a shorthand for "division of Euclidean rings". It was rapidly adopted by mathematicians for distinguishing this division from the other kinds of division of numbers. 
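The division theorem translates directly into code. The following is a minimal Python sketch (an added illustration, not part of the original article); the function name is an arbitrary choice, and it leans on Python's built-in divmod, whose floor-division convention must be adjusted when the divisor is negative so that the remainder lands in [0, |b|):

```python
def euclidean_division(a: int, b: int) -> tuple[int, int]:
    """Return (q, r) with a == b*q + r and 0 <= r < abs(b)."""
    if b == 0:
        raise ZeroDivisionError("Euclidean division is not defined for b = 0")
    q, r = divmod(a, b)   # Python floors toward -infinity, so r takes b's sign
    if r < 0:             # only possible when b < 0; shift r into [0, |b|)
        q, r = q + 1, r - b
    return q, r

assert euclidean_division(7, 3) == (2, 1)      # 7 = 3 × 2 + 1
assert euclidean_division(7, -3) == (-2, 1)    # 7 = −3 × (−2) + 1
```

The asserted pairs match the worked examples given later in the article.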
Intuitive example Suppose that a pie has 9 slices and they are to be divided evenly among 4 people. Using Euclidean division, 9 divided by 4 is 2 with remainder 1. In other words, each person receives 2 slices of pie, and there is 1 slice left over. This can be confirmed using multiplication, the inverse of division: if each of the 4 people received 2 slices, then 4 × 2 = 8 slices were given out in total. Adding the 1 slice remaining, the result is 9 slices. In summary: 9 = 4 × 2 + 1. In general, if the number of slices is denoted a and the number of people is denoted b, then one can divide the pie evenly among the people such that each person receives q slices (the quotient), with some number of slices r being the leftover (the remainder). In that case, the equation a = bq + r holds. If 9 slices were divided among 3 people instead of 4, then each would receive 3 and no slice would be left over, which means that the remainder would be zero, leading to the conclusion that 3 evenly divides 9, or that 3 divides 9. Euclidean division can also be extended to a negative dividend (or negative divisor) using the same formula; for example −9 = 4 × (−3) + 3, which means that −9 divided by 4 is −3 with remainder 3. Examples If a = 7 and b = 3, then q = 2 and r = 1, since 7 = 3 × 2 + 1. If a = 7 and b = −3, then q = −2 and r = 1, since 7 = −3 × (−2) + 1. If a = −7 and b = 3, then q = −3 and r = 2, since −7 = 3 × (−3) + 2. If a = −7 and b = −3, then q = 3 and r = 2, since −7 = −3 × 3 + 2. Proof The following proof of the division theorem relies on the fact that a decreasing sequence of non-negative integers stops eventually. It is separated into two parts: one for existence and another for uniqueness of q and r. Other proofs use the well-ordering principle (i.e., the assertion that every non-empty set of non-negative integers has a smallest element) to make the reasoning simpler, but have the disadvantage of not providing directly an algorithm for solving the division (see Effectiveness below for more). Existence For proving the existence of Euclidean division, one can suppose b > 0 since, if b < 0, the equality a = bq + r can be rewritten a = (−b)(−q) + r. So, if the latter equality is a Euclidean division with −b > 0, the former is also a Euclidean division. Given a and b > 0, there are integers q1 and r1 ≥ 0 such that a = bq1 + r1; for example, q1 = 0 and r1 = a if a ≥ 0, and otherwise q1 = a and r1 = a(1 − b). Let q1 and r1 be such a pair of numbers for which r1 is nonnegative and minimal. If r1 < b we have a Euclidean division. Thus, we have to prove that, if r1 ≥ b, then r1 is not minimal. Indeed, if r1 ≥ b, one has a = b(q1 + 1) + (r1 − b), with 0 ≤ r1 − b < r1, so r1 is not minimal. This proves the existence in all cases. This provides also an algorithm for computing the quotient and the remainder, by starting from q = 0 and r = a (if a ≥ 0) and repeatedly adding 1 to q and subtracting b from r until r < b. However, this algorithm is not efficient, since its number of steps is of the order of a/b. Uniqueness The pair of integers r and q such that a = bq + r is unique, in the sense that there can be no other pair of integers that satisfies the same condition in the Euclidean division theorem. In other words, if we have another division of a by b, say a = bq′ + r′ with 0 ≤ r′ < |b|, then we must have that q′ = q and r′ = r. To prove this statement, we first start with the assumptions that 0 ≤ r < |b|, 0 ≤ r′ < |b|, a = bq + r, and a = bq′ + r′. Subtracting the two equations yields b(q′ − q) = r − r′. So b is a divisor of r − r′. As |r − r′| < |b| by the above inequalities, one gets r − r′ = 0, and b(q′ − q) = 0. Since b ≠ 0, we get that r = r′ and q = q′, which proves the uniqueness part of the Euclidean division theorem. 
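The existence proof above is effectively an algorithm: repeatedly add or subtract b until the remainder falls into [0, b). A minimal Python sketch of that repeated-subtraction procedure (an added illustration; the function name is arbitrary, and the negative-divisor case is reduced to a positive one exactly as in the proof):

```python
def division_by_repeated_subtraction(a: int, b: int) -> tuple[int, int]:
    """Compute (q, r) as in the existence proof; takes on the order of |a/b| steps."""
    if b == 0:
        raise ZeroDivisionError
    if b < 0:                 # a = bq + r rewritten as a = (-b)(-q) + r
        q, r = division_by_repeated_subtraction(a, -b)
        return -q, r
    q, r = 0, a
    while r < 0:              # negative dividend: add b until r is nonnegative
        q, r = q - 1, r + b
    while r >= b:             # subtract b while the remainder is still too large
        q, r = q + 1, r - b
    return q, r

# The four sign combinations from the Examples section:
for a, b in [(7, 3), (7, -3), (-7, 3), (-7, -3)]:
    q, r = division_by_repeated_subtraction(a, b)
    assert a == b * q + r and 0 <= r < abs(b)
```

As the Effectiveness section notes next, the number of loop iterations grows with the size of the quotient, which is why long division and multiplication-based methods are preferred in practice.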
Effectiveness In general, an existence proof does not provide an algorithm for computing the existing quotient and remainder, but the above proof does immediately provide an algorithm (see Division algorithm#Division by repeated subtraction), even though it is not a very efficient one as it requires as many steps as the size of the quotient. This is related to the fact that it uses only additions, subtractions and comparisons of integers, without involving multiplication, nor any particular representation of the integers such as decimal notation. In terms of decimal notation, long division provides a much more efficient algorithm for solving Euclidean divisions. Its generalization to binary and hexadecimal notation provides further flexibility and possibility for computer implementation. However, for large inputs, algorithms that reduce division to multiplication, such as Newton–Raphson, are usually preferred, because they only need a time which is proportional to the time of the multiplication needed to verify the result—independently of the multiplication algorithm which is used (for more, see Division algorithm#Fast division methods). Variants The Euclidean division admits a number of variants, some of which are listed below. Other intervals for the remainder In Euclidean division with d as divisor, the remainder is supposed to belong to the interval [0, |d|) of length |d|. Any other interval of the same length may be used. More precisely, given integers m, a, d, with m > 0, there exist unique integers q and r with d ≤ r < m + d such that a = mq + r. In particular, if d = −⌊m/2⌋, then −⌊m/2⌋ ≤ r < m − ⌊m/2⌋. This division is called the centered division, and its remainder is called the centered remainder or the least absolute remainder. This is used for approximating real numbers: Euclidean division defines truncation, and centered division defines rounding. Montgomery division Given integers a, m and R, with m > 0 and gcd(R, m) = 1, let R⁻¹ be the modular multiplicative inverse of R (i.e., 0 < R⁻¹ < m with R·R⁻¹ − 1 being a multiple of m); then there exist unique integers q and r with 0 ≤ r < m such that a = mq + R⁻¹·r. This result generalizes Hensel's odd division (1900). The value r is the residue defined in Montgomery reduction. In Euclidean domains Euclidean domains (also known as Euclidean rings) are defined as integral domains which support the following generalization of Euclidean division: Given an element a and a non-zero element b in a Euclidean domain R equipped with a Euclidean function f (also known as a Euclidean valuation or degree function), there exist q and r in R such that a = bq + r and either r = 0 or f(r) < f(b). Uniqueness of q and r is not required. It occurs only in exceptional cases, typically for univariate polynomials, and for integers, if the further condition r ≥ 0 is added. Examples of Euclidean domains include fields, polynomial rings in one variable over a field, and the Gaussian integers. The Euclidean division of polynomials has been the object of specific developments. See also Euclid's lemma Euclidean algorithm Notes References Articles containing proofs Division (mathematics)
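To make the centered division described above concrete, here is a small Python sketch (an added illustration, not from the article; the function name is arbitrary and d is fixed to −⌊m/2⌋ as in the text):

```python
def centered_division(a: int, m: int) -> tuple[int, int]:
    """Return (q, r) with a == m*q + r and -(m//2) <= r < m - m//2."""
    if m <= 0:
        raise ValueError("the divisor m must be positive")
    q, r = divmod(a, m)       # standard remainder in [0, m)
    if r >= m - m // 2:       # shift large remainders down into the centered range
        q, r = q + 1, r - m
    return q, r

assert centered_division(7, 4) == (2, -1)   # 7 = 4 × 2 − 1, least absolute remainder
assert centered_division(9, 4) == (2, 1)    # 9 = 4 × 2 + 1
```

Note how m·q is the multiple of m nearest to a, matching the remark that centered division defines rounding.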
Euclidean division
Mathematics
2,031
25,596,814
https://en.wikipedia.org/wiki/Configuration%20design
Configuration design is a kind of design where a fixed set of predefined components that can be interfaced (connected) in predefined ways is given, and an assembly (i.e. designed artifact) of components selected from this fixed set is sought that satisfies a set of requirements and obeys a set of constraints. The associated design configuration problem consists of the following three constituent tasks: Selection of components, Allocation of components, and Interfacing of components (design of ways the components interface/connect with each other). Types of knowledge involved in configuration design include: Problem-specific knowledge: Input knowledge: Requirements Constraints Technology Case knowledge Persistent knowledge (knowledge that remains valid over multiple problem solving sessions): Case knowledge Domain-specific, method-independent knowledge Method-specific domain knowledge Search-control knowledge See also Systems design Modular design Morphological analysis (problem-solving) Constraint satisfaction problem References Mittal, S. and Frayman, F. (1989), Towards a generic model of configuration tasks, Proceedings of the 11th IJCAI, San Mateo, CA, USA, Morgan Kaufmann, pages 1395-1401. Levin, Mark Sh. (2015) Modular systems design and evaluation. Springer. B. Wielinga and G. Schreiber (1997), Configuration Design Problem Solving, IEEE Intelligent Systems, Vol. 12, pages 49–56. Design
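As a rough illustration of the constituent tasks (selection and interfacing, with allocation trivial here), the following Python sketch enumerates assemblies from a small hypothetical catalog; all component names, attributes, and constraints are invented for the example:

```python
from itertools import product

# Hypothetical catalog of predefined components.
CATALOG = {
    "psu":   [{"name": "PSU-A", "watts": 300}, {"name": "PSU-B", "watts": 500}],
    "board": [{"name": "B-1", "draw": 250, "socket": "X"},
              {"name": "B-2", "draw": 450, "socket": "Y"}],
}

def configurations(required_socket: str):
    """Yield assemblies that satisfy the requirement and the interface constraint."""
    for psu, board in product(CATALOG["psu"], CATALOG["board"]):  # selection
        if board["socket"] != required_socket:                    # requirement
            continue
        if psu["watts"] < board["draw"]:                          # interfacing constraint
            continue
        yield (psu["name"], board["name"])

print(list(configurations("Y")))  # [('PSU-B', 'B-2')]
```

Real configuration problems are usually handled with constraint-satisfaction solvers rather than brute-force enumeration, as the See also links suggest.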
Configuration design
Engineering
286
50,556,155
https://en.wikipedia.org/wiki/Tobias%20de%20Boer
Pieter Cornelis Tobias de Boer (21 May 1930 – 2 May 2016) was a Dutch scientist. He was a professor at the Sibley School of Mechanical and Aerospace Engineering of Cornell University. His research interests were in the field of thermodynamics and fluid mechanics. Career De Boer was born on 21 May 1930 in Leiden, the Netherlands. He studied mechanical engineering at Delft University of Technology, where he obtained his Bachelor's degree and, in 1954, his Master's. He subsequently served in the Dutch Armed Forces for two years. De Boer married in 1956, and a short time later the couple moved to the United States, settling in Maryland. He continued his studies and obtained his doctorate at the University of Maryland in 1962 under Jan Burgers. He subsequently was an assistant professor at the university until 1964. The de Boer family then moved to Ithaca, New York, and he was employed by Cornell University as an assistant professor at the Graduate School of Aeronautical Engineering. In 1968 he became associate professor. In 1972 the Sibley School of Mechanical and Aerospace Engineering was founded, and two years later de Boer became a full professor there. He retired in 2000. At the university, de Boer did research on pulse tube cryocoolers, the physics of shock waves, and Stirling engines, among other topics. In 1988 de Boer was elected a corresponding member of the Royal Netherlands Academy of Arts and Sciences. Personal life De Boer married Joan Lieshout in 1956; the couple had three children. Joan recorded the Dutch text on the Voyager Golden Record. De Boer was a sportsman and especially fond of cycling. When he was 48 he set a national 24-hour cycling record for his age category. He died on 2 May 2016 in the retirement community of Kendal at Ithaca, aged 85. References External links Interview with Tob de Boer by Francis Moon 1930 births 2016 deaths Cornell University faculty Delft University of Technology alumni Dutch emigrants to the United States 20th-century Dutch engineers Fluid dynamicists Members of the Royal Netherlands Academy of Arts and Sciences Scientists from Leiden University of Maryland, College Park alumni
Tobias de Boer
Chemistry
420
56,824,574
https://en.wikipedia.org/wiki/Field%20effect%20%28chemistry%29
A field effect is the polarization of a molecule through space. The effect is a result of an electric field produced by charge localization in a molecule. This field, which is substituent and conformation dependent, can influence structure and reactivity by manipulating the location of electron density in bonds and/or the overall molecule. The polarization of a molecule through its bonds is a separate phenomenon known as induction. Field effects are relatively weak, and diminish rapidly with distance, but have still been found to alter molecular properties such as acidity. Field sources Field effects can arise from the electric dipole field of a bond containing an electronegative atom or electron-withdrawing substituent, as well as from an atom or substituent bearing a formal charge. The directionality of a dipole, and concentration of charge, can both define the shape of a molecule's electric field, which will manipulate the localization of electron density toward or away from sites of interest, such as an acidic hydrogen. Field effects are typically associated with the alignment of a dipole field with respect to a reaction center. Since these are through-space effects, the 3D structure of a molecule is an important consideration. A field may be interrupted by other bonds or atoms before propagating to a reactive site of interest. Atoms of differing electronegativities can move closer together, resulting in bond polarization through space that mimics the inductive effect through bonds. Bicycloheptane and bicyclooctane (seen left) are compounds in which the change in acidity with substitution was attributed to the field effect. The C-X dipole is oriented away from the carboxylic acid group, and can draw electron density away because the molecule center is empty, with a low dielectric constant, so the electric field is able to propagate with minimal resistance. Utility of effect A dipole can align to stabilize or destabilize the formation or loss of a charge, thereby decreasing (if stabilized) or increasing (if destabilized) the activation barrier to a chemical event. Field effects can therefore tune the acidity or basicity of bonds within their fields by donating or withdrawing charge density. With respect to acidity, a common trend to note is that, inductively, an electron-withdrawing substituent in the vicinity of an acidic proton will lower the pKa (i.e. increase the acidity) and, correspondingly, an electron-donating substituent will raise the pKa. The reorganization of charge due to field effects will have the same result. An electric dipole field propagated through the space around, or in the middle of, a molecule in the direction of an acidic proton will decrease the acidity, while a dipole pointed away will increase the acidity and concomitantly elongate the X-H bond. These effects can therefore help to tune the acidity/basicity of a molecule to protonate/deprotonate a specific compound, or enhance hydrogen bond-donor ability for molecular recognition or anion sensing applications. Field effects have also been shown in substituted arenes to dominate the electrostatic potential maps, which are maps of electron density used to explain intermolecular interactions. Evidence for field effects Localized electronic effects are a combination of inductive and field effects. Due to the similarity in these effects, it is difficult to separate their contributions to the electronic structure of a molecule. 
There is, however, a large body of literature devoted to developing an understanding of the relative significance of induction and field effects by analyzing related compounds in an attempt to quantify each effect based on the present substituents and molecular geometry. For example, the three compounds to the right, all octanes, differ only in the number of linkers between the electron-withdrawing group X and an acidic functional group, which are approximately the same spatial distance apart in each compound. It is known that an electron-withdrawing substituent will decrease the pKa of a given proton (i.e. increase the acidity) inductively. If induction were the dominant effect in these compounds, acidity should increase linearly with the number of available inductive pathways (linkers). However, the experimental data show that the effect on acidity in related octanes and cubanes is very similar, and therefore the dominant effect must be through space. In the cis-11,12-dichloro-9,10-dihydro-9,10-ethano-2-anthroic acid syn and anti isomers seen below and to the left, the chlorines provide a field effect. The concentration of negative charge on each chlorine has a through-space effect which can be seen in the relative pKa values. When the chlorines are pointed over the carboxylic acid group, the pKa is higher because loss of a proton is less favorable due to the increase in negative charge in the area. Loss of a proton results in a negative charge which is less stable if there is already an inherent concentration of electrons. This can be attributed to a field effect because in the same compound with the chlorines pointed away from the acidic group the pKa is lower, and if the effect were inductive the conformational position would not matter. References Chemical properties Chemistry Electrostatics Electromagnetism Molecular physics Molecules Physical chemistry
Field effect (chemistry)
Physics,Chemistry
1,113
77,548,143
https://en.wikipedia.org/wiki/James%20Haglund
James Haglund is an American mathematician who specializes in algebraic combinatorics and enumerative combinatorics, and works as a professor of mathematics at the University of Pennsylvania. Education Haglund received his Ph.D. in 1993 from the University of Georgia, with the dissertation Compositions, Rook Placements, and Permutations of Vectors supervised by Earl Rodney Canfield. Research contributions In 2005, together with M. Haiman and N. Loehr, he gave the first proof of a combinatorial interpretation of the Macdonald polynomials. In 2007, Haglund, Haiman and Loehr gave a combinatorial formula for the non-symmetric Macdonald polynomials. Haglund is the author of The q,t-Catalan Numbers and the Space of Diagonal Harmonics: With an Appendix on the Combinatorics of Macdonald Polynomials. Academic talks In 2024, Haglund gave a talk at KAIST on Superization of Symmetric Functions. In 2015, together with Alexandre Kirillov and Ching-Li Chai, Haglund gave a talk at the Penn Wharton China Center on Penn Math Day, sponsored by the University of Pennsylvania and Peking University. In 2006, he gave a plenary address at the 18th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC '06), San Diego (USA). Editorial Haglund has served on the editorial boards of Transactions of the AMS, the Journal of Combinatorics, and a few other academic journals. Students Among the Ph.D. students supervised by Haglund are Frederick M. Butler, Mahir Bilen Can, Logan Crew, Sarah Katherine Mason, Anna Pun, Chunwei Song, and Meesue Yoo. Recognition In 2013, Haglund became an inaugural Fellow of the American Mathematical Society. References Year of birth missing (living people) Living people 20th-century American mathematicians 21st-century American mathematicians Combinatorialists University of Georgia alumni University of Pennsylvania faculty Fellows of the American Mathematical Society
James Haglund
Mathematics
400
39,474,050
https://en.wikipedia.org/wiki/Consumer%20network
The notion of consumer networks expresses the idea that people's embeddedness in social networks affects their behavior as consumers. Interactions within consumer networks, such as information exchange and imitation, can affect demand and market outcomes in ways not considered in the neoclassical theory of consumer choice. Economics Economic research on the topic is not ample. In attempts to incorporate consumer networks into standard microeconomic models, some interesting implications have been found concerning market structure, market dynamics and the firm's profit-maximizing decision. It has been shown that under certain assumptions the structure of the consumer network can affect market structure. In certain scenarios, where consumers have a higher inclination to compare their habitually consumed product to that of their acquaintances, the equilibrium market structure can switch from oligopoly to monopoly. In another model, which incorporates small-world consumer networks into the profit function of the firm, it has been demonstrated that the density of the network significantly affects the optimal price the firm should charge and the optimal referral fee (paid to consumers who can convince another one to buy). On the other hand, the size of the network does not have an important effect on these. A 2007 laboratory experiment found that increased density of consumer networks can reduce market inefficiencies caused by moral hazard. The ability of consumers to exchange information with more neighbors increases firms' incentives to build a reputation through selling high-quality products. Even a low level of density was found to improve outcomes relative to isolated consumers, who can rely only on their own experience. Marketing Exploiting consumer networks for marketing purposes, through techniques such as viral marketing, word-of-mouth marketing, or network marketing, is increasingly experimented with by marketers, to the extent that "some developments in customer networking are ahead of empirical research, and a few seem ahead even of accepted theory". These might often be more effective than more traditional forms of advertising. A key task of such forms of marketing is to target the people who are opinion leaders regarding consumption, having many contacts and a positive reputation. They are, in network science language, the hubs of consumer networks. See also Viral marketing Word-of-mouth marketing Notes and references Network Network theory
Consumer network
Mathematics
430
6,051,606
https://en.wikipedia.org/wiki/Safety%20harness
A safety harness is a form of protective equipment designed to safeguard the user from injury or death from falling. The core item of a fall arrest system, the harness is usually fabricated from rope, braided wire cable, or synthetic webbing. It is attached securely to a stationary object directly by a locking device or indirectly via a rope, cable, or webbing and one or more locking devices. Some safety harnesses are used in combination with a shock-absorbing lanyard, which is used to regulate deceleration and thereby prevent a serious G-force injury when the end of the rope is reached. An unrelated use with a materially different arresting mechanism is bungee jumping. Though they share certain similar attributes, a safety harness is not to be confused with a climbing harness used for mountaineering, rock climbing, and climbing gyms. Specialized harnesses for animal rescue or transfer, as from a dock to a vessel, are also made. Safety harnesses have restraints that prevent the wearer from falling from a height. By wearing the belt or harness, the risk of injury from a fall is greatly reduced. The harness allows the user to attach themselves to a stationary object, ensuring they will not hit the ground in the event of a fall. Falling from height is one of the most common causes of injury in the workplace, so proper equipment is essential when working at height. Before safety harnesses were required by OSHA (the Occupational Safety and Health Administration), workers wore body belts to connect to fall protection systems. Workers had the belts fastened around the waist, resulting in the entire force of a fall being exerted on the abdomen and often causing significant injury. OSHA's requirement reduced both fatalities from falls and the injuries caused by the older body belts. Standards In North America, safety harnesses designed for protection against falls from heights in industrial and construction activities are covered by performance standards issued by the American National Standards Institute (ANSI) in the United States and by CSA Group (formerly known as the Canadian Standards Association) in Canada. Specifically, the standards issued are ANSI Z359.1 and CSA Z259.10. These standards are updated approximately every four to five years. The main purpose of the safety standards is to "act as a standard to drive best-in-class harnesses through rigorous design and test requirements, in addition to having requirements for manufacturers to create an ANSI-approved full body harness." The update to Z359.11 includes revisions and new requirements, including a modified, headfirst, dynamic test procedure; new stretch-out requirements for frontal connections; alternative fall arrest indicator testing and new label requirements; allowance for harnesses with integrated energy absorbers; and changes to labeling requirements. It requires harness label packs to have pictograms showing the approved usage of different connections and diagrams explaining the difference between deployed and non-deployed visual load indicators. These standards give users confidence that the equipment has gone through rigorous testing and is safe and effective. 
Classifications There are four classes of fall protection systems: Class 1 is body belts (single or double D-ring), designed to restrain a person in a hazardous work position, prevent a fall, or arrest it completely within 3 feet (90 cm) (OSHA). Class 2 is chest harnesses, used only with limited fall hazards (including no vertical free fall), or for retrieving persons, as from a tank or bin. Class 3 is full body harnesses, designed to arrest the most severe free falls. Class 4 is suspension belts, independent supports used to suspend a worker, such as boatswain's chairs or raising or lowering harnesses. Other types Other forms of safety harnesses include seat belts and child safety seats in cars, which help passengers be, and feel, safer in a vehicle; over-the-shoulder restraints, mainly used on roller coasters at amusement parks; seats with full-body harnesses like those used by fighter pilots and racing car drivers; and diving harnesses, which are used in surface-supplied diving by professional divers. Uses Fall arrest A fall arrest harness is equipment which safely stops a person who is already falling. Window cleaners working high on buildings use harnesses to keep from falling many stories if they slip; this is the most common use of a safety harness. Theatrical fly crew members, who work above the theater floor, wear harnesses in case they slip. Construction workers on upper floors wear harnesses so that a slip does not become a fall to the ground. Crane operators work at heights where a fall would cause serious injury or death, so they need fall protection while working. A lineman climbing power lines must be secured in place to work on high-voltage lines without moving around. Harnesses are used when sailing to prevent the crew from being thrown overboard in rough seas. Climbing A climbing harness is a device which allows a climber access to the safety of a rope. Rock climbers use harnesses to belay each other: a rope connects the climbers so that if one falls, the other can catch them with the rope instead of letting them fall all the way to the ground. Guide or support A jackstay harness is a substantial line between two points used to guide or support. Bungee jumping requires a harness to let the person bounce back up; without it there would be no way to prevent the jumper from falling straight to the ground. In motorsport, harnesses keep the driver secured in place in the event of a crash, giving the driver a better chance of escaping without injury. Diving A lifeline harness is a rope connecting the diver to an attendant, usually at the surface. A professional diver uses a safety harness to stay close to whatever they are working on underwater; without the harness they could be pulled away from their work by a current. Inspection To be sure that a safety harness is not going to fail in use, the user needs to check several things before trusting their life to the equipment. The most likely problem is fraying from prior use. Another problem, arising from improper storage, is cracks in the harness, which make it much weaker and less durable. 
Finally, the wearer needs to check the fasteners for damage so that they do not fail independently of the harness itself. References "ANSI / ASSP Z359 Fall Protection and Fall Restraint Standards". assp.org. American Society of Safety Professionals. Archived from the original on May 27, 2020. Retrieved June 4, 2020. "CAN/CSA-Z259.10-12 (R2016) - Standards Council of Canada - Conseil canadien des normes". scc.ca. Standards Council of Canada. Retrieved June 4, 2020. "What the Updated Z359.11 Standard Means for Full Body Harnesses". American Society of Safety Professionals. Retrieved April 10, 2023. "Fall Protection Information". Archived from the original on August 4, 2016. Retrieved March 17, 2017. "When to Use a Safety Harness". www.safety-harness.com. Retrieved April 10, 2023. Roux, Lance. "Harness Safety Facts and Tips". www.safetyproresources.com. Retrieved May 1, 2023. Admin, FallTech. "The Complete Guide to Full Body Harnesses". blog.falltech.com. Retrieved May 1, 2023. "Fall Protection Safety Harness Guide". SafetyCulture. Retrieved May 1, 2023. Personal protective equipment
Safety harness
Engineering,Environmental_science
1,677
43,106,038
https://en.wikipedia.org/wiki/IGR%20J11014%E2%88%926103
IGR J11014−6103, also called the Lighthouse Nebula, is a pulsar wind nebula trailing the neutron star which has the longest relativistic jet observed in the Milky Way galaxy. Description The object consists of a neutron star with a radius of about 12 km, which formed about 10,000–30,000 years ago in a supernova explosion. The supernova explosion "kicked" the neutron star, which is now moving through space with a velocity of between 0.3% and 0.8% of the speed of light, faster than almost all other known runaway neutron stars. The pulsar is now about 60 light-years from the original supernova location. The neutron star is the source of a relativistic helical jet, which is observed in X-rays but has no detected radio signature. In the composite processed image (right) the neutron star pulsar is the point-like object with a pulsar wind nebula tail trailing behind it for about 3 light-years. The jet, aligned with the pulsar rotation axis, is perpendicular to the pulsar's trajectory and extends out over 37 light-years (about nine times the distance from the Sun to the nearest visible star). The estimated velocity of the jet is about 80% of the speed of light. The star was initially presumed to be rapidly spinning but later measurements indicate that its spin rate is only 15.9 Hz. This rather slow spin rate and the fact that there is no evidence of accretion suggests that the jet is neither rotation nor accretion powered. A counter-jet (not shown in the image) has been detected, but is much fainter, possibly due to relativistic beaming. The origin of the glitch at about a third of the jet length is not known, but it might be due to the jet switching off and on or the jet orientation changing. References External links The long helical jet of the Lighthouse nebula The Lighthouse nebula, NASA: Astronomy Picture of the Day, 2014 February 21 A lighthouse pulsar (German) Pulsar wind nebulae Carina (constellation)
IGR J11014−6103
Astronomy
444
5,068,268
https://en.wikipedia.org/wiki/HD%2073390
HD 73390, also called e1 Carinae, is a binary star system in the constellation Carina. It is approximately 870 light years from Earth. The primary is a blue-white B-type main sequence dwarf with an apparent magnitude of +5.27. It displays an infrared excess and is a candidate host of an orbiting debris disk. The secondary is a magnitude 8.9 star which has a mass and temperature similar to the Sun. References B-type main-sequence stars Carina (constellation) Carinae, e1 Durchmusterung objects 073390 042129 3415
HD 73390
Astronomy
125
51,322,860
https://en.wikipedia.org/wiki/Vera%20Danchakoff
Vera Mikhaĭlovna Danchakoff (née Grigorevskaya, March 21, 1879 – September 22, 1950) was an anatomist, cell biologist and embryologist from the Russian Empire. In 1908 she was the first woman in the Russian Empire to be appointed as a professor, and she became a pioneer in stem cell research. She emigrated to the United States in 1915, where she was a leading exponent of the idea that all types of blood cell develop from a single type of cell. She has sometimes been called "the mother of stem cells". She later returned to Europe to continue with her research. Early life Danchakoff was born in St Petersburg, where her parents wanted her to study music or drawing. Determined otherwise, she left home to take a degree in natural sciences before moving to Lausanne University for a medical degree, producing her thesis in 1906. Returning to Russia she took a Russian medical degree at Kharkov University and then became the first woman to be awarded a doctorate in medical sciences at the St Petersburg Academy of Medicine – Russia's first medical college for women. She married, and her daughter, Vera Evgenevna, who was born in 1902 in Zurich, went on to study at Columbia University and to marry Mikhail Lavrentyev, the mathematician. In 1915 Danchakoff emigrated to the United States, where she was politically active, writing as the New York correspondent of the Moscow newspaper Utro Rossi (Russian Morning) and helping the American Relief Administration with publicising the difficulties Soviet scientists faced in working in Russia during the Great War, the Bolshevik Revolution and afterwards. During the Russian famine of 1921–22 Danchakoff appealed for food parcels to be sent to Russia by publicizing the correspondence she had been receiving from scientific colleagues in Russia. Although internationally eminent, they had been denounced as "parasites and idlers" and were dying of starvation. At the time there was a strong Russian émigré community in New York and, with her husband, Danchakoff hosted lavish gatherings of friends. She was a talented pianist and she took part in the musical evenings of Juan and Olga Codina, who were professional singers. She used to look after their daughter Lina when her parents were away on their extended tours – Lina later was to marry Serge Prokofiev. Scientific career In 1908 Danchakoff became an assistant professor in histology and embryology at Moscow University – the first woman to become a professor in Russia. In 1915 she emigrated to the United States, where she first worked at the Rockefeller Institute for Medical Research in New York City. Then at the Columbia University College of Physicians and Surgeons, led by Thomas Hunt Morgan, she was "instructor in anatomy" at a time when women were first being allowed admittance as students. In a 1916 lecture she set out her view of blood cell development. In his 2001 keynote address to the Acute Leukemia Forum, Marshall Lichtman described her presentation as an "extraordinary lecture" and considered that "The rest of the century has been spent filling in the details of [her] experimental insights!". It has been claimed that a paper of Danchakoff's is the first publication to use the term "stem cell", for example "These stem cells develop on the one hand into the small lymphocytes, and on the other hand into granulocytoblasts, and further into granulocytes". It has now been confirmed that hematopoietic stem cells give rise to all other blood cells. For these reasons Danchakoff has sometimes been called the "mother of stem cells". 
However, in terms of the actual terminology, in 1909 Alexander A. Maximow wrote in German of "Stammzelle" for the same concept in his paper "The lymphocyte as a stem cell, common to different blood elements in embryonic development and during the post-fetal life of mammals" (English translation). In 1916 Danchakoff and James Bumgardner Murphy independently reported on a surprising discovery concerning the chick embryo – one that turned out to be of great importance. When the embryo was injected with adult lymphocytes the spleen greatly enlarged. With other types of cell this did not occur. Murphy's and Danchakoff's explanations for the effect were wrong but much later these observations led to an understanding of lymphocyte migration and graft-versus-host disease. By 1919 Danchakoff was a full professor of anatomy in Columbia's College of Physicians and Surgeons. In 1934 she left Columbia and until 1937 worked in the Department of Histology and Embryology at the Lithuanian University of Health Sciences. In 1938 she conducted important experiments which involved exposing female guinea pig foetuses to testosterone. She showed for the first time that this can give rise to an increase of masculine sexual behavior in adulthood. Danchakoff published many books as well as scientific papers, possibly her last publications being Le sexe; rôle de l'hérédité et des hormones dans sa réalisation in 1949 and Effects of cancer provoking chemical substances on gravid guinea pigs and their fruits in 1950. Notes References Embryologists from the Russian Empire Women anatomists 20th-century anatomists Anatomists from the Russian Empire 1879 births 1950 deaths Scientists from Saint Petersburg Academic staff of Moscow State University Columbia University faculty Rockefeller University faculty Stem cell researchers Emigrants from the Russian Empire to the United States Women scientists from the Russian Empire
Vera Danchakoff
Biology
1,090
36,379,613
https://en.wikipedia.org/wiki/R-spondin%204
R-spondin 4 is a protein that in humans is encoded by the RSPO4 gene, located on chromosome 20. This gene encodes a member of the R-spondin family of proteins, which share a common domain organization consisting of a signal peptide, a cysteine-rich/furin-like domain, a thrombospondin domain and a C-terminal basic region. The encoded protein may be involved in activation of Wnt/beta-catenin signaling pathways. Mutations in this gene are associated with anonychia. Alternative splicing results in multiple transcript variants. [provided by RefSeq, Sep 2009] References Further reading Genes on human chromosome 20 Glycoproteins Extracellular matrix proteins
R-spondin 4
Chemistry
151
30,283,480
https://en.wikipedia.org/wiki/Structure%20of%20liquids%20and%20glasses
The structure of liquids, glasses and other non-crystalline solids is characterized by the absence of the long-range order which defines crystalline materials. Liquids and amorphous solids do, however, possess a rich and varied array of short- to medium-range order, which originates from chemical bonding and related interactions. Metallic glasses, for example, are typically well described by the dense random packing of hard spheres, whereas covalent systems, such as silicate glasses, have sparsely packed, strongly bound, tetrahedral network structures. These very different structures result in materials with very different physical properties and applications. The study of liquid and glass structure aims to gain insight into their behavior and physical properties, so that they can be understood, predicted and tailored for specific applications. Since the structure and resulting behavior of liquids and glasses is a complex many-body problem, historically it has been too computationally intensive to solve using quantum mechanics directly. Instead, a variety of diffraction, nuclear magnetic resonance (NMR), molecular dynamics, and Monte Carlo simulation techniques are most commonly used. Pair distribution functions and structure factors The pair distribution function (or pair correlation function) g(r) of a material describes the probability of finding an atom at a separation r from another atom. A typical plot of g versus r of a liquid or glass shows a number of key features: At short separations (small r), g(r) = 0. This indicates the effective width of the atoms, which limits their distance of approach. A number of obvious peaks and troughs are present. These peaks indicate that the atoms pack around each other in 'shells' of nearest neighbors. Typically the 1st peak in g(r) is the strongest feature. This is due to the relatively strong chemical bonding and repulsion effects felt between neighboring atoms in the 1st shell. The attenuation of the peaks at increasing radial distances from the center indicates the decreasing degree of order from the center particle. This illustrates vividly the absence of "long-range order" in liquids and glasses. At long ranges, g(r) approaches a limiting value of 1, which corresponds to the macroscopic density of the material. The static structure factor, S(q), which can be measured with diffraction techniques, is related to its corresponding g(r) by Fourier transformation, S(q) = 1 + (4πρ/q) ∫ r [g(r) − 1] sin(qr) dr (integrated over r from 0 to ∞), where q is the magnitude of the momentum transfer vector, and ρ is the number density of the material. Like g(r), the S(q) patterns of liquids and glasses have a number of key features: For monoatomic systems the S(q=0) limit is related to the isothermal compressibility. Also a rise at the low-q limit indicates the presence of small-angle scattering, due to large-scale structure or voids in the material. The sharpest peaks (or troughs) in S(q) typically occur in the q = 1–3 ångström⁻¹ range. These normally indicate the presence of some medium-range order corresponding to structure in the 2nd and higher coordination shells in g(r). At high q the structure is typically a decaying sinusoidal oscillation, with a 2π/r1 wavelength, where r1 is the 1st shell peak position in g(r). At very high q the S(q) tends to 1, consistent with its definition. Diffraction The absence of long-range order in liquids and glasses is evidenced by the absence of Bragg peaks in X-ray and neutron diffraction. 
For these isotropic materials, the diffraction pattern has circular symmetry, and in the radial direction, the diffraction intensity has a smooth oscillatory shape. This diffracted intensity is usually analyzed to give the static structure factor, S(q), where q is given by q = 4πsin(θ)/λ, where 2θ is the scattering angle (the angle between the incident and scattered quanta), and λ is the incident wavelength of the probe (photon or neutron). Typically diffraction measurements are performed at a single (monochromatic) λ, and diffracted intensity is measured over a range of 2θ angles, to give a wide range of q. Alternatively a range of λ may be used, allowing the intensity measurements to be taken at a fixed or narrow range of 2θ. In X-ray diffraction, such measurements are typically called "energy dispersive", whereas in neutron diffraction this is normally called "time-of-flight", reflecting the different detection methods used. Once obtained, an S(q) pattern can be Fourier transformed to provide a corresponding radial distribution function (or pair correlation function), denoted in this article as g(r). For an isotropic material, the relation between S(q) and its corresponding g(r) is g(r) = 1 + (1/(2π²ρr)) ∫ q [S(q) − 1] sin(qr) dq (integrated over q from 0 to ∞). The g(r), which describes the probability of finding an atom at a separation r from another atom, provides a more intuitive description of the atomic structure. The g(r) pattern obtained from a diffraction measurement represents a spatial and thermal average of all the pair correlations in the material, weighted by their coherent cross-sections with the incident beam. Atomistic simulation By definition, g(r) is related to the average number of particles found within a given volume of shell located at a distance r from the center. The average density of atoms at a given radial distance from another atom is given by the formula ρ g(r) = n(r) / (4πr²Δr), where n(r) is the mean number of atoms in a shell of width Δr at distance r and ρ is the mean number density. The g(r) of a simulation box can be calculated easily by histogramming the particle separations: for each bin [r, r + Δr], count the number of ordered pairs of particles i ≠ j whose separation |rij| falls in the bin, and divide by the ideal-gas expectation Na · 4πr²Δr · ρ for that shell, where Na is the number of particles and |rij| is the magnitude of the separation of the pair of particles i, j. Atomistic simulations can also be used in conjunction with interatomic pair potential functions in order to calculate macroscopic thermodynamic parameters such as the internal energy, Gibbs free energy, entropy and enthalpy of the system.
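The histogramming just described is only a few lines of code. Below is a minimal Python/NumPy sketch (an added illustration, not from the article): it assumes a cubic periodic box with minimum-image distances, counts each unordered pair once, and normalizes against the ideal-gas expectation, so an uncorrelated configuration gives g(r) ≈ 1:

```python
import numpy as np

def pair_distribution(positions: np.ndarray, box: float, dr: float = 0.1):
    """Histogram estimate of g(r) for N particles in a cubic periodic box of side `box`."""
    n = len(positions)
    # minimum-image separations for every pair of particles
    deltas = positions[:, None, :] - positions[None, :, :]
    deltas -= box * np.round(deltas / box)
    dists = np.sqrt((deltas ** 2).sum(axis=-1))[np.triu_indices(n, k=1)]
    edges = np.arange(0.0, box / 2, dr)                 # r is only meaningful below box/2
    counts, edges = np.histogram(dists, bins=edges)
    r = 0.5 * (edges[:-1] + edges[1:])                  # bin centres
    # expected pair counts for an ideal (uncorrelated) gas at the same density
    ideal = (n * (n - 1) / 2) * (4 * np.pi * r**2 * dr) / box**3
    return r, counts / ideal

# Example: a random configuration should give g(r) close to 1 everywhere.
rng = np.random.default_rng(0)
r, g = pair_distribution(rng.uniform(0.0, 10.0, size=(500, 3)), box=10.0)
```

The all-pairs approach is O(N²); production molecular dynamics codes typically use cell lists or neighbor lists instead, but the normalization is the same.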
Theories of glass formation and criterion Structural theory of glass formation, Zachariasen While studying glass, Zachariasen began to notice repeating properties in glasses. He postulated rules and patterns such that, when atoms followed these rules, they were likely to form glasses. The following rules make up Zachariasen's theory, which applies only to oxide glasses. Each oxygen atom in a glass can be bonded to no more than two glass-forming cations. The coordination number of the glass-forming cation is 3 or 4. The oxygen coordination polyhedra only share corners, not edges or faces. At least 3 corners of every polyhedron must be shared, creating a continuous random network. All of these rules provide the correct amount of flexibility to form a glass and not a crystal. While these rules only apply to oxide glasses, they were the first rules to establish the idea of a continuous random network for glass structure. He was also the first to classify structural roles for various oxides, some being main glass formers (SiO2, GeO2, P2O5), and some being glass modifiers (Na2O, CaO). Energy criterion of K.H. Sun This criterion established a connection between the chemical bond strength and glass-forming tendency. When a material is quenched to form a glass, the stronger the bonds, the easier the glass formation. If a bond strength is higher than 80 kcal per bond (high bond strength), it will be glass network forming, meaning it is likely to form a glass. If a bond strength is less than 60 kcal per bond (low bond strength), it will be glass network modifying; since it would only form weak bonds, it would disrupt glass-forming networks. If a bond strength is between 60 and 80 kcal per bond (intermediate bond strength), it will be an intermediate. This means it will not form a glass on its own, but it partially can when combined with other network-forming atoms. Dietzel's field strength criterion Dietzel looked at direct Coulombic interactions between atoms. He categorized cations using the field strength FS = zc/(rc + ra)², where zc is the charge of the cation, and rc and ra are the radii of the cation and anion respectively. High field strength cations would have a high cation-oxygen bond energy. If FS was greater than 1.3 (small cation with high charge), it would be a glass network former. If FS was less than 0.4 (large cation with small charge), it would be a glass network modifier. If FS was between 0.4 and 1.3 (medium-sized cation with medium charge), it would be an intermediate. These three criteria provide different ways to determine whether certain oxide components will form glasses, and how readily. Other techniques Other experimental techniques often employed to study the structure of glasses include nuclear magnetic resonance, X-ray absorption fine structure and other spectroscopy methods including Raman spectroscopy. Experimental measurements can be combined with computer simulation methods, such as reverse Monte Carlo or molecular dynamics simulations, to obtain a more complete and detailed description of the atomic structure. Network glasses Early theories relating to the structure of glass included the crystallite theory, whereby glass is an aggregate of crystallites (extremely small crystals). However, structural determinations of vitreous SiO2 and GeO2 made by Warren and co-workers in the 1930s using X-ray diffraction showed the structure of glass to be typical of an amorphous solid. In 1932, Zachariasen introduced the random network theory of glass, in which the nature of bonding in the glass is the same as in the crystal but where the basic structural units in a glass are connected in a random manner, in contrast to the periodic arrangement in a crystalline material. Despite the lack of long-range order, the structure of glass does exhibit a high degree of ordering on short length scales due to the chemical bonding constraints in local atomic polyhedra. For example, the SiO4 tetrahedra that form the fundamental structural units in silica glass represent a high degree of order, i.e. every silicon atom is coordinated by 4 oxygen atoms and the nearest-neighbour Si-O bond length exhibits only a narrow distribution throughout the structure. The tetrahedra in silica also form a network of ring structures which leads to ordering on more intermediate length scales of up to approximately 10 ångströms. The structure of glasses differs from the structure of liquids just above the glass transition temperature Tg, which is revealed by XRD analysis and high-precision measurements of third- and fifth-order non-linear dielectric susceptibilities. 
Glasses are generally characterised by a higher degree of connectivity compared to liquids. Alternative views of the structure of liquids and glasses include the interstitialcy model and the model of string-like correlated motion. Molecular dynamics computer simulations indicate these two models are closely connected. Oxide glass components can be classified as network formers, intermediates, or network modifiers. Traditional network formers (e.g. silicon, boron, germanium) form a highly cross-linked network of chemical bonds. Intermediates (e.g. titanium, aluminium, zirconium, beryllium, magnesium, zinc) can behave both as a network former and as a network modifier, depending on the glass composition. The modifiers (calcium, lead, lithium, sodium, potassium) alter the network structure; they are usually present as ions, compensated by nearby non-bridging oxygen atoms, bound by one covalent bond to the glass network and holding one negative charge to compensate for the positive ion nearby. Some elements can play multiple roles; e.g. lead can act both as a network former (Pb4+ replacing Si4+) and as a modifier. The presence of non-bridging oxygens lowers the relative number of strong bonds in the material and disrupts the network, decreasing the viscosity of the melt and lowering the melting temperature. The alkali metal ions are small and mobile; their presence in a glass allows a degree of electrical conductivity. Their mobility decreases the chemical resistance of the glass, allowing leaching by water and facilitating corrosion. Alkaline earth ions, with their two positive charges and requirement for two non-bridging oxygen ions to compensate for their charge, are much less mobile themselves and hinder the diffusion of other ions, especially the alkalis. The most common commercial glass types contain both alkali and alkaline earth ions (usually sodium and calcium), for easier processing and satisfactory corrosion resistance. Corrosion resistance of glass can be increased by dealkalization, the removal of alkali ions from the glass surface by reaction with sulphur or fluorine compounds. The presence of alkali metal ions also has a detrimental effect on the loss tangent of the glass and on its electrical resistance; glass manufactured for electronics (sealing, vacuum tubes, lamps ...) has to take this into account. Crystalline SiO2 Silica (the chemical compound SiO2) has a number of distinct crystalline forms: quartz, tridymite, cristobalite, and others (including the high-pressure polymorphs stishovite and coesite). Nearly all of them involve tetrahedral SiO4 units linked together by shared vertices in different arrangements. Si-O bond lengths vary between the different crystal forms. For example, in α-quartz the bond length is 161 pm, whereas in α-tridymite it ranges from 154 to 171 pm. The Si–O–Si bond angle also varies, from 140° in α-tridymite to 144° in α-quartz to 180° in β-tridymite. Glassy SiO2 In amorphous silica (fused quartz), the SiO4 tetrahedra form a network that does not exhibit any long-range order. However, the tetrahedra themselves represent a high degree of local ordering, i.e. every silicon atom is coordinated by 4 oxygen atoms and the nearest-neighbour Si-O bond length exhibits only a narrow distribution throughout the structure. If one considers the atomic network of silica as a mechanical truss, this structure is isostatic, in the sense that the number of constraints acting between the atoms equals the number of degrees of freedom of the latter. 
According to rigidity theory, this allows the material to show great forming ability. Despite the lack of ordering on extended length scales, the tetrahedra also form a network of ring-like structures which lead to ordering on intermediate length scales (up to approximately 10 ångströms). Under the application of high pressure (approximately 40 GPa), silica glass undergoes a continuous polyamorphic phase transition into an octahedral form, i.e. the Si atoms are surrounded by 6 oxygen atoms instead of four as in the ambient-pressure tetrahedral glass. See also Amorphous solid Chemical structure Glass Liquid Neutron diffraction Pair distribution function Polyamorphism Structure factor Surface layering X-ray diffraction References Further reading Condensed matter physics Glass physics Liquids
Structure of liquids and glasses
Physics,Chemistry,Materials_science,Engineering
3,116
4,810,981
https://en.wikipedia.org/wiki/Computer%20operator
A computer operator is a role in IT which oversees the running of computer systems, ensuring that the machines and computers are running properly. The job of a computer operator as defined by the United States Bureau of Labor Statistics is to "monitor and control ... and respond to ... enter commands ... set controls on computer and peripheral devices. This Excludes Data Entry." Overview The position has evolved from its beginnings in the punched card era. A Bureau of Labor Statistics report published in 2018 showed that, in the public sector, a major employer of those categorized as computer operators was the United States Postal Service. In the private sector, companies involved in data processing, hosting, or related services employed computer operators at an even higher rate. The states with the highest employment for computer operators, as of 2018, are: New York, Texas, California, New Jersey, and Florida. Job role description The former role of a computer operator was to work with mainframe computers, which required a great deal of day-to-day management including manually running batch jobs; however, operators now often work with a variety of different systems and applications. The computer operator normally works in a server room or a data center, but can also work remotely so that they can operate systems across multiple sites. Most of their duties are taught on the job, as their job description will vary according to the systems they help to manage. Responsibilities of a computer operator may include: Monitor and control electronic computer and peripheral electronic data processing equipment to process business, scientific, engineering, and other data according to operating instructions. Monitor and respond to operating and error messages. May enter commands at a computer terminal and set controls on computer and peripheral devices. Excludes "Computer Occupations" (15-1100) and "Data Entry Keyers" (43-9021). The role also includes maintaining records and logging events, listing each backup that is run, each machine malfunction and program abnormal termination. Operators assist system administrators and programmers in testing and debugging new systems and programs before they are moved into production. Modern-day computing has led to a greater proliferation of personal computers, with a rapid change from older mainframe systems to newer self-managing systems. This is reflected in the operator's role. Tasks may include managing the backup systems, cycling tapes or other media, and filling and maintaining printers. Overall, the operator fills in as a lower-level system administrator or operations analyst. Most operations departments work 24x7. A computer operator also has knowledge of disaster recovery and business continuity procedures. Formerly, this would have meant sending physical data tapes offsite, but now the data is more than likely transmitted over computer networks. Specializations Console operator A console operator interacts with a front panel or a multi-user system's console: entering system commands via a keyboard, entering commands for a subsystem, e.g., HASP, via a keyboard, replying to requests for information, taking actions such as mounting computer tapes that were "pulled" by a tape librarian, and supervising a tape operator, especially when there is a non-specific mount request. These individuals would be trained to use specialized equipment related to their duties. 
Beyond the IBM System/360 era One example of specific hardware used by a console operator is the IBM 3066 Model 2 system console, which included a light pen as an interface device. Among its other then-new features, it replaced "most switch, pushbutton, and indicator functions," as had the 165's Model 1 console, and it included a microfiche document viewer, a feature introduced with the 360/85's console. A console printer (up to 85 characters per second) to provide hard copy was optional when the console was in display mode, and required when it was in printer-keyboard mode. Peripherals operator A peripherals operator uses dedicated peripheral equipment connected to computer(s), such as printers, scanners, or storage devices, for data transfer to and/or from computers. Tape operator Historically, tape operators were in charge of swapping out reels of paper tape, reels of magnetic tape, or magnetic tape cartridges that stored computer data or instructions. Card reader operator Depending on the type of card reader, either the "9-edge" or the "12-edge" faced the card reader operator inserting the cards, but the deck of cards was always placed face down. The United States Army's wordings were: "Load cards in hopper face down, 12 edge out, column 1 to the left" (1977); "Place cards in hopper face down with 12 edge to operator" (1981). The 12-edge, face-down orientation was IBM's; the 9-edge orientation (also face down) was used by some other card readers. Printer operator In addition to filing or delivering computer printouts, a printer operator at times loads standard or, as directed by a console operator or a remote console, specialized forms. Tab operator The tab operator (short for tabulating) would be responsible for preparing and operating tabulating machines to produce statistical results. Hardware such as the IBM 08x sorter series was called tabulating equipment. The 1980 census specifically counted tab operators ("Tabulating-machine operator"). Tape librarian A tape librarian is responsible for the management, storage, and reporting involving data storage tapes. The tape librarian would develop and/or maintain an organization system for the storage and retrieval of tapes, and assist in disaster recovery. Additionally, the librarian would ensure the integrity of the tapes and submit recommendations for replacement when needed. One example of equipment a tape librarian may work with is the IBM 3850. Gallery See also System administration Notes References Computer occupations
Computer operator
Technology
1,134
13,526,853
https://en.wikipedia.org/wiki/Column%20wave
The column wave is a 17th-century stage machine created to mimic the movement of the ocean. Developed by Nicola Sabbatini, the machine was an effective way to give the appearance of a wave-filled sea. It was used to great effect through the following centuries. The machine was documented in Pratica di Fabricar Scene e Macchine ne' Teatri as the third method of showing a sea. The column wave was built by attaching slightly bent bars through cylinders made of wood and burlap. The burlap was painted blue and black (with hints of silver for the whitecaps). These tubes were attached to cranks that, when turned, made the stretched burlap quiver while the disks created a flowing motion. Combining several of these in a row gave the audience a more realistic sea than had been seen on stage before. References Sabbatini, N. Pratica di fabricar scene e macchine ne' teatri, Ravenna, 1638. https://web.archive.org/web/20010104221100/http://www.acs.appstate.edu/orgs/spectacle/index.html Scenic design
Column wave
Engineering
247
49,269,127
https://en.wikipedia.org/wiki/Isoxicam
Isoxicam is a nonsteroidal anti-inflammatory drug (NSAID) that was used to reduce inflammation and, as an analgesic, to relieve pain in certain conditions. The drug was introduced in 1983 by the Warner-Lambert Company. Isoxicam is a chemical analog of piroxicam (Feldene), bearing an isoxazole ring in place of piroxicam's pyridine ring. In 1985 isoxicam was withdrawn from the French market due to adverse effects, namely toxic epidermal necrolysis resulting in death. Although these serious side effects were observed only in France, the drug was withdrawn worldwide. References Dermatoxins Isoxazoles Nonsteroidal anti-inflammatory drugs Drugs developed by Pfizer Sultams Withdrawn drugs
Isoxicam
Chemistry
161
6,945,073
https://en.wikipedia.org/wiki/Muhammad%20Raziuddin%20Siddiqui
Muhammad Raziuddin Siddiqui, FPAS, NI, HI, SI (Urdu: , ; 8 January 1908 – 8 January 1998), also known as Dr. Razi, was a Pakistani theoretical physicist and a mathematician who played a role in Pakistan's education system and in Pakistan's indigenous development of nuclear weapons. An educationist and a scientist, Siddiqui established educational research institutes and universities in his country. During the 1940s in Europe, he contributed to mathematical physics and worked on general relativity and the theory of relativity, nuclear energy, and quantum gravity. He was one of the notable students of Albert Einstein. He had been the vice-chancellor of four Pakistani universities, was the first vice-chancellor of Quaid-e-Azam University, and served as Emeritus Professor of Physics there until his death in 1998. Biography Life and education Raziuddin Siddiqui was born on 8 January 1908 in Hyderabad, Deccan, India, to Mohammed Muzaffer uddin Siddiqui and Baratunnisa Begum. His family consisted of one elder brother, Mohammed Zakiuddin Siddiqui, and two sisters, Abida Begum and Sajida Begum; he was the youngest in the family. He attended the newly established Osmania University. After passing the Rashidia Exams in 1918, Siddiqui completed his matriculation from Osmania University in 1921, and earned a Bachelor of Arts (BA) in mathematics, with distinction, in 1925. Siddiqui in Europe Siddiqui was then awarded a scholarship from the Government of the State of Hyderabad to pursue higher studies in the United Kingdom, where he completed his MA in mathematics under Paul Dirac at the University of Cambridge in 1928. He then proceeded to work toward his PhD at the University of Leipzig in Germany (Weimar Republic). He studied mathematics and quantum mechanics under Albert Einstein. He completed his PhD in theoretical physics, writing a brief research thesis on the theory of relativity and nuclear binding energy. He did his postdoctoral work at the University of Paris, France. Research in theoretical physics In Europe, while Siddiqui was working on his postdoctoral research at the Paris University, he had the opportunity to meet with the members of "The Paris Group", where he led discussions on unsolved problems in physics and mathematics. During his stay in Great Britain, he studied quantum mechanics and published scientific papers at the Cavendish Laboratory. Return to India In 1931, Siddiqui returned to Hyderabad, British Indian Empire, and joined Osmania University there as an associate professor of mathematics. During 1948–49, he served as vice-chancellor of Osmania, appointed by the governor of Andhra Pradesh. Move to Pakistan After the Partition of India led to the independence of Pakistan in 1947, at the request of the Government of Pakistan, Siddiqui migrated to Karachi, Pakistan, in 1950, along with some of his family. His brother Zakiuddin and one of his sisters, Sajida Begum, remained in Hyderabad, India, with their families and parents. His father, Muzaffer uddin Siddiqui, died during a visit to Raziuddin Siddiqui in Pakistan in his later years. In Karachi, Siddiqui joined Karachi University's teaching faculty and taught as professor of applied mathematics there. In 1953, he was simultaneously appointed to the posts of vice-chancellor of the University of Sindh and the University of Peshawar. Siddiqui founded the first mathematical society in Pakistan in 1952, under the name "All Pakistan Mathematics Association", and remained its president until 1972.
In 1956, Siddiqui helped establish nuclear power in Pakistan and its expansion in the country, first by joining the newly established Pakistan Atomic Energy Commission (PAEC) and then by establishing the first science directorate on mathematical physics. In 1964, he moved to Islamabad, where he joined PAEC. There he began his academic research in theoretical physics. In 1965, with the establishment of Quaid-e-Azam University (QAU), Siddiqui was appointed as its first vice-chancellor by the then foreign minister Zulfikar Ali Bhutto. He was one of the first professors of physics at Quaid-e-Azam University, where he also served as the chairman of the Physics Department. He continued his tenure until 1972, when he rejoined PAEC at the request of Prime Minister Bhutto. During the 1960s, he helped convince President of Pakistan Ayub Khan to make a proposed university a research institution. He first established the "Institute of Physics" at the QAU and invited Professor Riazuddin to be its first director and the dean of the faculty. Then Riazuddin, with the help of his mentor, Dr. Abdus Salam, convinced the then PAEC chairman Dr. Ishrat Hussain Usmani to send all the theoreticians to the Institute of Physics to form a physics group. This established the "Theoretical Physics Group" (TPG), which later designed nuclear weapons for Pakistan. With the establishment of the TPG, Siddiqui began to work with Abdus Salam, and on his advice began research in theoretical physics at PAEC. In 1970, he established the Mathematical Physics Group (MPG) at PAEC, where he led academic research in advanced mathematics. He also delegated mathematicians to PAEC to specialise in their fields at the MPG Division of PAEC. Pakistan and its nuclear deterrent program After the Indo-Pakistani War of 1971, Siddiqui joined the Pakistan Atomic Energy Commission (PAEC) at the request of Prime Minister Zulfikar Ali Bhutto. Siddiqui was the first full-time Technical Member of PAEC and was responsible for the preparation of its charter. During the 1970s, Siddiqui worked on problems in theoretical physics with Pakistani theoretical physicists in the nuclear weapons programme. Previously, he had worked in Europe, including carrying out nuclear research in the British nuclear weapon program and the French atomic program. At PAEC, he became a mentor to some of the country's academic scientists. At PAEC, he was the director of the Mathematical Physics Group (MPG) and was tasked with performing the mathematical calculations involved in nuclear fission and supercomputing. While both the MPG and the Theoretical Physics Group (TPG) reported directly to Abdus Salam, Siddiqui co-ordinated each meeting between the scientists of the TPG and the mathematicians of the MPG. At PAEC, he directed the mathematical research directly involving the theory of general relativity, and helped establish the computer laboratories at PAEC. Since theoretical physics plays a major role in identifying the parameters of nuclear physics, Siddiqui started work on a complex application of special relativity, the relativity of simultaneity. His Mathematical Physics Group undertook the research and performed calculations on the relativity of simultaneity during the process of weapon detonation, in which multiple explosive energy waves must be released into the same isolated, closed medium within the same time interval. Post-war After his work at PAEC, Siddiqui again joined Quaid-e-Azam University's Physics Faculty.
As professor of physics, he continued his research at the Institute of Physics, QAU. He helped develop the higher education sector, and placed mainframe policies in the institution. Death and legacy Siddiqui remained in Islamabad, where he stayed associated with Quaid-e-Azam University. In 1990, he was made Professor Emeritus of Physics and Mathematics there. He died on 8 January 1998, at the age of 90. Siddiqui's biography was written by scientists who had worked with him. In 1960, for his efforts to expand education, he was awarded the third-highest civilian award of Pakistan, the Sitara-i-Imtiaz, by the then-President of Pakistan, Field Marshal Ayub Khan. In 1981, he was awarded the second-highest civilian award, the Hilal-i-Imtiaz, by President General Muhammad Zia-ul-Haq for his efforts in Pakistan's atomic program and for popularising science in Pakistan. In May 1998, when Pakistan conducted its first successful nuclear tests, 'Chagai-I', the Government of Pakistan under Prime Minister Nawaz Sharif posthumously awarded him the highest civilian award, the Nishan-i-Imtiaz. Family His eldest daughter, Dr. Shirin Tahir-Kheli, is a former special assistant to the president of the United States of America and Senior Adviser for women's empowerment. Civil awards Sitara-i-Imtiaz (1960) Hilal-i-Imtiaz (1981) Nishan-e-Imtiaz (1998) Gold Medal, Pakistan Academy of Sciences (1950) Gold Medal, Pakistan Mathematical Society (1980) Gold Medallion, Pakistan Physical Society (1953) Doctorate of Science Honoris Causa, Osmania University (1938) Books Quantum Mechanics and its Physics Dastan-e-Riazi (The Tale of Mathematics) Izafiat Tasawur-e-Zaman-o-Makaan Experiences in science and education by M. Raziuddin Siddiqui, published in 1977. Establishing a new university in a developing country: Policies and procedures by M. Raziuddin Siddiqui, published in 1990. See also Abdus Salam Salimuzzaman Siddiqui Quaid-i-Azam University Nuclear weapon References Bibliography External links Muhammad Raziuddin Siddiqui Dr. Raziuddin Siddiqui Memorial Library Ias.ac.in Iiit.ac.in, Iqbal Ka Tasawwuf-e-Zaman-o-MakaN at Digital Library of India 1908 births 1998 deaths Recipients of Sitara-i-Imtiaz Recipients of Nishan-e-Imtiaz Pakistani scientists Pakistani scholars Pakistani academics Pakistani educational theorists Project-706 Recipients of Hilal-i-Imtiaz Pakistani physicists Pakistani mathematicians Pakistani nuclear physicists Osmania University alumni University of Paris alumni Academic staff of the University of Peshawar Academic staff of the University of Sindh Alumni of the University of Cambridge Leipzig University alumni Scientists from Hyderabad, India People from Karachi People from Islamabad Nuclear weapons programme of Pakistan Fellows of Pakistan Academy of Sciences Academic staff of the University of Karachi Academic staff of Quaid-i-Azam University Vice-chancellors of the University of Sindh Theoretical physicists Pakistani people of Hyderabadi descent
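For readers wanting the textbook relation behind the relativity-of-simultaneity calculations described in the article above, a standard special-relativity sketch follows. This is generic physics, not a reconstruction of Siddiqui's or the MPG's actual (classified) calculations.

```latex
% Relativity of simultaneity in standard special relativity:
% two events separated by \Delta t in time and \Delta x in space in
% frame S are separated in a frame S' moving at velocity v by
\[
  \Delta t' = \gamma \left( \Delta t - \frac{v\,\Delta x}{c^{2}} \right),
  \qquad
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .
\]
% Events simultaneous in S (\Delta t = 0) but spatially separated
% (\Delta x \neq 0) are therefore not simultaneous in S'.
```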
Muhammad Raziuddin Siddiqui
Physics
2,138
4,544,328
https://en.wikipedia.org/wiki/Galilei%20number
In fluid dynamics, the Galilei number (Ga), sometimes also referred to as the Galileo number (see discussion), is a dimensionless number named after the Italian scientist Galileo Galilei (1564-1642). It may be regarded as proportional to gravity forces divided by viscous forces. The Galilei number is used in viscous flow and thermal expansion calculations, for example to describe fluid film flow over walls. These flows apply to condensers or chemical columns. It is defined as \mathrm{Ga} = \frac{g L^3}{\nu^2} where g: gravitational acceleration, (SI units: m/s²) L: characteristic length, (SI units: m) ν: characteristic kinematic viscosity, (SI units: m²/s) See also Archimedes number References VDI-Wärmeatlas; 5., extended Edition; VDI Verlag Düsseldorf; 1988; page Bc 1 (German) W. Wagner; Wärmeübertragung; 5., revised Edition; Vogel Fachbuch; 1998; page 119 (German) External links Website referring to the Galileo number with calculator Table of dimensionless numbers (German) Table of dimensionless numbers (German) Dimensionless numbers of fluid mechanics Fluid dynamics
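As a quick numerical illustration of the definition above, here is a minimal Python sketch. The function and example values are illustrative, not from the article; a kinematic viscosity of about 1e-6 m²/s for water is a standard figure.

```python
def galilei_number(g: float, length: float, nu: float) -> float:
    """Galilei number Ga = g * L^3 / nu^2 (dimensionless).

    g      -- gravitational acceleration (m/s^2)
    length -- characteristic length L (m)
    nu     -- kinematic viscosity (m^2/s)
    """
    return g * length ** 3 / nu ** 2

# Illustrative example: a water film (nu ~ 1e-6 m^2/s) on a 0.1 m wall
print(f"Ga = {galilei_number(9.81, 0.1, 1e-6):.2e}")  # Ga = 9.81e+09
```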
Galilei number
Chemistry,Engineering
240
2,149,972
https://en.wikipedia.org/wiki/Convective%20inhibition
Convective inhibition (CIN or CINH) is a numerical measure in meteorology that indicates the amount of energy that will prevent an air parcel from rising from the surface to the level of free convection. CIN is the amount of energy required to overcome the negatively buoyant energy the environment exerts on an air parcel. In most cases, when CIN exists, it covers a layer from the ground to the level of free convection (LFC). The negatively buoyant energy exerted on an air parcel is a result of the air parcel being cooler (denser) than the air which surrounds it, which causes the air parcel to accelerate downward. The layer of air dominated by CIN is warmer and more stable than the layers above or below it. The situation in which convective inhibition is measured is when layers of warmer air are above a particular region of air. The effect of having warm air above a cooler air parcel is to prevent the cooler air parcel from rising into the atmosphere. This creates a stable region of air. Convective inhibition indicates the amount of energy that will be required to force the cooler parcel of air to rise. This energy comes from fronts, heating, moistening, or mesoscale convergence boundaries such as outflow and sea breeze boundaries, or orographic lift. Typically, an area with a high convective inhibition number is considered stable and has very little likelihood of developing a thunderstorm. Conceptually, it is the opposite of CAPE. CIN hinders the updrafts necessary to produce convective weather, such as thunderstorms. However, when large amounts of CIN are reduced by heating and moistening during a convective storm, the storm will be more severe than if no CIN had been present. CIN is strengthened by low-altitude dry air advection and surface air cooling. Surface cooling causes a small capping inversion to form aloft, allowing the air to become stable. Incoming weather fronts and short waves influence the strengthening or weakening of CIN. CIN is calculated from measurements recorded electronically by a rawinsonde (weather balloon), which carries devices that measure weather parameters such as air temperature and pressure. A single value for CIN is calculated from one balloon ascent by use of the equation \mathrm{CIN} = -\int_{z_\mathrm{bottom}}^{z_\mathrm{top}} g \left( \frac{T_{v,\mathrm{parcel}} - T_{v,\mathrm{env}}}{T_{v,\mathrm{env}}} \right) dz. The z-bottom and z-top limits of integration in the equation represent the bottom and top altitudes (in meters) of a single CIN layer, T_{v,\mathrm{parcel}} is the virtual temperature of the specific parcel, and T_{v,\mathrm{env}} is the virtual temperature of the environment. In many cases, the z-bottom value is the ground and the z-top value is the LFC. CIN is an energy per unit mass and the units of measurement are joules per kilogram (J/kg). CIN is expressed as a negative energy value. CIN values greater than 200 J/kg are sufficient to prevent convection in the atmosphere. The CIN energy value is an important figure on a skew-T log-P diagram and is a helpful value in evaluating the severity of a convective event. On a skew-T log-P diagram, CIN is any area between the warmer environment virtual temperature profile and the cooler parcel virtual temperature profile. CIN is effectively negative buoyancy, expressed B-; the opposite of convective available potential energy (CAPE), which is expressed as B+ or simply B. As with CAPE, CIN is usually expressed in J/kg but may also be expressed as m²/s², as the values are equivalent. In fact, CIN is sometimes referred to as negative buoyant energy (NBE).
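To make the integral concrete, here is a minimal Python sketch of how a single CIN value could be estimated from discrete sounding levels by trapezoidal integration. The three-level sounding in the example is invented for illustration, and the function assumes the heights already span the layer of interest (e.g., ground to LFC); it returns the magnitude in J/kg, which is often quoted with a negative sign.

```python
G = 9.81  # gravitational acceleration, m/s^2

def cin_magnitude(z, tv_parcel, tv_env):
    """Approximate CIN (J/kg) from discrete sounding levels.

    z         -- heights in meters, increasing (z[0] = bottom, z[-1] = top)
    tv_parcel -- parcel virtual temperatures (K) at each height
    tv_env    -- environmental virtual temperatures (K) at each height
    """
    total = 0.0
    for i in range(len(z) - 1):
        # Buoyancy term g * (Tv_parcel - Tv_env) / Tv_env at both levels
        b0 = G * (tv_parcel[i] - tv_env[i]) / tv_env[i]
        b1 = G * (tv_parcel[i + 1] - tv_env[i + 1]) / tv_env[i + 1]
        # Only negatively buoyant (parcel cooler than environment) layers count
        b0, b1 = min(b0, 0.0), min(b1, 0.0)
        total += 0.5 * (b0 + b1) * (z[i + 1] - z[i])  # trapezoid rule
    return -total  # leading minus sign from the definition

# Invented example: parcel 1 K cooler than the environment up to 1500 m
z = [0.0, 750.0, 1500.0]
tv_env = [288.0, 283.0, 278.0]
tv_parcel = [287.0, 282.0, 277.0]
print(cin_magnitude(z, tv_parcel, tv_env))  # ~52 J/kg
```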
See also Atmospheric thermodynamics Convective instability Equilibrium level Thermodynamic diagrams References External links CINH Help Page Atmospheric thermodynamics Meteorological quantities Severe weather and convection
Convective inhibition
Physics,Mathematics
783
75,634,850
https://en.wikipedia.org/wiki/Amanita%20friabilis
Amanita friabilis is a species of Amanita found across Europe. It grows amongst alder. References External links friabilis Fungi of Europe Fungus species
Amanita friabilis
Biology
36
19,513,545
https://en.wikipedia.org/wiki/Controversy%20over%20the%20discovery%20of%20Haumea
Haumea was the first of the IAU-recognized dwarf planets to be discovered since Pluto in 1930. Its naming as a dwarf planet was delayed by several years due to controversy over who should receive credit for its discovery. A California Institute of Technology (Caltech) team headed by Michael E. Brown first noticed the object, but a Spanish team headed by José Luis Ortiz Moreno was the first to announce it, and so would normally receive credit. Brown accused the Spanish team of fraud, using Caltech observations without credit to make their discovery, while the Ortiz team accused the American team of political interference with the International Astronomical Union (IAU). The IAU officially recognized the Californian team's proposed name Haumea over the name proposed by the Spanish team, Ataecina, in September 2008. Discovery and announcement On December 28, 2004, Mike Brown and his team discovered Haumea on images they had taken on May 6, 2004, with the 1.3 m SMARTS Telescope from the Cerro Tololo Inter-American Observatory in Chile and at the Palomar Observatory in the United States, while looking for what he hoped would be the tenth planet. The Caltech discovery team used the nickname "Santa" among themselves, because they had discovered Haumea on December 28, 2004, just after Christmas. However, it was clearly too small to be a planet, because it was significantly smaller than Pluto, and Brown did not announce the discovery. Instead he kept it under wraps, along with several other large trans-Neptunian objects (TNOs), pending additional observation to better determine their natures. When his team discovered Haumea's moons, they realized that Haumea was more rocky than other TNOs, and that its moons were mostly ice. They then discovered a small family of nearby icy TNOs, and concluded that these were remnants of Haumea's icy mantle, which had been blasted off by a collision. On July 7, 2005, while he was finishing the paper describing the discovery, Brown's daughter Lilah was born, which delayed the announcement further. On July 20, the Caltech team published an online abstract of a report intended to announce the discovery at a conference the following September. In it, Haumea was given the code K40506A. At around that time, Pablo Santos Sanz, a student of José Luis Ortiz Moreno at the Instituto de Astrofísica de Andalucía at Sierra Nevada Observatory in southern Spain, claimed to have examined the backlog of photos that the Ortiz team had started taking in December 2002. He said that he found Haumea in late July 2005, on images taken on March 7, 9, and 10, 2003. He further said that in checking whether this was a known object, the team came across Brown's internet summary, describing a bright TNO much like the one they had just found. Googling the reference number for object K40506A on the morning of July 26, they found the Caltech observation logs of Haumea, but according to their account, those logs contained too little information for Ortiz to tell if they were the same object. The Ortiz team also checked with the Minor Planet Center (MPC), which had no record of this object. Wanting to establish priority, they emailed the MPC with their discovery on the night of July 27, 2005, titled "Big TNO discovery, urgent", without making any mention of the Caltech logs. The next morning they again accessed the Caltech logs, including observations from several additional nights. They then asked Reiner Stoss at the amateur Astronomical Observatory of Mallorca for further observations.
Stoss found precovery images of Haumea in digitized Palomar Observatory slides from 1955, and located Haumea with his own telescope that night, July 28. Within an hour, the Ortiz team submitted a second report to the MPC that included this new data. Again, no mention was made of having accessed the Caltech logs. The data was published by the MPC on July 29. In a press release on the same day, the Ortiz team called Haumea the "tenth planet". On July 29, 2005, Haumea was given its first official label, the temporary designation 2003 EL61, with the "2003" based on the date of the Spanish discovery image. On September 7, 2006, it was numbered and admitted into the official minor planet catalogue as (136108) 2003 EL61. Reaction to the announcement The same day as the MPC publication, Brown's group announced the discovery of another Kuiper belt object, Eris, more distant, brighter, and apparently larger than Pluto, as the tenth planet. The announcement was made earlier than planned, to forestall the possibility of similar events with that discovery, after the MPC told them that their observational data was publicly accessible and they realized that not only their Haumea data but, by that time, their Eris data had been publicly accessed. The same day Ortiz announced the discovery of Haumea, Brown submitted to The Astrophysical Journal his own draft with the data on the first of its moons, which he had discovered on January 26, 2005. Brown, though disappointed at being scooped, congratulated the Ortiz team on their discovery. He apologized for immediately overshadowing their announcement of Haumea with his announcement of Eris, and explained that someone had accessed their data and he was afraid of being scooped again. Ortiz did not volunteer to say that it had been he who accessed the data. Upon learning from web server records that it was a computer at the Sierra Nevada Observatory that had accessed his observation logs the day before the discovery announcement (logs which included enough information to allow the Ortiz team to precover Haumea in their 2003 images), Brown came to suspect fraud. He emailed Ortiz on August 9 and asked for an explanation. Ortiz responded saying that Brown's penchant for "hiding objects" had alienated other astronomers and harmed science (see Timeline of discovery of Solar System planets and their moons to verify the typical time scale between observation and publication of discoveries). On August 15 the Caltech team filed a formal complaint with the IAU, accusing the Ortiz team of a serious breach of scientific ethics in failing to acknowledge their use of the Caltech data, and asked the MPC to strip them of discovery status. Ortiz later admitted he had accessed the Caltech observation logs but denied any wrongdoing, stating this was merely part of verifying whether they had discovered a new object. Brown began to wonder if the Spanish team had actually identified Haumea at all before they saw his own abstract and telescope log, noting that Ortiz's team claimed to have sat on their data for a period of 28 months until Brown's upload of his abstract, then coincidentally identified Haumea within the six days prior to accessing his abstract. Official naming IAU protocol is that discovery credit for a minor planet goes to whoever first submits a report to the MPC with enough positional data for a decent orbit determination, and that the credited discoverer has priority in naming it. This was Ortiz et al., and they proposed the name Ataecina, an Iberian goddess of the underworld.
She is the equivalent of the Roman goddess Proserpina, who was in turn one of Pluto's lovers. However, as a chthonic deity, Ataecina would only have been an appropriate name for an object in a stable orbital resonance with Neptune (see astronomical naming conventions), and Haumea's resonance (if known of by the Spanish team) was unstable. Following guidelines established by the IAU that classical Kuiper belt objects be given names of mythological beings associated with creation, in September 2006 the Caltech team submitted formal names from Hawaiian mythology to the IAU for both (136108) 2003 EL61 and its moons, in order "to pay homage to the place where the satellites were discovered". The names were proposed by David Rabinowitz of the Caltech team. Haumea is the tutelary goddess of the island of Hawaii, where the Mauna Kea Observatory is located. In addition, she is identified with Papa, the goddess of the earth and wife of Wākea (space), which is appropriate because 2003 EL61 is thought to be composed almost entirely of solid rock, without the thick ice mantle over a small rocky core typical of other known Kuiper belt objects. Lastly, Haumea is the goddess of fertility and childbirth, with many children who sprang from different parts of her body; this corresponds to the swarm of icy bodies thought to have broken off the dwarf planet during an ancient collision. The two known moons, also believed to have been born in this manner, were thus named after two of Haumea's daughters, Hiiaka and Nāmaka. The dispute over who had discovered the object delayed the acceptance of either name. On 17 September 2008, the IAU announced that the two bodies in charge of naming dwarf planets, the Committee on Small Body Nomenclature (CSBN) and the Working Group for Planetary System Nomenclature (WGPSN), had decided on the Caltech proposal of Haumea. At the CSBN, the outcome of the voting was very close, eventually being decided by a single vote. However, the date of the discovery was listed on the announcement as March 7, 2003, the location of discovery as the Sierra Nevada Observatory, and the name of the discoverer was left blank. Aftermath Brian G. Marsden, head of the MPC at Harvard who had supported Brown in previous naming disputes, again supported Brown, saying that "Sooner or later, posterity will realise what happened, and Mike Brown will get the full credit". He went on to state, in reference to the name of the discoverer, which was left blank in the IAU listing, that "It's deliberately vague about the discoverer of the object [...] We don't want to cause an international incident." He called the whole controversy the worst since the early 17th-century dispute between Galileo Galilei and Simon Marius over who found the four biggest satellites of Jupiter, ultimately won by Galileo. The Ortiz team objected, suggesting that if Ataecina were not accepted the IAU could at least have chosen a third name favoring neither party, and accusing the IAU of political bias. Rumors appeared that Dagda, the name of a god from Irish mythology and a "neutral" name, was indeed proposed by a member of the CSBN but was not used in the end. Ortiz went on to say "I am not happy, I think the [IAU] decision is unfortunate and sets a bad precedent." The Spanish newspaper ABC went on to call the decision a "US conquest", asserting that politics played a major role as the US had 10 times more astronomers in the IAU than Spain had.
Immediately after the announcement of the name, Brown noted that it is unusual to be allowed to name an object without being acknowledged as its official discoverer but declared that he is pleased with the outcome and that he "think[s] this is as good a resolution as we'll get". He did get full recognition for the discovery of the two moons, Hiiaka and Namaka. On the fifth anniversary of the discovery he wrote a blog post with his thoughts on the importance of the discovery, but did not mention any events regarding the controversy. References External links Mike Brown's Planets: Haumea Mike Brown's blog on the controversy La historia de Ataecina vs Haumea Pablo Sanz's account of what happened The electronic trail of the discovery of 2003 EL61 A Caltech timeline of the Spanish discovery announcements and access of the Caltech observation logs 2007 KCET interview of Mike Brown about Eris and Haumea with Julia Sweeney Aloha, Haumea Blog on NationalGeographic.com on the events surrounding the naming process Haumea (dwarf planet) 2000s controversies Astronomical controversies Haumea Michael E. Brown
Controversy over the discovery of Haumea
Astronomy
2,469
45,242,025
https://en.wikipedia.org/wiki/Foam%20concrete
Foam concrete, also known as Lightweight Cellular Concrete (LCC) and Low Density Cellular Concrete (LDCC), and by other names, is defined as a cement-based slurry with a minimum of 20% (by volume) foam entrained into the plastic mortar. As coarse aggregate is mostly not used in the production of foam concrete, the correct term would be mortar rather than concrete; it may be called "foamed cement" as well. The density of foam concrete usually varies from 400 kg/m³ to 1600 kg/m³. The density is normally controlled by substituting all or part of the fine aggregate with the foam. Terminology It is also called foamed concrete, foam concrete, aerated concrete, aircrete, cellular lightweight concrete, or reduced density concrete. History The history of foam concrete dates back to the early 1920s and the production of autoclaved aerated concrete, which was used mainly as insulation. A detailed study concerning the composition, physical properties and production of foamed concrete was first carried out in the 1950s and 60s. Following this research, new admixtures were developed in the late 1970s and early 80s, which led to the commercial use of foamed concrete in construction projects. Initially, it was used in the Netherlands for filling voids and for ground stabilisation. Further research carried out in the Netherlands helped bring about the more widespread use of foam concrete as a building material. More recently, foam concrete has been made with a continuous foam generator. The foam is produced by agitating a foaming agent with compressed air to make "aircrete" or "foamcrete". This material is fireproof, insect proof, and waterproof. It offers significant thermal and acoustic insulation and can be cut, carved, drilled and shaped with wood-working tools. This construction material can be used to make foundations, subfloors, building blocks, walls, domes, or even arches that can be reinforced with a construction fabric. Manufacturing Foamed concrete typically consists of a slurry of cement or fly ash and sand and water, although some suppliers recommend pure cement and water with the foaming agent for very lightweight mixes. This slurry is further mixed with a synthetic aerated foam in a concrete mixing plant. The foam is created using a foaming agent, mixed with water and air from a generator. The foaming agent must be able to produce air bubbles with a high level of stability, resistant to the physical and chemical processes of mixing, placing, and hardening. Foamed concrete mixture may be poured or pumped into molds, or directly into structural elements. The foam enables the slurry to flow freely due to the thixotropic behavior of the foam bubbles, allowing it to be easily poured into the chosen form or mold. The viscous material requires up to 24 hours to solidify (or as little as two hours if steam-cured at temperatures up to 70 °C to accelerate the process), depending on variables including ambient temperature and humidity. Once solidified, the formed product may be released from its mold. A new application in foam concrete manufacturing is to cut large concrete cakes into blocks of different sizes by a cutting machine using special steel wires. The cutting action takes place before the concrete has fully cured. Properties Foam concrete is a versatile building material with a simple production method that is relatively inexpensive compared to autoclaved aerated concrete. Foam concrete compounds utilising fly ash in the slurry mix are cheaper still, and have less environmental impact.
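Since the article notes that density is controlled by substituting fine aggregate with foam, a simple mass-balance sketch in Python can illustrate the idea. All numbers here are illustrative assumptions, not values from the article; a base slurry around 2000 kg/m³ and a preformed foam around 50 kg/m³ are plausible ballpark figures.

```python
def foam_volume_fraction(rho_slurry: float, rho_target: float,
                         rho_foam: float = 50.0) -> float:
    """Volume fraction of foam needed so that a base slurry of density
    rho_slurry (kg/m^3) yields a mix of target density rho_target,
    assuming ideal mixing with no volume change:
        rho_target = f * rho_foam + (1 - f) * rho_slurry
    """
    return (rho_slurry - rho_target) / (rho_slurry - rho_foam)

# Illustrative: foaming an assumed 2000 kg/m^3 cement-sand slurry
# down to an 800 kg/m^3 mix
f = foam_volume_fraction(2000.0, 800.0)
print(f"foam fraction ~ {f:.0%}")  # foam fraction ~ 62%
```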
Foam concrete is produced in a variety of densities from 200 kg/m³ to 1,600 kg/m³ depending on the application. Lighter density products may be cut into different sizes. While the product is considered a form of concrete (with air bubbles replacing aggregate), its high thermal and acoustical insulating qualities give it very different applications from conventional concrete. Advantages In terms of thermal conductivity, foam concrete is not inferior to wood. A 40 cm wall is able to withstand −30 °C frost. Foam concrete withstands one-sided exposure to fire for at least three hours, and on average, five hours. Applications Foamed concrete can be produced with dry densities of 400 to 1600 kg/m³ (25 lb/ft³ to 100 lb/ft³), with 7-day strengths of approximately 1 to 10 N/mm² (145 to 1450 psi) respectively. Foam concrete is fire resistant, and its thermal and acoustical insulation properties make it ideal for a wide range of purposes, from insulating floors and roofs to void filling. It is also particularly useful for trench reinstatement. A few of the applications of foam concrete are: bridge approaches / embankments pipeline abandonment / annular fill trench backfill precast blocks precast wall elements / panels cast-in-situ / cast-in-place walls insulating compensation laying insulation floor screeds insulation roof screeds sunken portion filling trench reinstatement sub-base in highways filling of hollow blocks prefabricated insulation boards Trends and development Until the mid-1990s, foam concrete was regarded as weak and non-durable with high shrinkage characteristics. This was because unstable foam bubbles gave foam concrete properties unsuitable for very low densities (less than 300 kg/m³ dry density) as well as for load-bearing structural applications. It is therefore important to ensure that the air entrained into the foamed concrete is contained in stable, very tiny, uniform bubbles that remain intact and isolated, and thus do not increase the permeability of the cement paste between the voids. The development of synthetic-enzyme based foaming agents; foam stability enhancing admixtures; and specialized foam generating, mixing, and pumping equipment has improved the stability of the foam and hence of foam concrete, making it possible to manufacture mixes as light as 75 kg/m³, a density that is just 7.5% of that of water. The enzyme consists of highly active proteins of biotechnological origin not based on protein hydrolysis. In recent years foamed concrete has been used extensively in highways, commercial buildings, disaster rehabilitation buildings, schools, apartments and housing developments in countries such as Germany, USA, Brazil, Singapore, India, Malaysia, Kuwait, Nigeria, Bangladesh, Botswana, Mexico, Indonesia, Libya, Saudi Arabia, Algeria, Iraq, Egypt, and Vietnam. Shock-absorption Foamed concrete has been investigated for use as a bullet trap in high intensity US military firearm training ranges. This work resulted in the product SACON being fielded by the U.S. Army Corps of Engineers; when worn out, it can be shipped directly to metal recycling facilities without requiring the separation of the trapped bullets, as the calcium carbonate in the concrete acts as a flux. The energy absorption capacity of foamed concrete was approximated from drop testing and found to vary from 4 to 15 MJ/m³ depending on its density, with optimum absorption estimated for a moderate-density 1000 kg/m³ mix at water-to-cement (w/c) ratios from 0.6 to 0.7. References Concrete Masonry
Foam concrete
Engineering
1,435
74,430,591
https://en.wikipedia.org/wiki/Pool%20skimmer
A skimmer or surface separator (it separates substances from the surface of a liquid) is an essential accessory for the maintenance and cleaning of the water in a swimming pool. It is used to remove all the surface dirt floating on the water surface, such as leaves, tanning oil and human secretions. These impurities remain suspended on the surface, affect the appearance of the water and are not always removed by the conventional vacuuming process. The skimmer is installed directly in the surface water suction system and also has the function of controlling the water level to prevent accidental overflows. In the United States and Portugal, the use of skimmers in the construction of swimming pools is mandatory, regulated and standardized by competent bodies. Types There are different types of skimmers that can be used for different purposes. The most common types of skimmers include: Manual skimmers: These are basic skimmers that consist of a strong net strung on the end of a long pole. They are used to remove waste and pollutants from the surface of the water. Automatic skimmers: These are the most common variety of skimmers and the kind most likely to be seen running in any given pool. They are installed at surface level and are designed to remove debris and contaminants from the surface of the water. Standalone skimmers: These skimmers are designed to be used in conjunction with a pump and filter system. They are installed at surface level and are designed to remove debris and contaminants from the surface of the water. Drainage opening Typically a skimmer draws water from the pool through a rectangular opening in the wall, at the top of the pool, connected through a device installed in one (or more) walls of the pool. The internal parts of the skimmer are accessed from the pool deck through a circular or rectangular cover, approximately one foot in diameter. If the pool's water pump is operational, it draws water from the pool through a hinged floating chute (which operates from a vertical position at a 90-degree angle to the pool, to prevent leaves and debris from being washed back into the pool by wave action), and down into a removable "skimmer basket", whose purpose is to catch leaves, dead insects and other larger floating debris. The opening visible from the side of the pool is usually 1'0" (300 mm) wide by 6" (150 mm) high, with the water level sitting roughly halfway up the opening. Skimmers with wider openings are called "wide angle" skimmers and can be up to 2'0" (600 mm) wide. Floating skimmers have the advantage of not being affected by water level, as they adjust to work with the suction rate of the pump and will maintain optimal skimming regardless of water level, leading to a significantly reduced amount of biomaterial in the water. Skimmers should always have a leaf basket or filter between them and the pump to avoid clogging the pipes leading to the pump and filter. Consecutive dilution A consecutive dilution system is usually provided to remove the organic waste in stages after it passes through the skimmer. The waste material is trapped within one or more sequential skimmer basket sieves, each with a finer mesh to further dilute the size of the contaminant. Dilution here is defined as the act of making something weaker in strength, content or value. The first basket is placed very close to the mouth of the skimmer. The second is connected to the circulation pump. Here 25% of the water drawn from the main drain at the bottom of the pool meets 75% drawn from the surface.
The circulation pump sieve basket is easily accessible for maintenance and should be emptied daily. The third sieve is the sand unit. Here the smallest organic waste that has slipped through the previous sieves is trapped by the sand. If not removed regularly, organic waste will continue to decompose and affect water quality. The dilution process makes it easy to remove organic waste. Ultimately, the sand screen can be backwashed to remove smaller trapped organic debris that otherwise leaches ammonia and other compounds into the recirculated water. These additional solutes eventually lead to the formation of disinfection byproducts (DBPs). The sieve baskets are easily removed each day for cleaning, as is the sand unit, which should be backwashed at least once a week. A perfectly maintained consecutive dilution system dramatically reduces the build-up of chloramines and other DBPs. The water returned to the pool must have been cleaned of all organic debris larger than 10 microns in size. Recirculation jets Return water from the consecutive dilution system passes through subsurface return jets. These are designed to impart a turbulent flow as the water enters the pool. The force of this flow is much smaller than the mass of water in the pool, so it takes the path of least pressure upwards, where the surface tension eventually reforms it into a laminar flow at the surface. As the returned water disturbs the surface, it creates a capillary wave. If the return jets are positioned correctly, this wave creates a circular motion within the surface tension of the water, allowing the surface to slowly circulate around the pool walls. Organic debris that floats to the surface through this capillary wave circulation slowly passes through the skimmer mouth, where it is drawn in by laminar flow and surface tension over the skimmer weir. In a well-designed pool, the circulation caused by disturbed return water helps remove organic debris from the pool surface and directs it to be trapped within the consecutive dilution system for easy disposal. Some return jets are equipped with a rotating filter. Used correctly, it induces deeper circulation, cleaning the water even more. Rotating the jet filters at an angle imparts rotation within the entire depth of the pool water. Orientation to the left or to the right would result in a clockwise or counterclockwise rotation, respectively. This has the advantage of cleaning the bottom of the pool and slowly moving the sunken inorganic debris into the main drain, where it is removed by the circulator basket screen. In a properly constructed pool, the circulation of water caused by the way it returns from the consecutive dilution system will reduce or even eliminate the need to vacuum the bottom. To obtain maximum rotational force on the main body of water, the consecutive dilution system must be as clean and unblocked as possible to allow maximum flow pressure from the pump. As the water swirls, it also disturbs the organic debris in the lower layers of the water, forcing it up. The rotational force created by the pool's return jets is the most important part of cleaning the pool water and pushing organic debris through the skimmer mouth. If the pool is designed and operated correctly, this circulation is visible and, after a period, reaches even the deepest end, inducing a low-speed vortex over the main drain due to suction.
Correct use of return jets is the most effective way to remove disinfection byproducts caused by deeper decaying organic debris and bring them into the consecutive dilution system for immediate disposal. Additional sanitation methods Salt chlorination units, electronic oxidation systems, ionization systems, microbial disinfection systems with ultraviolet lamps and Tri-Chlor feeders are other systems, independent of or auxiliary to skimmers, for pool sanitation. Apart from this, the temperature of the water is very important, since if it remains high, it favors the proliferation of algae. Mineral disinfectants Mineral disinfectants for pools and spas use minerals, metals, or elements derived from the natural environment to produce water quality benefits that would otherwise require harsh or synthetic chemicals. Companies cannot sell a mineral disinfectant in the United States unless it has been registered with the US Environmental Protection Agency (EPA). Two mineral disinfectants are currently registered with the EPA: one is a silver salt with a controlled release mechanism that is applied to calcium carbonate granules that help neutralize pH; the other uses a colloidal form of silver released into water from ceramic beads. Mineral technology takes advantage of the cleaning and filtering qualities of common substances. Silver and copper are well-known oligodynamic substances that are effective in destroying pathogens. Silver has been shown to be effective against harmful bacteria, viruses, protozoa and fungi. Copper is widely used as an algaecide. Alumina derived from aluminates filters harmful materials at the molecular level and can be used to control the rate of delivery of desirable metals such as copper. Working through the pool or spa's filtration system, mineral disinfectants use combinations of these minerals to inhibit algae growth and remove contaminants. Unlike chlorine or bromine, metals and minerals do not evaporate and do not degrade. Minerals can make water noticeably softer and, by replacing harsh chemicals in water, reduce the possibility of red eyes, dry skin and bad odors. Oil and grease on the surface The density of fresh water is 1,000 kilograms per cubic meter, while the density of seawater varies between 1,020 and 1,030 kilograms per cubic meter. Oil is less dense than both fresh water and seawater, so it floats in both types of water; but due to the difference in density, dispersed oil and fat particles will reach the surface more quickly in a saltwater or seawater pool. In both cases, recirculating the surface water through skimmers is "the only possible means" of removing them. References External links Automatic Skimmer Robotic Pool Cleaner Swimming pool equipment Cleaning
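The density figures in the "Oil and grease on the surface" section above can be turned into a rough rise-speed estimate with Stokes' law. This is a generic physics sketch in Python, not taken from the article; the droplet diameter, oil density, and water viscosity are illustrative assumptions.

```python
G = 9.81  # gravitational acceleration, m/s^2

def stokes_rise_speed(d: float, rho_water: float, rho_oil: float,
                      mu: float) -> float:
    """Terminal rise speed (m/s) of a small spherical oil droplet,
    from Stokes' law: v = g * d^2 * (rho_water - rho_oil) / (18 * mu).
    Valid only for very small droplets (low Reynolds number).
    """
    return G * d ** 2 * (rho_water - rho_oil) / (18.0 * mu)

# Assumed values: 100-micron droplet, oil at 900 kg/m^3,
# water viscosity ~1e-3 Pa*s; fresh water vs. typical seawater
for rho_w in (1000.0, 1025.0):
    v = stokes_rise_speed(100e-6, rho_w, 900.0, 1e-3)
    print(f"{rho_w:.0f} kg/m^3 water: {1000 * v:.2f} mm/s")
# fresh water: ~0.55 mm/s; salt water: ~0.68 mm/s (faster rise)
```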
Pool skimmer
Chemistry
1,999
66,374,807
https://en.wikipedia.org/wiki/Mary%20Jane%20Dockeray
Mary Jane Patricia Dockeray (March 8, 1927 – August 18, 2020) was an American environmental educator, founder of the Blandford Nature Center and Environmental Education Center in Grand Rapids, Michigan. In 2012, she was admitted to the Michigan Women's Hall of Fame. Early life Dockeray grew up on a poultry farm in Walker Township. Her father, Winfield Dockeray, was a bookkeeper who also raised chickens. The family owned 2.5 acres just outside of Grand Rapids, an area where neighbors raised goats and open space was plentiful. Dockeray attended Oakleigh School, a Seventh-day Adventist academy. Her 5th grade teacher, Anna Nelson, discovered that she had an interest in geology and the world around her. Nelson, though typically quite austere, recognized the young girl's interest and helped Dockeray grow her love of geology. Career Dockeray was a curator of natural history at the Grand Rapids Public Museum in the 1950s and 1960s. She taught summer programs and visited schools to give science presentations. She began developing the Blandford Nature Center with an initial land donation in the 1960s, and the visitor center opened in 1968. She worked at the center until her retirement in 1990, but continued to volunteer at the center after that milestone. She taught at Michigan State University and at Aquinas College, hosted a radio program, Nature Spy, wrote a book, Let’s Go Exploring: Suggestions for Field Trips and Associated Studies in Environmental-Conservation Education, and narrated an educational film, These Things Are Ours (1963). Into her eighties, she was still giving geological tours of Grand Rapids. Dockeray served on the executive board of the Michigan Audubon Society, and was recognized by the society with an Outstanding Member Award in 1985. She was inducted into the Michigan Women's Hall of Fame in 2012. The Blandford Nature Center's Mary Jane Dockeray Visitor Center opened in 2017. "If people can become better informed about the natural world around, they’ll take better care and their lives will be richer," she explained of her work. In addition, Dockeray created and made the movie "These Things Are Ours," which was shown as part of National Audubon Society lecture tours across the US and Canada. Dockeray's school visits and lectures with her vintage slide projector were also memorable. Personal life Dockeray self-published a memoir, Rock On, Lady: Memoirs of Dr. Mary Jane Dockeray, Geologist Naturalist, in 2014. At a young age Dockeray made the decision to put her career before marriage, and she dedicated her life to her work. Later in her life she became engaged to her longtime partner, fellow Blandford volunteer Bertrand L. Hewett, who died on June 18, 2008. She died 12 years later, on August 18, 2020. She was dedicated to the Blandford Center right up until the day she died, and this dedication continued after her passing, as she requested that people donate to the Mary Jane Dockeray Endowment Fund at the Blandford Nature Center on her behalf. Legacy Dockeray's legacy is the Blandford Nature Center she founded in 1968 and the Blandford Environmental Education Program. The center, located in Grand Rapids, Michigan, began as Collins Woods, which was part of a family farm where Dockeray grew up and developed her love of the outdoors. Blandford Center preserves over 143 acres of land. Its mission, as stated on its website, is to engage, empower, and educate the community through enriching experiences in nature.
The center hosts nature trails, an interpretive center, farm demonstrations, several historic buildings, and a wildlife care program. The Blandford Environmental Education Program allows children to experience a full year in nature; it connects every academic subject to nature. Grand Valley State University offers a scholarship (the Mary Jane Dockeray Scholarship) honoring her legacy. The scholarship is awarded to students pursuing education in science. Publications and projects Book: Let's Go Exploring: Suggestions for Field Trip and Associated Studies in Environmental-Conservation Education Film: These Things Are Ours Memoir: Rock on, Lady: Memoirs of Dr. Mary Jane Dockeray, Geologist Naturalist Radio Program: Nature Spy Awards Michigan Audubon Society Outstanding Member Award Michigan Women's Hall of Fame Inaugural Association of Nature Center Administrators' President's Award for exemplary leadership in the Nature and Environmental Learning Center Profession References External links Blandford Nature Center website Mary Jane Dockeray profile at Michigan Women Forward "Paving the Way for Nature" (2017), a half-hour interview with Dockeray; on YouTube "Go on an Adventure with Mary Jane Dockeray" (2016), a video about Blandford Nature Center, featuring Dockeray; on YouTube Howard Meyerson (October 4, 2013), "A Life of Conservation: Mary Jane Dockeray" The Outdoor Journal. A blogpost profiling Dockeray Shelley Irwin (August 21, 2020), "Blandford Nature Center", a WGVU radio interview with Jason Meyer, on the occasion of Dockeray's death 1927 births 2020 deaths American educators American women geologists Michigan State University alumni Environmental education People from Grand Rapids, Michigan Michigan State University faculty Aquinas College (Michigan) faculty
Mary Jane Dockeray
Environmental_science
1,076
36,952,901
https://en.wikipedia.org/wiki/Nothopanus%20noctilucens
Nothopanus noctilucens is a species of agaric fungus in the family Marasmiaceae. It is found in Japan, and the fruit bodies of the fungus are bioluminescent. See also List of bioluminescent fungi References External links Marasmiaceae Bioluminescent fungi Fungi described in 1844 Fungi of Asia Taxa named by Joseph-Henri Léveillé Fungus species
Nothopanus noctilucens
Biology
79
2,681,733
https://en.wikipedia.org/wiki/Beta%20Sextantis
Beta Sextantis, Latinized from β Sextantis, is a variable star in the equatorial constellation of Sextans. With an apparent visual magnitude of 5.07, it is faintly visible to the naked eye on a dark night. According to the Bortle scale, it can be viewed from bright suburban skies. The distance to this star, based upon an annual parallax shift of 8.96 mas, is around 364 light years. This star served as a primary standard in the MK spectral classification system with a stellar classification of B6 V, indicating that it is a B-type main-sequence star. However, Houk and Swift (1999) list a classification of B5 IV/V, suggesting it may be transitioning into a subgiant star. It has served as a uvby photometric standard, but it is also categorized as an Alpha2 Canum Venaticorum variable with a suspected period of 15.4 days. Such a lengthy period conflicts with the star's relatively high projected rotational velocity of 85 km/s, leaving the explanation for the variability unresolved. References Alpha2 Canum Venaticorum variables B-type main-sequence stars Sextantis, Beta Sextans Durchmusterung objects Sextantis, 30 051437 090994 04119
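The quoted distance follows directly from the parallax; a small conversion sketch in Python (the constants are standard: distance in parsecs is 1000 divided by the parallax in milliarcseconds, and one parsec is 3.2616 light years):

```python
def parallax_to_light_years(parallax_mas: float) -> float:
    """Distance from annual parallax: d [pc] = 1000 / p [mas],
    converted at 3.2616 light years per parsec."""
    parsecs = 1000.0 / parallax_mas
    return parsecs * 3.2616

print(round(parallax_to_light_years(8.96)))  # 364, matching the article
```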
Beta Sextantis
Astronomy
273
9,302,344
https://en.wikipedia.org/wiki/European%20Nuclear%20Energy%20Tribunal
The European Nuclear Energy Tribunal (ENET) is an international tribunal, established 1 January 1960, that operates under the auspices of the Organisation for Economic Co-operation and Development (OECD). Its member states are Austria, Belgium, Denmark, France, Germany, Ireland, Italy, Luxembourg, Netherlands, Norway, Portugal, Spain, Sweden, Switzerland, Turkey, and the United Kingdom. The tribunal was established by the Convention on the Establishment of the Security Control in the Field of Nuclear Energy, signed in 1957. The purpose of the tribunal is to hear cases concerning liability over nuclear accidents. Formerly it also had the role of hearing cases concerning violations of the European regional nuclear safeguards system operated by the OECD, but that jurisdiction was suspended in the 1970s because it duplicated the IAEA and Euratom systems. The tribunal consists of seven judges appointed to five-year terms. The OECD Council appointed judges for a term from 1 January 2020 to 31 December 2024, with Mr. Francis Delaporte serving as the President of the Tribunal. The appointed judges come from Finland, Italy, Luxembourg, Norway, Portugal, Spain, and the United Kingdom. The Registrar of the Tribunal is currently Ximena Vásquez-Maignan, Head of Legal Affairs at the Nuclear Energy Agency. The Tribunal's seat, located at the OECD headquarters in Paris, France, is established by Article 7(b) of its Protocol. In the more than fifty years of its existence, the tribunal has never been presented with a case. References External links European Nuclear Energy Tribunal page at the OECD Nuclear Energy Agency International nuclear energy organizations International courts and tribunals OECD Nuclear liability Tribunals
European Nuclear Energy Tribunal
Engineering
348
35,609,401
https://en.wikipedia.org/wiki/William%20Spencer%20%28navigational%20instrument%20maker%29
William Spencer (c. 1751 – c. 1816) was an English mathematical instrument maker of the 18th and 19th centuries. Spencer entered into a partnership with Samuel Browning to form the company of Spencer & Browning after he apprenticed with instrument maker Richard Rust. When Ebenezer Rust joined the partnership, the resultant firm was known as Spencer, Browning & Rust. The company manufactured navigational instruments for both domestic and international markets. Apprenticeship William Spencer, son of Anthony Spencer, was born in around 1751 in England. On 4 November 1766, at the approximate age of fifteen, William Spencer signed a seven-year contract of indenture (pictured) to Richard Rust, "Citizen and Grocer" of London, England. The contract indicated that his father Anthony Spencer was a shoemaker of the parish of Church Minshull in the County Palatine of Chester, now simply known as Cheshire. In consideration of the sum of ten pounds paid by William's father, Richard Rust agreed to instruct his apprentice, as well as provide him with the necessities of life, including food, drink, clothing, and lodging. The agreement also outlined a strict code of conduct for the apprentice. Among other things, the contract stipulated that the apprentice could not: "waste the Goods of his said Master, nor lend them unlawfully to any. He shall not commit Fornication, nor contract Matrimony, within the said Term. He shall not play at Cards, Dice, Tables, or any other unlawful Games, whereby his said Master may have any loss. With his own Goods or others, during the said Term, without Licence of his said Master he shall neither buy nor sell. He shall not haunt Taverns, or Playhouses, nor absent himself from his said Master's Service Day nor Night unlawfully." The agreement also indicated that Richard Rust was responsible for paying a duty to the Stamp Office, usually within one month of the date of the contract. William Spencer's master Richard Rust was a well-known mathematical instrument maker who ran a busy shop on Tower Hill in London. As in this case, a mathematical instrument maker often specialized in navigational instruments. Rust himself had apprenticed, and received his freedom in 1752. In William Spencer's contract of indenture, Richard Rust was referred to as a Grocer. This signified that he was a member of the Grocers' Company. Richard Rust died in around 1785; his will was proved in December 1785. Grocers' Company The term "grocer" originally had a meaning different from the current customary usage. It referred to a retailer who "traded in gross quantities" and, therefore, encompassed a wide variety of merchants. This included manufacturers and purveyors of mathematical instruments. The Worshipful Company of Grocers, more colloquially known as the Grocers' Company, is one of the Great Twelve Livery Companies of London. The company found its origins in the Ancient Guild of Pepperers and the first record of the guild dates back to 1100. In 1345, members of the Ancient Guild of Pepperers established a fraternity in the City of London. The first reference to the fraternity as the Company of Grossers was recorded in 1373. Three years later, in 1376, the name of the organization was changed to the Company of Grocers. Of the Great Twelve Livery Companies, the Worshipful Company of Grocers ranks second in order of precedence, behind the Worshipful Company of Mercers, as determined by the mayor and aldermen of London in 1515. The Ancient Guild of Pepperers chose a camel as its symbol. 
Black pepper is one of the world's most popular spices and has been for centuries. Peppercorns have at times been considered portable wealth. Pepper originally came over land, and this is the reason for the choice of the camel as a symbol. The camel is incorporated into the coat of arms of the Worshipful Company of Grocers, which also includes two griffins holding a shield (pictured). Spencer & Browning William Spencer and Samuel Browning first formed a partnership between 1778 and 1781. The resultant company of Spencer & Browning manufactured instruments for navigational use. The partners of the firm were also referred to as "optical and mathematical instrument" makers. Both of the partners, William Spencer and Samuel Browning, were members of the Worshipful Company of Grocers, and thus were referred to as Grocers, in addition to instrument makers. Samuel Browning married William Spencer's sister Catherine on 17 May 1777. Spencer, Browning & Rust When Ebenezer Rust joined the partnership of Spencer and Browning in 1784, the firm of Spencer, Browning & Rust was established. The company was in operation in London from 1784 to 1840, initially doing business from 327 Wapping High Street. Later, the firm operated out of 66 Wapping High Street. Spencer, Browning & Rust was a successful company, judging by the large number of surviving nautical instruments. The firm manufactured a variety of navigational instruments, including octants, sextants, telescopes, and compasses, for both domestic and international markets. Nautical instruments marked with the SBR logo are found in the museums of a number of countries. One of the oldest items in the collection of the United States Geological Survey Museum is a quintant sextant or lattice sextant (pictured) that was manufactured by Spencer, Browning & Rust. The last surviving original partner, Samuel Browning, died in about 1819. The firm continued as Spencer, Browning & Rust, operated by relatives of the original partners, until 1840. After the 1838 death of Ebenezer Rust's son Ebenezer Rust, Junior, the firm was renamed Spencer, Browning & Co. Family William Spencer and his wife Ann had no children, but he was followed in the business by the sons of his brother John Spencer: Samuel, John, Anthony and William Spencer, as well as the sons of his sister Catherine Spencer (the wife of his partner Samuel Browning): Richard, William and Samuel Browning. William Spencer died in about 1816. The Grocer's will was proved on 20 August 1816. See also John Browning (scientific instrument maker) References British scientific instrument makers History of navigation Optical engineers Place of birth missing 1750s births 1810s deaths Year of birth uncertain Year of death uncertain Engineers from London Astronomical instrument makers
William Spencer (navigational instrument maker)
Astronomy
1,289
77,691,279
https://en.wikipedia.org/wiki/Veronica%20Milligan
Veronica “Ronnie” Jean Kathleen Milligan (11 March 1926 – 3 September 1989) was an electrical engineer with expertise in construction management and a president of the Women's Engineering Society. Early life and education Veronica Jean Kathleen O’Neil was born on 11 March 1926 in Pontypridd, South Wales to Jennie K. and Gilbert O'Neil. She attended the Pontypridd Girls' Grammar School. She studied English and Economics at the University College of South Wales and then undertook teacher training. She married Francis Milligan in 1945. Her mother had made her promise not to neglect her career when she married. When her husband and brother began studying for a Higher National Certificate in electrical engineering, Milligan joined them part-time whilst raising her children. Milligan later completed a diploma in management studies. Career When Milligan started a paid graduate traineeship at the South Wales Electricity Board, she became the company's first woman in engineering. Milligan stayed at the Board and moved into a supervisory role as maintenance engineer, later becoming a district planning engineer. Milligan became a chartered engineer in 1959 with the Institution of Electrical Engineers. She was considered for the position of district manager at the electricity board, but senior management did not feel that a woman should be in charge of professional men, so she decided to leave. Milligan set up a consultancy in 1961 called Civlec Industrial Advisory Services, where she was a management and engineering consultant. Milligan was appointed as a manpower advisor with the Department of Employment and Productivity, later becoming a headquarters consultant providing expertise on the construction industry. In 1972, Milligan was appointed to the Gwent Area Health Authority board by the Secretary of State for Wales. She was re-appointed in 1976. She was also a member of the National Water Council and sat on an industrial tribunals panel. In 1978, Milligan became a member of the newly created Commission on Energy and the Environment. Milligan was recorded in Who's Who and was a senior advisor on industry to Monmouth District Council. Memberships Milligan joined the Women's Engineering Society (WES) in 1964 and created the Wales and South-Western branch of the society in 1966. She was awarded a bursary by the Caroline Haslett Memorial Trust to attend the First International Conference of Women Engineers and Scientists in 1964. At the third International Conference of Women Engineers and Scientists in 1971, Milligan presented on construction management practices. Milligan played a significant role in the society, particularly in delivering careers talks to schoolgirls to encourage them into engineering, which she did through her role as Careers Officer. Milligan became the President of the Women's Engineering Society (WES) in 1978, succeeding Henrietta Bussell in the role. Milligan's successor as president was Maria Watkins. Milligan also supported careers counselling through her membership and role within the Institution of Electrical Engineers. She was vice chair of the South Wales branch. She was also an associate member of the British Institute of Management. Personal life Milligan grew up in Pontypridd, South Wales with her father, a school teacher, and her brother Maitland O'Neil, who was also an electrical engineer. She married Francis Milligan in 1945 and they had two sons. One of the sons, Neil, drowned aged 15 whilst on holiday with his parents in 1964. 
Milligan lived her whole life in South Wales and died in Newport in 1989. References 1926 births 1989 deaths People from Pontypridd British women engineers Electrical engineers Women's Engineering Society Welsh engineers 20th-century Welsh engineers
Veronica Milligan
Engineering
721
44,286,896
https://en.wikipedia.org/wiki/Growth%20curve%20%28statistics%29
The growth curve model in statistics is a specific multivariate linear model, also known as GMANOVA (Generalized Multivariate Analysis-Of-Variance). It generalizes MANOVA by allowing post-matrices, as seen in the definition. Definition Growth curve model: Let X be a p×n random matrix corresponding to the observations, A a p×q within design matrix with q ≤ p, B a q×k parameter matrix, C a k×n between individual design matrix with rank(C) + p ≤ n and let Σ be a positive-definite p×p matrix. Then X = ABC + Σ^{1/2}E defines the growth curve model, where A and C are known, B and Σ are unknown, and E is a random matrix distributed as N_{p,n}(0, I_p, I_n). This differs from standard MANOVA by the addition of C, a "postmatrix". History Many writers have considered the growth curve analysis, among them Wishart (1938), Box (1950) and Rao (1958). Potthoff and Roy (1964) were the first to analyze longitudinal data by applying GMANOVA models. Applications GMANOVA is frequently used for the analysis of surveys, clinical trials, and agricultural data, as well as, more recently, in the context of radar adaptive detection. Other uses In mathematical statistics, growth curves such as those used in biology are often modeled as continuous stochastic processes, e.g. as sample paths that almost surely solve stochastic differential equations. Growth curves have also been applied in forecasting market development. When variables are measured with error, a latent growth modeling SEM can be used. Footnotes References Analysis of variance Statistical forecasting Multivariate time series Ordinary differential equations Exponentials Biostatistics Growth curves
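As a concrete illustration of the definition above, here is a minimal numpy sketch of the classical closed-form maximum-likelihood estimator of B, namely B̂ = (A′S⁻¹A)⁻¹A′S⁻¹XC′(CC′)⁻¹ with S the residual sum-of-squares matrix after projecting out C; the function name and the toy data are illustrative assumptions, not from the source:

```python
import numpy as np

def gmanova_mle(X, A, C):
    """Closed-form ML estimate of B in the growth curve model X = A B C + error.

    X : (p, n) observations; A : (p, q) within-individual design (q <= p);
    C : (k, n) between-individual design.  Returns Bhat of shape (q, k).
    """
    n = X.shape[1]
    # Residual sum-of-squares matrix after projecting out the rows of C.
    P_C = C.T @ np.linalg.solve(C @ C.T, C)            # projector onto row space of C
    S = X @ (np.eye(n) - P_C) @ X.T                    # (p, p) scale estimate
    S_inv_A = np.linalg.solve(S, A)                    # S^{-1} A
    left = np.linalg.solve(A.T @ S_inv_A, S_inv_A.T)   # (A' S^-1 A)^-1 A' S^-1
    right = C.T @ np.linalg.inv(C @ C.T)               # C' (C C')^-1
    return left @ X @ right

# Toy check: p = 4 time points, quadratic growth (q = 3), two groups (k = 2), n = 40.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4)
A = np.vander(t, 3, increasing=True)                   # columns 1, t, t^2
C = np.vstack([np.repeat([1.0, 0.0], 20), np.repeat([0.0, 1.0], 20)])
B = np.array([[1.0, 2.0], [0.5, -0.5], [0.2, 0.1]])
X = A @ B @ C + 0.05 * rng.standard_normal((4, 40))
print(np.round(gmanova_mle(X, A, C), 2))               # should be close to B
```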
Growth curve (statistics)
Mathematics
358
32,776,711
https://en.wikipedia.org/wiki/Coherent%20states%20in%20mathematical%20physics
Coherent states have been introduced in a physical context, first as quasi-classical states in quantum mechanics, then as the backbone of quantum optics and they are described in that spirit in the article Coherent states (see also). However, they have generated a huge variety of generalizations, which have led to a tremendous amount of literature in mathematical physics. In this article, we sketch the main directions of research on this line. For further details, we refer to several existing surveys. A general definition Let be a complex, separable Hilbert space, a locally compact space and a measure on . For each in , denote a vector in . Assume that this set of vectors possesses the following properties: The mapping is weakly continuous, i.e., for each vector in , the function is continuous (in the topology of ). The resolution of the identity holds in the weak sense on the Hilbert space , i.e., for any two vectors in , the following equality holds: A set of vectors satisfying the two properties above is called a family of generalized coherent states. In order to recover the previous definition (given in the article Coherent state) of canonical or standard coherent states (CCS), it suffices to take , the complex plane and Sometimes the resolution of the identity condition is replaced by a weaker condition, with the vectors simply forming a total set in and the functions , as runs through , forming a reproducing kernel Hilbert space. The objective in both cases is to ensure that an arbitrary vector be expressible as a linear (integral) combination of these vectors. Indeed, the resolution of the identity immediately implies that where . These vectors are square integrable, continuous functions on and satisfy the reproducing property where is the reproducing kernel, which satisfies the following properties Some examples We present in this section some of the more commonly used types of coherent states, as illustrations of the general structure given above. Nonlinear coherent states A large class of generalizations of the CCS is obtained by a simple modification of their analytic structure. Let be an infinite sequence of positive numbers (). Define and by convention set . In the same Fock space in which the CCS were described, we now define the related deformed or nonlinear coherent states by the expansion The normalization factor is chosen so that . These generalized coherent states are overcomplete in the Fock space and satisfy a resolution of the identity being an open disc in the complex plane of radius , the radius of convergence of the series (in the case of the CCS, .) The measure is generically of the form (for ), where is related to the through the moment condition. Once again, we see that for an arbitrary vector in the Fock space, the function is of the form , where is an analytic function on the domain . The reproducing kernel associated to these coherent states is Barut–Girardello coherent states By analogy with the CCS case, one can define a generalized annihilation operator by its action on the vectors , and its adjoint operator . These act on the Fock states as Depending on the exact values of the quantities , these two operators, together with the identity and all their commutators, could generate a wide range of algebras including various types of deformed quantum algebras. 
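For concreteness, the defining relations of this section take the following standard forms; the symbol choices (Hilbert space ℌ, parameter space X with measure ν, states |x⟩, Fock basis |n⟩) are the conventional ones and are assumed here rather than fixed by the text above:

```latex
% Resolution of the identity for a family of generalized coherent states:
\int_X |x\rangle\langle x| \, d\nu(x) \;=\; I_{\mathfrak{H}} .

% Canonical (CCS) case: X = \mathbb{C}, \; d\nu(z) = \tfrac{1}{\pi}\, d^2 z, and
|z\rangle \;=\; e^{-|z|^2/2} \sum_{n=0}^{\infty} \frac{z^n}{\sqrt{n!}}\, |n\rangle .

% Reproducing kernel obtained from the overlap of two states:
K(x,y) \;=\; \langle x \,|\, y\rangle, \qquad K(x,y) = \overline{K(y,x)}, \qquad K(x,x) > 0 .

% Nonlinear (deformed) coherent states, with x_n! := x_1 x_2 \cdots x_n and x_0! := 1:
|z\rangle \;=\; \mathcal{N}(|z|^2)^{-1/2} \sum_{n=0}^{\infty} \frac{z^n}{\sqrt{x_n!}}\, |n\rangle ,
\qquad |z| < L = \lim_{n\to\infty} \sqrt{x_n} \quad (\text{radius of convergence}) .
```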
The term 'nonlinear', as often applied to these generalized coherent states, comes again from quantum optics where many such families of states are used in studying the interaction between the radiation field and atoms, where the strength of the interaction itself depends on the frequency of radiation. Of course, these coherent states will not in general have either the group theoretical or the minimal uncertainty properties of the CCS (they might have more general ones). Operators and of the general type defined above are also known as ladder operators . When such operators appear as generators of representations of Lie algebras, the eigenvectors of are usually called Barut–Girardello coherent states. A typical example is obtained from the representations of the Lie algebra of SU(1,1) on the Fock space. Gazeau–Klauder coherent states A non-analytic extension of the above expression of the non-linear coherent states is often used to define generalized coherent states associated to physical Hamiltonians having pure point spectra. These coherent states, known as Gazeau–Klauder coherent states, are labelled by action-angle variables. Suppose that we are given the physical Hamiltonian , with , i.e., it has the energy eigenvalues and eigenvectors , which we assume to form an orthonormal basis for the Hilbert space of states . Let us write the eigenvalues as by introducing a sequence of dimensionless quantities ordered as: . Then, for all and , the Gazeau–Klauder coherent states are defined as where again is a normalization factor, which turns out to be dependent on only. These coherent states satisfy the temporal stability condition, and the action identity, While these generalized coherent states do form an overcomplete set in , the resolution of the identity is generally not given by an integral relation as above, but instead by an integral in Bohr's sense, like it is in use in the theory of almost periodic functions. Actually the construction of Gazeau–Klauder CS can be extended to vector CS and to Hamiltonians with degenerate spectra, as shown by Ali and Bagarello. Heat kernel coherent states Another type of coherent state arises when considering a particle whose configuration space is the group manifold of a compact Lie group K. Hall introduced coherent states in which the usual Gaussian on Euclidean space is replaced by the heat kernel on K. The parameter space for the coherent states is the "complexification" of ; e.g., if is then the complexification is . These coherent states have a resolution of the identity that leads to a Segal-Bargmann space over the complexification. Hall's results were extended to compact symmetric spaces, including spheres, by Stenzel. The heat kernel coherent states, in the case , have been applied in the theory of quantum gravity by Thiemann and his collaborators. Although there are two different Lie groups involved in the construction, the heat kernel coherent states are not of Perelomov type. The group-theoretical approach Gilmore and Perelomov, independently, realized that the construction of coherent states may sometimes be viewed as a group theoretical problem. In order to see this, let us go back for a while to the case of CCS. There, indeed, the displacement operator is nothing but the representative in Fock space of an element of the Heisenberg group (also called the Weyl–Heisenberg group), whose Lie algebra is generated by and . However, before going on with the CCS, take first the general case. 
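Before doing so, for reference, the Gazeau–Klauder construction just outlined is conventionally written as follows (with e_n the dimensionless energies introduced above and ρ_n = e_1⋯e_n, ρ_0 = 1, the usual convention, assumed here):

```latex
% Gazeau-Klauder coherent states labelled by action-angle variables (J, \gamma):
|J,\gamma\rangle \;=\; \mathcal{N}(J)^{-1/2} \sum_{n \ge 0}
   \frac{J^{n/2}\, e^{-i e_n \gamma}}{\sqrt{\rho_n}}\; |n\rangle ,
\qquad \rho_n = e_1 e_2 \cdots e_n, \quad \rho_0 = 1 .

% Temporal stability under H |n\rangle = \hbar\omega\, e_n |n\rangle:
e^{-iHt/\hbar}\, |J,\gamma\rangle \;=\; |J,\, \gamma + \omega t\rangle .

% Action identity:
\langle J,\gamma |\, H \,| J,\gamma\rangle \;=\; \hbar\omega\, J .
```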
Let be a locally compact group and suppose that it has a continuous, irreducible representation on a Hilbert space by unitary operators . This representation is called square integrable if there exists a non-zero vector in for which the integral converges. Here is the left invariant Haar measure on . A vector for which is said to be admissible, and it can be shown that the existence of one such vector guarantees the existence of an entire dense set of such vectors in . Moreover, if the group is unimodular, i.e., if the left and the right invariant measures coincide, then the existence of one admissible vector implies that every vector in is admissible. Given a square integrable representation and an admissible vector , let us define the vectors These vectors are the analogues of the canonical coherent states, written there in terms of the representation of the Heisenberg group (however, see the section on Gilmore-Perelomov CS, below). Next, it can be shown that the resolution of the identity holds on . Thus, the vectors constitute a family of generalized coherent states. The functions for all vectors in are square integrable with respect to the measure and the set of such functions, which in fact are continuous in the topology of , forms a closed subspace of . Furthermore, the mapping is a linear isometry between and and under this isometry the representation gets mapped to a subrepresentation of the left regular representation of on . An example: wavelets A typical example of the above construction is provided by the affine group of the line, . This is the group of all 2×2 matrices of the type, and being real numbers with . We shall also write , with the action on given by . This group is non-unimodular, with the left invariant measure being given by (the right invariant measure being ). The affine group has a unitary irreducible representation on the Hilbert space . Vectors in are measurable functions of the real variable and the (unitary) operators of this representation act on them as If is a function in such that its Fourier transform satisfies the (admissibility) condition then it can be shown to be an admissible vector, i.e., Thus, following the general construction outlined above, the vectors define a family of generalized coherent states and one has the resolution of the identity on . In the signal analysis literature, a vector satisfying the admissibility condition above is called a mother wavelet and the generalized coherent states are called wavelets. Signals are then identified with vectors in and the function is called the continuous wavelet transform of the signal . This concept can be extended to two dimensions, the group being replaced by the so-called similitude group of the plane, which consists of plane translations, rotations and global dilations. The resulting 2D wavelets, and some generalizations of them, are widely used in image processing. Gilmore–Perelomov coherent states The construction of coherent states using group representations described above is not sufficient. Already it cannot yield the CCS, since these are not indexed by the elements of the Heisenberg group, but rather by points of the quotient of the latter by its center, that quotient being precisely . The key observation is that the center of the Heisenberg group leaves the vacuum vector invariant, up to a phase. Generalizing this idea, Gilmore and Perelomov consider a locally compact group and a unitary irreducible representation of on the Hilbert space , not necessarily square integrable. 
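As a concrete rendering of the wavelet example, the standard formulas read as follows (these are the usual signal-analysis conventions, assumed rather than quoted from the text):

```latex
% Unitary irreducible representation of the affine group (b, a), a > 0, on L^2(R),
% with left-invariant Haar measure:
(U(b,a)\psi)(x) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{x-b}{a}\right),
\qquad d\mu_{\mathrm{left}}(b,a) = \frac{db\, da}{a^{2}} .

% Admissibility condition on the mother wavelet \psi:
c_\psi := \int_{-\infty}^{\infty} \frac{|\hat\psi(\omega)|^{2}}{|\omega|}\, d\omega < \infty .

% Continuous wavelet transform of a signal s:
(Ws)(b,a) = \langle \psi_{b,a} \,|\, s \rangle
          = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty}
            \overline{\psi\!\left(\frac{x-b}{a}\right)}\, s(x)\, dx .
```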
Fix a vector in , of unit norm, and denote by the subgroup of consisting of all elements that leave it invariant up to a phase, that is, where is a real-valued function of . Let be the left coset space and an arbitrary element in . Choosing a coset representative , for each coset , we define the vectors The dependence of these vectors on the specific choice of the coset representative is only through a phase. Indeed, if instead of , we took a different representative for the same coset , then since for some , we would have . Hence, quantum mechanically, both and represent the same physical state and in particular, the projection operator depends only on the coset. Vectors defined in this way are called Gilmore–Perelomov coherent states. Since is assumed to be irreducible, the set of all these vectors as runs through is dense in . In this definition of generalized coherent states, no resolution of the identity is postulated. However, if carries an invariant measure, under the natural action of , and if the formal operator defined as is bounded, then it is necessarily a multiple of the identity and a resolution of the identity is again retrieved. Gilmore–Perelomov coherent states have been generalized to quantum groups, but for this we refer to the literature. Further generalization: Coherent states on coset spaces The Perelomov construction can be used to define coherent states for any locally compact group. On the other hand, particularly in case of failure of the Gilmore–Perelomov construction, there exist other constructions of generalized coherent states, using group representations, which generalize the notion of square integrability to homogeneous spaces of the group. Briefly, in this approach one starts with a unitary irreducible representation and attempts to find a vector , a subgroup and a section such that where , is a bounded, positive operator with bounded inverse and is a quasi-invariant measure on . It is not assumed that be invariant up to a phase under the action of and clearly, the best situation is when is a multiple of the identity. Although somewhat technical, this general construction is of enormous versatility for semi-direct product groups of the type , where is a closed subgroup of . Thus, it is useful for many physically important groups, such as the Poincaré group or the Euclidean group, which do not have square integrable representations in the sense of the earlier definition. In particular, the integral condition defining the operator ensures that any vector in can be written in terms of the generalized coherent states namely, which is the primary aim of any kind of coherent states. Coherent states: a Bayesian construction for the quantization of a measure set We now depart from the standard situation and present a general method of construction of coherent states, starting from a few observations on the structure of these objects as superpositions of eigenstates of some self-adjoint operator, as was the harmonic oscillator Hamiltonian for the standard CS. It is the essence of quantum mechanics that this superposition has a probabilistic flavor. As a matter of fact, we notice that the probabilistic structure of the canonical coherent states involves two probability distributions that underlie their construction. 
There are, in a sort of duality, a Poisson distribution ruling the probability of detecting excitations when the quantum system is in a coherent state , and a gamma distribution on the set of complex parameters, more exactly on the range of the square of the radial variable. The generalization follows that duality scheme. Let be a set of parameters equipped with a measure and its associated Hilbert space of complex-valued functions, square integrable with respect to . Let us choose in a finite or countable orthonormal set : In case of infinite countability, this set must obey the (crucial) finiteness condition: Let be a separable complex Hilbert space with orthonormal basis in one-to-one correspondence with the elements of . The two conditions above imply that the family of normalized coherent states in , which are defined by resolves the identity in : Such a relation allows us to implement a coherent state or frame quantization of the set of parameters by associating to a function that satisfies appropriate conditions the following operator in : The operator is symmetric if is real-valued, and it is self-adjoint (as a quadratic form) if is real and semi-bounded. The original is an upper symbol, usually non-unique, for the operator . It will be called a classical observable with respect to the family if the so-called lower symbol of , defined as has mild functional properties to be made precise according to further topological properties granted to the original set . A last point of this construction of the space of quantum states concerns its statistical aspects. There is indeed an interplay between two probability distributions: See also Coherent states Lieb conjecture Quantization References Mathematical physics
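The two distributions alluded to above, and the quantization maps of this section, take the following standard forms in this frame-quantization approach (the notation, with orthonormal functions φ_n on the parameter space X, is assumed here, not recovered from the text):

```latex
% For the canonical coherent states the two dual distributions are:
% (i) a Poisson distribution in n at fixed |z|^2 (probability of detecting n excitations),
n \;\mapsto\; e^{-|z|^2}\, \frac{|z|^{2n}}{n!} ,
% (ii) a gamma distribution in u = |z|^2 at fixed n, on the parameter space,
u \;\mapsto\; e^{-u}\, \frac{u^{n}}{n!} .

% General construction: states built from an orthonormal set {\phi_n} in L^2(X, \nu),
|x\rangle = \mathcal{N}(x)^{-1/2} \sum_n \overline{\phi_n(x)}\, |e_n\rangle ,
\qquad \mathcal{N}(x) = \sum_n |\phi_n(x)|^2 < \infty ,

% which resolve the identity and quantize a classical f on X, with lower symbol:
\int_X |x\rangle\langle x|\, \mathcal{N}(x)\, d\nu(x) = I ,
\qquad
A_f = \int_X f(x)\, \mathcal{N}(x)\, |x\rangle\langle x|\, d\nu(x) ,
\qquad
\check f(x) = \langle x | A_f | x \rangle .
```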
Coherent states in mathematical physics
Physics,Mathematics
3,125
582,127
https://en.wikipedia.org/wiki/Antenna%20tuner
An antenna tuner, a matchbox, transmatch, antenna tuning unit (ATU), antenna coupler, or feedline coupler is a device connected between a radio transmitter or receiver and its antenna to improve power transfer between them by matching the impedance of the radio to the antenna's feedline. Antenna tuners are particularly important for use with transmitters. Transmitters are designed to feed power into a resistive load of a specific value, very often 50 ohms, at which they deliver their rated power output, efficiency, and low distortion. If the load seen by the transmitter departs from this design value due to improper tuning of the antenna/feedline combination, the power output will change, distortion may occur and the transmitter may overheat. ATUs are a standard part of almost all radio transmitters; they may be a circuit included inside the transmitter itself or a separate piece of equipment connected between the transmitter and the antenna. In transmitters in which the antenna is mounted separately from the transmitter and connected to it by a transmission line (feedline), there may be a second ATU (or matching network) at the antenna to match the impedance of the antenna to the transmission line. In low power transmitters with attached antennas, such as cell phones and walkie-talkies, the ATU is fixed to work with the antenna. In high power transmitters like radio stations, the ATU is adjustable to accommodate changes in the antenna or transmitter, and adjusting the ATU to match the transmitter to the antenna is an important procedure done after any changes to these components have been made. This adjustment is done with an instrument called an SWR meter. In radio receivers, ATUs are not so important, because in the low frequency part of the radio spectrum the signal to noise ratio (SNR) is dominated by atmospheric noise. It does not matter if the impedance of the antenna and receiver are mismatched, so that some of the incoming power from the antenna is reflected and does not reach the receiver, because the signal can be amplified to make up for it. However, in high frequency receivers the receiver's SNR is dominated by noise in the receiver's front end, so it is important that the receiving antenna is impedance-matched to the receiver to give maximum signal amplitude in the front end stages, to overcome noise. Overview An antenna's impedance is different at different frequencies. An antenna tuner matches a radio with a fixed impedance (typically 50 Ohms for modern transceivers) to the combination of the feedline and the antenna; this is useful when the impedance seen at the input end of the feedline is unknown, complex, or otherwise different from that of the transceiver. Coupling through an ATU allows the use of one antenna on a broad range of frequencies. However, despite its name, an antenna 'tuner' actually matches the transmitter only to the complex impedance reflected back to the input end of the feedline. If both tuner and transmission line were lossless, tuning at the transmitter end would indeed produce a match at every point in the transmitter-feedline-antenna system. However, in practical systems feedline losses limit the ability of the antenna 'tuner' to match the antenna or change its resonant frequency. If the loss of power is very low in the line carrying the transmitter's signal into the antenna, a tuner at the transmitter end can produce a worthwhile degree of matching and tuning for the antenna and feedline network as a whole. 
With lossy feedlines (such as commonly used 50 Ohm coaxial cable), maximum power transfer only occurs if matching is done at both ends of the line. If there is still a high SWR (multiple reflections) in the feedline beyond the ATU, any loss in the feedline is multiplied several times by the transmitted waves reflecting back and forth between the tuner and the antenna, heating the wire instead of sending out a signal. Even with a matching unit at both ends of the feedline – the near ATU matching the transmitter to the feedline and the remote ATU matching the feedline to the antenna – losses in the circuitry of the two ATUs will reduce power delivered to the antenna. Therefore, operating an antenna far from its design frequency and compensating with a transmatch between the transmitter and the feedline is not as efficient as using a resonant antenna with a matched-impedance feedline, nor as efficient as a matched feedline from the transmitter to a remote antenna tuner attached directly to the antenna. Broad band matching methods Transformers, autotransformers, and baluns are sometimes incorporated into the design of narrow band antenna tuners and antenna cabling connections. They will all usually have little effect on the resonant frequency of either the antenna or the narrow band transmitter circuits, but can widen the range of impedances that the antenna tuner can match, and/or convert between balanced and unbalanced cabling where needed. Ferrite transformers Solid-state power amplifiers operating from 1–30 MHz typically use one or more wideband transformers wound on ferrite cores. MOSFETs and bipolar junction transistors are designed to operate into a low impedance, so the transformer primary typically has a single turn, while the 50 Ohm secondary will have 2 to 4 turns. This feedline system design has the advantage of reducing the retuning required when the operating frequency is changed. A similar design can match an antenna to a transmission line; for example, many TV antennas have a 300 Ohm impedance and feed the signal to the TV via a 75 Ohm coaxial line. A small ferrite core transformer makes the broad band impedance transformation. This transformer does not need, nor is it capable of, adjustment. For receive-only use in a TV the small SWR variation with frequency is not a major problem. Many ferrite-based transformers also perform a balanced to unbalanced transformation along with the impedance change. When the balanced to unbalanced function is present, such a transformer is called a balun (otherwise an unun). The most common baluns have either a 1:1 or a 1:4 impedance transformation. Autotransformers There are several designs for impedance matching using an autotransformer, which is a single-wire transformer with different connection points or taps spaced along the windings. They are distinguished mainly by their impedance transform ratio (1:1, 1:4, 1:9, etc., the square of the winding ratio), and whether the input and output sides share a common ground, or are matched from a cable that is grounded on one side (unbalanced) to an ungrounded (usually balanced) cable. When autotransformers connect balanced and unbalanced lines they are called baluns, just as with two-winding transformers. When two differently-grounded cables or circuits must be connected but the grounds kept independent, a full, two-winding transformer with the desired ratio is used instead. 
The circuit pictured at the right has three identical windings wrapped in the same direction around either an "air" core (for very high frequencies) or ferrite core (for middle, or low frequencies). The three equal windings shown are wired for a common ground shared by two unbalanced lines (so this design is called an unun), and can be used as a 1:1, 1:4, or 1:9 impedance match, depending on the tap chosen. (The same windings could be connected differently to make a balun instead.) For example, if the right-hand side is connected to a resistive load of 10 Ohms, the user can attach a source at any of the three ungrounded terminals on the left side of the autotransformer to get a different impedance. Notice that on the left side, the line with more windings measures greater impedance for the same 10 Ohm load on the right. Narrow band design The "narrow-band" methods described below cover a very much smaller span of frequencies, by comparison with the broadband methods described above. Antenna matching methods that use transformers tend to cover a wide range of frequencies. A single, typical, commercially available balun can cover frequencies from 3.5–30.0 MHz, or nearly the entire shortwave radio band. Matching to an antenna using a cut segment of transmission line (described below) is perhaps the most efficient of all matching schemes in terms of electrical power, but typically can only cover a range of about 3.5–3.7 MHz – a very small range indeed, compared to a broadband balun. Antenna coupling or feedline matching circuits are also narrowband for any single setting, but can be re-tuned more conveniently. However, they are perhaps the least efficient in terms of power-loss (aside from having no impedance matching at all!). Transmission line antenna tuning methods The insertion of a special section of transmission line, whose characteristic impedance differs from that of the main line, can be used to match the main line to the antenna. An inserted line with the proper impedance and connected at the proper location can perform complicated matching effects with very high efficiency, but spans a very limited frequency range. The simplest example of this method is the quarter-wave impedance transformer formed by a section of mismatched transmission line: a quarter-wave section of characteristic impedance Z0 presents, at its input, an impedance of Z0²/Zload when terminated in Zload. If a quarter-wavelength of 75 Ohm coaxial cable is linked to a 50 Ohm load, the SWR in the 75 Ohm quarter wavelength of line can be calculated as 75Ω / 50Ω = 1.5; the quarter-wavelength of line transforms the mismatched impedance to 112.5 Ohms (75 Ohms × 1.5 = 112.5 Ohms). Thus this inserted section matches a 112.5 Ohm antenna to a 50 Ohm main line. The 1/12 wavelength coaxial transformer is a useful way to match 50 to 75 Ohms using the same general method. The theoretical basis is discussed by the inventor, and wider application of the method is found here: Branham, P. (1959). A Convenient Transformer for Matching Co-axial Lines. Geneva: CERN. A second common method is the use of a stub: a shorted or open section of line is connected in parallel with the main line. With coax this is done using a 'T'-connector. The length of the stub and its location can be chosen so as to produce a matched line below the stub, regardless of the complex impedance or SWR of the antenna itself. The J-pole antenna is an example of an antenna with a built-in stub match. Basic lumped circuit matching using the L network The basic circuit required when lumped capacitances and inductors are used is shown below. 
This circuit is important in that many automatic antenna tuners use it, and also because more complex circuits can be analyzed as groups of L-networks. This is called an L network not because it contains an inductor (in fact some L-networks consist of two capacitors), but because the two components are at right angles to each other, having the shape of a rotated and sometimes reversed English letter 'L'. The 'T' ("Tee") network and the π ("Pi") network also have a shape similar to the English and Greek letters they are named after. This basic network is able to act as an impedance transformer. If the output has an impedance consisting of resistance Rload and reactance j Xload, while the input is to be attached to a source which has an impedance of Rsource resistance and j Xsource reactance, then the two reactances of the network are chosen to cancel any source and load reactance and to transform between the two resistance levels; for a purely resistive source and load (Xsource = Xload = 0) with Rsource < Rload, the required values are XL = √(Rsource × (Rload − Rsource)) for the series element and XC = Rload × √(Rsource / (Rload − Rsource)) for the parallel element. In this example circuit, XL and XC can be swapped. All the ATU circuits below create this network, which exists between systems with different impedances. For instance, if the source has a resistive impedance of 50 Ω and the load has a resistive impedance of 1000 Ω: XL = √(50 × 950) ≈ 217.94 Ω and XC = 1000 × √(50 / 950) ≈ 229.415 Ω. If the frequency is 28 MHz, then, as XL = 2πfL, L = 217.94 / (2π × 28×10⁶) ≈ 1.24 µH; while, as XC = 1/(2πfC), C = 1 / (2π × 28×10⁶ × 229.415) ≈ 24.8 pF. Theory and practice A parallel network, consisting of a resistive element (1000 Ω) and a reactive element (−j 229.415 Ω), will have the same impedance and power factor as a series network consisting of resistive (50 Ω) and reactive elements (−j 217.94 Ω). By adding another element in series (which has a reactive impedance of +j 217.94 Ω), the impedance is 50 Ω (resistive). Types of L networks and their use The L-network can have eight different configurations, six of which are shown here. The two missing configurations are the same as the bottom row, but with the parallel element (wires vertical) on the right side of the series element (wires horizontal), instead of on the left, as shown. In the discussion of the diagrams that follows, the in connector comes from the transmitter or "source"; the out connector goes to the antenna or "load". The general rule (with some exceptions, described below) is that the series element of an L-network goes on the side with the lowest impedance. So, for example, the three circuits in the left column and the two in the bottom row, which have the series (horizontal) element on the out side, are generally used for stepping up from a low-impedance input (transmitter) to a high-impedance output (antenna), similar to the example analyzed in the section above. The top two circuits in the right column, with the series (horizontal) element on the in side, are generally useful for stepping down from a higher input to a lower output impedance. The general rule only applies to loads that are mainly resistive, with very little reactance. In cases where the load is highly reactive – such as an antenna fed with a signal whose frequency is far away from any resonance – the opposite configuration may be required. If far from resonance, the bottom two step-down (high-in to low-out) circuits would instead be used to connect for a step up (low-in to high-out) when the load is mostly reactance. The low- and high-pass versions of the four circuits shown in the top two rows use only one inductor and one capacitor. 
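The arithmetic above is mechanical enough to script. A small illustrative sketch for the purely resistive case (the function name and layout are assumptions, not from any standard library; the formulas are the ones used in the worked example):

```python
import math

def l_network(r_source, r_load, freq_hz):
    """Series/parallel reactances, and L and C values, for an L-network
    matching a resistive source to a larger resistive load."""
    if not r_source < r_load:
        raise ValueError("this form assumes r_source < r_load")
    q = math.sqrt(r_load / r_source - 1.0)        # loaded Q, fixed by the ratio
    x_series = q * r_source                       # series reactance (inductor here)
    x_parallel = r_load / q                       # parallel reactance (capacitor here)
    inductance = x_series / (2 * math.pi * freq_hz)
    capacitance = 1.0 / (2 * math.pi * freq_hz * x_parallel)
    return x_series, x_parallel, inductance, capacitance

xs, xp, L, C = l_network(50.0, 1000.0, 28e6)
print(f"XL = {xs:.2f} ohm, XC = {xp:.3f} ohm")        # 217.94 ohm, 229.416 ohm
print(f"L = {L*1e6:.2f} uH, C = {C*1e12:.1f} pF")     # ~1.24 uH, ~24.8 pF
```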
Normally, the low-pass would be preferred with a transmitter, in order to attenuate harmonics, but the high-pass configuration may be chosen if the components are more conveniently obtained, or if the radio already contains an internal low-pass filter, or if attenuation of low frequencies is desirable – for example when a local AM station broadcasting on a medium frequency may be overloading a high frequency receiver. The Low R, high C circuit is shown feeding a short vertical antenna, such as would be the case for a compact, mobile antenna or otherwise on frequencies below an antenna's lowest natural resonant frequency. Here the inherent capacitance of a short, random wire antenna is so high that the L-network is best realized with two inductors, instead of aggravating the problem by using a capacitor. The Low R, high L circuit is shown feeding a small loop antenna. Below resonance this type of antenna has so much inductance that more inductance from adding a coil would make the reactance even worse. Therefore, the L-network is composed of two capacitors. An L-network is the simplest circuit that will achieve the desired transformation; for any one given antenna and frequency, once a circuit is selected from the eight possible configurations (of which six are shown above) only one set of component values will match the in impedance to the out impedance. In contrast, the circuits described below all have three or more components, and hence have many more choices for inductance and capacitance that will produce an impedance match. The radio operator must experiment, test, and use judgement to choose among the many adjustments that produce the same impedance match. Antenna system losses Loss in antenna tuners Every means of impedance match will introduce some power loss. This will vary from a few percent for a transformer with a ferrite core, to 50% or more for a complex ATU that is improperly tuned or working at the limits of its tuning range. With the narrow band tuners, the L-network has the lowest loss, partly because it has the fewest components, but mainly because it necessarily operates at the lowest possible Q for a given impedance transformation. With the L-network, the loaded Q is not adjustable, but is fixed midway between the source and load impedances. Since most of the loss in practical tuners will be in the coil, choosing either the low-pass or high-pass network may reduce the loss somewhat. The L-network using only capacitors will have the lowest loss, but this network only works where the load impedance is very inductive, making it a good choice for a small loop antenna. Inductive impedance also occurs with straight-wire antennas used at frequencies slightly above a resonant frequency, where the antenna is too long – for example, between a quarter and a half wave long at the operating frequency. However, problematic straight-wire antennas are typically too short for the frequency in use. With the high-pass T-network, the loss in the tuner can vary from a few percent – if tuned for lowest loss – to over 50% if the tuner is not properly adjusted. Using the maximum available capacitance will give less loss than if one simply tunes for a match without regard for the settings. This is because using more capacitance means using fewer inductor turns, and the loss is mainly in the inductor. 
With the SPC tuner the losses will be somewhat higher than with the T-network, since the added capacitance across the inductor will shunt some reactive current to ground, which must be cancelled by additional current in the inductor. The trade-off is that the effective inductance of the coil is increased, thus allowing operation at lower frequencies than would otherwise be possible. If additional filtering is desired, the inductor can be deliberately set to larger values, thus providing a partial band-pass effect. Either the high-pass T, the low-pass π, or the SPC tuner can be adjusted in this manner. The additional attenuation at harmonic frequencies can be increased significantly with only a small percentage of additional loss at the tuned frequency. When adjusted for minimum loss, the SPC tuner will have better harmonic rejection than the high-pass T due to its internal tank circuit. Either type is capable of good harmonic rejection if a small additional loss is acceptable. The low-pass π has exceptional harmonic attenuation at any setting, including the lowest-loss setting. ATU location An ATU will be inserted somewhere along the line connecting the radio transmitter or receiver to the antenna. The antenna feedpoint is usually high in the air (for example, a dipole antenna) or far away (for example, an end-fed random wire antenna). A transmission line, or feedline, must carry the signal between the transmitter and the antenna. The ATU can be placed anywhere along the feedline: at the transmitter, at the antenna, or somewhere in between. Antenna tuning is best done as close to the antenna as possible to minimize loss, increase bandwidth, and reduce voltage and current on the transmission line. Also, when the information being transmitted has frequency components whose wavelength is a significant fraction of the electrical length of the feed line, distortion of the transmitted information will occur if there are standing waves on the line. Analog TV and FM stereo broadcasts are affected in this way. For those modes, matching at the antenna is required. When possible, an automatic or remotely controlled tuner in a weather-proof case at or near the antenna is convenient and makes for an efficient system. With such a tuner, it is possible to match a wide range of antennas, including stealth antennas (SGC World: Smart Tuners for Stealth Antennas). When the ATU must be located near the radio for convenient adjustment, any significant SWR will increase the loss in the feedline. For that reason, when using an ATU at the transmitter, low-loss, high-impedance feedline is a great advantage (open-wire line, for example). A short length of low-loss coaxial line is acceptable, but with longer lossy lines the additional loss due to SWR becomes very high. It is very important to remember that when matching the transmitter to the line, as is done when the ATU is near the transmitter, there is no change in the SWR in the feedline. The backlash currents reflected from the antenna are retro-reflected by the ATU – usually several times between the two – and so are invisible on the transmitter side of the ATU. The result of the multiple reflections is compounded loss, higher voltage or higher currents, and narrowed bandwidth, none of which can be corrected by the ATU. Standing wave ratio It is a common misconception that a high standing wave ratio (SWR) per se causes loss. 
A well-adjusted ATU feeding an antenna through a low-loss line may have only a small percentage of additional loss compared with an intrinsically matched antenna, even with a high SWR (4:1, for example). An ATU sitting beside the transmitter just re-reflects energy reflected from the antenna ("backlash current") back yet again along the feedline to the antenna ("retro-reflection"). High losses arise from RF resistance in the feedline and antenna, and those multiple reflections due to high SWR cause feedline losses to be compounded. Using low-loss, high-impedance feedline with an ATU results in very little loss, even with multiple reflections. However, if the feedline-antenna combination is 'lossy' then an identical high SWR may lose a considerable fraction of the transmitter's power output. High impedance lines – such as most parallel-wire lines – carry power mostly as high voltage rather than high current, and current alone determines the power lost to line resistance. So despite high SWR, very little power is lost in high-impedance line compared with low-impedance line – typical coaxial cable, for example. For that reason, radio operators can be more casual about using tuners with high-impedance feedline. Without an ATU, the SWR from a mismatched antenna and feedline can present an improper load to the transmitter, causing distortion and loss of power or efficiency with heating and/or burning of the output stage components. Modern solid state transmitters will automatically reduce power when high SWR is detected, so some solid-state power stages only produce weak signals if the SWR rises above 1.5 to 1. Were it not for that problem, even the losses from an SWR of 2:1 could be tolerated, since only 11 percent of transmitted power would be reflected (for an SWR of S, the reflected fraction is ((S − 1)/(S + 1))², which for S = 2 gives (1/3)² ≈ 0.11) and 89 percent sent out through to the antenna. So the main loss of output power with high SWR is due to the transmitter "backing off" its output when challenged with backlash current. Tube transmitters and amplifiers usually have an adjustable output network that can feed mismatched loads up to perhaps 3:1 SWR without trouble. In effect the built-in π-network of the transmitter output stage acts as an ATU. Further, since tubes are electrically robust (even though mechanically fragile), tube-based circuits can tolerate very high backlash current without damage. Broadcast applications AM broadcast transmitters One of the oldest applications for antenna tuners is in AM and shortwave broadcasting transmitters. AM transmitters usually use a vertical antenna (tower) which can be from 0.20 to 0.68 wavelengths long. At the base of the tower an ATU is used to match the antenna to the 50 Ohm transmission line from the transmitter. The most commonly used circuit is a T-network, using two series inductors with a shunt capacitor between them. When multiple towers are used, the ATU network may also provide for a phase adjustment so that the currents in each tower can be phased relative to the others to produce a desired pattern. These patterns are often required by law to include nulls in directions that could produce interference, as well as to increase the signal in the target area. Adjustment of the ATUs in a multitower array is a complex and time-consuming process requiring considerable expertise. High-power shortwave transmitters For international shortwave transmitters (50 kW and above), frequent antenna tuning is done as part of frequency changes, which may be required on a seasonal or even a daily basis. 
Modern shortwave transmitters typically include built-in impedance-matching circuitry for SWR up to 2:1, and can adjust their output impedance within 15 seconds. The matching networks in transmitters sometimes incorporate a balun, or an external one can be installed at the transmitter in order to feed a balanced line. Balanced transmission lines of 300 Ohms or more were more-or-less standard for all shortwave transmitters and antennas in the past, including those used by amateurs. Most shortwave broadcasters have continued to use high-impedance feeds, which predate automatic impedance matching. The most commonly used shortwave antennas for international broadcasting are the HRS antenna (curtain array), which covers a 2-to-1 frequency range, and the log-periodic antenna, which covers up to an 8-to-1 frequency range. Within that range, the SWR will vary, but is usually kept below 1.7 to 1 – within the range of SWR that can be tuned by the antenna matching built into many modern transmitters. Hence, when feeding these antennas, a modern transmitter will be able to tune itself as needed to match at any frequency. Automatic antenna tuning Automatic antenna tuning is used in flagship mobile phones, transceivers for amateur radio, and in land mobile, marine, and tactical HF radio transceivers. Each antenna tuning system (AT) shown in the figure has an "antenna port", which is directly or indirectly coupled to an antenna, and another port, referred to as the "radio port" (or "user port"), for transmitting and/or receiving radio signals through the AT and the antenna. Each AT shown in the figure is a single-antenna-port (SAP) AT, but a multiple-antenna-port (MAP) AT may be needed for MIMO radio transmission. Several control schemes can be used in a radio transceiver or transmitter to automatically adjust an antenna tuner (AT). The control schemes are based on one of the two configurations, (a) and (b), shown in the diagram. For both configurations, the transmitter comprises: antenna; antenna tuner / matching network (AT); sensing unit (SU); control unit (CU); transmitter and signal processing unit (TSPU). The TSPU incorporates all the parts of the transmitter not otherwise shown in the diagram. The TX port of the TSPU delivers a test signal. The SU delivers to the TSPU one or more output signals indicating the response to the test signal, in the form of one or more electrical variables (such as voltage, current, incident or forward voltage, etc.). The response is sensed at the radio port in the case of configuration (a), or at the antenna port in the case of configuration (b). Note that neither configuration (a) nor (b) is ideal, since the line between the antenna and the AT attenuates SWR; the response to a test signal is most accurately tested at or near the antenna feedpoint.
Control scheme | Configuration | Extremum-seeking?
Type 0 | n/a | n/a
Type 1 | (a) | No
Type 2 | (a) | Yes
Type 3 | (b) | No
Type 4 | (b) | Yes
Broydé & Clavelier (2020) distinguish five types of antenna tuner control schemes, as summarized in the table above: Type 0 designates the open-loop AT control schemes that do not use any SU, the adjustment typically being based only on previous knowledge programmed for each operating frequency; type 1 and type 2 control schemes use configuration (a), type 2 with extremum-seeking control and type 1 without; type 3 and type 4 control schemes use configuration (b), type 4 with extremum-seeking control and type 3 without. The control schemes may be compared as regards: use of closed-loop or open-loop control (or both); measurements used; ability to mitigate the effects of the electromagnetic characteristics of the surroundings; aim/goal; accuracy and speed; and dependence on the use of a particular model of AT or CU. See also American Radio Relay League Electrical lengthening Impedance bridging Loading coil Preselector Smith chart References Further reading External links American Radio Relay League website. What tuners do and a look inside. Tuner Wireless tuning and filtering
Antenna tuner
Engineering
6,039
53,584,972
https://en.wikipedia.org/wiki/Addiction%20by%20Design
Addiction by Design is a 2012 non-fiction book by Natasha Dow Schüll, published by Princeton University Press, that describes machine gambling in Las Vegas. It offers an analysis of machine gambling, of the intensified forms of consumption that computer-based technologies enable, and of the innovations that deliberately enhance and sustain the 'zone' that extreme machine gamblers yearn for. The book received attention in connection with the question of how current information technologies can, in certain contexts, make people addicted. See also Addiction psychology Slot machine Compulsion loop References External links Chris Hedges and Professor Natasha Dow Schüll discuss the research reported in her book (2017-03-28). Video, 26 min Product design Works about addiction Human–computer interaction Non-fiction books about gambling 2012 non-fiction books American non-fiction books Princeton University Press books
Addiction by Design
Engineering
161
3,090,397
https://en.wikipedia.org/wiki/Radio%20over%20IP
Radio over Internet Protocol, or RoIP, is similar to Voice over IP (VoIP), but augments two-way radio communications rather than telephone calls. From the system point of view, it is essentially VoIP with push-to-talk. To the user it can be implemented like any other radio network. With RoIP, at least one node of a network is a radio (or a radio with an IP interface device) connected via IP to other nodes in the radio network. The other nodes can be two-way radios, but could also be dispatch consoles, either traditional (hardware) or modern (software on a PC); POTS telephones; softphone applications running on a computer (such as Skype); PDAs; smartphones; or some other communications device accessible over IP. RoIP can be deployed over private networks as well as the public Internet. It is useful in land mobile radio systems used by public safety departments and fleets of utilities spread over a broad geographic area. As with other centralized radio systems such as trunked radio systems, issues of delay or latency and reliance on centralized infrastructure can be impediments to adoption by public safety agencies. RoIP is not a proprietary or protocol-limited construct but a basic concept that has been implemented in a number of ways. Several systems have been implemented in the amateur radio community, such as Galaxy PTT Comms, AllStar Link, BroadNet, IRLP, and EchoLink, which have demonstrated the utility of RoIP in a partly or entirely open-source environment. Many commercial radio systems vendors such as Persistent Systems, LLC, Motorola, and Harris have adopted RoIP as part of their system designs. The motivation to deploy RoIP technology is usually driven by one of three factors: first, the need to span large geographic areas or operate in areas without sufficient coverage from radio towers; second, the desire to provide more reliable, or at least more repairable, links in radio systems; and third, to support the use of many base station users, that is, voice communications from stationary users rather than mobile or handheld radios. Large geographic areas may be served more economically and reliably when spanned by IP technology, due to the constantly decreasing cost and increasing functionality of evolving packet-switched network equipment and software (a trend described by Moore's law). Traditionally, distant radio users have been linked via dedicated microwave equipment and/or leased telephone lines. Generally, the cost of operating a radio network is decreased by the adoption of IP technology, replacing the traditional microwave and leased telephone lines. Economical and reliable distant radio links such as those needed by state troopers, energy utilities, and Medivac helicopters are well served by RoIP technology (see Air Evac Lifeteam for an example of a 14-state radio system). U.S. military units are using RoIP to protect convoys spread out across large geographies. The conversion to RoIP also drives the adoption of a network approach rather than the hub and spoke architecture that is typical of the point-to-point links inherent in the legacy microwave and leased line technologies. Hub and spoke architectures are inherently fragile, while the network approach developed at the foundation of the public Internet by DARPA is generally more reliable, more adaptable, and faster to repair and restore in a wide-area disaster such as Hurricane Katrina.
The use of LMR (land mobile radio) equipment in both mobile and handheld forms can be problematic for desk-bound users such as dispatchers, supervisors, and other users in large public safety agencies and energy/utilities, because such radios do not coexist well with computers (e.g. interference). Also, Emergency Operations Centers (EOCs) are typically staffed with representatives from many different public safety agencies and other local government officials, each with a different radio. Such EOCs are more effectively (and quietly!) equipped when the radios for each of the different constituencies are made available in the center via RoIP at each user's computer, rather than via a handheld radio that may be out of range, difficult to hear, and out of batteries throughout the emergency. Finally, RoIP by its nature is interoperable: once any device, whether radio, telephone, computer, or PDA, is made part of the IP-enabled voice network, it is irrelevant what type of technology it uses. RoIP systems routinely combine VHF, UHF, POTS telephone, cellular telephone, SATCOM, air-to-ground, and other technologies into a single voice conversation. This makes RoIP especially valuable in addressing the much-documented problems of communications interoperability. In order to minimize the growth of Radio over IP technologies that are incompatible with each other, the U.S. Department of Homeland Security and the National Institute of Standards and Technology are sponsoring BSI for RoIP, a draft standard for enabling different Radio over IP technologies to interoperate. Radio Control over IP (RCoIP) provides the essential signaling and management for voice messages required for critical communications and is a step up from Radio over IP (RoIP). RCoIP is designed so that essential messages get through by using confirmed signaling. Implementations One implementation is a client–server software program designed by amateur radio enthusiasts for linking amateur radio frequency gateways and repeaters via the Internet using a Voice over IP protocol. It was developed for licence-free radios such as Citizens Band, PMR446, and Family Radio Service. See also Bridging Systems Interface - a standard protocol from DHS OIC's SAFECOM program Cubic | Vocality - for Radio over IP gateway devices D-STAR EchoLink HamSphere Internet Radio Linking Project (IRLP) Midland Radio National Interop PLRI RIPRNet Wide-coverage Internet Repeater Enhancement System (WIRES) Audio Aggregator References Internet protocols Public safety communications Radio communications Interoperable communications Network appliances Radio hobbies Amateur radio software for Windows Amateur radio software for Linux
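To make "essentially VoIP with push-to-talk" concrete, the Python sketch below sends dummy audio frames over UDP with a one-byte flag field carrying the PTT state. The header layout, destination port, and frame timing are invented for illustration only; real RoIP products and the BSI draft standard define their own framing, which this does not attempt to reproduce.

# Hypothetical "VoIP with push-to-talk" framing: audio frames are sent over
# UDP while PTT is asserted, and a final frame with the flag cleared unkeys
# the far end. The 7-byte header (flags, sequence, timestamp) is invented.
import socket
import struct
import time

DEST = ("127.0.0.1", 40000)      # invented far-end RoIP node address
FRAME_MS = 20                    # 20 ms audio frames, a common VoIP choice
HEADER = struct.Struct("!BHI")   # flags (bit 0 = PTT), sequence, timestamp

def send_talk_spurt(sock, frames):
    """Send one keyed transmission: frames with PTT set, then an unkey frame."""
    for seq, frame in enumerate(frames):
        header = HEADER.pack(0x01, seq & 0xFFFF, int(time.time()))
        sock.sendto(header + frame, DEST)
        time.sleep(FRAME_MS / 1000)
    unkey = HEADER.pack(0x00, len(frames) & 0xFFFF, int(time.time()))
    sock.sendto(unkey, DEST)     # PTT released: no audio payload follows

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = b"\x00" * 320            # 20 ms of 8 kHz 16-bit mono PCM (silence)
send_talk_spurt(sock, [frame] * 5)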
Radio over IP
Engineering
1,197
1,991,469
https://en.wikipedia.org/wiki/Civetone
Civetone is a macrocyclic ketone and the main odorous constituent of civet oil. It is a pheromone sourced from the African civet. It has a strong musky odor that becomes pleasant at extreme dilutions. Civetone is closely related to muscone, the principal odoriferous compound found in musk; the structure of both compounds was elucidated by Leopold Ružička. Today, civetone can be synthesized from precursor chemicals found in palm oil. Uses Civetone is a synthetic musk used as a perfume fixative and flavor. In order to attract jaguars to camera traps, field biologists have used the Calvin Klein-brand male cologne Obsession. It is believed that the civetone in the cologne resembles a territorial marking. See also 5-Cyclohexadecenone, a related musk chemical References Perfume ingredients Macrocycles Mammalian pheromones Cycloalkenes Ketones
Civetone
Chemistry
199
59,903,213
https://en.wikipedia.org/wiki/Audiomack
Audiomack (often stylized as audiomack) is a music streaming and audio discovery platform that allows artists to upload music and fans to stream and download songs. It is especially popular among emerging artists, offering a space to share their music. The platform supports genres such as hip-hop, R&B, Afrobeats, and Latin music, with a focus on providing a free, user-friendly streaming experience. Audiomack is available as a web-based service and as an app on macOS, Android devices and Windows. History Audiomack was co-founded in 2012 by Dave Macli, David Ponte, Thomas Klinger, Ty Wangsness, and Brian Zisook. The platform originally allowed artists to freely share their mixtapes, songs, and albums. In April 2013, J. Cole (Yours Truly 2) and Chance the Rapper (Acid Rap) released new projects exclusively on the platform. In September 2018, Eminem released "Killshot", a diss track about Machine Gun Kelly, exclusively on Audiomack, earning 8.6 million plays in four months. In February 2019, Nicki Minaj released three songs exclusively on the platform, including a remix of Blueface's "Thotiana." In November 2020, Audiomack signed a music licensing agreement with Warner Music Group, covering the United States, Canada, Jamaica, and five "key African territories," including Ghana, Kenya, Nigeria, South Africa, and Tanzania. In December 2020, Audiomack launched its monetization program, AMP, for all eligible creators based in the United States, Canada, and the United Kingdom. In July 2021, Audiomack expanded the program to creators worldwide and introduced a partnership with Ziiki Media to help promote artists across Africa. In February 2021, Variety reported that Audiomack has music licensing agreements in the United States with Universal Music Group and Sony Music Entertainment. Audiomack also receives music through licensing deals with labels and distributors such as EMPIRE, among others. Also in February 2021, Billboard announced Audiomack streaming data would begin informing some of its flagship charts, including the Hot 100, the Billboard 200, and the Global 200. In March 2021, Fast Company magazine named Audiomack one of the 10 most innovative companies in music. In April 2021, Audiomack partnered with telecommunications company MTN Nigeria to offer its 76 million users cheaper data to stream music in its app. In December 2021, Audiomack launched Supporters, a "feature that will enable loyal fans to tip their favorite artist's music." Fans fund artists or rights holders directly by purchasing 'support badges' for individual song and album releases. Warner Music Group, which was Audiomack's first major label partner, signed on as the first major label participant in "Supporters." Other participating partners include Amuse, AudioSalad Direct, DistroKid, EMPIRE, FUGA, Stem, and Vydia. In February 2023, Audiomack announced a partnership deal with MTV Base to improve listeners' access to quality music material while raising awareness of African musicians throughout Africa. In July 2023, Audiomack announced a partnership with Love Renaissance (LVRN), an American record label and management company, to identify emerging musicians. In April 2024, Audiomack and Merlin announced a "strategic, indie-centric partnership." Through the partnership, Merlin's membership gained access to Audiomack's global audience of engaged listeners. As of November 2024, Audiomack is reporting 10 million daily active users and 36 million monthly active users globally.
This news was reported alongside a partnership announcement with Indian music label Saregama. Features Offline playback is free to all users and not blocked by a paywall. Users and artists can upload their music to the service through its website. Audiomack uses a combination of audio fingerprinting, DMCA takedown requests, and manual curation to police unauthorized uploads. Audiomack does not limit or charge creators for storing content on its service. In February 2022, Audiomack launched Creator App, giving its users the ability to "upload new music, analyze data, and connect with fans all in one app." The Creator App surpassed 1 million downloads in spring of 2023. In May 2023, Audiomack updated its Creator App, adding Promote, "a new tab that provides creators with downloadable assets for marketing and promoting their work." In July 2023, the company announced Connect, a "free, first-of-its-kind feature" that allows creators to message their followers on the service. Audiomack introduced Audiomod, "a new set of tools that allow users to fiddle with tracks by changing the tempo, modifying the pitch, or swaddling them in reverb," in late 2023. The user-facing feature is the first of its kind in the music streaming space. Music Library Audiomack allows artists to upload their music directly to the platform, making it accessible to listeners for free streaming and downloads. It offers a catalog of millions of songs, ranging from popular tracks by mainstream artists to independent releases. The platform is known for hosting exclusive mixtapes, singles, and albums, especially in hip-hop and Afrobeats genres. Free and Premium Streaming Audiomack offers a free, ad-supported streaming model, allowing users to stream and download tracks at no cost. The platform also includes a premium subscription, Audiomack Premium, which provides an ad-free experience, higher-quality audio, and the ability to download music for offline listening. Trending and Charts Audiomack features a trending section that highlights the most popular songs, albums, and playlists based on user engagement and activity. The platform also includes curated playlists and editorial content, helping users discover new music. It offers genre-specific charts, including the Top Songs, Top Albums, and Top Trending tracks across various regions, providing a snapshot of trending music worldwide. Artist Monetization and Support Audiomack allows artists to earn revenue through its Audiomack Monetization Program (AMP), which is available to select creators. Through AMP, artists can monetize their streams on the platform, offering them a way to generate income from their music without major label backing. Audiomack also provides promotional tools and analytics to help artists understand their audience and grow their fan base. Accounts and subscriptions As of October 2024, the two Audiomack subscription tiers are: Content Audiomack produces several original video content series, including Trap Symphony, with past episodes including Migos, Chief Keef, and Rich the Kid, among others. Other series include Bless The Booth and Fine Tuned, which feature YNW Melly, among other acts. In March 2021, Audiomack officially launched Audiomack World, the editorial arm of the company. Users can read articles in the Audiomack app on Android and iOS, as well as on the desktop site. In February 2024, Audiomack introduced You Need To Hear, a global rising artist program.
The first artist selected for the series was FOURFIVE, a rapper from New York City. In 2022, Audiomack launched Keep the Beat Going, an annual campaign aimed at amplifying emerging artists' profiles and introducing them to global markets through billboards in major cities, curated playlists, digital ads, and creator workshops. The campaign has highlighted 72 artists between 2022 and 2024 from countries such as Ghana, Tanzania, Nigeria, South Africa, and Kenya, including notable names like Ayra Starr, Burna Boy, Rema, and Uncle Waffles. See also Music Streaming Services SoundCloud Spotify Apple Music YouTube Music References External links Audiomack on Instagram Streaming media systems 2012 establishments Digital audio distributors Internet properties established in 2012 American music websites Music streaming services
Audiomack
Technology
1,579
1,623,051
https://en.wikipedia.org/wiki/Cypher%20%28film%29
Cypher (also known as Brainstorm and Company Man) is a 2002 science fiction spy-fi thriller film directed by Vincenzo Natali and written by Brian King. The film follows an accountant (Jeremy Northam) whose sudden career as a corporate spy takes an unexpected turn when he meets a mysterious woman (Lucy Liu), uncovering secrets about the nature of his work. The film was shown in limited release in theaters in the US and Australia, and released on DVD on August 2, 2005. The film received mixed reviews, and Northam received the Best Actor award at the Sitges Film Festival. Plot Recently unemployed accountant Morgan Sullivan is bored with his suburban life. Pressured by his wife to take a job with her father's company, he instead pursues a position in corporate espionage. Digicorp's Head of Security, Finster, inducts Morgan and assigns him a new identity. As Jack Thursby, he is sent to conventions to secretly record presentations and transmit them to headquarters. Sullivan is soon haunted by recurring nightmares and neck pain. At a bar, Morgan meets Rita Foster from a competing corporation, who offers him pills and tells him not to transmit at the next convention. Afterward, Morgan is surprised when Digicorp confirms the receipt of his non-existent transmission. He takes the pills Rita gave him and his nightmares and pains stop. Confused and intrigued by Rita, he arranges to meet with her again. She tells him about Digicorp's deception and offers him an antidote – a green liquid in a large syringe. Morgan hesitantly accepts. She warns him that no matter what happens at the next convention he must not react. Morgan discovers that all the convention attendees believe themselves to be Digicorp spies. While they are drugged from the served drinks, plastic-clad scientists probe, inject and brainwash them. Individual headsets reinforce their new identities, preparing them to be used and then disposed of. Morgan manages to convince Digicorp that he believes his new identity. He is then recruited by Sunway Systems, a rival of Digicorp. Sunway's Head of Security, Callaway, encourages Morgan to act as a double agent, feeding corrupted data to Digicorp. Morgan calls Rita, who warns him that Sunway is equally ruthless, and that he is in fact being used by Rita's boss, Sebastian Rooks. Morgan manages to steal the required information from Sunway Systems' vault, escaping with Rita's help. Rita ultimately takes him to meet Rooks. When she temporarily leaves the room, a nervous Morgan calls Finster, and becomes even more distressed. He accidentally shoots Rita, who encourages him to ignore her and meet Rooks in the room next door. Morgan finds the room filled with objects which appear to be personal to him, including a photograph of him and Rita together. Realising that he is apparently Rooks, he turns to Rita in disbelief. Before Rita can convince him, the apartment is invaded by armed men. Rita and Morgan escape to the roof of the skyscraper as the security teams of Digicorp and Sunway meet, led by Finster and Callaway. After a short Mexican standoff, both sides realise they are after the same person, Sebastian Rooks, and rush to the roof, where they find Morgan and Rita in a helicopter. Rita cannot fly it, but, having designed it himself, Sebastian can, once Rita encourages him to remember his past self, connecting through his love for her. He lifts off amid gunfire from the security teams. Finster and Callaway comment as the couple seem to have escaped: Callaway: "Did you get a look at him?
Did you see Rooks' face?" Finster: "Just Morgan Sullivan, our pawn." Looking up, they see the helicopter hovering and realise, too late, the true identity of Morgan Sullivan. Sebastian triggers a bomb, causing the whole roof to explode. On a boat in the South Pacific Ocean, Sebastian reveals the content of the stolen disc to Rita. Marked "terminate with extreme prejudice", it is the last copy of Rita's identity (after the one in the vault was destroyed). Sebastian throws the disc into the sea and says, "Now there's no copy at all." Cast Reception The film received mixed reviews. On review aggregator Rotten Tomatoes, the film holds a 58% rating based on reviews from 19 critics. Derek Elley of Variety called the film "consistently intriguing" and "100% plot driven" with excellent performances from the cast, while BBC's Neil Smith compared Cypher to The Manchurian Candidate, and noticed feelings of tension and claustrophobia, as in Natali's directorial début Cube, finally concluding that Natali sets his yarn in an Orwellian atmosphere of paranoia. Scott Weinberg, reviewing for DVD Talk, recommended the film, calling it "one of the best direct-to-video titles [he has] seen all year", noting similarities to The Matrix, Dark City and the works of Philip K. Dick. English horror fiction writer and journalist Kim Newman, writing for Empire magazine, awarded the film 4 out of 5 stars, praising Northam's and Liu's performances and calling the film a "semi-science-fictional exercise in puzzle-setting and solving". Some critics found problems with the film's complex narrative. Paul Byrnes of The Sydney Morning Herald found that the plot overwhelmed the characters so much that he "stopped caring". John J. Puccio, writing for Movie Metropolis, thought that "[Cypher's] corporate espionage plot doesn't prove simply too complicated, it ends up downright muddled", but concluded that the film was nevertheless "still kind of fun". For his performance in Cypher, Jeremy Northam received the Best Actor award at the 2002 Sitges Film Festival in Catalonia. References External links 2002 films 2000s science fiction thriller films 2000s spy films American science fiction thriller films Canadian science fiction thriller films English-language Canadian films Films about memory erasure and alteration Films about computing Films scored by Michael Andrews Films directed by Vincenzo Natali Cyberpunk films Fiction about mind control American spy films Films about accountants 2000s English-language films 2000s American films 2000s Canadian films English-language science fiction thriller films
Cypher (film)
Technology
1,278
22,494,279
https://en.wikipedia.org/wiki/Realization%20%28linguistics%29
In linguistics, realization is the process by which some kind of surface representation is derived from its underlying representation; that is, the way in which some abstract object of linguistic analysis comes to be produced in actual language. Phonemes are often said to be realized by speech sounds. The different sounds that can realize a particular phoneme are called its allophones. Realization is also a subtask of natural language generation, which involves creating an actual text in a human language (English, French, etc.) from a syntactic representation. There are a number of software packages available for realization, most of which have been developed by academic research groups in NLG. The remainder of this article concerns realization of this kind. Example For example, the following Java code causes the simplenlg system to print out the text The women do not smoke.:

NPPhraseSpec subject = nlgFactory.createNounPhrase("the", "woman");
subject.setPlural(true);
SPhraseSpec sentence = nlgFactory.createClause(subject, "smoke");
sentence.setFeature(Feature.NEGATED, true);
System.out.println(realiser.realiseSentence(sentence));

In this example, the computer program has specified the linguistic constituents of the sentence (verb, subject), and also linguistic features (plural subject, negated), and from this information the realiser has constructed the actual sentence. Processing Realisation involves three kinds of processing:
Syntactic realisation: using grammatical knowledge to choose inflections, add function words, and decide the order of components. For example, in English the subject usually precedes the verb, and the negated form of smoke is do not smoke.
Morphological realisation: computing inflected forms, for example the plural form of woman is women (not womans).
Orthographic realisation: dealing with casing, punctuation, and formatting. For example, capitalising The because it is the first word of the sentence.
The above examples are very basic; most realisers are capable of considerably more complex processing. Systems A number of realisers have been developed over the past 20 years. These systems differ in terms of the complexity and sophistication of their processing, their robustness in dealing with unusual cases, and whether they are accessed programmatically via an API or take a textual representation of a syntactic structure as their input. There are also major differences in pragmatic factors such as documentation, support, licensing terms, speed and memory usage, etc. It is not possible to describe all realisers here, but a few notable systems are:
Simplenlg: a realisation engine with an API intended to be simple to learn and use, its scope deliberately limited to surface realisation.
KPML: the oldest realiser, which has been under development under different guises since the 1980s. It comes with grammars for ten different languages.
FUF/SURGE: a realiser which was widely used in the 1990s, and is still used in some projects today.
OpenCCG: an open-source realiser which has a number of nice features, such as the ability to use statistical language models to make realisation decisions.
References External links - ACL NLG Portal (contains links to the above and many other realisers) Natural language processing Computational linguistics
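For comparison, here is a deliberately tiny sketch of the same three processing steps in Python. It is not simplenlg or any real NLG library: the irregular-plural table, the do-support rule, and all function names are invented for illustration.

# A toy realiser illustrating syntactic, morphological, and orthographic
# realisation. Hard-coded rules; not simplenlg or any real NLG library.
IRREGULAR_PLURALS = {"woman": "women", "man": "men", "child": "children"}

def morphological(noun, plural):
    """Compute the inflected form, e.g. woman -> women (not "womans")."""
    if not plural:
        return noun
    return IRREGULAR_PLURALS.get(noun, noun + "s")

def syntactic(determiner, noun, verb, plural=False, negated=False):
    """Order the constituents, apply do-support for negation, agree the verb."""
    subject = f"{determiner} {morphological(noun, plural)}"
    if negated:
        predicate = f"{'do' if plural else 'does'} not {verb}"
    else:
        predicate = verb if plural else verb + "s"
    return f"{subject} {predicate}"

def orthographic(clause):
    """Capitalise the first word and add final punctuation."""
    return clause[0].upper() + clause[1:] + "."

print(orthographic(syntactic("the", "woman", "smoke", plural=True, negated=True)))
# prints: The women do not smoke.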
Realization (linguistics)
Technology
716
5,056,680
https://en.wikipedia.org/wiki/Xi%20Capricorni
The Bayer designation Xi Capricorni (ξ Cap, ξ Capricorni) is shared by two star systems in the constellation Capricornus: ξ¹ Capricorni and ξ² Capricorni, the brighter of the two, which is often simply called ξ Capricorni. They are separated by 0.25° in the sky.
Xi Capricorni
Astronomy
85
77,388,818
https://en.wikipedia.org/wiki/Zirconium%20dichloride
Zirconium dichloride is an inorganic chemical compound with the chemical formula ZrCl2. It is a black solid that adopts a layered structure like that of molybdenum disulfide. The compound can be formed by heating zirconium monochloride with zirconium tetrachloride: 2 ZrCl + ZrCl4 → 3 ZrCl2 Related compounds Zirconium diiodide. References Chlorides Metal halides Zirconium(II) compounds
Zirconium dichloride
Chemistry
91
339,742
https://en.wikipedia.org/wiki/Pinnation
Pinnation (also called pennation) is the arrangement of feather-like or multi-divided features arising from both sides of a common axis. Pinnation occurs in biological morphology, in crystals, such as some forms of ice or metal crystals, and in patterns of erosion or stream beds. The term derives from the Latin word pinna meaning "feather", "wing", or "fin". A similar concept is "pectination", which is a comb-like arrangement of parts (arising from one side of an axis only). Pinnation is commonly referred to in contrast to "palmation", in which the parts or structures radiate out from a common point. The terms "pinnation" and "pennation" are cognate, and although they are sometimes used distinctly, there is no consistent difference in the meaning or usage of the two words. Plants Botanically, pinnation is an arrangement of discrete structures (such as leaflets, veins, lobes, branches, or appendages) arising at multiple points along a common axis. For example, once-divided leaf blades having leaflets arranged on both sides of a rachis are pinnately compound leaves. Many palms (notably the feather palms) and most cycads and grevilleas have pinnately divided leaves. Most species of ferns have pinnate or more highly divided fronds, and in ferns, the leaflets or segments are typically referred to as "pinnae" (singular "pinna"). Plants with pinnate leaves are sometimes colloquially called "feather-leaved". Most of the following definitions are from Jackson's Glossary of Botanical Terms: Depth of divisions pinnatifid and pinnatipartite: leaves with pinnate lobes that are not discrete, remaining sufficiently connected to each other that they are not separate leaflets. pinnatisect: cut all the way to the midrib or other axis, but with the bases of the pinnae not contracted to form discrete leaflets. pinnate-pinnatifid: pinnate, with the pinnae being pinnatifid. Number of divisions paripinnate: pinnately compound leaves in which leaflets are borne in pairs along the rachis without a single terminal leaflet; also called "even-pinnate". imparipinnate: pinnately compound leaves in which there is a lone terminal leaflet rather than a terminal pair of leaflets; also called "odd-pinnate". Iteration of divisions bipinnate: pinnately compound leaves in which the leaflets are themselves pinnately compound; also called "twice-pinnate". tripinnate: pinnately compound leaves in which the leaflets are themselves bipinnate; also called "thrice-pinnate". tetrapinnate: pinnately compound leaves in which the leaflets are themselves tripinnate. unipinnate: solitary compound leaf with a row of leaflets arranged along each side of a common rachis. The term pinnula (plural: pinnulae) is the Latin diminutive of pinna (plural: pinnae); either as such or in the Anglicised form pinnule, it is differently defined by various authorities. Some apply it to the leaflets of a pinna, especially the leaflets of bipinnate or tripinnate leaves. Others also or alternatively apply it to second- or third-order divisions of a bipinnate or tripinnate leaf. It is the ultimate free division (or leaflet) of a compound leaf, or a pinnate subdivision of a multipinnate leaf. Animals In animals, pinnation occurs in various organisms and structures, including: Muscles, which can be unipinnate or bipinnate. The fish Platax pinnatus, known as the pinnate spadefish or pinnate batfish.
Geomorphology Pinnation occurs in certain waterway systems in which all major tributary streams enter the main channels by flowing in one direction at an oblique angle. References Plant morphology Leaves
Pinnation
Biology
832
20,747,917
https://en.wikipedia.org/wiki/Tricyclic%20antidepressant%20overdose
Tricyclic antidepressant overdose is poisoning caused by excessive medication of the tricyclic antidepressant (TCA) type. Symptoms may include elevated body temperature, blurred vision, dilated pupils, sleepiness, confusion, seizures, rapid heart rate, and cardiac arrest. If symptoms have not occurred within six hours of exposure, they are unlikely to occur. TCA overdose may occur by accident or purposefully in an attempt to cause death. The toxic dose depends on the specific TCA. Most are non-toxic at less than 5 mg/kg except for desipramine, nortriptyline, and trimipramine, which are generally non-toxic at less than 2.5 mg/kg. In small children one or two pills can be fatal. An electrocardiogram (ECG) should be included in the assessment when there is concern of an overdose. In overdose, activated charcoal is often recommended. People should not be forced to vomit. In those who have a wide QRS complex, sodium bicarbonate is recommended. If seizures occur, benzodiazepines should be given. In those with low blood pressure, intravenous fluids and norepinephrine may be used. The use of intravenous lipid emulsion may also be tried. In the early 2000s, TCAs were one of the most common causes of poisoning. In the United States in 2004 there were more than 12,000 cases. In the United Kingdom they resulted in about 270 deaths a year. An overdose from TCAs was first reported in 1959. Signs and symptoms The peripheral autonomic nervous system, central nervous system and the heart are the main systems that are affected following overdose. Initial or mild symptoms typically develop within 2 hours and include tachycardia, drowsiness, a dry mouth, nausea and vomiting, urinary retention, confusion, agitation, and headache. More severe complications include hypotension, cardiac rhythm disturbances, hallucinations, and seizures. Electrocardiogram (ECG) abnormalities are frequent and a wide variety of cardiac dysrhythmias can occur, the most common being sinus tachycardia and intraventricular conduction delay resulting in prolongation of the QRS complex and the PR/QT intervals. Seizures, cardiac dysrhythmias, and apnea are the most important life-threatening complications. Cause Tricyclics have a narrow therapeutic index, i.e., the therapeutic dose is close to the toxic dose. Factors that increase the risk of toxicity include advancing age, cardiac status, and concomitant use of other drugs. However, serum drug levels are not useful for evaluating risk of arrhythmia or seizure in tricyclic overdose. Pathophysiology Most of the toxic effects of TCAs are caused by four major pharmacological effects. TCAs have anticholinergic effects, cause excessive blockade of norepinephrine reuptake at the preganglionic synapse, direct alpha adrenergic blockade, and importantly they block sodium membrane channels with slowing of membrane depolarization, thus having quinidine-like effects on the myocardium. Diagnosis A specific blood test to verify toxicity is not typically available. An electrocardiogram (ECG) should be included in the assessment when there is concern of an overdose. Treatment People with symptoms are usually monitored in an intensive care unit for a minimum of 12 hours, with close attention paid to maintenance of the airways, along with monitoring of blood pressure, arterial pH, and continuous ECG monitoring. Supportive therapy is given if necessary, including respiratory assistance and maintenance of body temperature.
Once a person has had a normal ECG for more than 24 hours they are generally medically clear. Decontamination Initial treatment of an acute overdose includes gastric decontamination. This is achieved by giving activated charcoal, which adsorbs the drug in the gastrointestinal tract, either by mouth or via a nasogastric tube. Activated charcoal is most useful if given within 1 to 2 hours of ingestion. Other decontamination methods such as stomach pumps, ipecac-induced emesis, or whole bowel irrigation are generally not recommended in TCA poisoning. Stomach pumps may be considered within an hour of ingestion but evidence to support the practice is poor. Medication Administration of intravenous sodium bicarbonate as an antidote has been shown to be an effective treatment for resolving the metabolic acidosis and cardiovascular complications of TCA poisoning. If sodium bicarbonate therapy fails to improve cardiac symptoms, conventional antidysrhythmic drugs or magnesium can be used to reverse any cardiac abnormalities. However, no benefit has been shown from Class 1 antiarrhythmic drugs; it appears they worsen the sodium channel blockade, slow conduction velocity, and depress contractility, and should be avoided in TCA poisoning. Low blood pressure is initially treated with fluids along with bicarbonate to reverse metabolic acidosis (if present); if the blood pressure remains low despite fluids, then further measures such as the administration of epinephrine, norepinephrine, vasopressin, or dopamine can be used to increase blood pressure. Another potentially severe symptom is seizures. Seizures often resolve without treatment, but administration of a benzodiazepine such as lorazepam or another anticonvulsant may be required for persistent muscular overactivity. Barbiturate anticonvulsants are not recommended due to increased risk of respiratory depression. There is no role for physostigmine in the treatment of tricyclic toxicity as it may increase cardiac toxicity and cause seizures. In cases of severe TCA overdose that are refractory to conventional therapy, intravenous lipid emulsion therapy has been reported to improve signs and symptoms in moribund patients with toxicities involving several types of lipophilic substances; therefore, lipids may have a role in treating severe cases of refractory TCA overdose. Dialysis Tricyclic antidepressants are highly protein bound and have a large volume of distribution; therefore removal of these compounds from the blood with hemodialysis, hemoperfusion or other techniques is unlikely to be of any significant benefit. Epidemiology Studies in the 1990s in Australia and the United Kingdom showed that between 8 and 12% of drug overdoses followed TCA ingestion. TCAs may be involved in up to 33% of all fatal poisonings, second only to analgesics. Another study reported 95% of deaths from antidepressants in England and Wales between 1993 and 1997 were associated with tricyclic antidepressants, particularly dothiepin and amitriptyline. It was determined there were 5.3 deaths per 100,000 prescriptions. Sodium channel blockers such as Dilantin (phenytoin) should not be used in the treatment of TCA overdose, as the additional sodium channel blockade will further prolong the QT interval. References External links Poisoning by drugs, medicaments and biological substances
Tricyclic antidepressant overdose
Environmental_science
1,471
61,275,333
https://en.wikipedia.org/wiki/Spatiality%20%28architecture%29
Spatiality is a term used in architecture for characteristics that, looked at from a certain aspect, define the quality of a space. In comparison to the term spaciousness, which covers the formal, dimensional determination of size (depth, width, or height), spatiality is a higher-category term. It includes not only formal but other qualities of space, such as definition, openness, visibility, expressivity, etc. Spatiality in architecture is achieved in different ways, by using one of the design principles. In a general sense, the principles are classified into: a) those that use space organisation to determine or redefine boundaries, and b) those that use visual treatment to create a perceptive experience of its extension. In the physical sense, the principles can refer to: space volume (open plan, flexibility, enfilade, and circular connection), space surface (overlapping and gradation of planes), and materialisation of elements or surfaces. Spatiality can be defined by the user of the space or the designer of the space. In attempts to find shelter to sleep in, homeless or unhoused individuals seek out public spaces as temporary homes. Spatiality is used within an architectural context to prevent unhoused individuals from residing in these spaces. It often targets people who use or rely on public space more than others, such as people who are homeless and youth; this practice is referred to as hostile architecture. Hostile architecture is defined as "an urban design strategy in which public spaces and structures are used to prevent certain activities or restrict certain people from using those spaces." The formal qualities of design and architecture aid in defining the quality of a space as unfit for a specific subculture of individuals. "The form, function, and meaning of public space differ across numerous cultural traditions and is influenced by varying degrees of social and political control." Objects like spikes, metal teeth, metal bars, large bolts, dividers, and steep ledges are physical features that often go unnoticed but serve the purpose of keeping individuals out of a physical space, preventing them from resting in that space and deterring its use. An example is youth skateboarding, where forms of exclusionary architecture prevent specific behaviours such as riding on the sides of railings or buildings. Another example of exclusionary architecture is reduced public seating in train stations or at bus stops, which restricts not only the intended groups, like unhoused individuals, but also those who require extra accessibility aids. Using formal qualities to define spatiality can control the behaviours of the intended targets but can also unintentionally inhibit the behaviours of others. Designed spaces that include hostile aspects and restrict or exclude individuals (exclusionary architecture) create a designed experience for those who reside within them. By preventing individuals from sleeping on a park bench, the experience of that space is changed for the individual looking for a place to sleep, and it is also changed for those who would have coexisted with them in that public space. Social spatialization (architecture) Social spatialization (social spatiality) is a concept that can be defined within the realm of architecture as a mode of being, a manner of seeing, and a way of doing.
Alongside the term spatiality, social spatialization focuses on cultural affordance, which is defined in psychology as the possibility of an action or event taking place in relation to the user and an object within a particular context. It is important to consider spatiality in a social and cultural context within a range of different professional disciplines and practices, including but not limited to philosophy, the natural sciences, art, and poetry. An example of social spatiality applied within an architectural context is the Kabyle house. The Kabyle, a Berber people of North Africa, were a community studied by Bourdieu. The Kabyle house divides the different areas of the home through formal qualities of design. The relationship between the house and the outside world, and the affordances offered, is connected to the gendered roles of the culture. Women take responsibility for the dark interior of the home and look after the water, cooking, and manure. They stay within the house and the garden, while the men spend their time outside the home working in agriculture. The layout of the homes reflects these roles: the lower part of the home is reserved for the humans, and tasks involving procreation, death, and forms of intimacy must be completed in this section of the home. Bourdieu outlines that the social identities and actions of the Berber people are mirrored by their designed space. Each role is reflected in the hierarchy of the space, the objects that reside within it, and the light or direction of the sun it follows. References Architectural terminology
Spatiality (architecture)
Engineering
950
8,290,097
https://en.wikipedia.org/wiki/Putt-Putt%20Saves%20the%20Zoo
Putt-Putt Saves the Zoo is a 1995 video game and the third of seven adventure games in the Putt-Putt series of games developed and published by Humongous Entertainment. The animation style also changed with this game to hand-drawn animation, in contrast to the pixel art graphics of the previous two games, following the studio's jump from DOS to Windows with Freddi Fish and the Case of the Missing Kelp Seeds the previous year. The game was reissued on April 19, 1999. In November 2011, the game became the first Humongous Entertainment game to be rereleased for iOS and Google Play. The iOS version, developed by Nimbus Games Inc. and released by Atari, was later discontinued. A Nintendo Switch version was released in February 2022, followed by the PlayStation 4 version on the PlayStation Store in November the same year. Plot Putt-Putt is excited for the grand opening of the Cartown Zoo. He visits Mr. Baldini's grocery store, where Mr. Baldini tasks him with delivering zoo chow to zookeeper Outback Al. Upon arriving at the zoo, Putt-Putt learns that six baby animals have gone missing: Baby Jambo the African bush elephant calf; Masai the reticulated giraffe calf; Kenya the lion cub; Zanzibar the hippopotamus calf; Sammy the seal; and Little Skeeter the boa constrictor. Putt-Putt volunteers to search for the animals, which Outback Al agrees to as he starts repairs on the zoo. After finding and rescuing all six baby animals, Putt-Putt notifies Outback Al of his success, and Al excitedly thanks him. At the zoo's opening ceremony, Outback Al gives Putt-Putt a Junior Zookeeper award for his help and allows him the honor of cutting the ribbon. The zoo is then opened to everyone as they all enter to explore, ending the game. Gameplay The game mechanics are almost the same as in its predecessors, including the glove box inventory, horn, radio, and accelerator, though one addition is an ignition key shown on the bottom left side of Putt-Putt's dashboard, which allows the player to quit the game. A few mini games are also playable. Unlike in the other games, Putt-Putt can acquire a camera so the player can take pictures of the animals and other characters and print them out. Reception The combined sales of Putt-Putt Saves the Zoo, Putt-Putt Joins the Parade and Putt-Putt Goes to the Moon surpassed one million units by June 1997. During the year 2001 alone, Putt-Putt Saves the Zoo sold 100,972 retail units in North America, according to PC Data. References External links Putt-Putt Saves the Zoo at Humongous Entertainment 1995 video games Android (operating system) games Classic Mac OS games IOS games Linux games MacOS games Nimbus Games games Nintendo Switch games PlayStation 4 games Saves the Zoo SCUMM games ScummVM-supported games Single-player video games Tommo games UFO Interactive Games games Video games developed in the United States Video games scored by George Sanger Video games set in zoos Windows games Works about animals
Putt-Putt Saves the Zoo
Biology
663
21,843,217
https://en.wikipedia.org/wiki/Web%20of%20Things
The Web of Things (WoT) refers to a set of standards developed by the World Wide Web Consortium (W3C) to ensure interoperability across different Internet of things platforms and application domains. Building blocks The four WoT building blocks provide a way to implement systems that conform with the WoT architecture. Thing Description (TD) The key component of the WoT building blocks is the WoT Thing Description. A Thing Description defines a virtual or physical device (Thing) and provides an information model based on a semantic vocabulary, with serialization in JSON. The Thing Description can be considered the main entry point for a Thing, similar to the index page of a website. Thing Descriptions promote interoperability by offering both human- and machine-readable (and understandable) metadata about a Thing, such as its title, ID, descriptions, and more. Additionally, a Thing Description outlines all available actions, events, and properties of a Thing, as well as the security mechanisms required to access them. Thing Descriptions are highly flexible to ensure interoperability and, in addition to standard functionality, define a mechanism for extending functionality through the Context Extension Framework. Binding Templates IoT uses a wide variety of protocols to interact with Things, as no single protocol is universally suitable. One of the main challenges for the Web of Things is managing the diversity of protocols and interaction mechanisms. This challenge is addressed through Binding Templates. WoT Binding Templates provide a collection of communication metadata blueprints that support various IoT solutions. A Binding Template is created once and can then be reused in any Thing Description. Scripting API The WoT Scripting API is an optional building block of the Web of Things. It simplifies IoT application development by providing an ECMAScript-based application API, similar to how web browsers offer an API for web applications. By providing a universal application runtime system, the Scripting API addresses the issue of heterogeneity in IoT systems. It also enables the creation of reusable scripts to implement device logic, significantly enhancing the portability of application modules. The current reference implementation of the WoT Scripting API is an open-source project called node-wot, developed by the Eclipse Thingweb project. Security and Privacy Guidelines In the WoT architecture, security is relevant to all aspects of the system. The specification of each WoT building block includes several considerations regarding the security and privacy of that particular block. Security is supported by specific features, such as public metadata in Thing Descriptions and the separation of concerns in the design of the Scripting API. Additionally, there is a specification called the WoT Security and Privacy Guidelines, which addresses a variety of security and privacy-related concerns. History Connecting objects to the Web arguably started around the year 2000. In 2002, a peer-reviewed paper presented the Cooltown project. This project explored the use of URLs to address, and HTTP to interact with, physical objects such as public screens or printers. Following this early work, the growing interest in and implementation of the Internet of things started to raise some questions about the application layer of the IoT. While most of the work in the IoT space focused on network protocols, there was a need to think about the convergence of data from IoT devices.
Researchers and practitioners started envisioning the IoT as a system where data from various devices could be consumed by Web applications to create new use cases. The idea of the Web as an application layer for the IoT started to emerge in 2007. Several researchers began working in parallel on these concepts. Among them, Dominique Guinard and Vlad Trifa started the Web of Things online community and published the first WoT manifesto, advocating the use of Web standards (REST, lightweight semantics, etc.) to build the application layer of the IoT. The manifesto was published together with an implementation on the Sun SPOT platform. At the same time, Dave Raggett from W3C began discussing the Web of Things at various W3C and IoT events. Erik Wilde published "Putting Things to REST," a self-published concept paper looking at utilizing REST to sense and control physical objects. Early mentions of the Web of Things as a term also appeared in a paper by Vlad Stirbu et al. From 2007 onwards, Trifa, Guinard, Wilde, and other researchers tried publishing their ideas and concepts at peer-reviewed conferences, but their work was rejected by the Wireless Sensor Networks research community on the basis that Internet and Web protocols were too verbose and limited in the context of real-world devices; that community preferred to focus on optimization of memory and computation usage, wireless bandwidth, or very short duty cycles. However, a number of researchers in the WSN community began considering these ideas more seriously. In early 2009, several respected WSN researchers, such as David Culler, Jonathan Hui, Adam Dunkels, and Yazar Dogan, evaluated the use of Internet and Web protocols for low-power sensor nodes and showed the feasibility of the approach. Following this, Guinard and Trifa presented their end-to-end implementation of the concepts in a peer-reviewed publication accepted at the World Wide Web conference in 2009. Building on this implementation and uniting efforts, a RESTful architecture for things was proposed in 2010 by Guinard, Trifa, and Wilde. Guinard, Trifa, and Wilde ran the first International Workshop on the Web of Things in 2010, and it has been an annual occurrence since. These workshops morphed into a growing community of researchers and practitioners who could discuss the latest findings and ideas on the Web of Things. In 2011, two of the first PhD theses on the Web of Things were presented at ETH Zurich: Building Blocks for a Participatory Web of Things: Devices, Infrastructures, and Programming Frameworks from Vlad Trifa and A Web of Things Application Architecture – Integrating the Real-World into the Web from Dominique Guinard. Building on this work, Simon Mayer emphasized the importance of REST's uniform interface, and in particular the HATEOAS principle, in his PhD thesis. In 2014, the W3C showed an increased interest in the Web of Things and organized the W3C Workshop on the Web of Things, led by Dave Raggett, together with Siemens and the COMPOSE European project. This workshop led to the creation of the Web of Things Interest Group at W3C and the submission of the Web Thing Model. The same year, Siemens announced the creation of a research group dedicated to the Web of Things. In October 2014, Google also announced interest in these ideas by launching the Physical Web GitHub project. The Web of Things Interest Group identified the required set of standards needed for the Web of Things in February 2017.
The Working Group started working on four deliverables called WoT Architecture, WoT Thing Description, WoT Scripting API, and WoT Binding Templates. See also Internet of Things (IoT) Smart device Connected device Home automation devices Smart grid Matter Further reading External links References Cloud standards Web 2.0 neologisms World Wide Web Internet of things World Wide Web Consortium
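To make the Thing Description building block described above concrete, the sketch below builds a minimal TD for a hypothetical lamp as a Python dictionary and serializes it to JSON. The field names (title, properties, actions, events, forms, href) follow the W3C TD vocabulary, but the device, its URLs, its identifier, and the security choice are invented for illustration.

# A minimal, hypothetical Thing Description for an imaginary lamp.
import json

thing_description = {
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "title": "MyLamp",                      # the human-readable entry point
    "id": "urn:example:lamp-0001",          # invented identifier
    "securityDefinitions": {"basic_sc": {"scheme": "basic", "in": "header"}},
    "security": ["basic_sc"],
    "properties": {
        "status": {
            "type": "string",
            # A form tells a Consumer how to read the property; this is
            # where Binding Templates supply protocol-specific detail.
            "forms": [{"href": "https://lamp.example.com/status"}],
        }
    },
    "actions": {
        "toggle": {"forms": [{"href": "https://lamp.example.com/toggle"}]}
    },
    "events": {
        "overheating": {
            "data": {"type": "string"},
            "forms": [{"href": "https://lamp.example.com/oh"}],
        }
    },
}

print(json.dumps(thing_description, indent=2))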
Web of Things
Technology
1,459
3,633,138
https://en.wikipedia.org/wiki/Source%20transformation
Source transformation is the process of simplifying a circuit solution, especially with mixed sources, by transforming voltage sources into current sources, and vice versa, using Thévenin's theorem and Norton's theorem respectively. Process Performing a source transformation consists of using Ohm's law to take an existing voltage source in series with a resistance, and replacing it with a current source in parallel with the same resistance, or vice versa. The transformed sources are considered identical and can be substituted for one another in a circuit. Source transformations are not limited to resistive circuits. They can be performed on a circuit involving capacitors and inductors as well, by expressing circuit elements as impedances and sources in the frequency domain. In general, the concept of source transformation is an application of Thévenin's theorem to a current source, or Norton's theorem to a voltage source. However, this means that source transformation is bound by the same conditions as Thévenin's theorem and Norton's theorem; namely that the load behaves linearly, and does not contain dependent voltage or current sources. Source transformations are used to exploit the equivalence of a real current source and a real voltage source, such as a battery. Application of Thévenin's theorem and Norton's theorem gives the quantities associated with the equivalence. Specifically, given a real current source, which is an ideal current source I in parallel with an impedance Z, applying a source transformation gives an equivalent real voltage source, which is an ideal voltage source in series with the impedance. The impedance retains its value and the new voltage source has a value equal to the ideal current source's value times the impedance, according to Ohm's law: V = IZ. In the same way, an ideal voltage source V in series with an impedance Z can be transformed into an ideal current source in parallel with the same impedance, where the new ideal current source has value I = V/Z. Example calculation Source transformations are easy to compute using Ohm's law. If there is a voltage source in series with an impedance, it is possible to find the value of the equivalent current source in parallel with the impedance by dividing the value of the voltage source by the value of the impedance. The converse also holds: if a current source in parallel with an impedance is present, multiplying the value of the current source with the value of the impedance provides the equivalent voltage source in series with the impedance. A visual example of a source transformation can be seen in Figure 1. A brief proof of the theorem The transformation can be derived from the uniqueness theorem. In the present context, it implies that a black box with two terminals must have a unique well-defined relation between its voltage and current. It is readily verified that the transformation above gives the same V-I curve: both circuits present the same open-circuit voltage (V = IZ) and the same short-circuit current (I = V/Z), and therefore the transformation is valid. See also Ohm's Law Thévenin's theorem Current source Voltage source Electrical impedance References Electrical engineering Electronic engineering Electrical circuits Electronic circuits Electronic design Circuit theorems
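The equivalence is easy to check numerically. The short Python sketch below uses invented values (a 10 V source in series with Z = 3 + 4j ohms, against the transformed current source I = V/Z in parallel with the same Z) and verifies that both deliver the same terminal voltage into several linear loads.

# Numeric check of a source transformation with assumed component values.
def series_voltage_source(v_s, z_src, z_load):
    """Terminal voltage of an ideal voltage source in series with z_src."""
    i = v_s / (z_src + z_load)               # Ohm's law around the loop
    return i * z_load

def parallel_current_source(i_s, z_src, z_load):
    """Terminal voltage of an ideal current source in parallel with z_src."""
    z_pair = z_src * z_load / (z_src + z_load)   # z_src in parallel with z_load
    return i_s * z_pair

V_S = 10.0
Z = 3 + 4j
I_S = V_S / Z                                # transformed source value

for z_load in (1.0, 50.0, 2 - 5j, 100 + 30j):
    v1 = series_voltage_source(V_S, Z, z_load)
    v2 = parallel_current_source(I_S, Z, z_load)
    assert abs(v1 - v2) < 1e-9               # identical for every linear load
    print(f"load {z_load}: V = {v1:.4f}")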
Source transformation
Physics,Technology,Engineering
618
61,878,439
https://en.wikipedia.org/wiki/Ramesh%20Jasti
Ramesh Jasti is a professor of organic chemistry at the University of Oregon. He was the first person to synthesize the elusive cycloparaphenylene in 2008, during postdoctoral work in the laboratory of Professor Carolyn Bertozzi. He started his laboratory at Boston University, where he was the recipient of the NSF CAREER award. His early lab repeatedly broke the record for the synthesis of the smallest cycloparaphenylene known. In 2014, he moved his laboratory to the University of Oregon, where he expanded his focus to apply the molecules he discovered in the areas of organic materials, mechanically interlocked molecules, and biology. He is the Director of the Materials Science Institute at the University of Oregon. Awards and honors Fred Morrison Scholarship (1994) Thieme Journal Award (2012) American Chemical Society Young Academic Investigator Award (2013) Boston University Ignition Award (2013) Boston University Materials Science and Engineering Innovation Award (2013) National Science Foundation CAREER Award (2013) Alfred P. Sloan Fellowship (2013) Boston University Innovation Professorship (2013) Camille Dreyfus Teacher-Scholar Award (2014) References Organic chemists University of Oregon faculty Living people Year of birth missing (living people)
Ramesh Jasti
Chemistry
244
73,308,571
https://en.wikipedia.org/wiki/Sodium%20benzenesulfonate
Sodium benzenesulfonate is an organic compound with the formula C6H5SO3Na. It is a white, water-soluble solid. It is produced by the neutralization of benzenesulfonic acid with sodium hydroxide. It is also a common ingredient in some detergents. The compound typically crystallizes from water as the monohydrate. Heating this salt in strong base results in desulfonation, giving, after acid workup, phenol. This reaction was at one time the principal route to phenol. References Sulfonyl groups Leaving groups Sulfonates
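The desulfonation described above can be summarized with a balanced equation. The following is a sketch assuming sodium hydroxide as the strong base (the classic alkali-fusion conditions); acidic workup then converts the sodium phenoxide to phenol:

C6H5SO3Na + 2 NaOH → C6H5ONa + Na2SO3 + H2O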
Sodium benzenesulfonate
Chemistry
118
3,734,039
https://en.wikipedia.org/wiki/ANOVA%20gauge%20R%26R
ANOVA gauge repeatability and reproducibility is a measurement systems analysis technique that uses an analysis of variance (ANOVA) random effects model to assess a measurement system. The evaluation of a measurement system is not limited to gauges but applies to all types of measuring instruments, test methods, and other measurement systems. Purpose ANOVA Gage R&R measures the amount of variability induced in measurements by the measurement system itself, and compares it to the total variability observed to determine the viability of the measurement system. There are several factors affecting a measurement system, including: Measuring instruments, the gage or instrument itself and all mounting blocks, supports, fixtures, load cells, etc. The machine's ease of use, sloppiness among mating parts, and "zero" blocks are examples of sources of variation in the measurement system. In systems making electrical measurements, sources of variation include electrical noise and analog-to-digital converter resolution. Operators (people), the ability and/or discipline of a person to follow the written or verbal instructions. Test methods, how the devices are set up, the test fixtures, how the data is recorded, etc. Specification, the measurement is reported against a specification or a reference value. The range or the engineering tolerance does not affect the measurement, but is an important factor in evaluating the viability of the measurement system. Parts or specimens (what is being measured), some items are easier to measure than others. A measurement system may be good for measuring steel block length but not for measuring rubber pieces, for example. There are two important aspects of a Gage R&R: Repeatability: the variation in measurements taken by a single person or instrument on the same or replicate item and under the same conditions. Reproducibility: the variation induced when different operators, instruments, or laboratories measure the same or replicated specimen. It is important to understand the difference between accuracy and precision to understand the purpose of Gage R&R. Gage R&R addresses only the precision of a measurement system. It is common to examine the P/T ratio, which is the ratio of the precision of a measurement system to the (total) tolerance of the manufacturing process of which it is a part. If the P/T ratio is low, the impact on product quality of variation due to the measurement system is small. If the P/T ratio is larger, it means the measurement system is "eating up" a large fraction of the tolerance, in that out-of-tolerance parts may be measured as acceptable by the measurement system. Generally, a P/T ratio less than 0.1 indicates that the measurement system can reliably determine whether any given part meets the tolerance specification. A P/T ratio greater than 0.3 suggests that unacceptable parts will be measured as acceptable (or vice versa) by the measurement system, making the system inappropriate for the process for which it is being used. ANOVA Gage R&R is an important tool within the Six Sigma methodology, and it is also a requirement for a production part approval process (PPAP) documentation package. Examples of Gage R&R studies can be found in part 1 of Czitrom & Spagon. There is no universal criterion for minimum sample requirements for the GRR matrix; it is a matter for the quality engineer to assess risks depending on how critical the measurement is and how costly it is.
The "10×2×2" (ten parts, two operators, two repetitions) is an acceptable sampling for some studies, although it has very few degrees of freedom for the operator component. Several methods of determining the sample size and degree of replication are used. Calculating variance components In one common crossed study, 10 parts might each be measured two times by two different operators. The ANOVA then allows the individual sources of variation in the measurement data to be identified; the part-to-part variation, the repeatability of the measurements, the variation due to different operators; and the variation due to part by operator interaction. The calculation of variance components and standard deviations using ANOVA is equivalent to calculating variance and standard deviation for a single variable but it enables multiple sources of variation to be individually quantified which are simultaneously influencing a single data set. When calculating the variance for a data set the sum of the squared differences between each measurement and the mean is calculated and then divided by the degrees of freedom (n – 1). The sums of the squared differences are calculated for measurements of the same part, by the same operator, etc., as given by the below equations for the part (SSPart), the operator (SSOp), repeatability (SSRep) and total variation (SSTotal). where nOp is the number of operators, nRep is the number of replicate measurements of each part by each operator, is the number of parts, x̄ is the grand mean, x̄i.. is the mean for each part, x̄·j· is the mean for each operator, x<sub>ijk'''</sub> is each observation and x̄ij is the mean for each factor level. When following the spreadsheet method of calculation the n terms are not explicitly required since each squared difference is automatically repeated across the rows for the number of measurements meeting each condition. The sum of the squared differences for part by operator interaction (SS''Part · Op) is the residual variation given by See also Measurement uncertainty Random effects model References Six Sigma Measurement Analysis of variance
ANOVA gauge R&R
Physics,Mathematics
1,117
4,603,713
https://en.wikipedia.org/wiki/Parallel%20Ocean%20Program
The Parallel Ocean Program (POP) is a three-dimensional ocean circulation model designed primarily for studying the ocean climate system. The model is developed and supported primarily by researchers at Los Alamos National Laboratory (LANL). External links Physical oceanography
Parallel Ocean Program
Physics
44
59,545
https://en.wikipedia.org/wiki/Sagrada%20Fam%C3%ADlia
The Basílica i Temple Expiatori de la Sagrada Família, otherwise known as Sagrada Família, is a church under construction in the Eixample district of Barcelona, Catalonia, Spain. It is the largest unfinished Catholic church in the world. Designed by the Catalan architect Antoni Gaudí (1852–1926), in 2005 his work on Sagrada Família was added to an existing (1984) UNESCO World Heritage Site, "Works of Antoni Gaudí". On 7 November 2010, Pope Benedict XVI consecrated the church and proclaimed it a minor basilica. On 19 March 1882, construction of Sagrada Família began under architect Francisco de Paula del Villar. In 1883, when Villar resigned, Gaudí took over as chief architect, transforming the project with his architectural and engineering style, combining Gothic and curvilinear Art Nouveau forms. Gaudí devoted the remainder of his life to the project, and he is buried in the church's crypt. At the time of his death in 1926, less than a quarter of the project was complete. Relying solely on private donations, Sagrada Família's construction progressed slowly and was interrupted by the Spanish Civil War. In July 1936, anarchists from the FAI set fire to the crypt and broke their way into the workshop, partially destroying Gaudí's original plans. In 1939, Francesc de Paula Quintana took over site management, carrying the work forward using material saved from Gaudí's workshop and material reconstructed from published plans and photographs. Construction resumed, with intermittent progress, in the 1950s. Advancements in technologies such as computer-aided design and computerised numerical control (CNC) have since enabled faster progress, and construction passed the midpoint in 2010. In 2014, it was anticipated that the building would be completed by 2026, the centenary of Gaudí's death, but this schedule was threatened by work slowdowns caused by the COVID-19 pandemic in 2020–2021. In March 2024, an updated forecast reconfirmed a likely completion of the building in 2026, though the announcement stated that work on sculptures, decorative details and a controversial proposed stairway leading to what will eventually be the main entrance is expected to continue until 2034. Describing Sagrada Família, art critic Rainer Zerbst said "it is probably impossible to find a church building anything like it in the entire history of art", and Paul Goldberger describes it as "the most extraordinary personal interpretation of Gothic architecture since the Middle Ages". Though sometimes described as a cathedral, the basilica is not the cathedral church of the Archdiocese of Barcelona; that title belongs to the Cathedral of the Holy Cross and Saint Eulalia (Barcelona Cathedral). History Origins Sagrada Família was inspired by a bookseller, Josep Maria Bocabella, founder of Asociación Espiritual de Devotos de San José (Spiritual Association of Devotees of St. Joseph). After a visit to the Vatican in 1872, Bocabella returned from Italy with the intention of building a church inspired by the basilica at Loreto. The apse crypt of the church, funded by donations, was begun on 19 March 1882, on the festival of St. Joseph, to the design of the architect Francisco de Paula del Villar, whose plan was for a Gothic revival church of a standard form. The apse crypt was completed before Villar's resignation on 18 March 1883, when Antoni Gaudí assumed responsibility for its design, which he changed radically. Gaudí began work on the church in 1883 but was not appointed Architect Director until 1884.
20th century On the subject of the extremely long construction period, Gaudí is said to have remarked: "My client is not in a hurry." When Gaudí died in 1926, the basilica was between 15 and 25 percent complete. After Gaudí's death, work continued under the direction of his main disciple Domènec Sugrañes i Gras until interrupted by the Spanish Civil War in 1936. Parts of the unfinished basilica and Gaudí's models and workshop were destroyed during the war. The present design is based on reconstructed versions of the plans that were burned in a fire as well as on modern adaptations. Since 1940, the architects Francesc Quintana, Isidre Puig Boada, Lluís Bonet i Garí and Francesc Cardoner have carried on the work. The illumination was designed by Carles Buïgas. The director until 2012 was the son of Lluís Bonet, Jordi Bonet i Armengol. He began introducing computers into the design and construction process in the 1980s. 21st century The central nave vaulting was completed in 2000 and the main tasks since then have been the construction of the transept vaults and apse. In 2002, the Sagrada Família Schools building was relocated from the eastern corner of the site to the southern corner, and began housing an exhibition. The school was originally designed by Gaudí in 1909 for the children of the construction workers. Work then concentrated on the crossing and supporting structure for the main steeple of Jesus Christ as well as the southern enclosure of the central nave, which will become the Glory façade. Computer-aided design technology has allowed stone to be shaped off-site by a CNC milling machine, whereas in the 20th century the stone was carved by hand. In 2008, some renowned Catalan architects advocated halting construction to respect Gaudí's original designs, which, although they were not exhaustive and were partially destroyed, have been partially reconstructed in recent years. Since 2013, AVE high-speed trains have passed near Sagrada Família through a tunnel that runs beneath the centre of Barcelona. The tunnel's construction, which began on 26 March 2010, was controversial. The Ministry of Public Works of Spain claimed the project posed no risk to the church. Sagrada Família engineers and architects disagreed, saying there was no guarantee that the tunnel would not affect the stability of the building. The Board of the Sagrada Família and the neighborhood association (AVE by the Coast) led a campaign against this route for the AVE, without success. In October 2010, the tunnel boring machine reached the church, passing underground beneath the location of the building's principal façade. Service through the tunnel was inaugurated on 8 January 2013. Track in the tunnel makes use of a system by Edilon Sedra in which the rails are embedded in an elastic material to dampen vibrations. The main nave was covered and an organ installed in mid-2010, allowing the still-unfinished building to be used for liturgies. The church was consecrated by Pope Benedict XVI on 7 November 2010 in front of a congregation of 6,500 people. A further 50,000 people followed the consecration Mass from outside the basilica, where more than 100 bishops and 300 priests were on hand to distribute Holy Communion. In 2012, Barcelona-born Jordi Faulí i Oller took over as architect of the project. Mark Burry of New Zealand serves as Executive Architect and Researcher. Sculptures by J. Busquets, Etsuro Sotoo and Josep Maria Subirachs decorate the fantastical façades.
Chief architect Jordi Faulí announced in October 2015 that construction was 70 percent complete and had entered its final phase of raising six immense steeples. The steeples and most of the church's structure were planned to be completed by 2026, the centennial of Gaudí's death; as of a 2017 estimate, decorative elements should be complete by 2030 or 2032. Visitor entrance fees of €15 to €20 finance the annual construction budget of €25 million. Completion of the structure will use post-tensioned stone. Starting on 9 July 2017, an international mass has been celebrated at the basilica every Sunday and holy day of obligation, at 9 a.m., and is open to the public (until the church is full). Occasionally, Mass is celebrated at other times, where attendance requires an invitation. When masses are scheduled, instructions to obtain an invitation are posted on the basilica's website. In addition, visitors may pray in the chapel of the Blessed Sacrament and Penitence. The stone initially used in its construction came from the Montserrat mountain, but it became clear that as quarrying there went deeper, the stone was increasingly fragile and an alternative source had to be found. Since 2018 stone of the type needed to complete the construction has been sourced from the Withnell Quarry in Brinscall, near Chorley, England. Incidents On 19 April 2011, an arsonist started a small fire in the sacristy which forced the evacuation of tourists and construction workers. The sacristy was damaged, and the fire took 45 minutes to contain. On 11 March 2020, during the COVID-19 pandemic in Spain, construction temporarily stopped and the basilica was closed. This was the first time the construction had been halted since the Spanish Civil War. The Gaudí House Museum in Park Güell was also closed. The basilica reopened, initially to key workers, on 4 July 2020. Local residents have concerns about plans to build a large stairway leading up to the basilica's main entrance, unfinished at the time, which could require the demolition of three city blocks: the homes of 1,000 people as well as some businesses. Design The style of Sagrada Família is variously likened to Spanish Late Gothic, Catalan Modernism or Art Nouveau. While the style falls within the Art Nouveau period, Nikolaus Pevsner points out that, along with Charles Rennie Mackintosh in Glasgow, Scotland, Gaudí carried the Art Nouveau style far beyond its usual application as a surface decoration. Plan While never a cathedral, Sagrada Família was planned from the outset to be a large building, comparable in size to a cathedral. Its ground-plan has obvious links to earlier Spanish cathedrals such as Burgos Cathedral, León Cathedral and Seville Cathedral. In common with Catalan and many other European Gothic cathedrals, Sagrada Família is short in comparison to its width, and has a great complexity of parts, which include double aisles, an ambulatory with a chevet of seven apsidal chapels, a multitude of steeples and three portals, each widely different in structure as well as ornament. Where it is common for cathedrals in Spain to be surrounded by numerous chapels and ecclesiastical buildings, the layout of Sagrada Família has an unusual feature: a covered passage or cloister which forms a rectangle enclosing the church and passing through the narthex of each of its three portals.
With this peculiarity aside, the plan, influenced by Villar's crypt, barely hints at the complexity of Gaudí's design or its deviations from traditional church architecture. There are no exact right angles to be seen inside or outside the church, and few straight lines in the design. Spires Gaudí's original design calls for a total of eighteen spires, representing in ascending order of height the Twelve Apostles, the four Evangelists, the Virgin Mary, and, tallest of all, Jesus Christ. Thirteen spires have been completed, corresponding to four apostles at the Nativity façade, four apostles at the Passion façade, the four Evangelists, and the Virgin Mary. The Evangelists' spires are surmounted by sculptures of their traditional symbols: a winged bull (Saint Luke), a winged man (Saint Matthew), an eagle (Saint John), and a winged lion (Saint Mark). The central spire of Jesus Christ is to be surmounted by a giant cross; its total height (172.5 m) will be less than that of Montjuïc hill in Barcelona, as Gaudí believed that his creation should not surpass God's. The lower spires are surmounted by communion hosts with sheaves of wheat and chalices with bunches of grapes, representing the Eucharist. Plans call for tubular bells to be placed within the spires, driven by the force of the wind, and driving sound down into the interior of the church. Gaudí performed acoustic studies to achieve the appropriate acoustic results inside the temple. However, only one bell is currently in place. The completion of the Jesus Christ spire will make Sagrada Família the tallest church building in the world, taller than the current record-holder, Ulm Minster, which is 161.5 m at its highest point. On 29 November 2021, a twelve-pointed illuminated crystal star was installed on one of the main towers of the basilica dedicated to the Virgin Mary. The construction makes use of post-tensioned stone panels, which are pre-assembled before incorporation into the main structure; using this method has significant structural and operational benefits. Façades The church is designed to have three grand façades: the Nativity façade to the east, the Passion façade to the west, and the Glory façade to the south (incomplete). The Nativity façade was built before work was interrupted in 1935 and bears the most direct Gaudí influence. The Passion façade was built according to the design that Gaudí created in 1917. The construction began in 1954, and the steeples, built over the elliptical plan, were finished in 1976. It is especially striking for its spare, gaunt, tormented characters, including emaciated figures of Christ being scourged at the pillar; and Christ on the Cross. These controversial designs are the work of Josep Maria Subirachs. The Glory façade, on which construction began in 2002, will be the largest and most monumental of the three and will represent one's ascension to God. It will depict various scenes, such as Hell and Purgatory, and will include elements such as the seven deadly sins and the seven heavenly virtues. Nativity Façade Constructed between 1893 and 1936, the Nativity façade was the first façade to be completed. Dedicated to the birth of Jesus, it is decorated with scenes reminiscent of elements of life. Characteristic of Gaudí's naturalistic style, the sculptures are ornately arranged and decorated with scenes and images from nature, each a symbol in its own manner.
For instance, the three porticos are separated by two large columns, and at the base of each lies a turtle or a tortoise (one to represent the land and the other the sea; each a symbol of time as something set in stone and unchangeable). In contrast to the figures of turtles and their symbolism, two chameleons can be found at either side of the façade and are symbolic of change. The façade faces the rising sun to the northeast, a symbol for the birth of Christ. It is divided into three porticos, each of which represents a theological virtue (Hope, Faith and Charity). The Tree of Life rises above the door of Jesus in the portico of Charity. Four steeples complete the façade and are each dedicated to a saint (Matthias, Barnabas, Jude the Apostle, and Simon the Zealot). Originally, Gaudí intended for this façade to be polychromed, for each archivolt to be painted with a wide array of colours. He wanted every statue and figure to be painted. In this way the figures of humans would appear as much alive as the figures of plants and animals. Gaudí chose this façade to embody the structure and decoration of the whole church. He was well aware that he would not finish the church and that he would need to set an artistic and architectural example for others to follow. He also chose for this façade to be the first on which to begin construction and for it to be, in his opinion, the most attractive and accessible to the public. He believed that if he had begun construction with the Passion Façade, one that would be hard and bare (as if made of bones), before the Nativity Façade, people would have withdrawn at the sight of it. Some of the statues were destroyed in 1936 during the Spanish Civil War, and subsequently were reconstructed by the Japanese artist Etsuro Sotoo. Passion Façade In contrast to the highly decorated Nativity Façade, the Passion Façade is austere, plain and simple, with ample bare stone, and is carved with harsh straight lines to resemble the bones of a skeleton. Dedicated to the Passion of Christ, the suffering of Jesus during his crucifixion, the façade was intended to portray the sins of man. Construction began in 1954, following the drawings and instructions left by Gaudí for future architects and sculptors. The steeples were completed in 1976, and in 1987 a team of sculptors, headed by Josep Maria Subirachs, began work sculpting the various scenes and details of the façade. They aimed to give a rigid, angular form to provoke a dramatic effect. Gaudí intended for this façade to strike fear into the onlooker. He wanted to "break" arcs and "cut" columns, and to use the effect of chiaroscuro (dark angular shadows contrasted by harsh rigid light) to further show the severity and brutality of Christ's sacrifice. Facing the setting sun, indicative and symbolic of the death of Christ, the Passion Façade is supported by six large and inclined columns, designed to resemble strained muscles. Above there is a pyramidal pediment, made up of eighteen bone-shaped columns, which culminate in a large cross with a crown of thorns. Each of the four steeples is dedicated to an apostle (James, Thomas, Philip, and Bartholomew) and, like the Nativity Façade, there are three porticos, each representing the theological virtues, though in a much different light. The scenes sculpted into the façade may be divided into three levels, which ascend in an S form and reproduce the Stations of the Cross (Via Crucis of Christ).
The lowest level depicts scenes from Jesus' last night before the crucifixion, including the Last Supper, Kiss of Judas, Ecce homo, and the Sanhedrin trial of Jesus. The middle level portrays the Calvary, or Golgotha, of Christ, and includes The Three Marys, Saint Longinus, Saint Veronica, and a hollow-face illusion of Christ on the Veil of Veronica. In the third and final level the Death, Burial and the Resurrection of Christ can be seen. A bronze figure situated on a bridge creating a link between the steeples of Saint Bartholomew and Saint Thomas represents the Ascension of Jesus. The façade contains a magic square based on the magic square in the 1514 print Melencolia I. The square is rotated and one number in each row and column is reduced by one, so the rows and columns add up to 33 instead of the standard 34 for a 4×4 magic square. Glory Façade The largest and most striking of the façades will be the Glory Façade, on which construction began in 2002. It will be the principal façade and will offer access to the central nave. Dedicated to the Celestial Glory of Jesus, it represents the road to God: Death, Final Judgment, and Glory, while Hell is left for those who deviate from God's will. Aware that he would not live long enough to see this façade completed, Gaudí made a model which was demolished in 1936, whose original fragments were used as the basis for the development of the design for the façade. The completion of this façade may require the partial demolition of the block with buildings across the Carrer de Mallorca. The decision was expected to be proposed in May 2023. To reach the Glory Portico, the large staircase will lead over the underground passage built over Carrer de Mallorca with the decoration representing Hell and vice. To make way for this, Carrer de Mallorca will have to go underground. It will be decorated with demons, idols, false gods, heresy and schisms, etc. Purgatory and death will also be depicted, the latter using tombs along the ground. The portico will have seven large columns dedicated to gifts of the Holy Spirit. At the base of the columns there will be representations of the seven deadly sins, and at the top, the seven heavenly virtues. Gifts: wisdom, understanding, counsel, fortitude, knowledge, piety and fear of the Lord. Sins: greed, lust, pride, gluttony, sloth, wrath, envy. Virtues: kindness, diligence, patience, charity, temperance, humility, chastity. This façade will have five doors corresponding to the five naves of the temple, with the central one having a triple entrance, giving the Glory Façade a total of seven doors representing the sacraments: Baptism Confirmation Eucharist Penance Holy orders Marriage Anointing of the sick In September 2008, the doors of the Glory façade, by Subirachs, were installed. These central doors bear the text of the Our Father prayer in Catalan in high relief, accompanied by the words "Our Father" and "Give us this day our daily bread" inscribed in fifty different languages. The handles of the door are the letters "A" and "G," forming the initials of Antoni Gaudí, within the phrase ("lead us not into temptation"). Interior The church plan is that of a Latin cross with five aisles. The central nave vaults reach 45 metres while the side nave vaults reach 30 metres. The transept has three aisles. The columns are on a 7.5 metre grid.
However, the columns of the apse, resting on del Villar's foundation, do not adhere to the grid, requiring a section of columns of the ambulatory to transition to the grid, thus creating a horseshoe pattern to the layout of those columns. The crossing rests on the four central columns of porphyry supporting a great hyperboloid surrounded by two rings of twelve hyperboloids (currently under construction). The central vault reaches 60 metres. The apse is capped by a hyperboloid vault reaching 75 metres. Gaudí intended that a visitor standing at the main entrance be able to see the vaults of the nave, crossing, and apse, thus the graduated increase in vault loft. There are gaps in the floor of the apse, providing a view into the crypt below. The columns of the interior are a unique Gaudí design. Besides branching to support their load, their ever-changing surfaces are the result of the intersection of various geometric forms. The simplest example is that of a square base evolving into an octagon as the column rises, then a sixteen-sided form, and eventually to a circle. This effect is the result of a three-dimensional intersection of helicoidal columns (for example a square cross-section column twisting clockwise and a similar one twisting counterclockwise). Essentially none of the interior surfaces are flat; the ornamentation is comprehensive and rich, consisting in large part of abstract shapes which combine smooth curves and jagged points. Even detail-level work such as the iron railings for balconies and stairways is full of curvaceous elaboration. Organ In 2010 an organ was installed in the chancel by the Blancafort Orgueners de Montserrat organ builders. The instrument has 26 stops (1,492 pipes) on two manuals and a pedalboard. To overcome the unique acoustical challenges posed by the church's architecture and vast size, several additional organs will be installed at various points within the building. These instruments will be playable separately (from their own individual consoles) and simultaneously (from a single mobile console), yielding an organ of some 8,000 pipes when completed. Geometric details The steeples on the Nativity façade are crowned with geometrically shaped tops that are reminiscent of Cubism (they were finished around 1930), and the intricate decoration is contemporary to the style of Art Nouveau, but Gaudí's unique style drew primarily from nature, not other artists or architects, and resists categorization. Gaudí used hyperboloid structures in later designs for Sagrada Família (more obviously after 1914). However, there are a few places on the nativity façade (a design not equated with Gaudí's ruled-surface design) where the hyperboloid appears. For example, all around the scene with the pelican, there are numerous examples (including the basket held by one of the figures). There is a hyperboloid adding structural stability to the cypress tree (by connecting it to the bridge). Finally, the "bishop's mitre" spires are capped with hyperboloid structures. In his later designs, ruled surfaces are prominent in the nave's vaults and windows and the surfaces of the Passion Façade. Symbolism Themes throughout the decoration include words from the liturgy.
The steeples are decorated with words such as "Hosanna", "Excelsis", and "Sanctus"; the great doors of the Passion façade reproduce excerpts of the Passion of Jesus from the New Testament in various languages, mainly Catalan; and the Glory façade is to be decorated with the words from the Apostles' Creed, while its main door reproduces the entire Lord's Prayer in Catalan, surrounded by multiple variations of "Give us this day our daily bread" in other languages. The three entrances symbolize the three virtues: Faith, Hope and Love. Each of them is also dedicated to a part of Christ's life. The Nativity Façade is dedicated to his birth; it also has a cypress tree which symbolizes the tree of life. The Glory Façade is dedicated to Christ's period of glory. The Passion Façade is symbolic of Christ's suffering. The apse steeple bears Latin text of the Hail Mary prayer. Areas of the sanctuary will be designated to represent various concepts, such as saints, virtues and sins, and secular concepts such as regions, presumably with decoration to match. Burials Josep Maria Bocabella Antoni Gaudí Appraisal The art historian Nikolaus Pevsner, writing in the 1960s, referred to Gaudí's buildings as growing "like sugar loaves and anthills" and described the ornamenting of buildings with shards of broken pottery as possibly "bad taste" but handled with vitality and "ruthless audacity". The building's design itself has been polarizing. Assessments by Gaudí's fellow architects were generally positive; Louis Sullivan greatly admired it, describing Sagrada Família as the "greatest piece of creative architecture in the last twenty-five years. It is spirit symbolised in stone!" Walter Gropius praised Sagrada Família, describing the building's walls as "a marvel of technical perfection". Time magazine called it "sensual, spiritual, whimsical, exuberant". However, author and critic George Orwell, mistakenly referring to it as a cathedral, called it "one of the most hideous buildings in the world". Author James A. Michener called it "one of the strangest-looking serious buildings in the world" and British historian Gerald Brenan stated about the building: "Not even in the European architecture of the period can one discover anything so vulgar or pretentious." The building's distinctive silhouette has nevertheless become symbolic of Barcelona itself, drawing an estimated 3 million visitors annually. World Heritage status In 1984, UNESCO granted World Heritage Site designations to three Gaudí buildings in Barcelona, though not yet including Sagrada Família, under the collective designation "Works of Antoni Gaudí", No 320 bis (items 320-001 to 320-003), testifying "to Gaudí's exceptional creative contribution to the development of architecture and building technology", "having represented el Modernisme of Catalonia" and "anticipated and influenced many of the forms and techniques that were relevant to the development of modern construction in the 20th century". In 2005, UNESCO extended the inscription for Works of Antoni Gaudí, No 320 bis, to include four additional buildings in Barcelona, with item 320-005 listed as two specific sections of Sagrada Família: the Crypt and the Nativity façade. Visitor access Visitors can access the Nave, Crypt, Museum, Shop, and the Passion and Nativity steeples. Entrance to either of the steeples requires a reservation and advance purchase of a ticket. Access is possible only by lift (elevator) and a short walk up the remainder of the steeples to the bridge between the steeples.
Descent is via a very narrow spiral staircase of over 300 steps. There is a posted caution for those with medical conditions. As of June 2017, online ticket purchase has been available. As of August 2010, there had been a service whereby visitors could buy an entry code either at Servicaixa ATM kiosks (part of CaixaBank) or online. International masses The Archdiocese of Barcelona holds an international mass at the Basilica of the Sagrada Família every Sunday and on holy days of obligation, at 9 a.m. There is no charge for attending mass, but capacity is limited. Visitors are asked to dress appropriately and behave respectfully. Funding and building permit Construction on Sagrada Família is not supported by any government or official church sources. Private patrons funded the initial stages. Money from tickets purchased by tourists is now used to pay for the work, and private donations are accepted. The construction budget for 2009 was €18 million. In October 2018, Sagrada Família trustees agreed to pay city authorities €36 million for a building permit, after 136 years of unlicensed construction. Most of the funds would be directed to improve the access between the church and the Barcelona Metro. The permit was issued by the city on 7 June 2019. See also List of Catholic basilicas List of Gaudí buildings List of Modernista buildings in Barcelona Sagrada Família (Barcelona Metro) Notes References Bibliography Further reading External links Works of Antoni Gaudí UNESCO Collection on Google Arts and Culture Gaudí, Sagrada Família (video), Smarthistory Antoni Gaudí buildings Art Nouveau church buildings in Spain Articles containing video clips Basilica churches in Spain Bien de Interés Cultural landmarks in the Province of Barcelona Buildings and structures under construction in Spain Eixample Hyperboloid structures Mathematics and art Modernisme architecture in Barcelona Roman Catholic churches in Barcelona Skyscrapers in Barcelona Tourist attractions in Barcelona Visionary environments Votive churches World Heritage Sites in Catalonia
Sagrada Família
Technology
6,213
42,725,214
https://en.wikipedia.org/wiki/Shapley-Ames%20Catalog
The Shapley-Ames Catalog of Bright Galaxies is a catalog of galaxies published in 1932 that includes observations of 1249 objects brighter than 13.2 magnitude. It was compiled by Harlow Shapley and Adelaide Ames. They identified 1189 objects based on the New General Catalogue and 48 based on the Index Catalogue. With the help of new photographic recordings, which also contained comparison stars of known brightness, the brightnesses of many galaxies were measured and recorded, down to a limiting magnitude of 13.2. It was the first compilation of bright galaxies in the northern and southern sky. The catalog contains the position, brightness, size, and Hubble classification of the galaxies. For the next 60 years, astronomers referred to this catalog as a primary source for information about redshifts and galaxy types. History Shapley and Ames began their study of all nearby galaxies in 1926. An important finding from this research was that the galaxies were not evenly distributed (they violated the isotropy assumption), in that the northern hemisphere contained more galaxies than the southern hemisphere. It also found that the Virgo cloud extended further than previously believed. From this data, Shapley and Ames created a new hierarchy of clusters called a supercluster, which is a cluster of galaxy clusters, and called this Virgo cloud in the northern hemisphere a "Local Supercluster". In 1981, Allan Sandage and Gustav Tammann published an updated version known as the Revised Shapley-Ames Catalog (RSA). The original list of galaxies was maintained, with the exception of three objects which were no longer considered galaxies. The information on the 1246 individual galaxies has been updated and substantially extended. References Astronomical catalogues Astronomical catalogues of galaxies Astronomical catalogues of galaxy clusters
Shapley-Ames Catalog
Astronomy
355
58,267
https://en.wikipedia.org/wiki/Conceptual%20schema
A conceptual schema or conceptual data model is a high-level description of informational needs underlying the design of a database. It typically includes only the core concepts and the main relationships among them. This is a high-level model with insufficient detail to build a complete, functional database. It describes the structure of the whole database for a group of users. The conceptual model is also known as the data model that can be used to describe the conceptual schema when a database system is implemented. It hides the internal details of physical storage and targets the description of entities, datatypes, relationships and constraints. Overview A conceptual schema is a map of concepts and their relationships used for databases. This describes the semantics of an organization and represents a series of assertions about its nature. Specifically, it describes the things of significance to an organization (entity classes), about which it is inclined to collect information, and their characteristics (attributes) and the associations between pairs of those things of significance (relationships). Because a conceptual schema represents the semantics of an organization, and not a database design, it may exist on various levels of abstraction. The original ANSI four-schema architecture began with the set of external schemata that each represents one person's view of the world around him or her. These are consolidated into a single conceptual schema that is the superset of all of those external views. A data model can be as concrete as each person's perspective, but this tends to make it inflexible. If that person's world changes, the model must change. Conceptual data models take a more abstract perspective, identifying the fundamental things, of which the things an individual deals with are just examples. The model does allow for what is called inheritance in object-oriented terms. The set of instances of an entity class may be subdivided into entity classes in their own right. Thus, each instance of a sub-type entity class is also an instance of the entity class's super-type. Each instance of the super-type entity class, then, is also an instance of one of the sub-type entity classes. Super-type/sub-type relationships may be exclusive or not. A methodology may require that each instance of a super-type may only be an instance of one sub-type. Similarly, a super-type/sub-type relationship may be exhaustive or not. It is exhaustive if the methodology requires that each instance of a super-type must be an instance of a sub-type. A sub-type named "Other" is often necessary. Example relationships Each PERSON may be the vendor in one or more ORDERS. Each ORDER must be from one and only one PERSON. PERSON is a sub-type of PARTY. (Meaning that every instance of PERSON is also an instance of PARTY.) Each EMPLOYEE may have a supervisor who is also an EMPLOYEE. (These example relationships are sketched in code below.) Data structure diagram A data structure diagram (DSD) is a data model or diagram used to describe conceptual data models by providing graphical notations which document entities and their relationships, and the constraints that bind them. See also References Further reading Perez, Sandra K., & Anthony K. Sarris, eds. (1995) Technical Report for IRDS Conceptual Schema, Part 1: Conceptual Schema for IRDS, Part 2: Modeling Language Analysis, X3/TR-14:1995, American National Standards Institute, New York, NY. Halpin T, Morgan T (2008) Information Modeling and Relational Databases, 2nd edn., San Francisco, CA: Morgan Kaufmann.
External links A different point of view, as described by the agile community Data modeling Conceptual modelling
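A minimal sketch of the example relationships above in Python; the class and field names are illustrative assumptions, not notation from the article:

```python
# Super-type/sub-type and one-to-many relationships from the example:
# PERSON is a sub-type of PARTY, each ORDER is from exactly one PERSON,
# and the EMPLOYEE supervisor rule is a self-reference on Person.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Party:                     # super-type entity class
    name: str

@dataclass
class Person(Party):             # sub-type: every Person is also a Party
    supervisor: Optional["Person"] = None  # a supervisor is also a Person

@dataclass
class Order:
    vendor: Person               # each Order must be from one and only one Person

alice = Person("Alice")
bob = Person("Bob", supervisor=alice)                 # Bob reports to Alice
orders = [Order(vendor=alice), Order(vendor=alice)]   # one Person, many Orders
assert isinstance(bob, Party)    # a sub-type instance is a super-type instance
```

Note that this sketch encodes a non-exhaustive sub-typing: a Party need not be a Person, matching the exhaustive/non-exhaustive distinction drawn in the overview.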
Conceptual schema
Engineering
743
2,008,686
https://en.wikipedia.org/wiki/Poikilotherm
A poikilotherm is an animal (Greek poikilos – 'various', 'spotted', and therme – 'heat') whose internal temperature varies considerably. Poikilotherms have to survive and adapt to environmental stress. One of the most important stressors is change in the temperature of the outer environment, which can lead to alterations in membrane lipid order and can cause protein unfolding and denaturation at elevated temperatures. Poikilotherm is the opposite of homeotherm – an animal which maintains thermal homeostasis. In principle, the term could be applied to any organism, but it is generally only applied to vertebrate animals. Usually the fluctuations are a consequence of variation in the ambient environmental temperature. Many terrestrial ectotherms are poikilothermic. However, some ectotherms seek constant-temperature environments to the point that they are able to maintain a constant internal temperature, and are considered actual or practical homeotherms. It is this distinction that often makes the term poikilotherm more useful than the vernacular "cold-blooded", which is sometimes used to refer to ectotherms more generally. Poikilothermic animals include types of vertebrate animals, specifically some fish, amphibians, and reptiles, as well as many invertebrate animals. The naked mole-rat and sloths are some of the rare mammals which are poikilothermic. Etymology The term derives from Greek poikilos, meaning "varied," ultimately from a root meaning "dappled" or "painted," and thermos, meaning "heat". Physiology Poikilothermic animals must be able to function over a wider range of temperatures than homeotherms. The speed of most chemical reactions varies with temperature, and in order to function, poikilotherms may have four to ten enzyme systems that operate at different temperatures for an important chemical reaction. As a result, poikilotherms often have larger, more complex genomes than homeotherms in the same ecological niche. Frogs are a notable example of this effect, though their complex development is also an important factor in their large genome. Because their metabolism is variable and generally below that of homeothermic animals, sustained high-energy activities like powered flight in large animals or maintaining a large brain are generally beyond poikilothermic animals. For larger animals, with their high movement costs, the metabolism of poikilotherms favors strategies such as sit-and-wait hunting over chasing prey. As they do not use their metabolisms to heat or cool themselves, total energy requirement over time is low. For the same body weight, poikilotherms need only 5 to 10% of the energy of homeotherms. Adaptations in poikilotherms Some adaptations are behavioral. Lizards and snakes bask in the sun in the early morning and late evening, and seek shelter around noon. The eggs of the yellow-faced bumblebee are unable to regulate heat. A behavioral adaptation to combat this is incubation, where to maintain the internal temperatures of eggs, the queen and her workers will incubate the brood almost constantly, by warming their abdomens and touching them to the eggs. The bumblebee generates heat by shivering flight muscles even though it is not flying. Termite mounds are usually oriented in a north–south direction so that they absorb as much heat as possible around dawn and dusk and minimise heat absorption around noon.
Tuna are able to warm their entire bodies through a heat exchange mechanism called the rete mirabile, which helps keep heat inside the body, and minimises the loss of heat through the gills. They also have their swimming muscles near the center of their bodies instead of near the surface, which minimises heat loss. Gigantothermy means growing to large size in order to reduce heat loss, such as in sea turtles and ice-age megafauna. Body volume increases proportionally faster than does body surface, with increasing size; and less body surface area per unit body volume tends to minimise heat loss. Camels, although they are homeotherms, thermoregulate using a method termed "temperature cycling" to conserve energy. In hot deserts, they allow their body temperature to rise during the day and fall during the night, adjusting their body temperature to cycle over approximately 6 °C. Ecology It is comparatively easy for a poikilotherm to accumulate enough energy to reproduce. Poikilotherms at the same trophic level often have much shorter generations than homeotherms: weeks rather than years. Such applies even to animals with similar ecological roles such as cats and snakes. This difference in energy requirement also means that a given food source can support a greater density of poikilothermic animals than homeothermic animals. This is reflected in the predator-prey ratio which is usually higher in poikilothermic fauna compared to homeothermic ones. However, when homeotherms and poikilotherms have similar niches, and compete, the homeotherm can often drive poikilothermic competitors to extinction, because homeotherms can gather food for a greater fraction of each day and in more effective, specialized ways (e.g. chimpanzees actively seeking out and collecting army ants with sticks versus the typical poikilotherm sit-and-wait strategy). In medicine In medicine, loss of normal thermoregulation is referred to as poikilothermia. This can be seen in compartment syndrome and with use of sedative-hypnotics like barbiturates, ethanol, and chloral hydrate. REM sleep is considered a poikilothermic state in humans. Poikilothermia is one of the signs of acute limb ischemia. Notes External links Animal physiology Thermoregulation
Poikilotherm
Biology
1,219
15,560,529
https://en.wikipedia.org/wiki/Relative%20biological%20effectiveness
In radiobiology, the relative biological effectiveness (often abbreviated as RBE) is the ratio of biological effectiveness of one type of ionizing radiation relative to another, given the same amount of absorbed energy. The RBE is an empirical value that varies depending on the type of ionizing radiation, the energies involved, the biological effects being considered such as cell death, and the oxygen tension of the tissues or so-called oxygen effect. Application The absorbed dose can be a poor indicator of the biological effect of radiation, as the biological effect can depend on many other factors, including the type of radiation, energy, and type of tissue. The relative biological effectiveness can help give a better measure of the biological effect of radiation. The relative biological effectiveness for radiation of type R on a tissue is defined as the ratio RBE = DX / DR, where DX is a reference absorbed dose of radiation of a standard type X, and DR is the absorbed dose of radiation of type R that causes the same amount of biological damage. Both doses are quantified by the amount of energy absorbed in the cells. Different types of radiation have different biological effectiveness mainly because they transfer their energy to the tissue in different ways. Photons and beta particles have a low linear energy transfer (LET) coefficient, meaning that they ionize atoms in the tissue that are spaced by several hundred nanometers (several tenths of a micrometer) apart, along their path. In contrast, the much more massive alpha particles and neutrons leave a denser trail of ionized atoms in their wake, spaced about one tenth of a nanometer apart (i.e., less than one-thousandth of the typical distance between ionizations for photons and beta particles). RBEs can be used for either cancer/hereditary risks (stochastic) or for harmful tissue reactions (deterministic) effects. Tissues have different RBEs depending on the type of effect. For high LET radiation (i.e., alphas and neutrons), the RBEs for deterministic effects tend to be lower than those for stochastic effects. The concept of RBE is relevant in medicine, such as in radiology and radiotherapy, and to the evaluation of risks and consequences of radioactive contamination in various contexts, such as nuclear power plant operation, nuclear fuel disposal and reprocessing, nuclear weapons, uranium mining, and ionizing radiation safety. Relation to radiation weighting factors (WR) For the purposes of computing the equivalent dose to an organ or tissue, the International Commission on Radiological Protection (ICRP) has defined a standard set of radiation weighting factors (WR), formerly termed the quality factor (Q). The radiation weighting factors convert absorbed dose (measured in SI units of grays or non-SI rads) into formal biological equivalent dose for radiation exposure (measured in units of sieverts or rem). However, ICRP states: "The quantities equivalent dose and effective dose should not be used to quantify higher radiation doses or to make decisions on the need for any treatment related to tissue reactions [i.e., deterministic effects]. For such purposes, doses should be evaluated in terms of absorbed dose (in gray, Gy), and where high-LET radiations (e.g., neutrons or alpha particles) are involved, an absorbed dose, weighted with an appropriate RBE, should be used." Radiation weighting factors are largely based on the RBE of radiation for stochastic health risks.
However, for simplicity, the radiation weighting factors are not dependent on the type of tissue, and the values are conservatively chosen to be greater than the bulk of experimental values observed for the most sensitive cell types, with respect to external (external to the cell) sources. Radiation weighting factors have not been developed for internal sources of heavy ions, such as a recoil nucleus. The ICRP 2007 standard values for relative effectiveness are 1 for photons, electrons, and muons; 2 for protons and charged pions; 20 for alpha particles, fission fragments, and heavy ions; and, for neutrons, a continuous function of neutron energy ranging from 2.5 to about 20. The higher the radiation weighting factor for a type of radiation, the more damaging it is, and this is incorporated into the calculation to convert from gray to sievert units. Radiation weighting factors that go from physical energy to biological effect must not be confused with tissue weighting factors. The tissue weighting factors are used to convert an equivalent dose to a given tissue in the body, to an effective dose, a number that provides an estimation of total danger to the whole organism, as a result of the radiation dose to part of the body. Experimental methods Typically the evaluation of relative biological effectiveness is done on various types of living cells grown in culture medium, including prokaryotic cells such as bacteria, simple eukaryotic cells such as single-celled plants, and advanced eukaryotic cells derived from organisms such as rats. By irradiating batches of cells with different doses and types of radiation, a relationship between dose and the fraction of cells that die can be found, and then used to find the doses corresponding to some common survival rate. The ratio of these doses is the RBE of R. Instead of death, the endpoint might be the fraction of cells that become unable to undergo mitotic division (or, for bacteria, binary fission), thus being effectively sterilized, even if they can still carry out other cellular functions. The types R of ionizing radiation most considered in RBE evaluation are X-rays and gamma radiation (both consisting of photons), alpha radiations (helium-4 nuclei), beta radiation (electrons and positrons), neutron radiation, and heavy nuclei, including the fragments of nuclear fission. For some kinds of radiation, the RBE is strongly dependent on the energy of the individual particles. Dependence on tissue type Early on it was found that X-rays, gamma rays, and beta radiation were essentially equivalent for all cell types. Therefore, the standard radiation type X is generally an X-ray beam with 250 keV photons or cobalt-60 gamma rays. As a result, the relative biological effectiveness of beta and photon radiation is essentially 1. For other radiation types, the RBE is not a well-defined physical quantity, since it varies somewhat with the type of tissue and with the precise place of absorption within the cell. Thus, for example, the RBE for alpha radiation is 2–3 when measured on bacteria, 4–6 for simple eukaryotic cells, and 6–8 for higher eukaryotic cells. According to one source it may be much higher (6500 with X rays as the reference) on oocytes. The RBE of neutrons is 4–6 for bacteria, 8–12 for simple eukaryotic cells, and 12–16 for higher eukaryotic cells. Dependence on source location In the early experiments, the sources of radiation were all external to the cells that were irradiated. However, since alpha particles cannot traverse the outermost dead layer of human skin, they can do significant damage only if they come from the decay of atoms inside the body.
Since the range of an alpha particle is typically about the diameter of a single eukaryotic cell, the precise location of the emitting atom in the tissue cells becomes significant. For this reason, it has been suggested that the health impact of contamination by alpha emitters might have been substantially underestimated. Measurements of RBE with external sources also neglect the ionization caused by the recoil of the parent-nucleus due to the alpha decay. While the recoil of the parent-nucleus of the decaying atom typically carries only about 2% of the energy of the alpha-particle that is emitted by the decaying atom, its range is extremely short (about 2–3 angstroms), due to its high electric charge and high mass. The parent nucleus is required to recoil, upon emission of an alpha particle, with a discrete kinetic energy due to conservation of momentum. Thus, all of the ionization energy from the recoil-nucleus is deposited in an extremely small volume near its original location, typically in the cell nucleus on the chromosomes, which have an affinity for heavy metals. The bulk of studies, using sources that are external to the cell, have yielded RBEs between 10 and 20. Since most of the ionization damage from the travel of the alpha particle is deposited in the cytoplasm, whereas from the travel of the recoil-nucleus it is on the DNA itself, it is likely that greater damage is caused by the recoil nucleus than by the alpha particle itself. History In 1931, Failla and Henshaw reported on the determination of the relative biological effectiveness (RBE) of x rays and γ rays. This appears to be the first use of the term 'RBE'. The authors noted that RBE was dependent on the experimental system being studied. Somewhat later, it was pointed out by Zirkle et al. (1952) that the biological effectiveness depends on the spatial distribution of the energy imparted and the density of ionisations per unit path length of the ionising particles. Zirkle et al. coined the term 'linear energy transfer (LET)' to be used in radiobiology for the stopping power, i.e. the energy loss per unit path length of a charged particle. The concept was introduced in the 1950s, at a time when the deployment of nuclear weapons and nuclear reactors spurred research on the biological effects of artificial radioactivity. It had been noticed that those effects depended both on the type and energy spectrum of the radiation, and on the kind of living tissue. The first systematic experiments to determine the RBE were conducted in that decade. See also Background radiation Linear energy transfer (LET) Theory of dual radiation action References External links Relative Biological Effectiveness in Ion Beam Therapy Radiation health effects
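A minimal sketch of the defining ratio in Python, using a linear-quadratic survival model with illustrative parameters; the alpha/beta values and the function name are assumptions, not data from the article:

```python
# RBE = D_X / D_R: the reference dose divided by the test-radiation dose
# producing the same biological endpoint (here, 10% cell survival under
# an assumed linear-quadratic model S = exp(-(a*D + b*D**2))).
import math

def dose_for_survival(surv, a, b):
    """Solve a*D + b*D**2 = -ln(surv) for D (positive root)."""
    e = -math.log(surv)
    if b == 0:
        return e / a
    return (-a + math.sqrt(a * a + 4 * b * e)) / (2 * b)

d_ref = dose_for_survival(0.10, a=0.15, b=0.05)  # e.g., 250 keV X-rays
d_test = dose_for_survival(0.10, a=0.90, b=0.0)  # a densely ionizing radiation
print(f"RBE at 10% survival: {d_ref / d_test:.2f}")
```

Because the two survival curves have different shapes, this ratio changes with the chosen survival level, which is one reason the RBE is not a single well-defined physical quantity.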
Relative biological effectiveness
https://en.wikipedia.org/wiki/Zetron
Zetron, Inc. is an American company that manufactures integrated communications systems. Founded in 1980 and formerly a subsidiary of JVCKenwood, it was purchased by Codan Communications in May 2021. Its products are used by emergency and public safety agencies (fire, ambulance, and police), utilities, and transportation companies.

Products
Radio dispatch systems
E-9-1-1 call-taking systems
Fire station alerting and dispatch systems
CAD and GIS mapping systems - computer-aided dispatch (CAD) and geographic information system (GIS) systems
Specialized integrated communications
Paging systems

History
1980: Zetron is founded by Milt Zeutschel and John Reece.
1981: Introduces paging products for volunteer fire departments (CE-1000).
1987: Introduces the Series 4000, the first user-programmable, microprocessor-based radio dispatch console.
1990: Opens a European office.
1996: Introduces the first integrated radio dispatch and 9-1-1 call-taking solution.
2000: Acquires the ACOM business unit from Plessey Asia Pacific Pty Ltd.
2001: The Advanced Communication System (Acom) is introduced, expanding to large-scale dispatch center capability.
2002: The ACOM-based VSCS is fully commissioned as the communications component of The Australian Advanced Air Traffic System.
2004: An IP-based interface is delivered between Acom and M/A-Com's OpenSky.
2007: Acquired by the Japanese company Kenwood Corporation (now JVCKenwood).
2007: Zetron becomes the first vendor to support the P25 CSSI console interface.
2008: A new IP-based fire station alerting system begins shipping.
2017: Establishes a strategic partnership with Harris Corporation for emergency dispatch console systems.
2020: Celebrates 40 years in business.
2021: Purchased by Codan Communications, the communications subsidiary of the Codan Group.

Notes

Radio technology
Amateur radio companies
Companies based in Redmond, Washington
Electronics companies established in 1980
Electronics companies of the United States
2007 mergers and acquisitions
2021 mergers and acquisitions
American subsidiaries of foreign companies
Zetron
https://en.wikipedia.org/wiki/Gibson%20assembly
Gibson assembly is a molecular cloning method that allows for the joining of multiple DNA fragments in a single, isothermal reaction. It is named after its creator, Daniel G. Gibson, who is the chief technology officer and co-founder of the synthetic biology company Telesis Bio. The technology is more efficient than manual plasmid genetic recombination methods, but remains expensive as it is still under patent.

Process

The entire Gibson assembly reaction requires few components and only minor manipulations. The method can simultaneously combine up to 15 DNA fragments based on sequence identity; it requires that the DNA fragments share an overlap of roughly 20–40 base pairs with each adjacent fragment (a toy overlap check is sketched after the list of advantages below). These DNA fragments are mixed with a cocktail of three enzymes, along with other buffer components. The three required enzyme activities are: exonuclease, DNA polymerase, and DNA ligase.

The exonuclease chews back DNA from the 5' end, thus not inhibiting polymerase activity and allowing the reaction to occur in one single process. The resulting single-stranded regions on adjacent DNA fragments can anneal. The DNA polymerase incorporates nucleotides to fill in any gaps. The DNA ligase covalently joins the DNA of adjacent segments, thereby removing any nicks in the DNA. The resulting product is different DNA fragments joined into one. Either linear or closed circular molecules can be assembled.

There are two approaches to Gibson assembly: a one-step method and a two-step method. Both methods can be performed in a single reaction vessel. The one-step method allows for the assembly of up to 5 different fragments in a single isothermal process: the fragments and a master mix of enzymes are combined, and the entire mixture is incubated at 50 °C for up to one hour. For the creation of more complex constructs with up to 15 fragments, or for constructs incorporating fragments from 100 bp to 10 kb, the two-step approach is used. The two-step reaction requires two separate additions of master mix: one for the exonuclease and annealing step, the other for the DNA polymerase and ligation steps. In the two-step approach, different incubation temperatures are used to carry out the assembly process.

Advantages

The Gibson DNA assembly method has many advantages compared to conventional restriction enzyme/ligation cloning of recombinant DNA. For example:
No restriction digest of the DNA fragments after PCR is necessary. (The backbone vector, however, can be digested, or synthesized by PCR.)
It is cheaper and faster than conventional cloning schemes, as it requires fewer steps and fewer reagents.
No restriction-site scar remains between two DNA fragments, although the single-stranded overlap regions are slightly susceptible to mutation when the DNA polymerase closes the gaps.
Up to 5 DNA fragments can be combined simultaneously in a single-tube reaction using a one-step master mix of enzymes.
Up to 15 fragments can be combined simultaneously using a two-step reaction, in which the exonuclease and annealing steps are done first, followed by the addition of the DNA polymerase and ligase in a second step.
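As a concrete illustration of the overlap requirement described under Process above, the following Python sketch checks whether consecutive fragments share a sufficiently long exact suffix-prefix overlap before assembly is attempted. It is illustrative only: the fragment sequences are invented, the helper names are hypothetical, and the 20 to 40 bp window simply mirrors the overlap range quoted above; it is not part of any published Gibson assembly protocol.

```python
def overlap_length(left: str, right: str, min_len: int = 20, max_len: int = 40) -> int:
    """Return the longest exact overlap (suffix of `left` == prefix of `right`)
    within [min_len, max_len]; return 0 if none is found."""
    for k in range(min(max_len, len(left), len(right)), min_len - 1, -1):
        if left[-k:] == right[:k]:
            return k
    return 0

def check_assembly(fragments, circular=True):
    """Report whether each junction between consecutive fragments (and the
    last-to-first junction, if a circular product is intended) carries the
    overlap that Gibson assembly relies on."""
    pairs = list(zip(fragments, fragments[1:]))
    if circular:
        pairs.append((fragments[-1], fragments[0]))
    for i, (a, b) in enumerate(pairs):
        k = overlap_length(a, b)
        status = f"{k} bp overlap" if k else "NO adequate overlap"
        print(f"junction {i}: {status}")

# Toy example: two fragments sharing an invented 20 bp junction sequence.
junction = "ATGCGTACGTTAGCCGATCA"          # hypothetical 20 bp overlap
frag1 = "GGGTTTAAACCC" + junction
frag2 = junction + "CCCAAATTTGGG"
check_assembly([frag1, frag2], circular=False)
```

In practice, primer design tools perform this kind of check (plus melting-temperature and secondary-structure checks) when the overlaps are added to fragments by PCR.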
The Gibson assembly method can also be used for site-directed mutagenesis to incorporate site-specific mutations such as insertions, deletions, and point mutations.

References

Further information
A Guide to Gibson Assembly from the University of Cambridge, UK
Gibson Assembly Primer Design Tool
Gibson Assembly Site Directed Mutagenesis Primer Design Tool
Chemical Transformation of Gibson Assembly Constructs

DNA
Biological engineering
Gibson assembly
https://en.wikipedia.org/wiki/Christiane%20N%C3%BCsslein-Volhard
Christiane (Janni) Nüsslein-Volhard (; born 20 October 1942) is a German developmental biologist and a 1995 Nobel Prize in Physiology or Medicine laureate. She is the only woman from Germany to have received a Nobel Prize in the sciences.

Nüsslein-Volhard earned her PhD in 1974 from the University of Tübingen, where she studied protein–DNA interaction. She won the Albert Lasker Award for Basic Medical Research in 1991 and the Nobel Prize in Physiology or Medicine in 1995, together with Eric Wieschaus and Edward B. Lewis, for their research on the genetic control of embryonic development.

Early life and education

Nüsslein-Volhard was born in Magdeburg on 20 October 1942, the second of five children of Rolf Volhard, an architect, and Brigitte Haas Volhard, a nursery school teacher. She has four siblings: three sisters and one brother. She grew up and went to school in south Frankfurt, where she was exposed to art and music and thus was "trained in looking at things and recognizing things". Her great-grandfather was the chemist Jacob Volhard, and her grandfather was the well-known internist Franz Volhard. She is also the aunt of the Nobel laureate in chemistry Benjamin List.

After the Abitur in 1962, she briefly considered pursuing medicine, but dropped the idea after doing a month's nursing course in a hospital. Instead, she opted to study biology at Goethe University Frankfurt. In 1964 Nüsslein-Volhard left Frankfurt for the University of Tübingen to start a new course in biochemistry. She originally wanted to do behavioral biology, "but then somehow I ended up in biochemistry (...) and molecular genetics because at the time this was the most modern aspect, and I was ambitious — I wanted to go where the leaders were. The old-fashioned botanists and zoologists were such dull people — there was nothing interesting there." She received a diploma in biochemistry in 1969 and earned a PhD in 1974 for research into protein–DNA interactions and the binding of RNA polymerase in Escherichia coli.

Career

In 1975, Nüsslein-Volhard became a postdoctoral researcher in Walter Gehring's laboratory at the Biozentrum, University of Basel. She was a specialist in the developmental biology of Drosophila melanogaster (fruit fly), supported by a long-term fellowship from the European Molecular Biology Organization (EMBO). In 1977, she continued in the laboratory of Klaus Sander, an expert in embryonic patterning, at the University of Freiburg. In 1978, she set up her own lab in the newly founded European Molecular Biology Laboratory in Heidelberg with Eric Wieschaus, whom she had met in Basel. Over the next three years they examined about 20,000 mutated fly families, collected about 600 mutants with an altered body pattern, and found that of the approximately 5,000 essential genes, only 120 were essential for early development. In October 1980, they published their identification of just 15 genes controlling the segmented pattern of the Drosophila larva.

In 1981, Nüsslein-Volhard moved to the Friedrich Miescher Laboratory of the Max Planck Society in Tübingen. From 1984 until her retirement in 2014, she was the director of the Max Planck Institute for Developmental Biology in Tübingen and also led its genetics department. After 1984, she launched work on the developmental biology of vertebrates, using the zebrafish (Danio rerio) as her research model.
In 2001, she became a member of the Nationaler Ethikrat (National Ethics Council of Germany) for the ethical assessment of new developments in the life sciences and their influence on the individual and society. Her primer for the lay reader, Coming to Life: How Genes Drive Development, was published in April 2006.

In 2004, she started the Christiane Nüsslein-Volhard Foundation (Christiane Nüsslein-Volhard Stiftung), which aids promising young female German scientists with children. The foundation's main focus is to facilitate childcare as a supplement to existing stipends and day care.

Research

During the late 1970s and early 1980s, little was known about the genetic and molecular mechanisms by which multicellular organisms develop from single cells to morphologically complex forms during embryogenesis. Nüsslein-Volhard and Wieschaus identified genes involved in embryonic development by a series of genetic screens, generating random mutations in fruit flies using ethyl methanesulfonate. Some of these mutations affected genes involved in the development of the embryo. They took advantage of the segmented form of Drosophila larvae to address the logic of the genes controlling development. They looked at the pattern of segments and denticles in each mutant under the microscope, and were therefore able to work out that particular genes were involved in different processes during development, based on their differing mutant phenotypes (such as fewer segments, gaps in the normal segment pattern, and alterations in the patterns of denticles on the segments). Many of these genes were given descriptive names based on the appearance of the mutant larvae, such as hedgehog, gurken (German: "cucumbers"), and Krüppel ("cripple"). Later, researchers including Pavel Tomancak and Amy Beaton identified exactly which gene had been affected by each mutation, thereby identifying a set of genes crucial for Drosophila embryogenesis.

The subsequent study of these mutants and their interactions led to important new insights into early Drosophila development, especially the mechanisms that underlie the step-wise development of body segments. These experiments are distinguished not only by their sheer scale (with the methods available at the time, they involved an enormous workload), but more importantly by their significance for organisms other than fruit flies. Her findings led to important realizations about evolution – for example, that protostomes and deuterostomes are likely to have had a relatively well-developed common ancestor with a much more complex body plan than had been conventionally thought. Additionally, they greatly increased understanding of the regulation of transcription, as well as of cell fate during development. Nüsslein-Volhard is associated with the discovery of Toll, which led to the identification of toll-like receptors. Nüsslein-Volhard has an h-index of 104 according to Scopus.

Personal life

Nüsslein-Volhard married in the mid-1960s while studying at the Goethe University Frankfurt, but divorced soon afterward and did not have any children. She lives in Bebenhausen, Germany. She has said that she loves to sing, play the flute, and perform chamber music. She published a cookbook in 2006.

Awards and honors

1986: Gottfried Wilhelm Leibniz Prize of the German Research Foundation
1986: Franz Vogt Award of the University of Giessen
1991: Albert Lasker Award for Basic Medical Research
1991: Keith R. Porter Lecture
1992: Alfred P. Sloan, Jr. Prize
1992: Louis-Jeantet Prize for Medicine
1992: Louisa Gross Horwitz Prize from Columbia University
1992: Otto Warburg Medal of the German Society for Biochemistry and Molecular Biology
1992: Otto Bayer Award
1993: Sir Hans Krebs Medal from the Federation of European Biochemical Societies
1993: Ernst Schering Prize
1994: Merit Cross of the Federal Republic of Germany
1995: Nobel Prize in Physiology or Medicine
1996: Order of Merit of Baden-Württemberg
1997: Pour le Mérite for Sciences and Arts
2005: Grand Merit Cross with Star and Sash of the Federal Republic of Germany (Großes Verdienstkreuz mit Stern und Schulterband)
2007: German Founder Award of the Federation of German Foundations
2009: Austrian Decoration for Science and Art
2013–2021: Chancellor of the order Pour le Mérite for Sciences and Arts
2014: Bavarian Maximilian Order for Science and Art
2019: Schiller Prize of the City of Marbach
The asteroid 15811 Nüsslein-Volhard is named in her honour.

Honorary degrees

Nüsslein-Volhard has been awarded honorary degrees by the following universities: Yale, Harvard, Princeton, Rockefeller, Utrecht, University College London, Oxford (June 2005), Sheffield, St Andrews (June 2011), Freiburg, Munich, and Bath (July 2012).
1991: Honorary doctorate from the University of Utrecht
1991: Honorary doctorate from Princeton University
1993: Honorary doctorate from the University of Freiburg
1993: Honorary doctorate from Harvard University
2001: Honorary doctorate from Rockefeller University
2002: Honorary doctorate from University College London
2005: Honorary doctorate from the University of Oxford
2007: Honorary doctorate from the Weizmann Institute of Science
2008: Mercator Professorship, University of Duisburg-Essen
2011: Honorary doctorate from the University of St Andrews
2012: Honorary doctorate from the University of Bath

Memberships

1989: Founding member of the Academia Europaea
1989: Corresponding member of the Heidelberg Academy of Sciences
1990: Corresponding member of the North Rhine-Westphalia Academy for Sciences and Arts
1990: Elected a Foreign Member of the Royal Society (ForMemRS), London
1990: Member of the National Academy of Sciences, Washington
1991: Member of the German Academy of Sciences Leopoldina
1992: Member of the American Academy of Arts and Sciences
1995: Member of the American Philosophical Society
2001–2006: Member of the National Ethics Council of the Federal Government (German Ethics Council)
Member of the French Academy of Sciences
Member of the Scientific Committee of the Ingrid zu Solms Foundation
Member of the European Molecular Biology Organization

See also
Timeline of women in science

Notes

References

External links
Nobel Lecture of 8 December 1995: The Identification of Genes Controlling Development in Flies and Fishes

1942 births
Living people
German biochemists
German geneticists
German physiologists
German embryologists
German women biochemists
German women biologists
Women geneticists
Women medical researchers
Women physiologists
Women Nobel laureates
German Nobel laureates
Nobel laureates in Physiology or Medicine
Foreign associates of the National Academy of Sciences
Members of the European Molecular Biology Organization
Fellows of the American Academy of Arts and Sciences
Fellows of the AACR Academy
Foreign members of the Royal Society
Max Planck Society people
Members of Academia Europaea
Members of the French Academy of Sciences
Gottfried Wilhelm Leibniz Prize winners
Grand Crosses with Star and Sash of the Order of Merit of the Federal Republic of Germany
Recipients of the Pour le Mérite (civil class)
Recipients of the Albert Lasker Award for Basic Medical Research
Recipients of the Order of Merit of Baden-Württemberg
Recipients of the Austrian Decoration for Science and Art
Goethe University Frankfurt alumni
Academic staff of the University of Duisburg-Essen
University of Tübingen alumni
Academic staff of the University of Tübingen
Scientists from Magdeburg
20th-century German women scientists
21st-century German women scientists
20th-century German biologists
21st-century German biologists
Members of the American Philosophical Society
Max Planck Institute directors
Christiane Nüsslein-Volhard
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20September%2012%2C%202053
A total solar eclipse will take place at the Moon's ascending node of orbit on Friday, September 12, 2053, with a magnitude of 1.0328. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight and turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Because the eclipse will occur about 2.7 days after perigee (on September 9, 2053, at 16:30 UTC), the Moon's apparent diameter will be larger than average.

The path of totality will be visible from parts of the southern tip of Spain, the northern tip of Morocco, Algeria, Tunisia, Libya, Egypt, Saudi Arabia, Yemen, the Maldives, and western Indonesia. A partial solar eclipse will also be visible for parts of north and central Africa, Europe, the Middle East, Central Asia, South Asia, and Southeast Asia.

Eclipse details

Shown below are two tables displaying details about this particular solar eclipse. The first table outlines the times at which the Moon's penumbra or umbra attains specific parameters, and the second table describes various other parameters pertaining to this eclipse.

Eclipse season

This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year; each season lasts about 35 days and repeats just short of six months (173 days) later, so two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.

Related eclipses

Eclipses in 2053
A penumbral lunar eclipse on March 4.
An annular solar eclipse on March 20.
A penumbral lunar eclipse on August 29.
A total solar eclipse on September 12.

Metonic
Preceded by: Solar eclipse of November 25, 2049
Followed by: Solar eclipse of July 1, 2057

Tzolkinex
Preceded by: Solar eclipse of August 2, 2046
Followed by: Solar eclipse of October 24, 2060

Half-Saros
Preceded by: Lunar eclipse of September 7, 2044
Followed by: Lunar eclipse of September 18, 2062

Tritos
Preceded by: Solar eclipse of October 14, 2042
Followed by: Solar eclipse of August 12, 2064

Solar Saros 145
Preceded by: Solar eclipse of September 2, 2035
Followed by: Solar eclipse of September 23, 2071

Inex
Preceded by: Solar eclipse of October 2, 2024
Followed by: Solar eclipse of August 24, 2082

Triad
Preceded by: Solar eclipse of November 12, 1966
Followed by: Solar eclipse of July 14, 2140

Solar eclipses of 2051–2054
Saros 145
Metonic series
Tritos series
Inex series

References

2053 in science
Solar eclipse of September 12, 2053
https://en.wikipedia.org/wiki/Tim%20Cannon
Tim Cannon is an American software developer, entrepreneur, and biohacker based in Pittsburgh, Pennsylvania. He is best known as the chief information officer of Grindhouse Wetware, a biotechnology startup company that creates technology to augment human capabilities. Grindhouse was co-founded by Cannon and Shawn Sarver in 2012. Cannon himself has had a variety of body modification implants, and has been referred to in the media as a cyborg.

Cannon has spoken at conferences around the world on the topics of human enhancement, futurism, and citizen science, including at TEDx Rosslyn, FITUR, the University of Maryland, the World Business Dialogue, the Medical Entrepreneur Startup Hospital, and others. He has been published in Wired and featured in television shows such as National Geographic Channel's The Big Picture with Kal Penn. Cannon has also been featured on podcasts including Ryan O'Shea's Future Grind and Roderick Russell's Remarkably Human.

Implants

Cannon has had a variety of body modification implants, including a radio-frequency identification (RFID) tag in his hand and magnetic implants in a finger, wrist, and tragus, leading media outlets including Business Insider, Newsweek, The Awl, and others to label him a cyborg. Because of legal and ethical restrictions on the types of surgery that can be performed on humans, most of these modifications cannot be done by doctors or anesthetists. Instead they are done by body modification experts or on a "DIY" basis.

In May 2012, inspired by Lepht Anonym, Cannon had finger magnets implanted to give him an "extra sense": the ability to feel electromagnetism. In October 2013, Cannon became the first person to be implanted with the Grindhouse-designed biometric sensor known as Circadia, a procedure performed by body modification artist Steve Haworth in Essen, Germany. The device, approximately the size of a deck of playing cards, automatically sent Cannon's temperature to his phone, was powered wirelessly through inductive charging, and mimicked bioluminescence with subdermal LEDs. After a few months as an initial proof-of-concept test, a series of panic attacks led to the device's removal. Cannon is currently working to design an improved, consumer-friendly version of his Circadia implant that would measure additional biometrics such as blood glucose, blood oxygen, blood pressure, and heart rate.

In November 2015, Cannon had a prototype of Grindhouse's Northstar device implanted into his right forearm during a procedure at the "Cyborg Fair" in Düsseldorf, Germany. A little larger than a coin, Northstar contained five LED lights, creating a bioluminescent effect when touched with a magnet (such as the ones implanted in Cannon's fingertips). Its purpose is solely aesthetic. Capable of blinking around 10,000 times before the battery runs down, the device has been presented as a way to "light" tattoos.

References

1979 births
Living people
American futurologists
Cyberneticists
Life extensionists
American transhumanists
People from Camden, Ohio
Cyborgs
Tim Cannon
https://en.wikipedia.org/wiki/Urchin%20%28software%29
Urchin was a web statistics analysis program developed by Urchin Software Corporation. Urchin analyzed web server log file content and displayed the traffic information for that website based upon the log data. Sales of Urchin products ended on March 28, 2012.

Urchin software could be run in two different data collection modes: log file analyzer or hybrid. As a log file analyzer, Urchin processed web server log files in a variety of log file formats; custom file formats could also be defined. As a hybrid, Urchin combined page tags with log file data to offset the limitations of each data collection method used in isolation, yielding more accurate web visitor data.

Urchin became one of the more popular solutions for website traffic analysis, particularly with ISPs and web hosting providers, largely due to its performance scalability and its pricing model.

Urchin Software Corp. was acquired by Google in April 2005, forming Google Analytics. In April 2008, Google released Urchin 6. In February 2009, Google released Urchin 6.5, integrating AdWords. Urchin 7 was released in September 2010 and included 64-bit support, a new UI, and event tracking, among other features.

See also
UTM parameters
List of web analytics software

References

External links

Google software
Web analytics
Discontinued Google services
Web log analysis software
Urchin (software)