id: int64 (min 580, max 79M)
url: string (length 31 to 175)
text: string (length 9 to 245k)
source: string (length 1 to 109)
categories: string (160 distinct values)
token_count: int64 (min 3, max 51.8k)
42,954,720
https://en.wikipedia.org/wiki/Tim%20O%27Riordan
Tim O'Riordan OBE DL FBA (born ) is a British geographer who is Emeritus Professor of Environmental Sciences at the University of East Anglia (UEA) and a prominent British environmental writer and thinker. Background O'Riordan grew up in the north of England, and was educated at the University of Edinburgh (MA, Geography), Cornell University (MS, Water Resources Engineering), and King's College, Cambridge (PhD in Geography). He taught at Simon Fraser University in Canada in the late 1960s, before taking up a lectureship at UEA. He retired as Professor in 2005. He was a founder and deputy director (1991-) of the Centre for Social and Economic Research on the Global Environment (CSERGE) at UEA. He was widowed, has two daughters, and lives in Norwich. Contributions O'Riordan's contributions are to environmental policy analysis; environmental appraisal and evaluation; and environmental governance and decision-making. In 1981 he published Environmentalism, one of the first critical summaries of the field. Latterly he has worked on interdisciplinary approaches pursuing the transition to sustainability, becoming active in the development of sustainability science partnerships. In 2014 he called for a "science for sustainable development which is geared to compassion, fairness, empathy, and social justice." For him, "Sustainability is not a word but a way of becoming." His work has been cited over 16,500 times. His engaged and more practical work relates to designing future coastlines in East Anglia in England so that they are ready for sea level rise, and to the creation of sound economies and societies for a sustainable future using participatory democratic decision-making; he has worked in Broadland since the late 1960s. O'Riordan has edited a number of books on the institutional aspects of global environmental change, and policy and practice, including two editions of the textbook, Environmental Science for Environmental Management. His work on European environmental policy and risk management is summarised in several volumes. He is editor of the prominent magazine/journal Environment. He has worked on the greening of business, participating in the Prince of Wales' seminar on Business and the Environment, and has sat on several advisory boards including the Corporate Responsibility Body for Asda plc, and the Growth and Climate Change Panel for Anglian Water Group. He was a member of the UK Sustainable Development Commission until it was closed down by the government in 2011. Awards Order of the British Empire (2010) Fellow of the British Academy (1999) Distinguished Friend of Oxford (2013) Deputy Lieutenant of the County of Norfolk Sheriff of Norwich (2009–10) Publications O'Riordan, T. and T. Lenton (eds.). 2013. Addressing Tipping Points for a Precarious Future. Oxford University Press/British Academy. Horlick-Jones, T., Walls, J., Rowe, G., Pidgeon, N., Poortinga, W., Murdock, G., O'Riordan, T. (2007) The GM Debate: Risks, Politics and Public Engagement. Routledge. O’Riordan, T. and S. Stoll-Kleemann (eds.). 2002. Biodiversity, Sustainability and Human Communities: Protecting Beyond the Protected. Cambridge University Press. O’Riordan, T. (ed.) 2002. Globalism, Localism and Identity: New Perspectives on the Transition of Sustainability. Earthscan. O’Riordan, T., James Cameron, and Jordan, A.J. (eds.) 2001. Reinterpreting the Precautionary Principle. Cameron May, London. O’Riordan, T. (ed.). 1999. Environmental Science for Environmental Management, Revised Second Edition. 
Prentice Hall, Harlow, Essex. O'Riordan, T. (ed.). 1997. Ecotaxation. Earthscan. O'Riordan, T. and H. Voisey (eds.). 1998. The Transition to Sustainability: the Politics of Agenda 21 in Europe. Earthscan. O'Riordan, T. and H. Voisey (eds.). 1997. Sustainable Development in Western Europe: Coming to Terms with Agenda 21. Frank Cass. Jäger, J. and T. O'Riordan (eds.). 1996. The Politics of Climate Change: A European Perspective. Routledge. O’Riordan, T. and J. Cameron (eds.). 1994. Interpreting the Precautionary Principle. Routledge. Pearce, D., Turner, R., O'Riordan, T. (1993). Blueprint III. Earthscan. Weale, A., T. O'Riordan, L. Kramme. 1992. Controlling Pollution in the Round: Change and Choice in Environmental Regulation in Britain and Germany. London: Anglo-German Foundation. O'Riordan, T., R. Kemp, M. Purdue. 1988. Sizewell B: An Anatomy of Inquiry. Pergamon. Lowe, P., Cox, G., MacEwen, M., O'Riordan, T., Winter, M. 1986. Countryside conflicts. The politics of farming, forestry and conservation. Gower Publishing. O'Riordan, T., Turner, R. (eds.). 1983. Progress in Resource Management and Environmental Planning. O'Riordan, T., Turner, R. (eds.). 1983. Annotated Reader in Environmental Planning and Management. Pergamon Press. O'Riordan, T. 1981. Environmentalism. Pion. O'Riordan, T. and D. Sewell (eds.). 1981. Project Appraisal and Policy Review. Wiley. O'Riordan, T. and G. Padgett. 1978. Sharing rivers and canals: a study of the views of coarse anglers and boat users on selected waterways. Sports Council. O'Riordan, T. 1971. Perspectives on resource management. Pion. References 1942 births Living people British geographers Environmental scientists Development specialists Alumni of the University of Edinburgh Alumni of King's College, Cambridge Academics of the University of East Anglia Deputy lieutenants of Norfolk Officers of the Order of the British Empire Fellows of the British Academy
Tim O'Riordan
Environmental_science
1,287
24,790,818
https://en.wikipedia.org/wiki/Tamil%20radio%20on%20internet
Tamil radio on internet became very popular after the Sri Lankan Tamil national radio service started broadcasting on the Internet. This new trend attracted millions of Tamil listeners from all over the world, especially listeners from South India, and a commercial service called "Varthaga Sevai" was started. This became a very popular Sri Lankan Tamil radio service, and since then other private Tamil radio stations have also started broadcasting. Early days The first private Tamil radio station was "Sooryan fm", which also started to stream over the Internet and captured the Tamil listeners that the Tamil commercial service had built up. Sooryan fm quickly became very popular because of its simple Tamil. Competition for Tamil radio increased when several private Tamil stations started (Shakthi, Thalam, etc.). After Shakthi moved into Tamil TV, the trend shifted slightly from radio towards television with the change of generation. Internet radio With the introduction of Internet radio, there were no Tamil-language stations with live announcers and programmes, but there were and still are many Tamil stations broadcasting only Tamil songs. The first Internet station famous for broadcasting only songs was "Superstarzfm". It ceased broadcasting after a while due to financial problems. Later, some young people together with SObrothers created the radio station "TamilsFlashFm", also known as TFFm. This was the first Tamil youth Internet radio station, and within three years it had become very popular, reaching 60,000 Tamil listeners monthly worldwide. The station not only broadcast songs round the clock but also hosted live on-air shows with young RJs, giving listeners the chance to request songs through Skype and MSN. The current Program Manager of TamilFlash.Fm is Sabesan Kanagaratnam. Other radio stations followed TamilsFlashFm's lead and produced live shows on air. Now there are other Internet radio stations which use SHOUTcast technology to broadcast over the Internet. Also, radio and podcasting apps focused on India, such as JawaRadio, bring regional languages onto their platforms. See also List of Tamil-language radio stations References Listen Online Tamil Radio Station Internet radio
Tamil radio on internet
Technology
440
35,167,958
https://en.wikipedia.org/wiki/Roof%20window
A roof window is an outward opening window that is incorporated as part of the design of a roof. Often confused with a skylight, a roof window differs in a few basic ways. A roof window is often a good option when there is a desire to allow both light and fresh air into the space. A roof window is different from a tubular skylight in that the light is not directed through any type of channel or tube in order to provide lighting for the interior of a building. This type of light tube design is often employed with buildings where the installation of a skylight or roof window is not practical. While a roof window is normally included in the original construction of the building, it is possible to add the design feature to an existing structure. As long as the framework and the slope of the roof allow for the inclusion of this type of window, it can be installed with relative ease. Many manufacturers offer prefabricated window inserts for retrofitting into existing roofs. See also VELUX Dormer Loft conversions in the United Kingdom Flashing (weatherproofing) References External links Windows Roofs
Roof window
Technology,Engineering
222
13,495,833
https://en.wikipedia.org/wiki/Leu-enkephalin
Leu-enkephalin is an endogenous opioid peptide neurotransmitter with the amino acid sequence Tyr-Gly-Gly-Phe-Leu that is found naturally in the brains of many animals, including humans. It is one of the two forms of enkephalin; the other is met-enkephalin. The tyrosine residue at position 1 is thought to be analogous to the 3-hydroxyl group on morphine. Leu-enkephalin has agonistic actions at both the μ- and δ-opioid receptors, with significantly greater preference for the latter. It has little to no effect on the κ-opioid receptor. A nasal spray formulation of leu-enkephalin (developmental code names NES-100, NM-0127, NM-127, PES-200; proposed brand name Envelta) is under development by Virpax Pharmaceuticals for the treatment of pain and post-traumatic stress disorder (PTSD). As of November 2023, it is up to the preclinical stage of development for these indications. See also Met-enkephalin References Delta-opioid receptor agonists Experimental drugs Opioid peptides
Leu-enkephalin
Chemistry,Biology
265
613,092
https://en.wikipedia.org/wiki/Aminoacyl%20tRNA%20synthetase
An aminoacyl-tRNA synthetase (aaRS or ARS), also called tRNA-ligase, is an enzyme that attaches the appropriate amino acid onto its corresponding tRNA. It does so by catalyzing the transesterification of a specific cognate amino acid or its precursor to one of all its compatible cognate tRNAs to form an aminoacyl-tRNA. In humans, the 20 different types of aa-tRNA are made by the 20 different aminoacyl-tRNA synthetases, one for each amino acid of the genetic code. This is sometimes called "charging" or "loading" the tRNA with an amino acid. Once the tRNA is charged, a ribosome can transfer the amino acid from the tRNA onto a growing peptide, according to the genetic code. Aminoacyl tRNA therefore plays an important role in RNA translation, the expression of genes to create proteins. Mechanism The synthetase first binds ATP and the corresponding amino acid (or its precursor) to form an aminoacyl-adenylate, releasing inorganic pyrophosphate (PPi). The adenylate-aaRS complex then binds the appropriate tRNA molecule's D arm, and the amino acid is transferred from the aa-AMP to either the 2'- or the 3'-OH of the last tRNA nucleotide (A76) at the 3'-end. The mechanism can be summarized in the following reaction series: Amino Acid + ATP → Aminoacyl-AMP + PPi Aminoacyl-AMP + tRNA → Aminoacyl-tRNA + AMP Summing the reactions, the highly exergonic overall reaction is as follows: Amino Acid + tRNA + ATP → Aminoacyl-tRNA + AMP + PPi Some synthetases also mediate an editing reaction to ensure high fidelity of tRNA charging. If the incorrect tRNA is added (aka. the tRNA is found to be improperly charged), the aminoacyl-tRNA bond is hydrolyzed. This can happen when two amino acids have different properties even if they have similar shapes—as is the case with valine and threonine. The accuracy of aminoacyl-tRNA synthetase is so high that it is often paired with the word "superspecificity” when it is compared to other enzymes that are involved in metabolism. Although not all synthetases have a domain with the sole purpose of editing, they make up for it by having specific binding and activation of their affiliated amino acids. Another contribution to the accuracy of these synthetases is the ratio of concentrations of aminoacyl-tRNA synthetase and its cognate tRNA. Since tRNA synthetase improperly acylates the tRNA when the synthetase is overproduced, a limit must exist on the levels of aaRSs and tRNAs in vivo. Classes There are two classes of aminoacyl tRNA synthetase, each composed of ten enzymes: Class I has two highly conserved sequence motifs. It aminoacylates at the 2'-OH of a terminal adenosine nucleotide on tRNA, and it is usually monomeric or dimeric (one or two subunits, respectively). Class II has three highly conserved sequence motifs. It aminoacylates at the 3'-OH of a terminal adenosine on tRNA, and is usually dimeric or tetrameric (two or four subunits, respectively). Although phenylalanine-tRNA synthetase is class II, it aminoacylates at the 2'-OH. The amino acids are attached to the hydroxyl (-OH) group of the adenosine via the carboxyl (-COOH) group. Regardless of where the aminoacyl is initially attached to the nucleotide, the 2'-O-aminoacyl-tRNA will ultimately migrate to the 3' position via transesterification. Bacterial aminoacyl-tRNA synthetases can be grouped as follows: Amino acids which use class II aaRS seem to be evolutionarily older. Structures Both classes of aminoacyl-tRNA synthetases are multidomain proteins. 
In a typical scenario, an aaRS consists of a catalytic domain (where both the above reactions take place) and an anticodon binding domain (which interacts mostly with the anticodon region of the tRNA). Transfer-RNAs for different amino acids differ not only in their anticodon but also at other points, giving them slightly different overall configurations. The aminoacyl-tRNA synthetases recognize the correct tRNAs primarily through their overall configuration, not just through their anticodon. In addition, some aaRSs have additional RNA binding domains and editing domains that cleave incorrectly paired aminoacyl-tRNA molecules. The catalytic domains of all the aaRSs of a given class are found to be homologous to one another, whereas class I and class II aaRSs are unrelated to one another. The class I aaRSs feature a cytidylyltransferase-like Rossmann fold seen in proteins like glycerol-3-phosphate cytidylyltransferase, nicotinamide nucleotide adenylyltransferase and archaeal FAD synthase, whereas the class II aaRSs have a unique fold related to biotin and lipoate ligases. The alpha helical anticodon binding domain of arginyl-, glycyl- and cysteinyl-tRNA synthetases is known as the DALR domain after characteristic conserved amino acids. Aminoacyl-tRNA synthetases have been kinetically studied, showing that Mg2+ ions play an active catalytic role and therefore aaRSs have a degree of magnesium dependence. Increasing the Mg2+ concentration leads to an increase in the equilibrium constants for the aminoacyl-tRNA synthetases’ reactions. Although this trend was seen in both class I and class II synthetases, the magnesium dependence for the two classes is very distinct. Class II synthetases have two or (more frequently) three Mg2+ ions, while class I only requires one Mg2+ ion. Besides their lack of overall sequence and structure similarity, class I and class II synthetases feature different ATP recognition mechanisms. While class I binds via interactions mediated by backbone hydrogen bonds, class II uses a pair of arginine residues to establish salt bridges to its ATP ligand. This oppositional implementation is manifested in two structural motifs, the Backbone Brackets and Arginine Tweezers, which are observable in all class I and class II structures, respectively. The high structural conservation of these motifs suggests that they must have been present since ancient times. Evolution Most of the aaRSs of a given specificity are evolutionarily closer to one another than to aaRSs of another specificity. However, AsnRS and GlnRS group within AspRS and GluRS, respectively. Most of the aaRSs of a given specificity also belong to a single class. However, there are two distinct versions of the LysRS - one belonging to the class I family and the other belonging to the class II family. The molecular phylogenies of aaRSs are often not consistent with accepted organismal phylogenies. That is, they violate the so-called canonical phylogenetic pattern shown by most other enzymes for the three domains of life - Archaea, Bacteria, and Eukarya. Furthermore, the phylogenies inferred for aaRSs of different amino acids often do not agree with one another. In addition, aaRS paralogs within the same species show a high degree of divergence between them. These are clear indications that horizontal transfer has occurred several times during the evolutionary history of aaRSs. 
A widespread belief in the evolutionary stability of this superfamily, meaning that every organism has all the aaRSs for its corresponding amino acids, is misconceived. A large-scale genomic analysis of ~2500 prokaryotic genomes showed that many of them lack one or more aaRS genes, whereas many genomes have one or more paralogs. AlaRS, GlyRS, LeuRS, IleRS and ValRS are the most evolutionarily stable members of the family. GluRS, LysRS and CysRS often have paralogs, whereas AsnRS, GlnRS, PylRS and SepRS are often absent from many genomes. With the exception of AlaRS, it has been discovered that 19 out of the 20 human aaRSs have added at least one new domain or motif. These new domains and motifs vary in function and are observed in various forms of life. A common novel function within human aaRSs is providing additional regulation of biological processes. There exists a theory that the increasing number of aaRSs that add domains is due to the continuous evolution of higher organisms with more complex and efficient building blocks and biological mechanisms. One key piece of evidence for this theory is that after a new domain is added to an aaRS, the domain becomes fully integrated. This new domain's functionality is conserved from that point on. As genetic efficiency evolved in higher organisms, 13 new domains with no obvious association with the catalytic activity of aaRSs genes have been added. Application in biotechnology In some of the aminoacyl tRNA synthetases, the cavity that holds the amino acid can be mutated and modified to carry unnatural amino acids synthesized in the lab, and to attach them to specific tRNAs. This expands the genetic code, beyond the twenty canonical amino acids found in nature, to include an unnatural amino acid as well. The unnatural amino acid is coded by a nonsense (TAG, TGA, TAA) triplet, a quadruplet codon, or in some cases a redundant rare codon. The organism that expresses the mutant synthetase can then be genetically programmed to incorporate the unnatural amino acid into any desired position in any protein of interest, allowing biochemists or structural biologists to probe or change the protein's function. For instance, one can start with the gene for a protein that binds a certain sequence of DNA, and, by directing an unnatural amino acid with a reactive side-chain into the binding site, create a new protein that cuts the DNA at the target-sequence, rather than binding it. By mutating aminoacyl tRNA synthetases, chemists have expanded the genetic codes of various organisms to include lab-synthesized amino acids with all kinds of useful properties: photoreactive, metal-chelating, xenon-chelating, crosslinking, spin-resonant, fluorescent, biotinylated, and redox-active amino acids. Another use is introducing amino acids bearing reactive functional groups for chemically modifying the target protein. The causation of certain diseases (such as neuronal pathologies, cancer, disturbed metabolic conditions, and autoimmune disorders) has been correlated with specific mutations of aminoacyl-tRNA synthetases. Charcot-Marie-Tooth (CMT) is the most frequent heritable disorder of the peripheral nervous system (a neuronal disease) and is caused by heritable mutations in glycyl-tRNA synthetase and tyrosyl-tRNA synthetase. Diabetes, a metabolic disease, induces oxidative stress, which triggers a build-up of mitochondrial tRNA mutations. It has also been discovered that tRNA synthetases may be partially involved in the etiology of cancer. 
A high level of expression or modification of aaRSs has been observed within a range of cancers. A common outcome from mutations of aaRSs is a disturbance of dimer shape/formation, which has a direct relationship with its function. These correlations between aaRSs and certain diseases have opened up a new door to synthesizing therapeutics. Noncatalytic domains The novel domain additions to aaRS genes are accretive and progressive up the Tree of Life. The strong evolutionary pressure for these small non-catalytic protein domains suggested their importance. Findings beginning in 1999 and later revealed a previously unrecognized layer of biology: these proteins control gene expression within the cell of origin, and when released exert homeostatic and developmental control in specific human cell types, tissues and organs during adult or fetal development or both, including pathways associated with angiogenesis, inflammation, the immune response, the mechanistic target of rapamycin (mTOR) signalling, apoptosis, tumorigenesis, and interferon gamma (IFN-γ) and p53 signalling. Substrate depletion In 2022, it was discovered that aminoacyl-tRNA synthetases may incorporate alternative amino acids during shortages of their precursors. In particular, tryptophanyl-tRNA synthetase (WARS1) will incorporate phenylalanine during tryptophan depletion, essentially inducing a W>F codon reassignment. Depletion of the other substrate of aminoacyl-tRNA synthetases, the cognate tRNA, may be relevant to certain diseases, e.g. Charcot–Marie–Tooth disease. It was shown that CMT-mutant glycyl-tRNA synthetase variants are still able to bind tRNA-Gly but fail to release it, leading to depletion of the cellular pool of glycyl-tRNA-Gly, which in turn results in stalling of the ribosome on glycine codons during mRNA translation. Clinical Mutations in the mitochondrial enzyme have been associated with a number of genetic disorders including Leigh syndrome, West syndrome and CAGSSS (cataracts, growth hormone deficiency, sensory neuropathy, sensorineural hearing loss and skeletal dysplasia syndrome). Prediction servers ICAARS: B. Pawar and GPS Raghava (2010) Prediction and classification of aminoacyl tRNA synthetases using PROSITE domains. BMC Genomics 2010, 11:507 MARSpred: Prokaryotic AARS database: See also TARS (gene) AARS2 (gene) References External links EC 6.1 Protein biosynthesis
Aminoacyl tRNA synthetase
Chemistry
2,968
74,655,714
https://en.wikipedia.org/wiki/Passthrough%20%28architecture%29
A passthrough (or serving hatch) is a window-like opening between the kitchen and the dining or family room. Considered to be a conservative approach to the open plan, in a modern family home a passthrough is typically built when a larger opening is either precluded by the locations of structural columns or is impractical due to the need to preserve the wall storage space. If dining involves dedicated waiting staff, the pass-through allows servers to work without stepping into the kitchen; a restaurant design frequently has two passthroughs, one for the food and one for the dirty dishes. The term "pass-through" is also used for any opening in a wall between the rooms intended for passing items. Window for communications In addition to the main purpose of passing the dishes between the kitchen and the dining area, a larger passthrough also improves guest/host communications, adds openness, and brings more light into a smaller kitchen. A passthrough allows the kitchen door to stay shut, with shutters used to further isolate the noise, smell, and messy views of the kitchen from the dining area. Post-World War II household rearrangements dictated the need for better communications between the kitchen and dining areas. Pre-war, meal preparation in middle-class homes involved domestic help, and a closed-off kitchen was desirable to keep odors (and voices of servants) out of the public area. With wives becoming solely responsible for the meal preparation, cooking became merged with socializing. The house layout, via passthroughs (or elimination of the kitchen wall altogether), signaled that the kitchen worker was now a wife and a mother, and not a servant. In the original design of the Stahl House the boundary between the kitchen and the rest of the space was not just demarcated by a lowered ceiling and a passthrough: the entrance to the kitchen could be closed off by sliding doors, thus leaving the very large passthrough as the sole means of communication with the rest of the house, still providing the wife a "commanding view". A view from the other side of the opening also applies: combined with glass walls, the passthrough facilitates a common feature of suburban life, surveillance: "…there is no escaping the omnipresent eye of the community" (William Mann Dobriner). This constant visibility (including the household members observing the wife in the kitchen cooking) perpetuated the heteronormative structure of family and society. Just like picture windows, the kitchen passthrough in Stahl House made people on both sides of the opening into spectators. In Julius Shulman's series of photos of the Stahl House the passthrough embodies the husband/wife interaction, with the woman in the kitchen and the man on the other side of the passthrough. See also Hagioscope References Sources Architectural elements Kitchen
Passthrough (architecture)
Technology,Engineering
607
20,124,115
https://en.wikipedia.org/wiki/StatXact
StatXact is a statistical software package for analyzing data using exact statistics. It calculates exact p-values and confidence intervals for contingency tables and non-parametric procedures. It is marketed by Cytel Inc. References External links StatXact homepage at Cytel Inc. Statistical software Windows-only proprietary software
StatXact
Mathematics
67
66,607,472
https://en.wikipedia.org/wiki/Kaurenoic%20acid
Kaurenoic acid (ent-kaur-16-en-19-oic acid or kauren-19-oic acid) is a diterpene with antibacterial activity against Gram-positive bacteria. However, its low solubility and lytic activity on erythrocytes might make it a poor pharmaceutical candidate. Kaurenoic acid also has uterine relaxant activity via calcium blockade and the opening of ATP-sensitive potassium channels. Kaurenoic acid is found in several plants such as Copaifera. It is a potential biomarker for the presence of sunflower in foods. Medical use Kaurenoic acid has been studied for its medicinal properties and seems to have anti-inflammatory, antiulcerogenic, antitumor, antinociceptive, antimelanoma, antilipoperoxidation, antioxidant and antimicrobial properties. Kaurenoic acid decreases leukocyte migration. It seems to inhibit histamine and serotonin pathways, in addition to having antiprotozoal activity against Trypanosoma cruzi and Leishmania amazonensis. References Diterpenes Tetracyclic compounds Carboxylic acids
Kaurenoic acid
Chemistry
255
144,433
https://en.wikipedia.org/wiki/Observatory%20of%20Strasbourg
The Observatory of Strasbourg is an astronomical observatory in Strasbourg, France. History This observatory is actually Strasbourg's third observatory: the first was built in 1673 on one of the city's surrounding towers (the astronomer Julius Reichelt notably played a role in its establishment), and the second in 1828 on the roof of the buildings of the Academy. Following the Franco-Prussian War of 1870–1871, the city of Strasbourg became part of the German Empire. The University of Strasbourg was refounded in 1872, and a new observatory began construction in 1875, in the Neustadt district. The main instrument was a 50 cm Repsold refractor, which saw first light in 1880 (see Great refractor). At the time this was the largest instrument in the German Empire. In 1881, the ninth General Assembly of the Astronomische Gesellschaft met in Strasbourg to mark the official inauguration. The observatory site was selected primarily for instruction purposes and political symbolism, rather than the observational qualities. It was a low-lying site that was prone to mists. During the period up until 1914, the staff was too small to work the instruments and so there was little academic research published prior to World War I. The main observations were of comets and variable stars. After 1909, the instruments were also used to observe binary stars and perform photometry of nebulae. The observatory is currently the home for the Centre de données astronomiques de Strasbourg, a database for the collection and distribution of astronomical information. This includes SIMBAD, a reference database for astronomical objects, VizieR, an astronomical catalogue service and Aladin, an interactive sky atlas. The modern extension of the building used to house the Planétarium de Strasbourg until 2023, and the opening of a larger and more modern planetarium in the vicinity. The observatory is surrounded by the Jardin botanique de l'Université de Strasbourg. In the vaulted basement below the observatory, a university-administered museum is located. Called Crypte aux étoiles ("star crypt"), it displays old telescopes and other antique astronomical devices such as clocks and theodolites. Notable astronomers Agnès Acker Julius Bauschinger Adolf Berberich André Danjon William Lewis Elkin Ernest Esclangon Ernst Hartwig Carlos Jaschek Pierre Lacroute Otto Tetens Friedrich Winnecke Carl Wilhelm Wirtz Walter Wislicenus See also List of astronomical observatories References External links Official website of the Observatory Official website of the Planetarium Publications of Strasbourg Observatory digitalized on Paris Observatory digital library University of Strasbourg Astronomical observatories in France Buildings and structures in Strasbourg Planetaria in France 1875 establishments in Germany Museums in Strasbourg Science museums in France Hermann Eggert buildings
Observatory of Strasbourg
Astronomy
557
2,658,571
https://en.wikipedia.org/wiki/Kakutani%20fixed-point%20theorem
In mathematical analysis, the Kakutani fixed-point theorem is a fixed-point theorem for set-valued functions. It provides sufficient conditions for a set-valued function defined on a convex, compact subset of a Euclidean space to have a fixed point, i.e. a point which is mapped to a set containing it. The Kakutani fixed point theorem is a generalization of the Brouwer fixed point theorem. The Brouwer fixed point theorem is a fundamental result in topology which proves the existence of fixed points for continuous functions defined on compact, convex subsets of Euclidean spaces. Kakutani's theorem extends this to set-valued functions. The theorem was developed by Shizuo Kakutani in 1941, and was used by John Nash in his description of Nash equilibria. It has subsequently found widespread application in game theory and economics. Statement Kakutani's theorem states: Let S be a non-empty, compact and convex subset of some Euclidean space Rn. Let φ: S → 2S be a set-valued function on S with the following properties: φ has a closed graph; φ(x) is non-empty and convex for all x ∈ S. Then φ has a fixed point. Definitions Set-valued function A set-valued function φ from the set X to the set Y is some rule that associates one or more points in Y with each point in X. Formally it can be seen just as an ordinary function from X to the power set of Y, written as φ: X → 2Y, such that φ(x) is non-empty for every x ∈ X. Some prefer the term correspondence, which is used to refer to a function that for each input may return many outputs. Thus, each element of the domain corresponds to a subset of one or more elements of the range. Closed graph A set-valued function φ: X → 2Y is said to have a closed graph if the set {(x,y) | y ∈ φ(x)} is a closed subset of X × Y in the product topology, i.e. for all sequences xn → x and yn → y such that yn ∈ φ(xn) for all n, we have y ∈ φ(x). Fixed point Let φ: X → 2X be a set-valued function. Then a ∈ X is a fixed point of φ if a ∈ φ(a). Examples A function with infinitely many fixed points The function φ(x) = [1 − x/2, 1 − x/4], shown on the figure at the right, satisfies all Kakutani's conditions, and indeed it has many fixed points: any point on the 45° line (dotted line in red) which intersects the graph of the function (shaded in grey) is a fixed point, so in fact there is an infinity of fixed points in this particular case. For example, x = 0.72 (dashed line in blue) is a fixed point since 0.72 ∈ [1 − 0.72/2, 1 − 0.72/4]. A function with a unique fixed point The function: satisfies all Kakutani's conditions, and indeed it has a fixed point: x = 0.5 is a fixed point, since x is contained in the interval [0,1]. A function that does not satisfy convexity The requirement that φ(x) be convex for all x is essential for the theorem to hold. Consider the following function defined on [0,1]: The function has no fixed point. Though it satisfies all other requirements of Kakutani's theorem, its value fails to be convex at x = 0.5. A function that does not satisfy closed graph Consider the following function defined on [0,1]: The function has no fixed point. Though it satisfies all other requirements of Kakutani's theorem, its graph is not closed; for example, consider the sequences xn = 0.5 - 1/n, yn = 3/4. Alternative statement Some sources, including Kakutani's original paper, use the concept of upper hemicontinuity while stating the theorem: Let S be a non-empty, compact and convex subset of some Euclidean space Rn. 
Let φ: S→2S be an upper hemicontinuous set-valued function on S with the property that φ(x) is non-empty, closed, and convex for all x ∈ S. Then φ has a fixed point. This statement of Kakutani's theorem is completely equivalent to the statement given at the beginning of this article. We can show this by using the closed graph theorem for set-valued functions, which says that for a compact Hausdorff range space Y, a set-valued function φ: X→2Y has a closed graph if and only if it is upper hemicontinuous and φ(x) is a closed set for all x. Since all Euclidean spaces are Hausdorff (being metric spaces) and φ is required to be closed-valued in the alternative statement of the Kakutani theorem, the closed graph theorem implies that the two statements are equivalent. Applications Game theory The Kakutani fixed point theorem can be used to prove the minimax theorem in the theory of zero-sum games. This application was specifically discussed in Kakutani's original paper. Mathematician John Nash used the Kakutani fixed point theorem to prove a major result in game theory. Stated informally, the theorem implies the existence of a Nash equilibrium in every finite game with mixed strategies for any finite number of players. This work later earned him a Nobel Prize in Economics. In this case: The base set S is the set of tuples of mixed strategies chosen by each player in a game. If each player has k possible actions, then each player's strategy is a k-tuple of probabilities summing up to 1, so each player's strategy space is the standard simplex in Rk. Then, S is the cartesian product of all these simplices. It is indeed a nonempty, compact and convex subset of Rkn. The function φ(x) associates with each tuple a new tuple where each player's strategy is her best response to other players' strategies in x. Since there may be a number of responses which are equally good, φ is set-valued rather than single-valued. For each x, φ(x) is nonempty since there is always at least one best response. It is convex, since a mixture of two best-responses for a player is still a best-response for the player. It can be proved that φ has a closed graph. Then the Nash equilibrium of the game is defined as a fixed point of φ, i.e. a tuple of strategies where each player's strategy is a best response to the strategies of the other players. Kakutani's theorem ensures that this fixed point exists. General equilibrium In general equilibrium theory in economics, Kakutani's theorem has been used to prove the existence of a set of prices which simultaneously equate supply with demand in all markets of an economy. The existence of such prices had been an open question in economics going back to at least Walras. The first proof of this result was constructed by Lionel McKenzie. In this case: The base set S is the set of tuples of commodity prices. The function φ(x) is chosen so that its result differs from its arguments as long as the price-tuple x does not equate supply and demand everywhere. The challenge here is to construct φ so that it has this property while at the same time satisfying the conditions in Kakutani's theorem. If this can be done then φ has a fixed point according to the theorem. Given the way it was constructed, this fixed point must correspond to a price-tuple which equates supply with demand everywhere. Fair division Kakutani's fixed-point theorem is used in proving the existence of cake allocations that are both envy-free and Pareto efficient. This result is known as Weller's theorem. 
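To make the game-theoretic application above concrete, here is a minimal numerical sketch (not from the article): it builds the best-response correspondence for a 2×2 zero-sum game, using matching pennies as an assumed example, and checks that the mixed profile (1/2, 1/2) is a fixed point, i.e. a Nash equilibrium. All names in the code are illustrative.

import numpy as np

# Matching pennies, chosen here purely for illustration; any finite game works.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs
B = -A                                     # column player's payoffs

def best_response(payoff, opponent_mix):
    # Return, as an interval (lo, hi), the set of probabilities of playing
    # action 0 that maximise expected payoff against the opponent's mix.
    # When both pure actions do equally well, every mixture is optimal and the
    # whole interval [0, 1] is returned: this is exactly where the map is
    # genuinely set-valued (and convex-valued, as Kakutani's theorem requires).
    expected = payoff @ opponent_mix
    if abs(expected[0] - expected[1]) < 1e-12:
        return (0.0, 1.0)
    return (1.0, 1.0) if expected[0] > expected[1] else (0.0, 0.0)

def is_fixed_point(p, q):
    # (p, q) is a fixed point of the correspondence iff each player's strategy
    # is a best response to the other's.
    lo1, hi1 = best_response(A, np.array([q, 1.0 - q]))
    lo2, hi2 = best_response(B.T, np.array([p, 1.0 - p]))
    return lo1 <= p <= hi1 and lo2 <= q <= hi2

print(is_fixed_point(0.5, 0.5))  # True: the mixed Nash equilibrium
print(is_fixed_point(1.0, 1.0))  # False: the pure profile is not an equilibrium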
Relation to Brouwer's fixed-point theorem Brouwer's fixed-point theorem is a special case of the Kakutani fixed-point theorem. Conversely, the Kakutani fixed-point theorem is an immediate generalization via the approximate selection theorem: Proof outline S = [0,1] The proof of Kakutani's theorem is simplest for set-valued functions defined over closed intervals of the real line. Moreover, the proof of this case is instructive since its general strategy can be carried over to the higher-dimensional case as well. Let φ: [0,1]→2[0,1] be a set-valued function on the closed interval [0,1] which satisfies the conditions of Kakutani's fixed-point theorem. Create a sequence of subdivisions of [0,1] with adjacent points moving in opposite directions. Let (ai, bi, pi, qi) for i = 0, 1, … be a sequence with the following properties: (1) 1 ≥ bi > ai ≥ 0; (2) (bi − ai) ≤ 2−i; (3) pi ∈ φ(ai); (4) qi ∈ φ(bi); (5) pi ≥ ai; (6) qi ≤ bi. Thus, the closed intervals [ai, bi] form a sequence of subintervals of [0,1]. Condition (2) tells us that these subintervals continue to become smaller while conditions (3)–(6) tell us that the function φ shifts the left end of each subinterval to its right and shifts the right end of each subinterval to its left. Such a sequence can be constructed as follows. Let a0 = 0 and b0 = 1. Let p0 be any point in φ(0) and q0 be any point in φ(1). Then, conditions (1)–(4) are immediately fulfilled. Moreover, since p0 ∈ φ(0) ⊂ [0,1], it must be the case that p0 ≥ 0 and hence condition (5) is fulfilled. Similarly condition (6) is fulfilled by q0. Now suppose we have chosen ak, bk, pk and qk satisfying (1)–(6). Let m = (ak + bk)/2. Then m ∈ [0,1] because [0,1] is convex. If there is an r ∈ φ(m) such that r ≥ m, then we take ak+1 = m, bk+1 = bk, pk+1 = r, qk+1 = qk. Otherwise, since φ(m) is non-empty, there must be an s ∈ φ(m) such that s ≤ m. In this case let ak+1 = ak, bk+1 = m, pk+1 = pk, qk+1 = s. It can be verified that ak+1, bk+1, pk+1 and qk+1 satisfy conditions (1)–(6). Find a limiting point of the subdivisions. We have a pair of sequences of intervals, and we would like to show that they converge to a limiting point with the Bolzano-Weierstrass theorem. To do so, we construe these two interval sequences as a single sequence of points, (an, pn, bn, qn). This lies in the cartesian product [0,1]×[0,1]×[0,1]×[0,1], which is a compact set by Tychonoff's theorem. Since our sequence (an, pn, bn, qn) lies in a compact set, it must have a convergent subsequence by Bolzano-Weierstrass. Let's fix attention on such a subsequence and let its limit be (a*, p*, b*, q*). Since the graph of φ is closed it must be the case that p* ∈ φ(a*) and q* ∈ φ(b*). Moreover, by condition (5), p* ≥ a* and by condition (6), q* ≤ b*. But since (bi − ai) ≤ 2−i by condition (2), b* − a* = (lim bn) − (lim an) = lim (bn − an) = 0. So, b* equals a*. Let x = b* = a*. Then we have the situation that φ(x) ∋ q* ≤ x ≤ p* ∈ φ(x). Show that the limiting point is a fixed point. If p* = q* then p* = x = q*. Since p* ∈ φ(x), x is a fixed point of φ. Otherwise, recall that we can parameterize the line segment between two points a and b by (1 − t)a + tb with t in the unit interval. Using our finding above that q* ≤ x ≤ p*, we can write x as such a combination of p* and q*: x = tq* + (1 − t)p* for some t ∈ [0,1]. Since φ(x) is convex and both p* and q* belong to φ(x), it once again follows that x must belong to φ(x), and hence x is a fixed point of φ. 
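The one-dimensional argument above can be followed numerically. The sketch below is not part of the article; it assumes the interval-valued map φ(x) = [1 − x/2, 1 − x/4] suggested by the membership check in the Examples section, repeatedly bisects [0,1] while preserving conditions (3)–(6), and converges to a point x with x ∈ φ(x).

def phi(x):
    # Assumed example map on [0, 1]: phi(x) = [1 - x/2, 1 - x/4], matching the
    # check 0.72 in [1 - 0.72/2, 1 - 0.72/4] given earlier.  It has a closed
    # graph and non-empty, convex (interval) values.
    return (1.0 - x / 2.0, 1.0 - x / 4.0)

def kakutani_interval(phi, iterations=60):
    # Follow the proof outline: keep a, b with some p in phi(a), p >= a and
    # some q in phi(b), q <= b, and bisect.  The endpoints a and b converge to
    # a common limit, which is a fixed point of phi.
    a, b = 0.0, 1.0
    p, q = phi(a)[1], phi(b)[0]          # conditions (3)-(6) hold initially
    for _ in range(iterations):
        m = (a + b) / 2.0
        lo, hi = phi(m)                  # phi(m) is the interval [lo, hi]
        if hi >= m:                      # some r in phi(m) with r >= m: take r = hi
            a, p = m, hi
        else:                            # every point of phi(m) lies below m: take s = hi
            b, q = m, hi
    return a

x = kakutani_interval(phi)
lo, hi = phi(x)
print(x, lo <= x <= hi)   # approximately 0.8 True, since phi(0.8) = [0.6, 0.8] contains 0.8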
S is an n-simplex In dimensions greater than one, n-simplices are the simplest objects on which Kakutani's theorem can be proved. Informally, an n-simplex is the higher-dimensional version of a triangle. Proving Kakutani's theorem for set-valued functions defined on a simplex is not essentially different from proving it for intervals. The additional complexity in the higher-dimensional case exists in the first step of chopping up the domain into finer subpieces: Where we split intervals into two at the middle in the one-dimensional case, barycentric subdivision is used to break up a simplex into smaller sub-simplices. While in the one-dimensional case we could use elementary arguments to pick one of the half-intervals in a way that its end-points were moved in opposite directions, in the case of simplices the combinatorial result known as Sperner's lemma is used to guarantee the existence of an appropriate subsimplex. Once these changes have been made to the first step, the second and third steps of finding a limiting point and proving that it is a fixed point are almost unchanged from the one-dimensional case. Arbitrary S Kakutani's theorem for n-simplices can be used to prove the theorem for an arbitrary compact, convex S. Once again we employ the same technique of creating increasingly finer subdivisions. But instead of triangles with straight edges as in the case of n-simplices, we now use triangles with curved edges. In formal terms, we find a simplex which covers S and then move the problem from S to the simplex by using a deformation retract. Then we can apply the already established result for n-simplices. Infinite-dimensional generalizations Kakutani's fixed-point theorem was extended to infinite-dimensional locally convex topological vector spaces by Irving Glicksberg and Ky Fan. To state the theorem in this case, we need a few more definitions: Upper hemicontinuity A set-valued function φ: X→2Y is upper hemicontinuous if for every open set W ⊂ Y, the set {x| φ(x) ⊂ W} is open in X. Kakutani map Let X and Y be topological vector spaces and φ: X→2Y be a set-valued function. If Y is convex, then φ is termed a Kakutani map if it is upper hemicontinuous and φ(x) is non-empty, compact and convex for all x ∈ X. Then the Kakutani–Glicksberg–Fan theorem can be stated as: Let S be a non-empty, compact and convex subset of a Hausdorff locally convex topological vector space. Let φ: S→2S be a Kakutani map. Then φ has a fixed point. The corresponding result for single-valued functions is the Tychonoff fixed-point theorem. There is another version in which the statement of the theorem becomes the same as that in the Euclidean case: Let S be a non-empty, compact and convex subset of a locally convex Hausdorff space. Let φ: S→2S be a set-valued function on S which has a closed graph and the property that φ(x) is non-empty and convex for all x ∈ S. Then the set of fixed points of φ is non-empty and compact. Anecdote In his game theory textbook, Ken Binmore recalls that Kakutani once asked him at a conference why so many economists had attended his talk. When Binmore told him that it was probably because of the Kakutani fixed point theorem, Kakutani was puzzled and replied, "What is the Kakutani fixed point theorem?" References Further reading (Standard reference on fixed-point theory for economists. Includes a proof of Kakutani's theorem.) (Comprehensive high-level mathematical treatment of fixed point theory, including the infinite dimensional analogues of Kakutani's theorem.) 
(Standard reference on general equilibrium theory. Chapter 5 uses Kakutani's theorem to prove the existence of equilibrium prices. Appendix C includes a proof of Kakutani's theorem and discusses its relationship with other mathematical results used in economics.) External links Fixed-point theorems Theorems in convex geometry Theorems in topology General equilibrium theory
Kakutani fixed-point theorem
Mathematics
3,706
440,959
https://en.wikipedia.org/wiki/Mpemba%20effect
The Mpemba effect is the name given to the observation that a liquid (typically water) that is initially hot can freeze faster than the same liquid which begins cold, under otherwise similar conditions. There is disagreement about its theoretical basis and the parameters required to produce the effect. The Mpemba effect is named after Tanzanian Erasto Bartholomeo Mpemba, who described it in 1963 as a secondary school student. The initial discovery and observations of the effect originate in ancient times; Aristotle said that it was common knowledge. Definition The phenomenon, when taken to mean "hot water freezes faster than cold", is difficult to reproduce or confirm because it is ill-defined. Monwhea Jeng proposed a more precise wording: "There exists a set of initial parameters, and a pair of temperatures, such that given two bodies of water identical in these parameters, and differing only in initial uniform temperatures, the hot one will freeze sooner." Even with Jeng's definition, it is not clear whether "freezing" refers to the point at which water forms a visible surface layer of ice, the point at which the entire volume of water becomes a solid block of ice, or when the water reaches . Jeng's definition suggests simple ways in which the effect might be observed, such as if a warmer temperature melts the frost on a cooling surface, thereby increasing thermal conductivity between the cooling surface and the water container. Alternatively, the Mpemba effect may not be evident in situations and under circumstances that at first seem to qualify. Observations Historical context Various effects of heat on the freezing of water were described by ancient scientists, including Aristotle: "The fact that the water has previously been warmed contributes to its freezing quickly: for so it cools sooner. Hence many people, when they want to cool water quickly, begin by putting it in the sun." Aristotle's explanation involved antiperistasis: "...the supposed increase in the intensity of a quality as a result of being surrounded by its contrary quality." Francis Bacon noted that "slightly tepid water freezes more easily than that which is utterly cold." René Descartes wrote in his Discourse on the Method, relating the phenomenon to his vortex theory: "One can see by experience that water that has been kept on a fire for a long time freezes faster than other, the reason being that those of its particles that are least able to stop bending evaporate while the water is being heated." Scottish scientist Joseph Black investigated a special case of the phenomenon by comparing previously boiled with unboiled water; he found that the previously boiled water froze more quickly. Evaporation was controlled for. He discussed the influence of stirring on the results of the experiment, noting that stirring the unboiled water led to it freezing at the same time as the previously boiled water, and also noted that stirring the very-cold unboiled water led to immediate freezing. Joseph Black then discussed Daniel Gabriel Fahrenheit's description of supercooling of water, arguing that the previously boiled water could not be as readily supercooled. Mpemba's observation The effect is named after Tanzanian scientist Erasto Mpemba. He described it in 1963 in Form 3 of Magamba Secondary School, Tanganyika; when freezing a hot ice cream mixture in a cookery class, he noticed that it froze before a cold mixture. He later became a student at Mkwawa Secondary (formerly High) School in Iringa. The headmaster invited Dr. 
Denis Osborne from the University College in Dar es Salaam to give a lecture on physics. After the lecture, Mpemba asked him, "If you take two similar containers with equal volumes of water, one at and the other at , and put them into a freezer, the one that started at freezes first. Why?" Mpemba was at first ridiculed by both his classmates and his teacher. After initial consternation, however, Osborne experimented on the issue back at his workplace and confirmed Mpemba's finding. They published the results together in 1969, while Mpemba was studying at the College of African Wildlife Management. Mpemba and Osborne described placing samples of water in beakers in the icebox of a domestic refrigerator on a sheet of polystyrene foam. They showed the time for freezing to start was longest with an initial temperature of and that it was much less at around . They ruled out loss of liquid volume by evaporation and the effect of dissolved air as significant factors. In their setup, most heat loss was found to be from the liquid surface. Modern experimental work David Auerbach has described an effect that he observed in samples in glass beakers placed into a liquid cooling bath. In all cases the water supercooled, reaching a temperature of typically before spontaneously freezing. Considerable random variation was observed in the time required for spontaneous freezing to start and in some cases this resulted in the water which started off hotter (partially) freezing first. In 2016, Burridge and Linden defined the criterion as the time to reach , carried out experiments, and reviewed published work to date. They noted that the large difference originally claimed had not been replicated, and that studies showing a small effect could be influenced by variations in the positioning of thermometers: "We conclude, somewhat sadly, that there is no evidence to support meaningful observations of the Mpemba effect." In controlled experiments, the effect can entirely be explained by undercooling and the time of freezing was determined by what container was used. Experimental results confirming the Mpemba effect have been criticized for being flawed, not accounting for dissolved solids and gasses, and other confounding factors. Philip Ball, a reviewer for Physics World wrote: "Even if the Mpemba effect is real — if hot water can sometimes freeze more quickly than cold — it is not clear whether the explanation would be trivial or illuminating." Ball wrote that investigations of the phenomenon need to control a large number of initial parameters (including type and initial temperature of the water, dissolved gas and other impurities, and size, shape and material of the container, and temperature of the refrigerator) and need to settle on a particular method of establishing the time of freezing, all of which might affect the presence or absence of the Mpemba effect. The required vast multidimensional array of experiments might explain why the effect is not yet understood. New Scientist recommends starting the experiment with containers at , respectively, to maximize the effect. Suggested explanations While the actual occurrence of the Mpemba effect is disputed, several theoretical explanations could explain its occurrence. In 2017, two research groups independently and simultaneously found a theoretical Mpemba effect and also predicted a new "inverse" Mpemba effect in which heating a cooled, far-from-equilibrium system takes less time than another system that is initially closer to equilibrium. 
Zhiyue Lu and Oren Raz derived a general criterion based on Markovian statistical mechanics, predicting the appearance of the inverse Mpemba effect in the Ising model and diffusion dynamics. Antonio Lasanta and co-authors also predicted the direct and inverse Mpemba effects for a granular gas in a far-from-equilibrium initial state. Lasanta's paper also suggested that a very generic mechanism leading to both Mpemba effects is due to a particle velocity distribution function that significantly deviates from the Maxwell–Boltzmann distribution. James Brownridge, a physicist at Binghamton University, has said that supercooling is involved. Several molecular dynamics simulations have also supported the idea that changes in hydrogen bonding during supercooling play a major role in the process. In 2017, Yunwen Tao and co-authors suggested that the vast diversity and peculiar occurrence of different hydrogen bonds could contribute to the effect. They argued that the number of strong hydrogen bonds increases as temperature is elevated, and that the existence of the small strongly bonded clusters facilitates in turn the nucleation of hexagonal ice when warm water is rapidly cooled down. The authors used vibrational spectroscopy and modelling with density functional theory-optimized water clusters. The following explanations have also been proposed: Microbubble-induced heat transfer: The process of boiling induces microbubbles in the water that remain stably suspended as the water cools and then act by convection to transfer heat more quickly. Evaporation: The evaporation of the warmer water reduces the mass of the water to be frozen. Evaporation is endothermic, meaning that the water mass is cooled by vapor carrying away the heat, but this alone probably does not account for the entirety of the effect. Convection, accelerating heat transfers: Reduction of water density below tends to suppress the convection currents that cool the lower part of the liquid mass; the lower density of hot water would reduce this effect, perhaps sustaining the more rapid initial cooling. Higher convection in the warmer water may also spread ice crystals around faster. Frost: Frost has insulating effects. The lower temperature water will tend to freeze from the top, reducing further heat loss by radiation and air convection, while the warmer water will tend to freeze from the bottom and sides because of water convection. This is disputed as there are experiments that account for this factor. Solutes: Calcium carbonate, magnesium carbonate, and other mineral salts dissolved in water can precipitate out when water is boiled, leading to an increase in the freezing point compared to non-boiled water that contains all the dissolved minerals. Thermal conductivity: The container of hotter liquid may melt through a layer of frost that is acting as an insulator under the container (frost is an insulator, as mentioned above), allowing the container to come into direct contact with a much colder lower layer that the frost formed on (ice, refrigeration coils, etc.) The container now rests on a much colder surface (or one better at removing heat, such as refrigeration coils) than the originally colder water, and so cools far faster from this point on. Conduction through the bottom becomes dominant when the bottom of a hot beaker has been wetted by melted ice and has then frozen stuck to it. In the context of the Mpemba effect it is a mistake to think that the ice under the container insulates, given the poor cooling properties of air by comparison. 
Dissolved gases: Cold water can contain more dissolved gases than hot water, which may somehow change the properties of the water with respect to convection currents, a proposition that has some experimental support but no theoretical explanation. Hydrogen bonding: In warm water, hydrogen bonding is weaker. Crystallization: Another explanation suggests that the relatively higher population of water hexamer states in warm water might be responsible for the faster crystallization. Distribution function: Strong deviations from the Maxwell–Boltzmann distribution result in a potential Mpemba effect showing up in gases. Similar effects Other phenomena in which large effects may be achieved faster than small effects are: Latent heat: Turning ice to water takes the same amount of energy as heating water from to . Leidenfrost effect: Lower temperature boilers can sometimes vaporize water faster than higher temperature boilers. Strong Mpemba effect The possibility of a "strong Mpemba effect" where exponentially faster cooling can occur in a system at particular initial temperatures was predicted in 2017 by Klich, Raz, Hirschberg and Vucelja. In 2020 the strong Mpemba effect was demonstrated experimentally by Avinash Kumar and John Bechhoefer in a colloidal system. Quantum Mpemba effect In 2024, Goold and coworkers described their quantum-mechanical analysis of the abstract problem wherein "an initially hot system is quenched into a cold bath and reaches equilibrium faster than an initially cooler system." In addition to their theoretical work, which used non-equilibrium quantum dynamics, their paper includes computational studies of spin systems which exhibit the effect. They concluded that certain initial conditions of a quantum-dynamical system can lead to a simultaneous increase in the thermalization rate and the free energy. See also Density of water Heat capacity Water cluster Newton's law of cooling References Notes Bibliography Auerbach attributes the Mpemba effect to differences in the behaviour of supercooled formerly hot water and formerly cold water. An extensive study of freezing experiments. External links A possible explanation of the Mpemba Effect An analysis of the Mpemba effect London South Bank University – History and analysis of the Mpemba effect An historical interview with Erasto B. Mpemba, Dr Denis G. Osborne and Ray deSouza High school experiment description, with link to experimental results in the University of California Usenet Physics FAQ Mpemba Competition - Royal Society of Chemistry Physical paradoxes Thermodynamics Phase transitions Unsolved problems in physics Water physics Physical phenomena Hysteresis 1969 in Tanzania Science and technology in Tanzania
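As a baseline for the discussion above, the following sketch (an illustration added here, not taken from the article) shows that under plain Newton's law of cooling the initially hotter sample can never reach the freezing point first, because the two exponential temperature curves cannot cross; any genuine Mpemba effect therefore has to come from mechanisms such as those listed above. All numbers in the code are arbitrary demonstration values.

import math

def time_to_reach(T0, T_target, T_env=-18.0, k=0.01):
    # Newton cooling, dT/dt = -k (T - T_env), gives T(t) = T_env + (T0 - T_env) exp(-k t);
    # solve for the time at which T(t) = T_target.  Temperatures in degrees Celsius.
    return math.log((T0 - T_env) / (T_target - T_env)) / k

t_hot = time_to_reach(80.0, 0.0)    # sample that starts hot
t_cold = time_to_reach(30.0, 0.0)   # sample that starts cool
print(f"hot sample reaches the freezing point after  {t_hot:.1f} time units")
print(f"cold sample reaches the freezing point after {t_cold:.1f} time units")
assert t_cold < t_hot  # in this simple model the cooler sample always wins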
Mpemba effect
Physics,Chemistry,Materials_science,Mathematics,Engineering
2,632
13,215,593
https://en.wikipedia.org/wiki/Lp0%20on%20fire
lp0 on fire (also known as Printer on Fire) is an outdated error message generated on some Unix and Unix-like computer operating systems in response to certain types of printer errors. lp0 is the Unix device handle for the first line printer, but the error can be displayed for any printer attached to a Unix or Linux system. It indicates a printer error that requires further investigation to diagnose, but not necessarily that it is on fire. Printer flammability In the late 1950s, high speed computerized printing was still a somewhat experimental field. The first documented fire-starting printer was a Stromberg-Carlson 5000 xerographic printer (similar to a modern laser printer, but with a CRT as the light source instead of a laser), installed around 1959 at the Lawrence Livermore National Laboratory and modified with an extended fusing oven to achieve a print speed of one page per second. In the event of a printing stall, and occasionally during normal operation, the fusing oven would cause the paper to combust. This fire risk was aggravated by the fact that if the printer continued to operate, it would feed a fire with fresh paper at high speed. However, there is no evidence of the "lp0 on fire" message appearing in any software of the time. As the technology matured, most large printer installations were drum printers, a type of impact printer which could print an entire line of text at once through the use of a high speed rotary printing drum. It was thought that in the event of a severe jam, the friction of paper against the drum could ignite either the paper itself, or, in a dirty machine, the accumulated paper and ink dust in the mechanism. Whether this ever happened is not known; there are no reports of friction-related printer fires. The line printer employed a series of status codes, specifically ready, online, and check. If the online status was set to "off" and the check status was set to "on," the operating system would interpret this as the printer running out of paper. However, if the online code was set to "on" and the check code was also set to "on", it meant that the printer still had paper, but was suffering an error (and may still be attempting to run). Due to the potentially hazardous conditions which could arise in early line printers, UNIX displayed the message "on fire" to motivate any system operator viewing the message to go and check on the line printer immediately. In the early 1980s, Xerox created a prototype laser printer engine and provided units to various computer companies. To fuse the toner, the paper path passed a glowing wire. If paper jammed anywhere in the path, the sheet in the fuser caught fire. The prototype UNIX driver reported paper jams as "on fire." Later print engine models used a hot drum in place of the wire. Phrase origins Michael K. Johnson ("mkj" of Red Hat and Fedora fame) wrote the first Linux version of this error message in 1992. However, he, Herbert Rosmanith and Alan Cox (all Linux developers) have acknowledged that the phrase existed in Unix in different forms prior to his Linux printer implementation. Since then, the lp printer code has spread across all sorts of POSIX-compliant operating systems, which often still retain this legacy message. Modern printer drivers and support have improved and hidden low-level error messages from users, so most Unix/Linux users today have never seen the "on fire" message. The "on fire" message remains in the Linux source code as of version 6.0. 
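The status-code logic described above can be made concrete with a small, purely hypothetical sketch; the function name and the mapping below are invented for illustration and do not reproduce the actual Unix or Linux lp driver code.

```python
# Hypothetical illustration of the (online, check) status logic described above.
# This is not the real lp driver; it only mirrors the decision table in the text.

def interpret_printer_status(online: bool, check: bool) -> str:
    """Map the online/check status pair to the condition described in the article."""
    if check and not online:
        return "out of paper"
    if check and online:
        # Printer still has paper but reports an error and may still be running:
        # early drivers flagged this alarming combination as "on fire".
        return "lp0 on fire"
    return "ready"

if __name__ == "__main__":
    for online in (False, True):
        for check in (False, True):
            print(f"online={online!s:5} check={check!s:5} -> {interpret_printer_status(online, check)}")
```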
The message is also present in other software modules, often to humorous effect. For example, in some kernels' CPU code, a CPU thermal failure could result in the message "CPU#0: Possible thermal failure (CPU on fire ?)" and similar humor can be found in the phrase "halt and catch fire". See also HTTP 418 Not a typewriter PC LOAD LETTER References Computer errors Computer humour Computer printers Fire
Lp0 on fire
Chemistry,Technology
822
15,349,062
https://en.wikipedia.org/wiki/ZNF44
Zinc finger protein 44 is a protein that in humans is encoded by the ZNF44 gene. References Further reading External links Transcription factors
ZNF44
Chemistry,Biology
29
22,148,935
https://en.wikipedia.org/wiki/Manumation
Manumation is the automation of paper-based processes in the public sector and business without any improvement in their efficiency. In manumation, automating an inefficient process does not lead to an improvement. The term can be seen as a sarcastic description of the digital replication and mimicking of frequently ineffective and even broken paper-based processes during the first phase of societal digitalisation, from 1995 to 2015. Manumation is also a term for automated systems that require more manual work than the original manual process. Definitions Examples Computerized transaction processing is the automation of previously manual transactions. See also Semi-automation References Impact of automation Systems analysis
Manumation
Engineering
132
240,060
https://en.wikipedia.org/wiki/Regulatory%20sequence
A regulatory sequence is a segment of a nucleic acid molecule which is capable of increasing or decreasing the expression of specific genes within an organism. Regulation of gene expression is an essential feature of all living organisms and viruses. Description In DNA, regulation of gene expression normally happens at the level of RNA biosynthesis (transcription). It is accomplished through the sequence-specific binding of proteins (transcription factors) that activate or inhibit transcription. Transcription factors may act as activators, repressors, or both. Repressors often act by preventing RNA polymerase from forming a productive complex with the transcriptional initiation region (promoter), while activators facilitate formation of a productive complex. Furthermore, DNA motifs have been shown to be predictive of epigenomic modifications, suggesting that transcription factors play a role in regulating the epigenome. In RNA, regulation may occur at the level of protein biosynthesis (translation), RNA cleavage, RNA splicing, or transcriptional termination. Regulatory sequences are frequently associated with messenger RNA (mRNA) molecules, where they are used to control mRNA biogenesis or translation. A variety of biological molecules may bind to the RNA to accomplish this regulation, including proteins (e.g., translational repressors and splicing factors), other RNA molecules (e.g., miRNA) and small molecules, in the case of riboswitches. Activation and implementation A regulatory DNA sequence does not regulate unless it is activated. Different regulatory sequences are activated and then implement their regulation by different mechanisms. Enhancer activation and implementation Expression of genes in mammals can be upregulated when signals are transmitted to the promoters associated with the genes. Cis-regulatory DNA sequences that are located in DNA regions distant from the promoters of genes can have very large effects on gene expression, with some genes undergoing up to 100-fold increased expression due to such a cis-regulatory sequence. These cis-regulatory sequences include enhancers, silencers, insulators and tethering elements. Among this constellation of sequences, enhancers and their associated transcription factor proteins have a leading role in the regulation of gene expression. Enhancers are sequences of the genome that are major gene-regulatory elements. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. In a study of brain cortical neurons, 24,937 loops were found, bringing enhancers to promoters. Multiple enhancers, each often at tens or hundred of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control expression of their common target gene. The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function specific transcription factor proteins (in 2018 Lambert et al. 
indicated there were about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer, and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, governs the level of transcription of the target gene. Mediator (coactivator) (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (RNAP II) enzyme bound to the promoter. Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two eRNAs as illustrated in the Figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it, and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of a transcription factor bound to an enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating a promoter to initiate transcription of messenger RNA from its target gene. CpG island methylation and demethylation 5-Methylcytosine (5-mC) is a methylated form of the DNA base cytosine (see figure). 5-mC is an epigenetic marker found predominantly on cytosines within CpG dinucleotides, which consist of a cytosine followed by a guanine, reading in the 5′ to 3′ direction along the DNA strand (CpG sites). About 28 million CpG dinucleotides occur in the human genome. In most tissues of mammals, on average, 70% to 80% of CpG cytosines are methylated (forming 5-methyl-CpG, or 5-mCpG). Methylated cytosines within CpG sequences often occur in groups, called CpG islands. About 59% of promoter sequences have a CpG island while only about 6% of enhancer sequences have a CpG island. CpG islands constitute regulatory sequences, since if CpG islands are methylated in the promoter of a gene this can reduce or silence gene expression. DNA methylation regulates gene expression through interaction with methyl binding domain (MBD) proteins, such as MeCP2, MBD1 and MBD2. These MBD proteins bind most strongly to highly methylated CpG islands. These MBD proteins have both a methyl-CpG-binding domain and a transcriptional repression domain. They bind to methylated DNA and guide or direct protein complexes with chromatin remodeling and/or histone modifying activity to methylated CpG islands. MBD proteins generally repress local chromatin by means such as catalyzing the introduction of repressive histone marks or creating an overall repressive chromatin environment through nucleosome remodeling and chromatin reorganization. Transcription factors are proteins that bind to specific DNA sequences in order to regulate the expression of a given gene. The binding sequence for a transcription factor in DNA is usually about 10 or 11 nucleotides long. There are approximately 1,400 different transcription factors encoded in the human genome and they constitute about 6% of all human protein coding genes. About 94% of transcription factor binding sites that are associated with signal-responsive genes occur in enhancers while only about 6% of such sites occur in promoters. EGR1 is a transcription factor important for regulation of methylation of CpG islands. An EGR1 transcription factor binding site is frequently located in enhancer or promoter sequences. 
There are about 12,000 binding sites for EGR1 in the mammalian genome and about half of EGR1 binding sites are located in promoters and half in enhancers. The binding of EGR1 to its target DNA binding site is insensitive to cytosine methylation in the DNA. While only small amounts of EGR1 protein are detectable in cells that are un-stimulated, EGR1 translation into protein at one hour after stimulation is markedly elevated. Expression of EGR1 in various types of cells can be stimulated by growth factors, neurotransmitters, hormones, stress and injury. In the brain, when neurons are activated, EGR1 proteins are upregulated, and they bind to (recruit) pre-existing TET1 enzymes, which are highly expressed in neurons. TET enzymes can catalyze demethylation of 5-methylcytosine. When EGR1 transcription factors bring TET1 enzymes to EGR1 binding sites in promoters, the TET enzymes can demethylate the methylated CpG islands at those promoters. Upon demethylation, these promoters can then initiate transcription of their target genes. Hundreds of genes in neurons are differentially expressed after neuron activation through EGR1 recruitment of TET1 to methylated regulatory sequences in their promoters. Activation by double- or single-strand breaks About 600 regulatory sequences in promoters and about 800 regulatory sequences in enhancers appear to depend on double-strand breaks initiated by topoisomerase 2β (TOP2B) for activation. The induction of particular double-strand breaks is specific with respect to the inducing signal. When neurons are activated in vitro, just 22 TOP2B-induced double-strand breaks occur in their genomes. However, when contextual fear conditioning is carried out in a mouse, this conditioning causes hundreds of gene-associated DSBs in the medial prefrontal cortex and hippocampus, which are important for learning and memory. Such TOP2B-induced double-strand breaks are accompanied by at least four enzymes of the non-homologous end joining (NHEJ) DNA repair pathway (DNA-PKcs, KU70, KU80 and DNA LIGASE IV) (see figure). These enzymes repair the double-strand breaks within about 15 minutes to 2 hours. The double-strand breaks in the promoter are thus associated with TOP2B and at least these four repair enzymes. These proteins are present simultaneously on a single promoter nucleosome (there are about 147 nucleotides in the DNA sequence wrapped around a single nucleosome) located near the transcription start site of their target gene. The double-strand break introduced by TOP2B apparently frees the part of the promoter at an RNA polymerase–bound transcription start site to physically move to its associated enhancer. This allows the enhancer, with its bound transcription factors and mediator proteins, to directly interact with the RNA polymerase that had been paused at the transcription start site to start transcription. Similarly, topoisomerase I (TOP1) enzymes appear to be located at many enhancers, and those enhancers become activated when TOP1 introduces a single-strand break. TOP1 causes single-strand breaks in particular enhancer DNA regulatory sequences when signaled by a specific enhancer-binding transcription factor. Topoisomerase I breaks are associated with different DNA repair factors than those surrounding TOP2B breaks. In the case of TOP1, the breaks are associated most immediately with DNA repair enzymes MRE11, RAD50 and ATR. Examples Genomes can be analyzed systematically to identify regulatory regions. 
Conserved non-coding sequences often contain regulatory regions, and so they are often the subject of these analyses. CAAT box CCAAT box Operator (biology) Pribnow box TATA box SECIS element, mRNA Polyadenylation signal, mRNA A-box Z-box C-box E-box G-box Insulin gene Regulatory sequences for the insulin gene are: A5 Z negative regulatory element (NRE) C2 E2 A3 cAMP response element A2 CAAT enhancer binding (CEB) C1 E1 G1 See also Regulator gene Regulation of gene expression Cis-acting element Gene regulatory network Open Regulatory Annotation Database Operon DNA binding site Promoter Trans-acting factor ORegAnno References External links ORegAnno - Open Regulatory Annotation Database ReMap - database of transcriptional regulators Gene expression
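The CpG-island statistics discussed above (GC-rich regions with an elevated frequency of CpG dinucleotides) are commonly screened for with a simple heuristic based on sequence length, GC content and the observed/expected CpG ratio. The sketch below is a minimal illustration of that calculation; the thresholds are the conventional rule-of-thumb values (at least 200 bp, GC at least 50%, observed/expected at least 0.6) and are assumptions of this example rather than figures taken from this article.

```python
def cpg_island_stats(seq: str):
    """Return (GC fraction, observed/expected CpG ratio) for a DNA sequence."""
    seq = seq.upper()
    n = len(seq)
    g, c = seq.count("G"), seq.count("C")
    cpg_observed = seq.count("CG")                 # CpG = cytosine followed by guanine, 5' -> 3'
    cpg_expected = (c * g) / n if n else 0.0       # expected count if C and G were placed independently
    gc_fraction = (g + c) / n if n else 0.0
    obs_exp = cpg_observed / cpg_expected if cpg_expected else 0.0
    return gc_fraction, obs_exp

def looks_like_cpg_island(seq: str, min_len=200, min_gc=0.5, min_obs_exp=0.6) -> bool:
    """Rule-of-thumb criterion assumed here: >= 200 bp, GC >= 50%, obs/exp CpG >= 0.6."""
    gc_fraction, obs_exp = cpg_island_stats(seq)
    return len(seq) >= min_len and gc_fraction >= min_gc and obs_exp >= min_obs_exp

if __name__ == "__main__":
    promoter_like = "CGCGGGCGCTACGCGCGGGCCGCGTACGCG" * 10   # toy GC- and CpG-rich sequence
    print(cpg_island_stats(promoter_like), looks_like_cpg_island(promoter_like))
```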
Regulatory sequence
Chemistry,Biology
2,329
43,284,277
https://en.wikipedia.org/wiki/Penicillium%20asturianum
Penicillium asturianum is an anamorph fungus species of the genus Penicillium. See also List of Penicillium species References asturianum Fungi described in 1981 Fungus species
Penicillium asturianum
Biology
47
12,510,214
https://en.wikipedia.org/wiki/Carbochemistry
Carbochemistry is the branch of chemistry that studies the transformation of coal (bituminous coal, coal tar, anthracite, lignite, graphite, and charcoal) into useful products and raw materials. The processes that are used in carbochemistry include degasification processes such as carbonization and coking, gasification processes, and liquefaction processes. History The beginning of carbochemistry goes back to the 16th century. At that time, large quantities of charcoal were needed for the smelting of iron ores. Since the production of charcoal required large amounts of slowly regenerating wood, the use of coal was studied. The use of pure coal was difficult because of the amount of liquid and solid by-products that were generated. To improve its handling, the coal was initially treated like wood in kilns to produce coke. Around 1684, John Clayton discovered that coal gas generated from coal was combustible. He described his discovery in the Philosophical Transactions of the Royal Society. See also Bergius process Clean coal technology Coal tar Fischer–Tropsch process Petrochemistry Synthetic Liquid Fuels Program Coking factory References Chemical engineering
Carbochemistry
Chemistry,Engineering
243
71,528,348
https://en.wikipedia.org/wiki/ISO%2025119
ISO 25119, titled "Tractors and machinery for agriculture and forestry – Safety-related parts of control systems", is an international standard for functional safety of electrical and/or electronic systems that are installed in tractors and machines used in agriculture and forestry, defined by the International Organization for Standardization (ISO). Parts of ISO 25119 ISO 25119 consists of following parts: General principles for design and development Concept phase Series development, hardware and software Production, operation, modification and supporting processes See also IEC 61508 References 25119 Safety engineering
ISO 25119
Engineering
109
1,736,364
https://en.wikipedia.org/wiki/Azipod
Azipod is a trademarked azimuth thruster pod design, a marine propulsion unit consisting of a fixed pitch propeller mounted on a steerable gondola ("pod") containing the electric motor driving the propeller, allowing ships to be more maneuverable. They were developed in Finland in the late 1980s jointly by Wärtsilä Marine, Strömberg and the Finnish National Board of Navigation. Although "Azipod" is a registered brand name owned by ABB, it is sometimes used as a generic trademark for podded propulsion units manufactured by other companies. Concept In the conventional azimuth thrusters such as Z-drive and L-drive thrusters, the propeller is driven by an electric motor or a diesel engine inside the ship's hull. The propeller is coupled to the prime mover with shafts and bevel gears that allow rotating the propeller about a vertical axis. This type of propulsion system has a long tradition throughout the 1990s and today such propulsion units are produced by a number of companies around the world. In the Azipod unit, the electric motor is mounted inside the propulsion unit and the propeller is connected directly to the motor shaft. Electric power for the propulsion motor is conducted through slip rings that let the Azipod unit rotate 360 degrees about the vertical axis. Because Azipod units utilize fixed-pitch propellers, power is always fed through a variable-frequency drive or cycloconverter that allows speed and direction control of the propulsion motors. The pod's propeller usually faces forward because in this pulling (or tractor) configuration the propeller is more efficient due to operation in undisturbed flow. Because it can rotate around its mount axis, the pod can apply its thrust in any direction. Azimuth thrusters allow ships to be more maneuverable and enable them to travel backward nearly as efficiently as they can travel forward. In order for the full capabilities of podded propulsion units to be realized in commercial service, shiphandling training on simulators and crewed models is required for the crew. The podded design typically achieved a 9% better fuel efficiency than the conventional propulsion system when it was first installed in the 1990s. In the meantime, improvements to the conventional designs have shrunk the gap to 6–8%, but on the other hand the hydrodynamic flow around the Azipod has been improved by fin retrofits and a dynamic computer optimization of the respective operating angles of the pods in multipod installations, thereby yielding overall efficiency improvements now in the range of 18%. History Development In 1987, the Finnish National Board of Navigation made a co-operation proposal to the electrical equipment company Strömberg (later ABB) and the Finnish shipbuilder Wärtsilä Marine for the development of a new type of electric propulsion unit. Prior to this, the companies had been working together for decades in the field of diesel-electric propulsion systems and in the 1980s produced the first icebreakers with alternating current propulsion motors and cycloconverters. The development of the prototype started in 1989 and the first unit was ready for installation in the following year. The 1.5MW unit, dubbed "Azipod" (short for azimuthing electric podded drive) was installed on the 1979-built Finnish fairway support vessel Seili at Hietalahti shipyard in Helsinki, Finland. After the refit, the vessel's icebreaking performance was considerably increased and she was also found out to be capable of breaking ice astern (backwards). 
This discovery of a new operating mode eventually led to the development of the double acting ship concept in the early 1990s. When Seili was refitted with new propulsion system in the 2000s, the prototype unit was donated to Forum Marinum and put on display in Turku, Finland. Following the encouraging experiences from the prototype installation, the development of the Azipod concept continued and the next units were retrofitted on two Finnish oil tankers, Uikku and Lunni, in 1993 and 1994, respectively. Nearly eight times as powerful as the prototype, the 11.4MW Azipod units considerably increased the icegoing ability of the vessels that were already built with independent icebreaking capability in mind. Since the 1990s, the vast majority of ships capable of operating in ice without icebreaker escort have been fitted with Azipod propulsion system. The first three Azipod units were of so-called "pushing" type in which the propeller is mounted behind the gondola. In the subsequent installations, ABB adopted the more efficient "pulling" configuration similar to propeller-driven airplanes. The world's first cruise ship fitted with Azipod propulsion units, Elation, was delivered by Kværner Masa-Yards Helsinki shipyard in the spring of 1998. Even though the Azipod was initially developed for icebreaking vessels, cruise ships have become the largest group of ships by type to be fitted with Azipod propulsion system since the 1990s and the success of the electric podded propulsion units has paved the way for competitors such as the Rolls-Royce Mermaid. Among the vessels fitted with Azipod units are Royal Caribbean International's Voyager-, Freedom-, Oasis- and Icon-class cruise ships, each of which held the title of the largest cruise ship in the world at the time of delivery. Another further development of the original electric podded propulsion concept is the Compact Azipod, a smaller Azipod unit introduced in the early 2000s. It is intended for smaller ships such as research vessels and yachts as well as dynamically positioned drilling rigs that may utilize up to eight such propulsors. The smaller Azipod Compact differs from the full-size unit by its permanent magnet synchronous motor which is directly cooled by sea water. For drilling vessels, it is also available in "pushing" configuration and can be fitted with a nozzle to increase bollard pull thrust in stationkeeping applications. Unlike the full-sized Azipod units which are assembled in Finland, the Compact Azipod units are manufactured in China. Bearing-related problems During the initial years in service, some widely publicised cruise ship service disruptions with the bigger Azipod V design have occurred. The latest design, the Azipod X, incorporates these improvements, with a view to a service interval of five years, and features bearings that can be taken apart and repaired from inside the pod while the ship is harbored normally. See also Manoeuvring thruster Azimuth thruster Z-drive Cycloidal drive Rim-driven thruster References Marine propulsion Finnish inventions
Azipod
Engineering
1,349
21,534,694
https://en.wikipedia.org/wiki/Spinmechatronics
Spinmechatronics is a neologism referring to an emerging field of research concerned with the exploitation of spin-dependent phenomena and established spintronic methodologies and technologies in conjunction with electro-mechanical, magno-mechanical, acousto-mechanical and opto-mechanical systems. Most especially, spinmechatronics (or spin mechatronics) concerns the integration of micro- and nano- mechatronic systems with spin physics and spintronics. History and origins While spinmechatronics has been recognised only recently (2008) as an independent field, hybrid spin-mechanical system development dates back to the early nineteen-nineties, with devices combining spintronics and micromechanics emerging at the turn of the twenty-first century. One of the longest established spinmechatronic systems is the Magnetic Resonance Force Microscope or MRFM. First proposed by J. A. Sidles in a seminal paper of 1991 – and since extensively developed both theoretically and experimentally by a number of international research groups – the MRFM operates by coupling a magnetically loaded micro-mechanical cantilever to an excited nuclear, proton or electron spin system. The MRFM concept effectively combines scanning atomic force microscopy (AFM) with magnetic resonance spectroscopy to provide a spectroscopic tool of unparalleled sensitivity. Nanometre resolution is possible, and the technique potentially forms the basis for ultra-high sensitivity, ultra-high resolution magnetic, biochemical, biomedical, and clinical diagnostics. The synergy of micromechanics and established spintronic technologies for sensing applications is one of the most significant spinmechatronic developments of the last decade. At the beginning of this century, strain sensors incorporating magnetoresistive technologies emerged and a wide range of devices exploiting similar principles are likely to realize research and commercial potential by 2015. Contemporary innovation in spinmechatronics drives forward the independent advancement of cutting-edge science in spin physics, spintronics and micro- and nano-mechatronics and catalyses the development of wholly new instrumentation, control and fabrication techniques to facilitate and exploit their integration. Key constitutive technologies Micro- and nano- mechatronics MEMS: micro-electromechanical systems are the key ingredient of micro-mechatronics. Micro-electromechanical systems are – as the name suggests – devices with significant dimensions in the micrometre regime or less. Highly suited to integration with electronic and microwave circuitry, they provide the key to electro-mechanical functionalities unachievable with classical precision mechatronics. Commercialisation of mass-produced Microelectromechanical systems products is rapidly picking up pace and includes printer ink-jet technology, 3D accelerometers, integrated pressure sensors, and Digital Light Processing (DLP) displays. At the cutting edge of Microelectromechanical systems fabrication and integration technologies are nano- electromechanical systems (NEMS). Typical examples are micrometres long, tens of nanometres thick, and have mechanical resonance frequencies approaching 100 MHz. Their small physical dimensions and mass (of order pico-grams) make them highly sensitive to changes in stiffness; this, their synergy with mechanical and data processing systems, and the option of attaching chemical/biological molecules, make them ideal for ultra high-performance mechanical, chemical and biological sensing applications. 
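The mass and frequency figures quoted above can be related through the textbook harmonic-oscillator formula f0 = (1/2π)·sqrt(k/m), which also shows why such resonators act as sensitive mass and stiffness detectors. The sketch below uses assumed example values (a picogram-scale effective mass and a 100 MHz resonance) purely for illustration.

```python
import math

# Textbook harmonic-oscillator estimate for a NEMS resonator, using assumed
# example values consistent with the figures quoted above.
m = 1e-15            # effective mass in kg (~1 picogram, assumed)
f0 = 100e6           # resonance frequency in Hz (assumed)

# Effective stiffness implied by f0 = (1 / (2*pi)) * sqrt(k / m)
k = m * (2 * math.pi * f0) ** 2
print(f"implied stiffness k ~ {k:.0f} N/m")

# A small added mass dm shifts the resonance by df ~ -f0 * dm / (2 m),
# which is the basis of resonant mass sensing with such devices.
dm = 1e-21           # attogram-scale adsorbed mass (assumed)
df = -f0 * dm / (2 * m)
print(f"frequency shift for {dm:.0e} kg added mass: {df:.1f} Hz")
```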
Spin physics Spin physics is a broad and active area of condensed-matter physics research. ‘Spin’ in this context refers to a quantum mechanical property of certain elementary particles and nuclei, and should not be confused with the classical (and better-known) concept of rotation. Spin physics spans studies of nuclear, electron and proton magnetic resonance, magnetism, and certain areas of optics. Spintronics is a branch of spin physics. Perhaps the two best known applications of spin physics are Magnetic Resonance Imaging (or MRI) and the spintronic giant-magnetoresistive (GMR) hard disk read head. Spintronics Spintronic magnetoresistance is a major scientific and commercial success story. Today, most families own a spintronic device: the giant-magnetoresistive (GMR) hard disk read head in their computer. The science that gave rise to this phenomenal business opportunity – and earned the 2007 Nobel Prize for Physics – was the recognition that electrical carriers are characterized by both charge and spin. Today, tunnelling-magnetoresistance (TMR) – which uses the electron spin as a label to allow or forbid electron tunnelling – dominates the hard disk market and is rapidly establishing itself in areas as diverse as magnetic logic devices and biosensors. Ongoing development is pushing the frontiers of TMR devices towards the nanoscale. See also Spintronics Mechatronics Microelectromechanical systems Nanoelectromechanical systems Magnetic resonance force microscopy List of nanotechnology applications Giant magnetoresistance Albert Fert Peter Grünberg Nobel Prize in Physics Hard disk drive Magnetism Quantum tunnelling References External links Electric and magnetic fields in matter Electrical engineering Materials science Nanoelectronics Microtechnology Electromechanical engineering Control engineering
Spinmechatronics
Physics,Chemistry,Materials_science,Engineering
1,064
12,799
https://en.wikipedia.org/wiki/Graphic%20design
Graphic design is a profession, academic discipline and applied art whose activity consists in projecting visual communications intended to transmit specific messages to social groups, with specific objectives. Graphic design is an interdisciplinary branch of design and of the fine arts. Its practice involves creativity, innovation and lateral thinking using manual or digital tools, where it is usual to use text and graphics to communicate visually. The role of the graphic designer in the communication process is that of the encoder or interpreter of the message. They work on the interpretation, ordering, and presentation of visual messages. Usually, graphic design uses the aesthetics of typography and the compositional arrangement of the text, ornamentation, and imagery to convey ideas, feelings, and attitudes beyond what language alone expresses. The design work can be based on a customer's demand, a demand that ends up being established linguistically, either orally or in writing, that is, that graphic design transforms a linguistic message into a graphic manifestation. Graphic design has, as a field of application, different areas of knowledge focused on any visual communication system. For example, it can be applied in advertising strategies, or it can also be applied in the aviation world or space exploration. In this sense, in some countries graphic design is related as only associated with the production of sketches and drawings, this is incorrect, since visual communication is a small part of a huge range of types and classes where it can be applied. With origins in Antiquity and the Middle Ages, graphic design as applied art was initially linked to the boom of the rise of printing in Europe in the 15th century and the growth of consumer culture in the Industrial Revolution. From there it emerged as a distinct profession in the West, closely associated with advertising in the 19th century and its evolution allowed its consolidation in the 20th century. Given the rapid and massive growth in information exchange today, the demand for experienced designers is greater than ever, particularly because of the development of new technologies and the need to pay attention to human factors beyond the competence of the engineers who develop them. Terminology The term "graphic design" makes an early appearance in a 4 July 1908 issue (volume 9, number 27) of Organized Labor, a publication of the Labor Unions of San Francisco, in an article about technical education for printers: An Enterprising Trades Union … The admittedly high standard of intelligence which prevails among printers is an assurance that with the elemental principles of design at their finger ends many of them will grow in knowledge and develop into specialists in graphic design and decorating. … A decade later, the 1917–1918 course catalog of the California School of Arts & Crafts advertised a course titled Graphic Design and Lettering, which replaced one called Advanced Design and Lettering. Both classes were taught by Frederick Meyer. History In both its lengthy history and in the relatively recent explosion of visual communication in the 20th and 21st centuries, the distinction between advertising, art, graphic design and fine art has disappeared. They share many elements, theories, principles, practices, languages and sometimes the same benefactor or client. In advertising, the ultimate objective is the sale of goods and services. 
In graphic design, "the essence is to give order to information, form to ideas, expression, and feeling to artifacts that document the human experience." The definition of the graphic designer profession is relatively recent concerning its preparation, activity, and objectives. Although there is no consensus on an exact date when graphic design emerged, some date it back to the Interwar period. Others understand that it began to be identified as such by the late 19th century. It can be argued that graphic communications with specific purposes have their origins in Paleolithic cave paintings and the birth of written language in the third millennium BCE. However, the differences in working methods, auxiliary sciences, and required training are such that it is not possible to clearly identify the current graphic designer with prehistoric man, the 15th-century xylographer, or the lithographer of 1890. The diversity of opinions stems from some considering any graphic manifestation as a product of graphic design, while others only recognize those that arise as a result of the application of an industrial production model—visual manifestations that have been "projected" to address various needs: productive, symbolic, ergonomic, contextual, among others. Nevertheless, the evolution of graphic design as a practice and profession has been closely linked to technological innovations, social needs, and the visual imagination of professionals. Graphic design has been practiced in various forms throughout history; in fact, good examples of graphic design date back to manuscripts from ancient China, Egypt, and Greece. As printing and book production developed in the 15th century, advances in graphic design continued over the subsequent centuries, with composers or typographers often designing pages according to established type. By the late 19th century, graphic design emerged as a distinct profession in the West, partly due to the process of labor specialization that occurred there and partly due to the new technologies and business possibilities brought about by the Industrial Revolution. New production methods led to the separation of the design of a communication medium (such as a poster) from its actual production. Increasingly, throughout the 19th and early 20th centuries, advertising agencies, book publishers, and magazines hired art directors who organized all visual elements of communication and integrated them into a harmonious whole, creating an expression appropriate to the content. In 1922, typographer William A. Dwiggins coined the term graphic design to identify the emerging field. Throughout the 20th century, the technology available to designers continued to advance rapidly, as did the artistic and commercial possibilities of design. The profession expanded greatly, and graphic designers created, among other things, magazine pages, book covers, posters, CD covers, postage stamps, packaging, brands, signs, advertisements, kinetic titles for TV programs and movies, and websites. By the early 21st century, graphic design had become a global profession as advanced technology and industry spread worldwide. Historical background In China, during the Tang dynasty (618–907) wood blocks were cut to print on textiles and later to reproduce Buddhist texts. A Buddhist scripture printed in 868 is the earliest known printed book. Beginning in the 11th century in China, longer scrolls and books were produced using movable type printing, making books widely available during the Song dynasty (960–1279). 
In the mid-15th century in Mainz, Germany, Johannes Gutenberg developed a way to reproduce printed pages at a faster pace using movable type made with a new metal alloy that created a revolution in the dissemination of information. Nineteenth century In 1849, Henry Cole became one of the major forces in design education in Great Britain, informing the government of the importance of design in his Journal of Design and Manufactures. He organized the Great Exhibition as a celebration of modern industrial technology and Victorian design. From 1891 to 1896, William Morris' Kelmscott Press was a leader in graphic design associated with the Arts and Crafts movement, creating hand-made books in medieval and Renaissance era style, in addition to wallpaper and textile designs. Morris' work, along with the rest of the Private Press movement, directly influenced Art Nouveau. Will H. Bradley became one of the notable graphic designers in the late nineteenth-century due to creating art pieces in various Art Nouveau styles. Bradley created a number of designs as promotions for a literary magazine titled The Chap-Book. Twentieth century In 1917, Frederick H. Meyer, director and instructor at the California School of Arts and Crafts, taught a class entitled "Graphic Design and Lettering". Raffe's Graphic Design, published in 1927, was the first book to use "Graphic Design" in its title. In 1936, author and graphic designer Leon Friend published his book titled "Graphic Design" and it is known to be the first piece of literature to cover the topic extensively. The signage in the London Underground is a classic design example of the modern era. Although he lacked artistic training, Frank Pick led the Underground Group design and publicity movement. The first Underground station signs were introduced in 1908 with a design of a solid red disk with a blue bar in the center and the name of the station. The station name was in white sans-serif letters. It was in 1916 when Pick used the expertise of Edward Johnston to design a new typeface for the Underground. Johnston redesigned the Underground sign and logo to include his typeface on the blue bar in the center of a red circle. In the 1920s, Soviet constructivism applied 'intellectual production' in different spheres of production. The movement saw individualistic art as useless in revolutionary Russia and thus moved towards creating objects for utilitarian purposes. They designed buildings, film and theater sets, posters, fabrics, clothing, furniture, logos, menus, etc. Jan Tschichold codified the principles of modern typography in his 1928 book, New Typography. He later repudiated the philosophy he espoused in this book as fascistic, but it remained influential. Tschichold, Bauhaus typographers such as Herbert Bayer and László Moholy-Nagy and El Lissitzky greatly influenced graphic design. They pioneered production techniques and stylistic devices used throughout the twentieth century. The following years saw graphic design in the modern style gain widespread acceptance and application. The professional graphic design industry grew in parallel with consumerism. This raised concerns and criticisms, notably from within the graphic design community with the First Things First manifesto. 
First launched by Ken Garland in 1964, it was re-published as the First Things First 2000 manifesto in 1999 in the magazine Emigre 51 stating "We propose a reversal of priorities in favor of more useful, lasting and democratic forms of communication – a mindshift away from product marketing and toward the exploration and production of a new kind of meaning. The scope of debate is shrinking; it must expand. Consumerism is running uncontested; it must be challenged by other perspectives expressed, in part, through the visual languages and resources of design." Applications Graphic design can have many applications, from road signs to technical schematics and reference manuals. It is often used in branding products and elements of company identity such as logos, colors, packaging, labelling and text. From scientific journals to news reporting, the presentation of opinions and facts is often improved with graphics and thoughtful compositions of visual information – known as information design. With the advent of the web, information designers with experience in interactive tools are increasingly used to illustrate the background to news stories. Information design can include Data and information visualization, which involves using programs to interpret and form data into a visually compelling presentation, and can be tied in with information graphics. Skills A graphic design project may involve the creative presentation of existing text, ornament, and images. The "process school" is concerned with communication; it highlights the channels and media through which messages are transmitted and by which senders and receivers encode and decode these messages. The semiotic school treats a message as a construction of signs which through interaction with receivers, produces meaning; communication as an agent. Typography Typography includes type design, modifying type glyphs and arranging type. Type glyphs (characters) are created and modified using illustration techniques. Type arrangement is the selection of typefaces, point size, tracking (the space between all characters used), kerning (the space between two specific characters) and leading (line spacing). Typography is performed by typesetters, compositors, typographers, graphic artists, art directors, and clerical workers. Until the digital age, typography was a specialized occupation. Certain fonts communicate or resemble stereotypical notions. For example, the 1942 Report is a font which types text akin to a typewriter or a vintage report. Page layout Page layout deals with the arrangement of elements (content) on a page, such as image placement, text layout and style. Page design has always been a consideration in printed material and more recently extended to displays such as web pages. Elements typically consist of type (text), images (pictures), and (with print media) occasionally place-holder graphics such as a dieline for elements that are not printed with ink such as die/laser cutting, foil stamping or blind embossing. Grids A grid serves as a method of arranging both space and information, allowing the reader to easily comprehend the overall project. Furthermore, a grid functions as a container for information and a means of establishing and maintaining order. Despite grids being utilized for centuries, many graphic designers associate them with Swiss design. The desire for order in the 1940s resulted in a highly systematic approach to visualizing information. 
However, grids were later regarded as tedious and uninteresting, earning the label of "designersaur." Today, grids are once again considered crucial tools for professionals, whether they are novices or veterans. Tools In the mid-1980s desktop publishing and graphic art software applications introduced computer image manipulation and creation capabilities that had previously been manually executed. Computers enabled designers to instantly see the effects of layout or typographic changes, and to simulate the effects of traditional media. Traditional tools such as pencils can be useful even when computers are used for finalization; a designer or art director may sketch numerous concepts as part of the creative process. Styluses can be used with tablet computers to capture hand drawings digitally. Computers and software Designers disagree whether computers enhance the creative process. Some designers argue that computers allow them to explore multiple ideas quickly and in more detail than can be achieved by hand-rendering or paste-up. While other designers find the limitless choices from digital design can lead to paralysis or endless iterations with no clear outcome. Most designers use a hybrid process that combines traditional and computer-based technologies. First, hand-rendered layouts are used to get approval to execute an idea, then the polished visual product is produced on a computer. Graphic designers are expected to be proficient in software programs for image-making, typography and layout. Nearly all of the popular and "industry standard" software programs used by graphic designers since the early 1990s are products of Adobe Inc. Adobe Photoshop (a raster-based program for photo editing) and Adobe Illustrator (a vector-based program for drawing) are often used in the final stage. CorelDraw, a vector graphics editing software developed and marketed by Corel Corporation, is also used worldwide. Designers often use pre-designed raster images and vector graphics in their work from online design databases. Raster images may be edited in Adobe Photoshop, vector logos and illustrations in Adobe Illustrator and CorelDraw, and the final product assembled in one of the major page layout programs, such as Adobe InDesign, Serif PagePlus and QuarkXPress. Many free and open-source programs are also used by both professionals and casual graphic designers. Inkscape uses Scalable Vector Graphics (SVG) as its primary file format and allows importing and exporting other formats. Other open-source programs used include GIMP for photo-editing and image manipulation, Krita for digital painting, and Scribus for page layout. Related design fields Print design A specialized branch of graphic design and historically its earliest form, print design involves creating visual content intended for reproduction on physical substrates such as silk, paper, and later, plastic, for mass communication and persuasion (e.g., marketing, governmental publishing, propaganda). Print design techniques have evolved over centuries, beginning with the invention of movable type by the Chinese alchemist Pi Sheng, later refined by the German inventor Johannes Gutenberg. Over time, methods such as lithography, screen printing, and offset printing have been developed, culminating in the contemporary use of digital presses that integrate traditional print techniques with modern digital technology. 
Interface design Since the advent of personal computers, many graphic designers have become involved in interface design, in an environment commonly referred to as a Graphical user interface (GUI). This has included web design and software design when end user-interactivity is a design consideration of the layout or interface. Combining visual communication skills with an understanding of user interaction and online branding, graphic designers often work with software developers and web developers to create the look and feel of a web site or software application. An important aspect of interface design is icon design. User experience design User experience design (UX) is the study, analysis, and development of creating products that provide meaningful and relevant experiences to users. This involves the creation of the entire process of acquiring and integrating the product, including aspects of branding, design, usability, and function. UX design involves creating the interface and interactions for a website or application, and is considered both an act and an art. This profession requires a combination of skills, including visual design, social psychology, development, project management, and most importantly, empathy towards the end-users. Experiential graphic design Experiential graphic design is the application of communication skills to the built environment. This area of graphic design requires practitioners to understand physical installations that have to be manufactured and withstand the same environmental conditions as buildings. As such, it is a cross-disciplinary collaborative process involving designers, fabricators, city planners, architects, manufacturers and construction teams. Experiential graphic designers try to solve problems that people encounter while interacting with buildings and space (also called environmental graphic design). Examples of practice areas for environmental graphic designers are wayfinding, placemaking, branded environments, exhibitions and museum displays, public installations and digital environments. Occupations Graphic design career paths cover all parts of the creative spectrum and often overlap. Workers perform specialized tasks, such as design services, publishing, advertising and public relations. As of 2023, median pay was $58,910 per year. The main job titles within the industry are often country specific. They can include graphic designer, art director, creative director, animator and entry level production artist. Depending on the industry served, the responsibilities may have different titles such as "DTP associate" or "Graphic Artist". The responsibilities may involve specialized skills such as illustration, photography, animation, visual effects or interactive design. Employment in design of online projects was expected to increase by 35% by 2026, while employment in traditional media, such as newspaper and book design, expect to go down by 22%. Graphic designers will be expected to constantly learn new techniques, programs, and methods. Graphic designers can work within companies devoted specifically to the industry, such as design consultancies or branding agencies, others may work within publishing, marketing or other communications companies. Especially since the introduction of personal computers, many graphic designers work as in-house designers in non-design oriented organizations. Graphic designers may also work freelance, working on their own terms, prices, ideas, etc. 
A graphic designer typically reports to the art director, creative director or senior media creative. As a designer becomes more senior, they spend less time designing and more time leading and directing other designers on broader creative activities, such as brand development and corporate identity development. They are often expected to interact more directly with clients, for example taking and interpreting briefs. Crowdsourcing in graphic design Jeff Howe of Wired Magazine first used the term "crowdsourcing" in his 2006 article, "The Rise of Crowdsourcing." It spans such creative domains as graphic design, architecture, apparel design, writing, illustration, and others. Tasks may be assigned to individuals or a group and may be categorized as convergent or divergent. An example of a divergent task is generating alternative designs for a poster. An example of a convergent task is selecting one poster design. Companies, startups, small businesses and entrepreneurs have all benefitted from design crowdsourcing since it helps them source graphic designs at a fraction of the budget they previously spent. Getting a logo design through crowdsourcing is one of the most common examples. Major companies that operate in the design crowdsourcing space are generally referred to as design contest sites. Role of graphic design Graphic design is essential for advertising, branding, and marketing, influencing how people act. Good graphic design builds strong, recognizable brands, communicates messages clearly, and shapes how consumers see and react to things. One way that graphic design influences consumer behavior is through the use of visual elements, such as color, typography, and imagery. Studies have shown that certain colors can evoke specific emotions and behaviors in consumers, and that typography can influence how information is perceived and remembered. For example, serif fonts are often associated with tradition and elegance, while sans-serif fonts are seen as modern and minimalistic. These factors can all impact the way consumers perceive a brand and its messaging. Another way that graphic design impacts consumer behavior is through its ability to communicate complex information in a clear and accessible way. For example, infographics and data visualizations can help to distill complex information into a format that is easy to understand and engaging for consumers. This can help to build trust and credibility with consumers, and encourage them to take action. Ethical consideration in graphic design Ethics are an important consideration in graphic design, particularly when it comes to accurately representing information and avoiding harmful stereotypes. Graphic designers have a responsibility to ensure that their work is truthful, accurate, and free from any misleading or deceptive elements. This requires a commitment to honesty, integrity, and transparency in all aspects of the design process. One of the key ethical considerations in graphic design is the responsibility to accurately represent information. This means ensuring that any claims or statements made in advertising or marketing materials are true and supported by evidence. For example, a company should not use misleading statistics to promote their product or service, or make false claims about its benefits. Graphic designers must take care to accurately represent information in all visual elements, such as graphs, charts, and images, and avoid distorting or misrepresenting data. 
Another important ethical consideration in graphic design is the need to avoid harmful stereotypes. This means avoiding any images or messaging that perpetuate negative or harmful stereotypes based on race, gender, religion, or other characteristics. Graphic designers should strive to create designs that are inclusive and respectful of all individuals and communities, and avoid reinforcing negative attitudes or biases. Future of graphic design The future of graphic design is likely to be heavily influenced by emerging technologies and social trends. Advancements in areas such as artificial intelligence, virtual and augmented reality, and automation are likely to transform the way that graphic designers work and create designs. Social trends, such as a greater focus on sustainability and inclusivity, are also likely to impact the future of graphic design. One area where emerging technologies are likely to have a significant impact on graphic design is in the automation of certain tasks. Machine learning algorithms, for example, can analyze large datasets and create designs based on patterns and trends, freeing up designers to focus on more complex and creative tasks. Virtual and augmented reality technologies may also allow designers to create immersive and interactive experiences for users, blurring the lines between the digital and physical worlds. Artificial intelligence has also led to many challenges within the world of graphic design. Some of those challenges include maintaining brand authenticity, ensuring quality, issues of bias, and preserving creative control. Social trends are also likely to shape the future of graphic design. As consumers become more conscious of environmental issues, for example, there may be a greater demand for designs that prioritize sustainability and minimize waste. Similarly, there is likely to be a growing focus on inclusivity and diversity in design, with designers seeking to create designs that are accessible and representative of a wide range of individuals and communities. See also Related areas Related topics References Bibliography Fiell, Charlotte and Fiell, Peter (editors). Contemporary Graphic Design. Taschen Publishers, 2008. Wiedemann, Julius and Taborda, Felipe (editors). Latin-American Graphic Design. Taschen Publishers, 2008. External links The Universal Arts of Graphic Design – Documentary produced by Off Book Graphic Designers, entry in the Occupational Outlook Handbook of the Bureau of Labor Statistics of the United States Department of Labor Communication design
Graphic design
Engineering
4,898
21,539,688
https://en.wikipedia.org/wiki/Optical%20Mechanics%2C%20Inc.
Optical Mechanics, Inc. or OMI is a high-end American telescope and optics instrument manufacturer. OMI was founded in 2002 and produces observatory telescopes, Lidar telescopes, optical tube assemblies, telescope mirrors and reflective coatings for mirrors. OMI mirrors are used by other telescope makers such as Obsession Telescopes. Also taking on custom projects, they produced the 48-inch Dob, a 48 in (4 ft) aperture Dobsonian telescope called "Barbarella", which was featured in Astronomy Technology Today magazine (June 2008 issue). OMI is located in the US state of Iowa. OMI procured the assets of the former optics company Torus Technologies. OMI has an optics shop where it does work on telescopes. OMI produced the 60 cm, f/10 telescope for TUBITAK National Observatory in Turkey. OMI built the telescope mount for the SuperWASP telescope. The robotic Rigel Telescope, a 0.37-meter (14.5 in) f/14 telescope controlled by the Talon program, was finished in 2002. OMI helped refurbish the Gueymard Research Telescope at The George Observatory at Brazos Bend State Park. The telescope's mirror was degrading after many decades of use and exposure. The 10 ton, 36 inch aperture telescope was acquired from Louisiana State University in 1990. It is a Cassegrain telescope, and one of the largest that is open to public viewing through an eyepiece. OMI had to strip off the aluminum coating and re-surface the glass mirror. The mirror was ground to a hyperbolic shape, and the refurbishment was conducted in 2014. OMI refitted the 0.8 m telescope at McDonald Observatory in 2011 to 2012. Another telescope OMI built was the CESAR Cebreros Optical Telescope at the Cebreros observatory at the ESA Deep Space Tracking Station. The telescope is a Cassegrain design with a 50 cm aperture and an f/10 focal ratio (see also Cebreros Station). OMI Evolution-30 OMI developed the 30-inch mirror for Obsession Telescopes' 30-inch reflector. When Obsession withdrew from the 30-inch market, OMI still wanted to offer their 30-inch mirrors. Drawing on help from Obsession Telescopes and their own experience with the OMI 48-inch telescope, they offered the OMI Evolution-30 in 2009. References External links OMI Telescope manufacturers Optics manufacturing companies Companies established in 2002 2002 establishments in the United States
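The focal ratios quoted above (f/10, f/14) relate aperture to focal length by the standard relation focal length = aperture × focal ratio. A minimal sketch of that arithmetic, using only the aperture and f-ratio figures quoted in the article above (not OMI's own specifications):

```python
# Focal length of a telescope from its aperture and focal ratio:
# focal_length = aperture * focal_ratio.
# The apertures and ratios below are the figures quoted in the text above.

def focal_length_m(aperture_m: float, focal_ratio: float) -> float:
    """Return the focal length in metres for a given aperture and f-ratio."""
    return aperture_m * focal_ratio

telescopes = {
    "TUBITAK 60 cm f/10": (0.60, 10),
    "Rigel 0.37 m f/14": (0.37, 14),
    "CESAR Cebreros 50 cm f/10": (0.50, 10),
}

for name, (aperture, ratio) in telescopes.items():
    print(f"{name}: focal length ≈ {focal_length_m(aperture, ratio):.2f} m")
# TUBITAK ≈ 6.00 m, Rigel ≈ 5.18 m, Cebreros ≈ 5.00 m
```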
Optical Mechanics, Inc.
Astronomy
490
23,729,712
https://en.wikipedia.org/wiki/C6H7NO3S
The molecular formula C6H7NO3S (molar mass: 173.190 g/mol, exact mass: 173.0147 u) may refer to: Metanilic acid Orthanilic acid Piloty's acid Sulfanilic acid
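The molar mass quoted above can be checked by summing standard atomic weights over the formula. A minimal sketch of that arithmetic (the atomic weights are approximate standard values, not taken from this article):

```python
# Approximate molar mass of C6H7NO3S from standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}
FORMULA = {"C": 6, "H": 7, "N": 1, "O": 3, "S": 1}

molar_mass = sum(ATOMIC_WEIGHT[element] * count for element, count in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # ≈ 173.19 g/mol, matching the value given above
```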
C6H7NO3S
Chemistry
72
43,972
https://en.wikipedia.org/wiki/Partial%20pressure
In a mixture of gases, each constituent gas has a partial pressure which is the notional pressure of that constituent gas as if it alone occupied the entire volume of the original mixture at the same temperature. The total pressure of an ideal gas mixture is the sum of the partial pressures of the gases in the mixture (Dalton's Law). The partial pressure of a gas is a measure of thermodynamic activity of the gas's molecules. Gases dissolve, diffuse, and react according to their partial pressures but not according to their concentrations in gas mixtures or liquids. This general property of gases is also true in chemical reactions of gases in biology. For example, the necessary amount of oxygen for human respiration, and the amount that is toxic, is set by the partial pressure of oxygen alone. This is true across a very wide range of different concentrations of oxygen present in various inhaled breathing gases or dissolved in blood; consequently, mixture ratios, like that of breathable 20% oxygen and 80% nitrogen, are determined by volume instead of by weight or mass. Furthermore, the partial pressures of oxygen and carbon dioxide are important parameters in tests of arterial blood gases. That said, these pressures can also be measured in, for example, cerebrospinal fluid. Symbol The symbol for pressure is usually P or p, which may use a subscript to identify the pressure, and gas species are also referred to by subscript. When combined, these subscripts are applied recursively. Examples: P_1 or p_1 = pressure at time 1; P_{H2} or p_{H2} = partial pressure of hydrogen; P_{aO2} or p_{aO2} or PaO2 = arterial partial pressure of oxygen; P_{vO2} or p_{vO2} or PvO2 = venous partial pressure of oxygen. Dalton's law of partial pressures Dalton's law expresses the fact that the total pressure of a mixture of ideal gases is equal to the sum of the partial pressures of the individual gases in the mixture. This equality arises from the fact that in an ideal gas, the molecules are so far apart that they do not interact with each other. Most actual real-world gases come very close to this ideal. For example, given an ideal gas mixture of nitrogen (N2), hydrogen (H2) and ammonia (NH3): p = p_{N2} + p_{H2} + p_{NH3}, where: p = total pressure of the gas mixture, p_{N2} = partial pressure of nitrogen (N2), p_{H2} = partial pressure of hydrogen (H2), p_{NH3} = partial pressure of ammonia (NH3). Ideal gas mixtures Ideally the ratio of partial pressures equals the ratio of the number of molecules. That is, the mole fraction x_i of an individual gas component in an ideal gas mixture can be expressed in terms of the component's partial pressure or the moles of the component: x_i = p_i / p = n_i / n, and the partial pressure of an individual gas component in an ideal gas can be obtained using this expression: p_i = x_i p. The mole fraction of a gas component in a gas mixture is equal to the volumetric fraction of that component in a gas mixture. The ratio of partial pressures relies on the following isotherm relation: V_X / V_tot = p_X / p_tot = n_X / n_tot, where: V_X is the partial volume of any individual gas component (X), V_tot is the total volume of the gas mixture, p_X is the partial pressure of gas X, p_tot is the total pressure of the gas mixture, n_X is the amount of substance of gas (X), n_tot is the total amount of substance in gas mixture. Partial volume (Amagat's law of additive volume) The partial volume of a particular gas in a mixture is the volume of one component of the gas mixture. It is useful in gas mixtures, e.g. air, to focus on one particular gas component, e.g. oxygen. 
It can be approximated both from partial pressure and molar fraction: V_X = V_tot × (p_X / p_tot) = V_tot × (n_X / n_tot), where: V_X is the partial volume of an individual gas component X in the mixture, V_tot is the total volume of the gas mixture, p_X is the partial pressure of gas X, p_tot is the total pressure of the gas mixture, n_X is the amount of substance of gas X, n_tot is the total amount of substance in the gas mixture. Vapor pressure Vapor pressure is the pressure of a vapor in equilibrium with its non-vapor phases (i.e., liquid or solid). Most often the term is used to describe a liquid's tendency to evaporate. It is a measure of the tendency of molecules and atoms to escape from a liquid or a solid. A liquid's atmospheric pressure boiling point corresponds to the temperature at which its vapor pressure is equal to the surrounding atmospheric pressure and it is often called the normal boiling point. The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point of the liquid. The vapor pressure chart displayed has graphs of the vapor pressures versus temperatures for a variety of liquids. As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points. For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure. At higher altitudes, the atmospheric pressure is less than that at sea level, so boiling points of liquids are reduced. At the top of Mount Everest, the atmospheric pressure is approximately 0.333 atm, so by using the graph, the boiling point of diethyl ether would be approximately 7.5 °C versus 34.6 °C at sea level (1 atm). Equilibrium constants of reactions involving gas mixtures It is possible to work out the equilibrium constant for a chemical reaction involving a mixture of gases given the partial pressure of each gas and the overall reaction formula. For a reversible reaction involving gas reactants and gas products, such as: aA + bB ⇌ cC + dD, the equilibrium constant of the reaction would be: K_p = (p_C^c · p_D^d) / (p_A^a · p_B^b), where a, b, c and d are the stoichiometric coefficients and p_A, p_B, p_C and p_D are the partial pressures of the corresponding gases. For reversible reactions, changes in the total pressure, temperature or reactant concentrations will shift the equilibrium so as to favor either the right or left side of the reaction in accordance with Le Chatelier's Principle. However, the reaction kinetics may either oppose or enhance the equilibrium shift. In some cases, the reaction kinetics may be the overriding factor to consider. Henry's law and the solubility of gases Gases will dissolve in liquids to an extent that is determined by the equilibrium between the undissolved gas and the gas that has dissolved in the liquid (called the solvent). The equilibrium constant for that equilibrium is: k = p_X / C_X, where: k = the equilibrium constant for the solvation process, p_X = partial pressure of gas X in equilibrium with a solution containing some of the gas, C_X = the concentration of gas X in the liquid solution. The form of the equilibrium constant shows that the concentration of a solute gas in a solution is directly proportional to the partial pressure of that gas above the solution. This statement is known as Henry's law and the equilibrium constant k is quite often referred to as the Henry's law constant. Henry's law is sometimes written as: k' = C_X / p_X, where k' is also referred to as the Henry's law constant. 
As can be seen by comparing the two equations above, k' is the reciprocal of k. Since both may be referred to as the Henry's law constant, readers of the technical literature must be quite careful to note which version of the Henry's law equation is being used. Henry's law is an approximation that only applies for dilute, ideal solutions and for solutions where the liquid solvent does not react chemically with the gas being dissolved. In diving breathing gases In underwater diving the physiological effects of individual component gases of breathing gases are a function of partial pressure. Using diving terms, partial pressure is calculated as: partial pressure = (total absolute pressure) × (volume fraction of gas component). For the component gas "i": p_i = P × F_i. For example, at 50 metres underwater, the total absolute pressure is 6 bar (i.e., 1 bar of atmospheric pressure + 5 bar of water pressure) and the partial pressures of the main components of air, oxygen 21% by volume and nitrogen approximately 79% by volume, are: pN2 = 6 bar × 0.79 = 4.7 bar absolute, pO2 = 6 bar × 0.21 = 1.3 bar absolute. The minimum safe lower limit for the partial pressure of oxygen in a breathing gas mixture for diving is 0.16 bar absolute. Hypoxia and sudden unconsciousness can become a problem with an oxygen partial pressure of less than 0.16 bar absolute. Oxygen toxicity, involving convulsions, becomes a problem when oxygen partial pressure is too high. The NOAA Diving Manual recommends a maximum single exposure of 45 minutes at 1.6 bar absolute, of 120 minutes at 1.5 bar absolute, of 150 minutes at 1.4 bar absolute, of 180 minutes at 1.3 bar absolute and of 210 minutes at 1.2 bar absolute. Oxygen toxicity becomes a risk when these oxygen partial pressures and exposures are exceeded. The partial pressure of oxygen also determines the maximum operating depth of a gas mixture. Narcosis is a problem when breathing gases at high pressure. Typically, the maximum total partial pressure of narcotic gases used when planning for technical diving may be around 4.5 bar absolute, based on an equivalent narcotic depth of 35 metres. The effect of a toxic contaminant such as carbon monoxide in breathing gas is also related to the partial pressure when breathed. A mixture which may be relatively safe at the surface could be dangerously toxic at the maximum depth of a dive, or a tolerable level of carbon dioxide in the breathing loop of a diving rebreather may become intolerable within seconds during descent when the partial pressure rapidly increases, and could lead to panic or incapacitation of the diver. In medicine The partial pressures of particularly oxygen (pO2) and carbon dioxide (pCO2) are important parameters in tests of arterial blood gases, but can also be measured in, for example, cerebrospinal fluid. See also References Engineering thermodynamics Equilibrium chemistry Gas laws Gases Physical chemistry Pressure Underwater diving physics Distillation
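The diving calculation described above (partial pressure = total absolute pressure × volume fraction) lends itself to a short worked example. A minimal sketch, assuming air at 21% oxygen and 79% nitrogen and roughly 1 bar of water pressure per 10 metres of depth, with the 0.16 bar and 1.4 bar screening thresholds taken from the limits quoted in the text above:

```python
# Partial pressures of breathing-gas components at depth, using p_i = P * F_i
# as described above. Assumes ~1 bar of water pressure per 10 m of seawater
# plus 1 bar of atmospheric pressure at the surface.

AIR = {"O2": 0.21, "N2": 0.79}  # volume fractions of air

def total_pressure_bar(depth_m: float) -> float:
    """Total absolute pressure at depth (bar)."""
    return 1.0 + depth_m / 10.0

def partial_pressures(depth_m: float, fractions=AIR) -> dict:
    """Partial pressure of each component gas at depth (bar absolute)."""
    p_total = total_pressure_bar(depth_m)
    return {gas: p_total * fraction for gas, fraction in fractions.items()}

pp = partial_pressures(50)       # the 50 m example worked in the text
print(pp["O2"], pp["N2"])        # ≈ 1.26 bar O2 and ≈ 4.74 bar N2 (text rounds to 1.3 and 4.7)

# Simple screen against the oxygen limits mentioned above.
assert pp["O2"] > 0.16, "hypoxia risk: pO2 below the minimum safe limit"
assert pp["O2"] < 1.4, "oxygen-toxicity risk for longer exposures"
```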
Partial pressure
Physics,Chemistry,Engineering
2,077
28,732,393
https://en.wikipedia.org/wiki/LL%20Pegasi
LL Pegasi (AFGL 3068) is a Mira variable star surrounded by a pinwheel-shaped nebula, IRAS 23166+1655, thought to be a preplanetary nebula. It is a binary system that includes an extreme carbon star. The pair is hidden by the dust cloud ejected from the carbon star and is only visible in infrared light. Variability LL Pegasi is obscured at visual wavelengths, but is strongly variable in brightness at infrared wavelengths. It is classified as a Mira variable and has a period of about 696 days. Nebula The nebula displays an unusual Archimedean spiral shape. The shape is thought to be formed through the interaction between the stellar companion and the carbon star, as has been seen in other binary systems, although not with such a precise geometric form. The distance between the spiral arms and their rate of expansion is consistent with estimates of the pair's 810 year orbital period based on their apparent angular separation. Gallery See also List of largest known stars References External links 3D view of LL Pegasi Celestial spiral with a twist An Extraordinary Celestial Spiral Celestial spiral goes viral Hubble Spots Ghostly Space Spiral — discovery.com An Extraordinary Spiral from LL Pegasi, APOD Protoplanetary nebulae IRAS catalogue objects Pegasi, LL Pegasus (constellation) Carbon stars Mira variables J23191260+1711331 TIC objects
LL Pegasi
Astronomy
276
44,689,684
https://en.wikipedia.org/wiki/Energy%20informatics
Energy informatics is a research field covering the use of information and communication technology to address energy utilization and management challenges. Methods used for "smart" implementations often combine IoT sensors with artificial intelligence and machine learning. Energy informatics is founded on flow networks that are the major suppliers and consumers of energy. Their efficiency can be improved by collecting and analyzing information. Application areas Among others, the field considers application areas within: Smart Buildings, by developing ICT-centred solutions for improving the energy efficiency of buildings. Smart Cities, by investigating the synergies between demand patterns and supply availability of energy flows in cities and communities to improve energy efficiency, increase integration of renewable sources, and provide resilience towards system faults caused by extreme situations, like hurricanes and flooding. Smart Industries, including the development of ICT-centred solutions for improving the energy efficiency and predictability of energy-intensive industrial processes, without compromising process and product quality. Smart Energy Networks, by developing ICT-centred solutions for coordinating the supply and demand in environmentally sustainable energy networks. References Energy Information science
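The article describes energy informatics as improving the efficiency of energy flow networks by collecting and analysing information, often from IoT sensors. A minimal, purely illustrative sketch of that idea; the readings and the anomaly threshold below are hypothetical and do not come from any real system:

```python
# Illustrative only: flag hours where a building's metered consumption (kWh)
# deviates strongly from a simple statistical baseline, the kind of analysis
# a "smart building" application might start from.
from statistics import mean, stdev

hourly_kwh = [12.1, 11.8, 12.4, 30.2, 12.0, 11.9, 12.3, 29.8]  # hypothetical sensor data

baseline, spread = mean(hourly_kwh), stdev(hourly_kwh)
anomalies = [(hour, kwh) for hour, kwh in enumerate(hourly_kwh)
             if abs(kwh - baseline) > 1.5 * spread]  # assumed threshold

print(f"baseline ≈ {baseline:.1f} kWh/h; anomalous hours: {anomalies}")
# Flags the two spike hours (30.2 kWh and 29.8 kWh) for further investigation.
```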
Energy informatics
Physics,Technology
208
52,977,719
https://en.wikipedia.org/wiki/Low-code%20development%20platform
A low-code development platform (LCDP) provides a development environment used to create application software, generally through a graphical user interface (as opposed to only writing code, though some coding is possible and may be required). A low-code platform may produce entirely operational applications, or require additional coding for specific situations. Low-code development platforms operate at a high level of abstraction and can reduce the amount of traditional hand-coding required, enabling accelerated delivery of business applications. A common benefit is that a wider range of people can contribute to the application's development, not only those with coding skills, but good governance is needed to be able to adhere to common rules and regulations. LCDPs can also lower the initial cost of setup, training, deployment, and maintenance. Low-code development platforms trace their roots back to fourth-generation programming languages and the rapid application development tools of the 1990s and early 2000s. Similar to these predecessor development environments, LCDPs are based on the principles of model-driven architecture, automatic code generation, and visual programming. The concept of end-user development also existed previously, although LCDPs brought some new ways of approaching this development. The low-code development platform market traces its origins back to 2011. The specific name "low-code" was not put forward until 9 June 2014, when it was used by the industry analyst Forrester Research. Along with no-code development platforms, low-code was described as "extraordinarily disruptive" in Forbes magazine in 2017. Use As a result of the microcomputer revolution, businesses have deployed computers widely across their employee bases, enabling widespread automation of business processes using software. The need for software automation and new applications for business processes places demands on software developers to create custom applications in volume, tailoring them to organizations' unique needs. Low-code development platforms have been developed as a means to allow for quick creation and use of working applications that can address the specific process and data needs of the organization. Reception Research firm Forrester estimated in 2016 that the total market for low-code development platforms would grow to $15.5 billion by 2020. Segments in the market include database, request handling, mobile, process, and general purpose low-code platforms. Low-code development's market growth can be attributed to its flexibility and ease. Low-code development platforms are shifting their focus toward general-purpose applications, with the ability to add in custom code when needed or desired. Mobile accessibility is one of the driving factors of using low-code development platforms. Instead of developers having to spend time creating multi-device software, low-code packages typically come with that feature as standard. Because they require less coding knowledge, nearly anyone in a software development environment can learn to use a low-code development platform. Features like drag-and-drop interfaces help users visualize and build the application. Security and compliance concerns Concerns over low-code development platform security and compliance are growing, especially for apps that use consumer data. There can be concerns over the security of apps built so quickly and possible lack of due governance leading to compliance issues. However, low-code apps do also fuel security innovations. 
With continuous app development in mind, it becomes easier to create secure data workflows. Still the fact remains that low-code development platforms that do not apply and strictly adhere to normalized systems theory do not solve the challenge of increasing complexity due to changes. Criticisms Some IT professionals question whether low-code development platforms are suitable for large-scale and mission-critical enterprise applications. Others have questioned whether these platforms actually make development cheaper or easier. Additionally, some CIOs have expressed concern that adopting low-code development platforms internally could lead to an increase in unsupported applications built by shadow IT. See also DRAKON End-user computing End-user development Flow-based programming List of online database creator apps List of low-code development platforms Visual programming language Backend as a service References Enterprise architecture Software development
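Low-code platforms are described above as resting on model-driven architecture and automatic code generation: a declarative model of an application is turned into working code. A minimal, hypothetical sketch of that idea; the form model and generator below are invented for illustration and do not correspond to any particular platform's format or API:

```python
# Hypothetical illustration of "model-driven" low-code generation:
# a declarative description of a form (the kind of model a visual editor
# might save) is turned into validation code, so the app builder specifies
# what they want rather than how it runs.

form_model = {
    "name": "contact_form",
    "fields": [
        {"name": "email", "type": "str", "required": True},
        {"name": "age", "type": "int", "required": False},
    ],
}

def generate_validator(model: dict):
    """Generate a validation function from the declarative form model."""
    def validate(data: dict) -> list:
        errors = []
        for field in model["fields"]:
            value = data.get(field["name"])
            if value is None:
                if field["required"]:
                    errors.append(f"{field['name']} is required")
                continue
            expected = {"str": str, "int": int}[field["type"]]
            if not isinstance(value, expected):
                errors.append(f"{field['name']} must be {field['type']}")
        return errors
    return validate

validate_contact = generate_validator(form_model)
print(validate_contact({"email": "a@example.com", "age": "old"}))
# ['age must be int']
```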
Low-code development platform
Technology,Engineering
804
10,640,506
https://en.wikipedia.org/wiki/Theories%20about%20Stonehenge
Stonehenge has been the subject of many theories about its origin, ranging from the academic worlds of archaeology to explanations from mythology and the paranormal. Early theories Many early historians were influenced by supernatural folktales in their explanations. Some legends held that Merlin had a giant build the structure for him or that he had magically transported it from Mount Killaraus in Ireland, while others held the Devil responsible. Henry of Huntingdon was the first to write of the monument around AD 1130 soon followed by Geoffrey of Monmouth who was the first to record fanciful associations with Merlin which led the monument to be incorporated into the wider cycle of European medieval romance. According to Geoffrey's Historia Regum Britanniae, when asked what might serve as an appropriate burial place for Britain's dead princes, Merlin advised King Aurelius Ambrosius to raise an army and collect some magical stones from Mount Killarus in Ireland. Whilst at Mount Killarus, Merlin laughed at the soldiers' failed attempts to remove the stones using ladders, ropes, and other machinery. Shortly thereafter, Merlin oversaw the removal of stones using his own machinery and commanded they be loaded onto the soldiers' ships and sailed back to England where they were reconstructed into Stonehenge. Contrary to popular belief Geoffrey did not claim Merlin had commanded a giant to build Stonehenge for him, it appears this detail was embellished by Robert Wace who later translated Geoffrey's original text into French. In 1655, the architect John Webb, writing in the name of his former superior Inigo Jones, argued that Stonehenge was a Roman temple, dedicated to Caelus, (a Latin name for the Greek sky-god Uranus), and built following the Tuscan order. Later commentators maintained that the Danes erected it. Indeed, up until the late nineteenth century, the site was commonly attributed to the Saxons or other relatively recent societies. Druids and scientific evidence The first academic effort to survey and understand the monument was made around 1640 by John Aubrey. He declared Stonehenge the work of Druids. This view was greatly popularised by William Stukeley. Aubrey also contributed the first measured drawings of the site, which permitted greater analysis of its form and significance. From this work, he was able to demonstrate an astronomical or calendrical role in the stones' placement. The architect John Wood was to undertake the first truly accurate survey of Stonehenge in 1740. However Wood's interpretation of the monument as a place of pagan ritual was vehemently attacked by Stukeley who saw the druids not as pagans, but as biblical patriarchs. By the turn of the nineteenth century, John Lubbock was able to attribute the site to the Bronze Age based on the bronze objects found in the nearby barrows. Radiocarbon dating Radiocarbon dating of the site indicates that the building of the monument at the site began around the year 3100 BC and ended around the year 1600 BC. This allows the elimination of a few of the theories that have been presented. The theory that the Druids were responsible may be the most popular one; however, the Celtic society that spawned the Druid priesthood came into being only after the year 300 BC. Additionally, the Druids are unlikely to have used the site for sacrifices, because they performed the majority of their rituals in the woods or mountains, areas better suited for "earth rituals" than an open field. 
The fact that the Romans first came to the British Isles when Julius Caesar led an expedition in 55 BC negates the theories of Inigo Jones and others that Stonehenge was built as a Roman temple. Early references to Stonehenge The classical Greek writer Diodorus Siculus (1st century BC) may refer to Stonehenge in a passage from his Bibliotheca historica. Citing the 4th-century BC historian Hecataeus of Abdera and "certain others", Diodorus says that in "a land beyond the Celts" (i.e. Gaul) there is "an island no smaller than Sicily" in the northern sea called Hyperborea, so named because it is beyond the source of the north wind or Boreas. The inhabitants of this place chiefly worship Apollo, and there is "both a magnificent sacred precinct of Apollo and a notable temple which is adorned with many votive offerings and is spherical in shape." Some writers have suggested that Diodorus' "Hyperborea" may indicate Great Britain, and that the spherical temple may be an early reference of Stonehenge. Christopher Chippindale commented that "This might be Stonehenge, but the description is short and vague, and there are discrepancies = the climate of the Hyperboreans is so mild they grow two crops a year." Aubrey Burl noted that other parts of Diodorus' description make it a poor fit for Stonehenge and its neighbourhood. Diodorus also says that in that area Apollo (meaning, the Sun or the Moon) "skimmed the earth at a very low height". However, both the Moon and the Sun are always seen far above the horizon at the latitude of Stonehenge; only 500 miles farther north can they be observed to remain near the horizon. The bluestones J. F. S. Stone felt that a bluestone monument had earlier stood near the nearby Stonehenge Cursus and been moved to their current site from there. If Mercer's theory is correct then the bluestones may have been transplanted to cement an alliance or display superiority over a conquered enemy, although this can only be speculation. An oval-shaped setting of bluestones similar to those at Stonehenge 3iv occurs at Bedd Arthur in the Preseli Hills, but that does not imply a direct cultural link. Some archaeologists have suggested that the igneous bluestones and sedimentary sarsens had some symbolism, of a union between two cultures from different landscapes and therefore from different backgrounds. Recent analysis of contemporary burials found nearby known as the Boscombe Bowmen, has indicated that at least some of the individuals associated with Stonehenge 3 came either from Wales or from some other European area of ancient rocks. Petrological analysis of the stones themselves has verified that some of them have come from the Preseli Hills but that others have come from the north Pembrokeshire coast and possibly the Brecon Beacons. The main source of the bluestones is now identified with the dolerite outcrops around Carn Goedog although work led by Olwen Williams-Thorpe of the Open University has shown that other bluestones came from outcrops up to 10 km away. Dolerite is composed of an intrusive volcanic rock of plagioclase feldspar that is harder than granite. Aubrey Burl and a number of geologists and geomorphologists contend that the bluestones were not transported by human agency at all and were instead brought by glaciers at least part of the way from Wales during the Pleistocene. There is good geological and glaciological evidence that glacier ice did move across Preseli and did reach the Somerset coast. 
It is uncertain that it reached Salisbury Plain, although a spotted dolerite boulder was found in a long barrow at Heytesbury in Wiltshire, which was built long before the stone settings at Stonehenge were installed. One current view is that glacier ice transported the stones as far as Somerset, and that they were transported from there by the builders of Stonehenge. However, in 2015, researchers reported they had confirmed the Preseli Spotted Dolerite stones at Stonehenge came from two Neolithic quarries at Carn Goedog and Craig Rhos-y-felin in the Preseli Hills. Using radiocarbon dating, researchers dated the quarry activities to around 3400 BC for Craig Rhos-y-felin and 3200 BC for Carn Goedog. Project director Professor Mike Parker Pearson of the UCL Institute of Archaeology noted the finding was "intriguing because the bluestones didn't get put up at Stonehenge until around 2900 BC… It could have taken those Neolithic stone-draggers nearly 500 years to get them to Stonehenge, but that's pretty improbable in my view. It's more likely that the stones were first used in a local monument, somewhere near the quarries, that was then dismantled and dragged off to Wiltshire." In 2018 two of the quarries – Carn Goedog and Craig Rhos-y-felin – underwent more excavation to reveal evidence of megalith quarrying around 3000 BC. If true, this shortens the period between excavation and transportation to the Stonehenge site. During 2017 and 2018, excavations by Pearson's team at Waun Mawn, a small and unimpressive-seeming fragmentary stone circle in the Preseli Hills, revealed that the site had originally housed a 110 metre diameter stone circle of the same size as Stonehenge's original bluestone circle, and also like it, oriented towards the summer solstice. The circle at Waun Mawn also contained a hole from one stone which had a distinctive pentagonal shape, very closely matching the one pentagonal stone at Stonehenge (stonehole 91 at Waun Mawn/stone 62 at Stonehenge). Soil dating of the sediments within the revealed stone holes, via optically stimulated luminescence (OSL), suggested the absent stones at Waun Mawn had been erected around 3400-3200 BC, and removed around 300-400 years later, a date consistent with theories that the same stones were moved and used at the more famous site, before later being reorganised into their present locations and supplemented with local sarsens as was already understood. Human activity at Waun Mawn ceased around the same time, and overall the pattern, along with isotope studies suggesting that at least some of the population at Stonehenge originated and lived in the Western Wales area, suggested migration to the researchers. However, as it seems unlikely that Waun Mawn ever contained as many of the same type of stones, as Stonehenge, it is considered possible that stones from other sources may have been added, perhaps from other dismantled circles in the region. The discoveries were published in February 2021, and popularised in a documentary the same month. Healing Britain's Geoffrey Wainwright, president of the London Society of Antiquaries, and Timothy Darvill, on 22 September 2008, speculated that it may have been an ancient healing and pilgrimage site, since burials around Stonehenge showed trauma and deformity evidence: "It was the magical qualities of these stones which ... transformed the monument and made it a place of pilgrimage for the sick and injured of the Neolithic world." 
Radio-carbon dating places the construction of the circle of bluestones at between 2400-2200 BC, but they discovered charcoals dating 7000 BC, showing human activity in the site. It could be the primeval equivalent of Lourdes, since the area was already visited 4,000 years before the oldest stone circle, and attracted visitors for centuries after its abandonment. Some tentative support for this view comes from the first-century BC Greek historian, Diodorus Siculus, who cites a lost account set down three centuries earlier, which described "a magnificent precinct sacred to Apollo and a notable spherical temple" on a large island in the far north, opposite what is now France. Amongst other attributes Apollo was recognised as the god of medicine and healing. This theory is hotly disputed, on the grounds that it is not adequately underpinned by evidence on the ground, either in the Preseli Hills area or at Stonehenge. Acoustic properties A study by researchers at the Royal College of Art, London, has proposed that the bluestones may have been attractive for their acoustic properties. Mnemonic centre Lynne Kelly in her work Knowledge and Power in Prehistoric Societies: Orality, Memory, and the Transmission of Culture (2015), investigates the link between power and the control of knowledge in oral cultures, as well as the different mnemonic techniques and devices used by those cultures. According to Kelly's theory, Stonehenge served the purpose of a mnemonic centre for recording and retrieving knowledge by Neolithic Britons, who lacked written language. The knowledge could have included pragmatic information on animal classification and behaviour, geography and navigation, land management and crop cycles, as well as cultural knowledge on history, politics, genealogy and religion (see here). Stonehenge as part of a ritual landscape Many archaeologists believe Stonehenge was an attempt to render in permanent stone the more common timber structures that dotted Salisbury Plain at the time, such as those that stood at Durrington Walls. Modern anthropological evidence has been used by Mike Parker Pearson and the Malagasy archaeologist Ramilisonina to suggest that timber was associated with the living and stone with the ancestral dead amongst prehistoric peoples. They have argued that Stonehenge was the terminus of a long, ritualised funerary procession for treating the dead, which began in the east, during sunrise at Woodhenge and Durrington Walls, moved down the Avon and then along the Avenue reaching Stonehenge in the west at sunset. The journey from wood to stone via water was, they consider, a symbolic journey from life to death. There is no satisfactory evidence to suggest that Stonehenge's astronomical alignments were anything more than symbolic and current interpretations favour a ritual role for the monument that takes into account its numerous burials and its presence within a wider landscape of sacred sites. Many also believe that the site may have had astrological/spiritual significance attached to it. Support for this view also comes from the historian of religions Mircea Eliade, who compares the site to other megalithic constructions around the world devoted to the cult of the dead (ancestors). Like other similar English monuments [For example, Eliade identifies, Woodhenge, Avebury, Arminghall, and Arbor Low] the Stonehenge cromlech was situated in the middle of a field of funeral barrows. 
This famous ceremonial centre constituted, at least in its primitive form, a sanctuary built to insure relations with the ancestors. In terms of structure, Stonehenge can be compared with certain megalithic complexes developed, in other cultures, from a sacred area: temples or cities. We have the same valourisation of the sacred space as "centre of the world," the privileged place that affords communication with heaven and the underworld, that is, with the gods, the chtonian goddesses, and the spirits of the dead. In addition to the English sites, Eliade identifies, among others, the megalithic architecture of Malta, which represents a "spectacular expression" of the cult of the dead and worship of a Great Goddess. Radar mapping also reveals that three chalk ridges in the Stonehenge area are aligned by geological accident on the midsummer sunrise/midwinter axis. This natural solstitial alignment would have symbolized cosmic unity to the ancients, a place where Heaven and Earth were unified by some supernatural force. This seems to have set the blueprint for solstitial alignments in Stonehenge and the timber circles at Durrington Walls and Woodhenge as well. Mike Parker Pearson also believes that the Stonehenge was a monument of unification, bringing together different groups with different ancestries. He surmises that the five trilithons in the centre of Stonehenge could have symbolized five tribal lineages charting their descent from five original ancestors. The Preseli Hills might have had some ancestral significance for the stonehenge builders as well (perhaps this was their place of origin), this may have been the motive behind dragging the bluestones all the way from Preseli Hills to Wiltshire. The trilithons may have also represented a D-shaped meeting house of which similar structures have been found at other Neolithic sites in Britain. This may have represented a meeting place for the ancestors of the Stonehenge builders. Others have suggested that the trilithons represented doorways to another world. According to architect and archaeologist Didier Laroche, it is a funerary monument with a central courtyard that was originally partially included in a tumulus and of which only the stone structures remained, as is the case for many other megalithic tombs. Construction techniques and design A recently published analysis draws attention to the fact that the stones display mirrored symmetry and that the only undisputed alignment to be found is that of the solstices, which can be regarded as the axis of that symmetry. This interpretation sees the monument as having been designed off-site, largely prefabricated and set out to conform to survey markers set out to an exact geometric plan. The idea of ‘precision’ (below) demands that exact points of reference were used, both between the structural elements and in relation to the axis (i.e. that of the solstices). Johnson's theory asserts that prehistoric survey markers could not have been placed within the footprint of the stones, but must have been (as in any construction) external to the stones. That almost all the stones have one ‘better’ i.e. flatter face, and that face is almost invariably inwards, suggests that the construction was set out so that the prehistoric builders could use the center point of the inner faces as reference. This is very significant in respect of the Great Trilithon; the surviving upright has its flatter face outwards (see image on right), towards the midwinter sunset, and was raised from the inside. 
The remainder of the trilithon array (and almost all of the stones of the Sarsen Circle) had construction ramps which sloped inwards, and were therefore set up from the outside. Placing the centre face of the stones (regardless of their thickness) against markers would mean that the ‘gaps’ between the stones were simply consequential. The study of the geometric layout of the monument shows that such methods were used and that there is a clear argument for regarding other outlying elements as part of a geometric scheme (e.g. the ‘Station Stones’ and the stoneholes 92 and 94 which mark two opposing facets of an octagon). A geometric design is scalable from concept to construction, removing much of the need for measurements to be made at all. Much speculation has surrounded the engineering feats required to build Stonehenge. Assuming the bluestones were brought from Wales by hand, and not transported by glaciers as Aubrey Burl has claimed, various methods of moving them relying only on timber and rope have been suggested. In a 2001 exercise in experimental archaeology, an attempt was made to transport a large stone along a land and sea route from Wales to Stonehenge. Volunteers pulled it for some miles (with great difficulty) on a wooden sledge over land, using modern roads and low-friction netting to assist sliding, but it became clear that it would have been incredibly difficult for even the most organized of tribal groups to have pulled large numbers of stones across the densely wooded, rough and boggy terrain of West Wales. In 2010, Nova's "Secrets of Stonehenge" broadcast an effective technique for moving the stones over short distances using ball bearings in a wooden track as originally envisioned by Andrew Young, a graduate student of Bruce Bradley—director of experimental archaeology at the University of Exeter. Experts hit on the new idea after examining mysterious stone balls found near Stonehenge-like monuments in Aberdeenshire, Scotland. About the size of a cricket ball, they are precisely fashioned to be within a millimetre of the same size. This suggests they were meant to be used together in some way rather than individually. In 1997 Julian Richards teamed up with Mark Witby and Roger Hopkins to conduct several experiments to replicate the construction at Stonehenge for Nova'''s "Secrets of Lost Empires" mini series. They arranged for a gang of 130 people to attempt to tow a 40-ton concrete replica on a sledge which was placed on wooden tracks. They used grease to make it easier to tow up a slight incline and still they were unable to budge it. They gathered additional men and had some of them use levers to try to pry the megalith while others towed it at the same time. When they all worked together at the same time they were able to move it forward. They were uncertain whether this would be the way they would have transported the largest stones 25 miles. To do this would require an enormous amount of track and a lot of coordination for a large number of people. In some cases this would involve towing the stones over rougher terrain. They also conducted an experiment to erect 2 forty ton replicas and put a 9-ton lintel on top. After a lot of experimenting they were able to erect 2 megaliths using a large number of people towing and using levers. They also managed to tow the lintel up a steel ramp. They were unable to determine this was the final answer but they demonstrated that this was a possible method. At times they were forced to use modern technology for safety reasons. 
Josh Bernstein and Julian Richards organized an experiment to pull a 2-ton stone on wooden tracks with a group of about 16 men. They placed the stone on a wooden sledge then placed the sledge on a wooden track. They pulled this with two gangs of about eight men. To move the stones as many miles across Southern England, the creators of Stonehenge would have had to build a lot of track, or move and rebuild track in pieces, as the stones were taken to their final destination. A recent article has argued that the massive stones could be moved by submerging them in water and towing them below an ancient vessel or group of vessels. This technique would have two significant advantages. It would reduce the load borne by the vessel while part of the stone's weight is displaced by the water. Secondly, the arrangement of the load below the vessel would be much more stable and reduce the risk of catastrophic failure. Naturally, this would apply only for transportation over water. The technique was tried during the Millennium Stone Project 2000, with a single bluestone slung beneath two large curraghs. The sling frayed away, and the stone plunged to the bed of Milford Haven. It has been suggested that timber A-frames were erected to raise the stones, and that teams of people then hauled them upright using ropes. The topmost stones may have been raised up incrementally on timber platforms and slid into place or pushed up ramps. The carpentry-type joints used on the stones imply a people well skilled in woodworking and they could easily have had the knowledge to erect the monument using such methods. In 2003 retired construction worker Wally Wallington demonstrated ingenious techniques based on fundamental principles of levers, fulcrums and counterweights to show that a single man can rotate, walk, lift and tip a ten-ton cast-concrete monolith into an upright position. He is progressing with his plan to construct a simulated Stonehenge with eight uprights and two lintels. Alexander Thom was of the opinion that the site was laid out with the necessary precision using his megalithic yard. The engraved weapons on the sarsens are unique in megalithic art in the British Isles, where more abstract designs were invariably favoured. Similarly, the horseshoe arrangements of stones are unusual in a culture that otherwise arranged stones in circles. The axe motif is, however, common to the peoples of Brittany at the time, and it has been suggested at least two stages of Stonehenge were built under continental influence. This would go some way towards explaining the monument's atypical design, but overall, Stonehenge is still inexplicably unusual in the context of any prehistoric European culture. Estimates of the manpower needed to build Stonehenge put the total effort involved at millions of hours of work. Stonehenge 1 probably needed around 11,000 man-hours (or 460 man-days) of work, Stonehenge 2 around 360,000 (15,000 man-days or 41 years). The various parts of Stonehenge 3 may have involved up to 1.75 million hours (73,000 days or 200 years) of work. The working of the stones is estimated to have required around 20 million hours (830,000 days or 2,300 years) of work using the primitive tools available at the time. Certainly, the will to produce such a site must have been strong, and an advanced social organization would have been necessary to build and maintain it. However, Wally Wallington's work suggests that Stonehenge's construction may have required fewer man-hours than previously estimated. 
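The man-hour figures quoted above convert to days and years at 24 hours per day and 365 days per year (for example, 360,000 hours ≈ 15,000 days ≈ 41 years). A minimal sketch reproducing that arithmetic; the round-the-clock conversion convention is inferred from the figures in the text rather than stated by the sources:

```python
# Reproduce the labour-estimate conversions quoted above.
# Inferred convention: man-days = man-hours / 24, years = man-days / 365.
estimates_hours = {
    "Stonehenge 1": 11_000,
    "Stonehenge 2": 360_000,
    "Stonehenge 3": 1_750_000,
    "Working the stones": 20_000_000,
}

for phase, hours in estimates_hours.items():
    days = hours / 24
    years = days / 365
    print(f"{phase}: {hours:,} h ≈ {days:,.0f} man-days ≈ {years:,.0f} years")
# Stonehenge 1 ≈ 458 man-days; Stonehenge 2 ≈ 15,000 man-days ≈ 41 years;
# Stonehenge 3 ≈ 72,917 man-days ≈ 200 years; stone-working ≈ 833,333 man-days ≈ 2,283 years,
# matching the rounded figures quoted in the text.
```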
Ley lines British author John Michell wrote that Alfred Watkins's ley lines appeared to be in alignment with various traditional sacred sites around the country, such as the 'Perpetual Choirs' apparently mentioned in the Welsh Triads. Michell wrote that "There is a curious symmetry about the positioning of the three Perpetual Choirs in Britain. Stonehenge and Llantwit Major are equidistant from Glastonbury, some 38.9 miles away, and two straight lines drawn on the map from Glastonbury to the other two choirs form an angle of 144 degrees... The axis of Glastonbury Abbey points toward Stonehenge, and there is some evidence that it was built on a stretch of ancient trackway which once ran between the two Choirs". But as Glastonbury Abbey was built some four thousand years after Stonehenge, the relevance or likelihood of a link between them is debatable. Michell created diagrams that illustrated correlations between the design of Stonehenge and astronomical proportions and relationships. However, it is claimed that the Welsh Triads refer not to Stonehenge but to the village of Amesbury, which is two miles from Stonehenge (Rattue, James, The Living Stream: Holy Wells in Historical Context, The Boydell Press, 1995, p. 46). See also List of megalithic sites Egyptian pyramid construction techniques References Bibliography Chippindale, Christopher, Stonehenge Complete (Thames and Hudson, London, 2004). Johnson, Anthony, Solving Stonehenge: The New Key to an Ancient Enigma (Thames & Hudson, London, 2008). Kelly, Lynne, Knowledge and Power in Prehistoric Societies: Orality, Memory, and the Transmission of Culture (Cambridge University Press, 2015). Thomas, N. L., Stonehenge Sacred Symbolism (2011), www.bookpod.com.au, pbk, ebook. External links Nature Precedings — Pegs and Ropes Geometry at Stonehenge Stonehenge History of construction Wiltshire folklore
Theories about Stonehenge
Engineering
5,479
77,182
https://en.wikipedia.org/wiki/Prejudice
Prejudice can be an affective feeling towards a person based on their perceived social group membership. The word is often used to refer to a preconceived (usually unfavourable) evaluation or classification of another person based on that person's perceived personal characteristics, such as political affiliation, sex, gender, gender identity, beliefs, values, social class, friendship, age, disability, religion, sexuality, race, ethnicity, language, nationality, culture, complexion, beauty, height, body weight, occupation, wealth, education, criminality, sport-team affiliation, music tastes or other perceived characteristics. The word "prejudice" can also refer to unfounded or pigeonholed beliefs and it may apply to "any unreasonable attitude that is unusually resistant to rational influence". Gordon Allport defined prejudice as a "feeling, favorable or unfavorable, toward a person or thing, prior to, or not based on, actual experience". Auestad (2015) defines prejudice as characterized by "symbolic transfer", transfer of a value-laden meaning content onto a socially-formed category and then on to individuals who are taken to belong to that category, resistance to change, and overgeneralization. The United Nations Institute on Globalization, Culture and Mobility has highlighted research considering prejudice as a global security threat due to its use in scapegoating some populations and inciting others to commit violent acts towards them and how this can endanger individuals, countries, and the international community. Etymology The word prejudice has been used since Middle English around the year 1300. It comes from the Old French word préjudice, which comes from Latin praeiūdicium which comes from prae (before) and iūdicium (judgment). Historical approaches The first psychological research conducted on prejudice occurred in the 1920s. This research attempted to prove white supremacy. One article from 1925 which reviewed 73 studies on race concluded that the studies seemed "to indicate the mental superiority of the white race". These studies, along with other research, led many psychologists to view prejudice as a natural response to races believed to be inferior. In the 1930s and 1940s, this perspective began to change due to the increasing concern about anti-Semitism due to the ideology of the Nazis. At the time, theorists viewed prejudice as pathological and they thus looked for personality syndromes linked with racism. Theodor Adorno believed that prejudice stemmed from an authoritarian personality; he believed that people with authoritarian personalities were the most likely to be prejudiced against groups of lower status. He described authoritarians as "rigid thinkers who obeyed authority, saw the world as black and white, and enforced strict adherence to social rules and hierarchies". In 1954, Gordon Allport, in his classic work The Nature of Prejudice, linked prejudice to categorical thinking. Allport claimed that prejudice is a natural and normal process for humans. According to him, "The human mind must think with the aid of categories... Once formed, categories are the basis for normal prejudgment. We cannot possibly avoid this process. Orderly living depends upon it." In his book, he emphasizes the importance of the contact hypothesis. This theory posits that contact between different (ethnic) groups can reduce prejudices against those groups. Allport acknowledges the importance of the circumstances in which such contact occurs. 
He has attached conditions to it to promote positive contact and reduce prejudices. In the 1970s, research began to show that prejudice tends to be based on favoritism towards one's own groups, rather than negative feelings towards another group. According to Marilyn Brewer, prejudice "may develop not because outgroups are hated, but because positive emotions such as admiration, sympathy, and trust are reserved for the ingroup". In 1979, Thomas Pettigrew described the ultimate attribution error and its role in prejudice. The ultimate attribution error occurs when ingroup members "(1) attribute negative outgroup behavior to dispositional causes (more than they would for identical ingroup behavior), and (2) attribute positive outgroup behavior to one or more of the following causes: (a) a fluke or exceptional case, (b) luck or special advantage, (c) high motivation and effort, and (d) situational factors"/ Young-Bruehl (1996) argued that prejudice cannot be treated in the singular; one should rather speak of different prejudices as characteristic of different character types. Her theory defines prejudices as being social defences, distinguishing between an obsessional character structure, primarily linked with anti-semitism, hysterical characters, primarily associated with racism, and narcissistic characters, linked with sexism. Contemporary theories and empirical findings The out-group homogeneity effect is the perception that members of an out-group are more similar (homogenous) than members of the in-group. Social psychologists Quattrone and Jones conducted a study demonstrating this with students from the rival schools Princeton University and Rutgers University. Students at each school were shown videos of other students from each school choosing a type of music to listen to for an auditory perception study. Then the participants were asked to guess what percentage of the videotaped students' classmates would choose the same. Participants predicted a much greater similarity between out-group members (the rival school) than between members of their in-group. The justification-suppression model of prejudice was created by Christian Crandall and Amy Eshleman. This model explains that people face a conflict between the desire to express prejudice and the desire to maintain a positive self-concept. This conflict causes people to search for justification for disliking an out-group, and to use that justification to avoid negative feelings (cognitive dissonance) about themselves when they act on their dislike of the out-group. The realistic conflict theory states that competition between limited resources leads to increased negative prejudices and discrimination. This can be seen even when the resource is insignificant. In the Robber's Cave experiment, negative prejudice and hostility was created between two summer camps after sports competitions for small prizes. The hostility was lessened after the two competing camps were forced to cooperate on tasks to achieve a common goal. Another contemporary theory is the integrated threat theory (ITT), which was developed by Walter G Stephan. It draws from and builds upon several other psychological explanations of prejudice and ingroup/outgroup behaviour, such as the realistic conflict theory and symbolic racism. It also uses the social identity theory perspective as the basis for its validity; that is, it assumes that individuals operate in a group-based context where group memberships form a part of individual identity. 
ITT posits that outgroup prejudice and discrimination is caused when individuals perceive an outgroup to be threatening in some way. ITT defines four threats: Realistic threats Symbolic threats Intergroup anxiety Negative stereotypes Realistic threats are tangible, such as competition for a natural resource or a threat to income. Symbolic threats arise from a perceived difference in cultural values between groups or a perceived imbalance of power (for example, an ingroup perceiving an outgroup's religion as incompatible with theirs). Intergroup anxiety is a feeling of uneasiness experienced in the presence of an outgroup or outgroup member, which constitutes a threat because interactions with other groups cause negative feelings (e.g., a threat to comfortable interactions). Negative stereotypes are similarly threats, in that individuals anticipate negative behaviour from outgroup members in line with the perceived stereotype (for example, that the outgroup is violent). Often these stereotypes are associated with emotions such as fear and anger. ITT differs from other threat theories by including intergroup anxiety and negative stereotypes as threat types. Additionally, social dominance theory states that society can be viewed as group-based hierarchies. In competition for scarce resources such as housing or employment, dominant groups create prejudiced "legitimizing myths" to provide moral and intellectual justification for their dominant position over other groups and validate their claim over the limited resources. Legitimizing myths, such as discriminatory hiring practices or biased merit norms, work to maintain these prejudiced hierarchies. Prejudice can be a central contributing factor to depression. This can occur in someone who is a prejudice victim, being the target of someone else's prejudice, or when people have prejudice against themselves that causes their own depression. Paul Bloom argues that while prejudice can be irrational and have terrible consequences, it is natural and often quite rational. This is because prejudices are based on the human tendency to categorise objects and people based on prior experience. This means people make predictions about things in a category based on prior experience with that category, with the resulting predictions usually being accurate (though not always). Bloom argues that this process of categorisation and prediction is necessary for survival and normal interaction, quoting William Hazlitt, who stated "Without the aid of prejudice and custom, I should not be able to find my way my across the room; nor know how to conduct myself in any circumstances, nor what to feel in any relation of life". In recent years, researchers have argued that the study of prejudice has been traditionally too narrow. It is argued that since prejudice is defined as a negative affect towards members of a group, there are many groups against whom prejudice is acceptable (such as rapists, men who abandon their families, pedophiles, neo-Nazis, drink-drivers, queue jumpers, murderers etc.), yet such prejudices are not studied. It has been suggested that researchers have focused too much on an evaluative approach to prejudice, rather than a descriptive approach, which looks at the actual psychological mechanisms behind prejudiced attitudes. It is argued that this limits research to targets of prejudice to groups deemed to be receiving unjust treatment, while groups researchers deem treated justly or deservedly of prejudice are overlooked. 
As a result, the scope of prejudice has begun to expand in research, allowing a more accurate analysis of the relationship between psychological traits and prejudice. Some researchers have advocated understanding prejudice from the perspective of collective values rather than merely as a biased psychological mechanism, and have called for attention to different conceptions of prejudice, including what lay people think constitutes prejudice. This is due to concerns that the way prejudice has been operationalised does not fit its psychological definition, and that it is often used to indicate that a belief is faulty or unjustified without actually proving this to be the case. Some research has connected dark triad personality traits (Machiavellianism, grandiose narcissism, and psychopathy) with being more likely to hold racist, sexist, xenophobic, homophobic, and transphobic views. Evolutionary psychology Problems with psychological models One problem with the notion that prejudice evolved out of a need to simplify social classification because of limited brain capacity, yet can at the same time be mitigated through education, is that the two claims contradict each other: the combination amounts to saying that the problem is a shortage of hardware while proposing to fix it by loading still more software onto the hardware just said to be overloaded. The proposed distinction between men's hostility to outgroup men, said to be based on dominance and aggression, and women's hostility to outgroup men, said to be based on fear of sexual coercion, has been criticized with reference to the historical example of Hitler and other male Nazis, who believed that intergroup sex was worse than murder and would destroy them permanently in a way that they did not believe war itself would; that is, a view of the outgroup male threat that evolutionary psychology considers to be a female view and not a male view. Types of prejudice One can be prejudiced against or have a preconceived notion about someone due to any characteristic they find to be unusual or undesirable. A few commonplace examples of prejudice are those based on someone's race, gender, nationality, social status, sexual orientation, or religious affiliation, and controversies may arise from any given topic. Gender identity Transgender and non-binary people can be discriminated against because they identify with a gender that does not align with their assigned sex at birth. Refusal to call them by their preferred pronouns, or claims that they are not the gender they identify as, could be considered discrimination, especially if the victim of this discrimination has repeatedly expressed what their preferred identity is. Gender identity is now considered a protected category in many anti-discrimination laws. Therefore, severe cases of this discrimination can lead to criminal penalty or prosecution in some countries, and workplaces are required to protect against discrimination based on gender identity. Sexism Nationalism Nationalism is a sentiment based on common cultural characteristics that binds a population and often produces a policy of national independence or separatism. It suggests a "shared identity" amongst a nation's people that minimizes differences within the group and emphasizes perceived boundaries between the group and non-members. This leads to the assumption that members of the nation have more in common than they actually do, that they are "culturally unified", even if injustices within the nation based on differences like status and race exist. 
During times of conflict between one nation and another, nationalism is controversial because it may function as a buffer against criticism of the nation's own problems, since it makes the nation's own hierarchies and internal conflicts appear to be natural. It may also serve as a way of rallying the people of the nation in support of a particular political goal. Nationalism usually involves a push for conformity, obedience, and solidarity amongst the nation's people and can result not only in feelings of public responsibility but also in a narrow sense of community due to the exclusion of those who are considered outsiders. Since the identity of nationalists is linked to their allegiance to the state, the presence of strangers who do not share this allegiance may result in hostility. Classism Classism is defined by dictionary.com as "a biased or discriminatory attitude based on distinctions made between social or economic classes". Some argue that economic inequality is an unavoidable aspect of society, given the inequality of individual abilities, so there will always be a ruling class. Some also argue that, even within the most egalitarian societies in history, some form of ranking based on individual worth and status takes place. Therefore, one may believe the existence of social classes is a natural feature of society. Hierarchies can also be found in animals such as apes and other primates. Others argue the contrary. According to anthropological evidence, for the majority of the time the human species has been in existence, humans have lived in a manner in which land and resources were not privately owned but were held in common, though only among the members of the same kin-based band or tribe. Also, since it was kin-oriented, when social ranking did occur, it was not antagonistic or hostile like the current class system. Sexual discrimination Individuals with non-heterosexual sexual attraction, such as homosexuals and bisexuals, may experience hatred from others due to their sexual orientation; a term for such hatred based upon one's sexual orientation is homophobia. However, more specific words for discrimination directed towards specific sexualities exist under other names, such as biphobia. Due to what social psychologists call the vividness effect, a tendency to notice only certain distinctive characteristics, the majority population tends to draw conclusions such as that gay people flaunt their sexuality. Such images may be easily recalled to mind due to their vividness, making it harder to appraise the entire situation. The majority population may not only think that homosexuals flaunt their sexuality or are "too gay", but may also erroneously believe that homosexuals are easy to identify and label as being gay or lesbian when compared to others who are not homosexual. The idea of heterosexual privilege is said to flourish in society. Research and questionnaires are formulated to fit the majority; i.e., heterosexuals. The status of assimilating or conforming to heterosexual standards may be referred to as "heteronormativity", or the term may refer to the ideology that the primary or only social norm is being heterosexual. In the US legal system, not all groups are always considered equal under the law. The gay or queer panic defense is a term for defenses or arguments, used in court cases, by which defense lawyers seek to justify their client's hate crime against someone whom the client believed to be LGBT. 
The controversy comes when defense lawyers use the victim's minority status as an excuse or justification for crimes that were directed against them. This may be seen as an example of victim blaming. One method of this defense, homosexual panic disorder, is to claim that the victim's sexual orientation, body movement patterns (such as their walking patterns or how they dance), or appearance that is associated with a minority sexual orientation provoked a violent reaction in the defendant. This is not a proven disorder, is no longer recognized by the DSM, and, therefore, is not a disorder that is medically recognized, but it is a term to explain certain acts of violence. Research shows that discrimination on the basis of sexual orientation is a powerful feature of many labor markets. For example, studies show that gay men earn 10–32% less than heterosexual men in the United States, and that there is significant discrimination in hiring on the basis of sexual orientation in many labor markets. Racism Racism is defined as the belief that physical characteristics determine cultural traits, and that racial characteristics make some groups superior. By separating people into hierarchies based upon their race, it has been argued that unequal treatment among the different groups of people is just and fair due to their genetic differences. Racism can occur amongst any group that can be identified based upon physical features or even characteristics of their culture. Though people may be lumped together and called a specific race, everyone does not fit neatly into such categories, making it hard to define and describe a race accurately. Scientific racism Scientific racism began to flourish in the eighteenth century and was greatly influenced by Charles Darwin's evolutionary studies, as well as ideas taken from the writings of philosophers like Aristotle; for example, Aristotle believed in the concept of "natural slaves". This concept focuses on the necessity of hierarchies and how some people are bound to be on the bottom of the pyramid. Though racism has been a prominent topic in history, there is still debate over whether race actually exists, making the discussion of race a controversial topic. Even though the concept of race is still being debated, the effects of racism are apparent. Racism and other forms of prejudice can affect a person's behavior, thoughts, and feelings, and social psychologists strive to study these effects. Religious discrimination While various religions teach their members to be tolerant of those who are different and to have compassion, throughout history there have been wars, pogroms and other forms of violence motivated by hatred of religious groups. In the modern world, researchers in western, educated, industrialized, rich and democratic countries have done various studies exploring the relationship between religion and prejudice; thus far, they have received mixed results. A study done with US college students found that those who reported religion to be very influential in their lives seem to have a higher rate of prejudice than those who reported not being religious. Other studies found that religion has a positive effect on people as far as prejudice is concerned. This difference in results may be attributed to the differences in religious practices or religious interpretations amongst the individuals. 
Those who practice "institutionalized religion", which focuses more on social and political aspects of religious events, are more likely to show an increase in prejudice. Those who practice "interiorized religion", in which believers devote themselves to their beliefs, are more likely to show a decrease in prejudice. Linguistic discrimination Individuals or groups may be treated unfairly based solely on their use of language. This use of language may include the individual's native language or other characteristics of the person's speech, such as an accent or dialect, the size of vocabulary (whether the person uses complex and varied words), and syntax. It may also involve a person's ability or inability to use one language instead of another. In the mid-1980s, linguist Tove Skutnabb-Kangas captured this idea of discrimination based on language as the concept of linguicism. Skutnabb-Kangas defined linguicism as the ideologies and structures used to "legitimate, effectuate, and reproduce unequal division of power and resources (both material and non-material) between groups which are defined on the basis of language". Neurological discrimination High-functioning Broadly speaking, this refers to the attribution of low social status to those who do not conform to non-autistic expectations of personality and behaviour. This can manifest through the assumption of 'disability' status for those who are high-functioning enough to exist outside of diagnostic criteria, yet do not desire to (or are unable to) conform their behaviour to conventional patterns. This is a controversial and relatively recent concept, with various disciplinary approaches promoting conflicting messages about what constitutes normality, the degree of acceptable individual difference within that category, and the precise criteria for what constitutes a medical disorder. This has been most prominent in the case of high-functioning autism, where direct cognitive benefits increasingly appear to come at the expense of social intelligence. Discrimination may also extend to other high-functioning individuals carrying pathological phenotypes, such as those with attention deficit hyperactivity disorder and bipolar spectrum disorders. In these cases, there are indications that perceived (or actual) socially disadvantageous cognitive traits are directly correlated with advantageous cognitive traits in other domains, notably creativity and divergent thinking, and yet these strengths may be systematically overlooked. The case for "neurological discrimination" as such lies in the expectation that one's professional capacity may be judged by the quality of one's social interaction, which can in such cases be an inaccurate and discriminatory metric for employment suitability. Since there are moves by some experts to have these higher-functioning extremes reclassified as extensions of human personality, any legitimisation of discrimination against these groups would fit the very definition of prejudice, as medical validation for such discrimination becomes redundant. Recent advances in behavioural genetics and neuroscience have made this a very relevant issue of discussion, with existing frameworks requiring significant overhaul to accommodate the strength of findings over the last decade. Multiculturalism Humans have an evolved propensity to think categorically about social groups, manifested in cognitive processes with broad implications for public and political endorsement of multicultural policy, according to psychologists Richard J. Crisp and Rose Meleady. 
They postulated a cognitive-evolutionary account of human adaptation to social diversity that explains general resistance to multiculturalism, and offer a reorienting call for scholars and policy-makers who seek intervention-based solutions to the problem of prejudice. Reducing prejudice Contact hypothesis The contact hypothesis predicts that prejudice can only be reduced when in-group and out-group members are brought together. Academics Thomas Pettigrew and Linda Tropp conducted a meta-analysis of 515 studies involving a quarter of a million participants in 38 nations to examine how intergroup contact reduces prejudice. They found that three mediators are of particular importance: Intergroup contact reduces prejudice by (1) enhancing knowledge about the outgroup, (2) reducing anxiety about intergroup contact, and (3) increasing empathy and perspective-taking. While all three of these mediators had mediational effects, the mediational value of increased knowledge was less strong than anxiety reduction and empathy. In addition, some individuals confront discrimination when they see it happen, with research finding that individuals are more likely to confront when they perceive benefits to themselves, and are less likely to confront when concerned about others' reactions. Jigsaw (teaching technique) In Elliot Aronson's "jigsaw" teaching technique there are six conditions that must be met to reduce prejudice. First, the in- and out-groups must have a degree of mutual interdependence. Second, both groups need to share a common goal. Third, the two groups must have equal status. Fourth, there must be frequent opportunities for informal and interpersonal contact between groups. Fifth, there should be multiple contacts between the in- and the out-groups. Finally, social norms of equality must exist and be present to foster prejudice reduction. See also References Further reading Adorno, Th. W., Frenkel-Brunswik, E., Levinson, D. J. and Sanford, R. N. (1950). The authoritarian personality. New York: Harper. BACILA, Carlos Roberto. Criminologia e Estigmas: Um estudo sobre os Preconceitos. São Paulo: Gen Atlas, 2016. Dorschel, A., Rethinking prejudice. Aldershot, Hampshire – Burlington, Vermont – Singapore – Sydney: Ashgate, 2000 (New Critical Thinking in Philosophy, ed. Ernest Sosa, Alan H. Goldman, Alan Musgrave et alii). – Reissued: Routledge, London – New York, NY, 2020. Eskin, Michael, The DNA of Prejudice: On the One and the Many. New York: Upper West Side Philosophers, Inc. 2010 (Next Generation Indie Book Award for Social Change). Paluck, Elizabeth Levy; Porat, Roni; Clark, Chelsey S.; Green, Donald P. (2021). "Prejudice Reduction: Progress and Challenges". Annual Review of Psychology. 72 (1). doi:10.1146/annurev-psych-071620-030619. Amodio, David M.; Cikara, Mina (2021). "The Social Neuroscience of Prejudice". Annual Review of Psychology. 72 (1). doi:10.1146/annurev-psych-010419-050928. Abuse Anti-social behaviour Barriers to critical thinking Prejudice and discrimination
Prejudice
Biology
5,280
1,966,084
https://en.wikipedia.org/wiki/Passive%20infrared%20sensor
A passive infrared sensor (PIR sensor) is an electronic sensor that measures infrared (IR) light radiating from objects in its field of view. They are most often used in PIR-based motion detectors. PIR sensors are commonly used in security alarms and automatic lighting applications. PIR sensors detect general movement, but do not give information on who or what moved. For that purpose, an imaging IR sensor is required. PIR sensors are commonly called simply "PIR", or sometimes "PID", for "passive infrared detector". The term passive refers to the fact that PIR devices do not radiate energy for detection purposes. They work entirely by detecting infrared radiation (radiant heat) emitted by or reflected from objects. Operating principles All objects with a temperature above absolute zero emit heat energy in the form of electromagnetic radiation. Usually this radiation isn't visible to the human eye because it radiates at infrared wavelengths, but it can be detected by electronic devices designed for such a purpose. PIR-based motion detector A PIR-based motion detector is used to sense movement of people, animals, or other objects. They are commonly used in burglar alarms and automatically activated lighting systems. Operation A PIR sensor can detect changes in the amount of infrared radiation impinging upon it, which varies depending on the temperature and surface characteristics of the objects in front of the sensor. When an object, such as a person, passes in front of the background, such as a wall, the temperature at that point in the sensor's field of view will rise from room temperature to body temperature, and then back again. The sensor converts the resulting change in the incoming infrared radiation into a change in the output voltage, and this triggers the detection. Objects of similar temperature but different surface characteristics may also have a different infrared emission pattern, and thus moving them with respect to the background may trigger the detector as well. PIRs come in many configurations for a wide variety of applications. The most common models have numerous Fresnel lenses or mirror segments, an effective range of about 10 meters (30 feet), and a field of view less than 180°. Models with wider fields of view, including 360°, are available, typically designed to mount on a ceiling. Some larger PIRs are made with single segment mirrors and can sense changes in infrared energy over 30 meters (100 feet) from the PIR. There are also PIRs designed with reversible orientation mirrors which allow either broad coverage (110° wide) or very narrow "curtain" coverage, or with individually selectable segments to "shape" the coverage. Differential detection Pairs of sensor elements may be wired as opposite inputs to a differential amplifier. In such a configuration, the PIR measurements cancel each other so that the average temperature of the field of view is removed from the electrical signal; an increase of IR energy across the entire sensor is self-cancelling and will not trigger the device. This allows the device to resist false indications of change in the event of being exposed to brief flashes of light or field-wide illumination. (Continuous high energy exposure may still be able to saturate the sensor materials and render the sensor unable to register further information.) At the same time, this differential arrangement minimizes common-mode interference, allowing the device to resist triggering due to nearby electric fields. 
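As a rough illustration of the cancellation described above (a purely schematic sketch with an arbitrary threshold, not taken from any particular product), the differential arrangement can be modelled in a few lines of Python:

```python
def detect_motion(element_a, element_b, threshold=0.5):
    """Differential PIR detection sketch (illustrative only).

    element_a and element_b are voltage samples from the two sensing
    elements.  A field-wide change, such as a brief flash of light that
    warms both elements equally, raises both samples by the same amount,
    so the difference stays near zero and nothing is triggered.  A warm
    body moving across the zones reaches one element before the other,
    producing an imbalance that exceeds the threshold.
    """
    difference = element_a - element_b  # common-mode component cancels
    return abs(difference) > threshold

print(detect_motion(2.3, 2.3))  # False: both elements see the same rise
print(detect_motion(2.9, 2.1))  # True: imbalance between the elements
```

Real detectors implement this subtraction in analogue circuitry ahead of any digital logic; the sketch only mirrors the signal relationship described above.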
However, a differential pair of sensors cannot measure temperature in this configuration, and therefore is only useful for motion detection. Practical implementation When a PIR sensor is configured in differential mode, it becomes usable specifically as a motion detector. In this mode, when movement is detected within the "line of sight" of the sensor, a pair of complementary pulses is produced at the output pin of the sensor. To use this output signal to trigger a practical load such as a relay, a data logger, or an alarm, the differential signal is rectified using a bridge rectifier and fed to a transistorized relay driver circuit. The contacts of this relay close and open in response to the signals from the PIR, activating the attached load across its contacts and so signalling the detection of a person within the predetermined restricted area. Product design The PIR sensor is typically mounted on a printed circuit board containing the necessary electronics required to interpret the signals from the sensor itself. The complete assembly is usually contained within a housing, mounted in a location where the sensor can cover the area to be monitored. The housing will usually have a plastic "window" through which the infrared energy can enter. Although the window is often only translucent to visible light, infrared energy is able to reach the sensor through it because the plastic used is transparent to infrared radiation. The plastic window reduces the chance that foreign objects (dust, insects, rain, etc.) will obscure the sensor's field of view, damage the mechanism, or cause false alarms. The window may be used as a filter, to limit the wavelengths to 8–14 micrometres, the range closest to the infrared radiation emitted by humans. It may also serve as a focusing mechanism; see below. Focusing Different mechanisms can be used to focus the distant infrared energy onto the sensor surface. Lenses The plastic window covering may have multiple facets molded into it, to focus the infrared energy onto the sensor. Each individual facet is a Fresnel lens. Mirrors Some PIRs are manufactured with internal, segmented parabolic mirrors to focus the infrared energy. Where mirrors are used, the plastic window cover generally has no Fresnel lenses molded into it. Beam pattern As a result of the focusing, the detector view is actually a beam pattern. At certain angles (zones), the PIR sensor receives almost no radiation energy, and at other angles the PIR receives concentrated amounts of infrared energy. This separation helps the motion detector to discriminate between field-wide illumination and moving objects. When a person walks from one angle (beam) to another, the detector will only intermittently see the moving person. This results in a rapidly changing sensor signal which is used by the electronics to trigger an alarm or to turn on lighting. A slowly changing signal will be ignored by the electronics. The number, shape, distribution and sensitivity of these zones are determined by the lens and/or mirror. Manufacturers do their best to create the optimal sensitivity beam pattern for each application. Automatic lighting applications When used as part of a lighting system, the electronics in the PIR typically control an integral relay capable of switching mains voltage. This means the PIR can be set up to turn on lights that are connected to the PIR when movement is detected. 
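As an illustration of that switching logic (a hobbyist-level sketch rather than a description of any commercial unit; the GPIO pin numbers and the 30-second hold time are arbitrary assumptions), a small single-board computer reading a PIR module's digital output could drive a lamp relay as follows:

```python
import time
import RPi.GPIO as GPIO

PIR_PIN = 17        # digital output of the PIR module (assumed wiring)
RELAY_PIN = 27      # relay driving the lamp (assumed wiring)
HOLD_SECONDS = 30   # keep the lamp on this long after the last motion

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

last_motion = None
try:
    while True:
        if GPIO.input(PIR_PIN):                  # PIR reports movement
            last_motion = time.time()
            GPIO.output(RELAY_PIN, GPIO.HIGH)    # lamp on
        elif last_motion and time.time() - last_motion > HOLD_SECONDS:
            GPIO.output(RELAY_PIN, GPIO.LOW)     # lamp off after hold time
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```

Commercial fixtures achieve the same hold-off behaviour with a timer built into the detector head itself rather than an external controller.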
This is most commonly used in outdoor scenarios either to deter criminals (security lighting) or for practical uses like the front door light turning on so you can find your keys in the dark. Additional uses can be in public toilets, walk-in pantries, hallways or anywhere that automatic control of lights is useful. This can provide energy savings as the lights are only turned on when they are needed and there is no reliance on users remembering to turn the lights off when they leave the area. Security applications When used as part of a security system, the electronics in the PIR typically control a small relay. This relay completes the circuit across a pair of electrical contacts connected to a detection input zone of the burglar alarm control panel. The system is usually designed such that if no motion is being detected, the relay contact is closed—a 'normally closed' (NC) relay. If motion is detected, the relay will open the circuit, triggering the alarm; or, if a wire is disconnected, the alarm will also operate. Placement Manufacturers recommend careful placement of their products to prevent false alarms (i.e., any detection not caused by an intruder). They suggest mounting the PIRs in such a way that the PIR cannot "see" out of a window. Although the wavelength of infrared radiation to which the chips are sensitive does not penetrate glass very well, a strong infrared source (such as from a vehicle headlight or sunlight) can overload the sensor and cause a false alarm. A person moving on the other side of the glass would not be "seen" by the PID. That may be good for a window facing a public sidewalk, or bad for a window in an interior partition. It is also recommended that the PIR not be placed in such a position that an HVAC vent would blow hot or cold air onto the surface of the plastic which covers the housing's window. Although air has very low emissivity (emits very small amounts of infrared energy), the air blowing on the plastic window cover could change the plastic's temperature enough to trigger a false alarm. Sensors are also often designed to "ignore" domestic pets, such as dogs or cats, by setting a higher sensitivity threshold, or by ensuring that the floor of the room remains out of focus. Since PIR sensors have ranges of up to 10 meters (30 feet), a single detector placed near the entrance is typically all that is necessary for rooms with only a single entrance. PIR-based security systems are also viable in outdoor security and motion-sensitive lighting; one advantage is their low power draw, which allows them to be solar-powered. PIR remote-based thermometer Designs have been implemented in which a PIR circuit measures the temperature of a remote object. In such a circuit, a non-differential PIR output is used. The output signal is evaluated according to a calibration for the IR spectrum of a specific type of matter to be observed. By this means, relatively accurate and precise temperature measurements may be obtained remotely. Without calibration to the type of material being observed, a PIR thermometer device is able to measure changes in IR emission which correspond directly to temperature changes, but the actual temperature values cannot be calculated. See also Heat detector Infrared point sensor Infrared sensor List of sensors References External links How Infrared motion detector components work Design advice and assembly instructions from a motion detector kit , Infrared Intrusion Detector System, issued November 21, 1972 to Herbert L. 
Berman, contains a very clear explanation Optical devices Security technology Sensors
Passive infrared sensor
Materials_science,Technology,Engineering
2,074
1,135,893
https://en.wikipedia.org/wiki/Mark%20of%20the%20Unicorn
Mark of the Unicorn (MOTU) is a music-related computer software and hardware supplier. It is based in Cambridge, Massachusetts, and has created music software since 1984. In the mid-1980s, Mark of the Unicorn sold productivity software and several games for the Macintosh, Atari ST, and Amiga. Products
Current
Digital Performer
AudioDesk
Past
MINCE and SCRIBBLE, an Emacs-like editor and a Scribe-like text formatter for CP/M machines. MINCE was also available for the Atari ST.
FinalWord, a word processor (later sold, becoming Sprint).
Professional Composer, one of the first graphical music-notation editors.
Mouse Stampede, arguably the first arcade-style game available for the Apple Macintosh (1984).
Hex, a game for the Atari ST and Amiga computers (released in 1985).
The first FireWire audio interface for Mac and Windows.
PC/Intercomm, a VT100 emulator for the Atari ST.
References External links Computer companies of the United States Computer hardware companies Companies based in Cambridge, Massachusetts
Mark of the Unicorn
Technology
210
32,658,840
https://en.wikipedia.org/wiki/Fluorocitric%20acid
Fluorocitric acid is an organic compound with the chemical formula . It is a fluorinated carboxylic acid derived from citric acid by substitution of one methylene hydrogen by a fluorine atom. The corresponding anion is called fluorocitrate. Fluorocitrate is formed in two steps from fluoroacetate. Fluoroacetate is first converted to fluoroacetyl-CoA by acetyl-CoA synthetase in the mitochondria. Then fluoroacetyl-CoA condenses with oxaloacetate to form fluorocitrate. This step is catalyzed by citrate synthase. Fluorocitrate is a metabolite of fluoroacetic acid and is very toxic because it cannot be processed by aconitase in the citrate cycle (where fluorocitrate takes the place of citrate as the substrate). The enzyme is inhibited and the cycle stops working. See also Citric acid Fluoroacetic acid Citrate cycle References External links PubChem: Fluorocitrate Human Metabolome Database (HMDB): Fluorocitric acid The Chemical and Biochemical Properties of Fluorocitric Acid Pdf Tricarboxylic acids Organofluorides Fluorohydrins Respiratory toxins Aconitase inhibitors Fluorinated carboxylic acids
Fluorocitric acid
Chemistry
286
2,550,004
https://en.wikipedia.org/wiki/Proton%20CamPro%20engine
The Proton CamPro engine is the first flagship automotive engine developed together with Lotus by the Malaysian automobile manufacturer, Proton. The name CamPro is short for Cam Profiling. This engine powers the Proton Gen-2, Proton Satria Neo, Proton Waja Campro, Proton Persona, Proton Saga, Proton Exora, Proton Preve, Proton Suprima S and Proton Iriz. The CamPro engine was created to show Proton's ability to make its own engines that produce good power output and meet newer emission standards. The engine prototype was first unveiled on 6 October 2000 at the Lotus factory in the UK before it debuted in the 2004 Proton Gen•2. All CamPro engines incorporate drive-by-wire technology (specifically electronic throttle control) for better response, eliminating the need for friction-generating mechanical linkages and cables. CamPro technical specifications Variants Original CamPro engine The first CamPro engine made its debut in 2004 fitted to the newly released Gen•2 models. It was codenamed S4PH and was a DOHC 16-valve 1.6-litre engine that produced of power at 6,000 rpm and of torque at 4,000 rpm. The S4PH engine was ironically not equipped with Cam Profile Switching (CPS) even though its Campro designation was an abbreviation of Cam Profile Switching. It also lacked the Variable Inlet Manifold (VIM) technology of later CamPro engines. Proton also produced a 1.3-litre version of this original CamPro engine and codenamed it S4PE. Even though the S4PH engine had contemporary maximum power and torque outputs, its performance was reportedly sluggish in real-world driving. This performance deficiency was attributed to a pronounced torque dip at the crucial 2,500 to 3,500 rpm mid-range engine speeds, where torque actually decreased before recovering to the maximum torque level at 4,000 rpm. This torque characteristic could also clearly be seen in manufacturer-published engine performance curves. The original Campro 1.3-litre variant produced of power at 6,000 rpm and of torque at 4,000 rpm, again contemporary outputs for a 1.3-litre passenger car engine of the time. This engine also displayed a torque dip in the mid-range engine speeds, similar to the one in the larger variant. The bore x stroke dimensions for both engines are as follows:
S4PH (1.6L): x , resulting in a displacement of 1,598 cc.
S4PE (1.3L): x , resulting in a displacement of 1,332 cc.
Applications:
2004 – 2008 Proton Gen-2
2006 – 2008 Proton Waja
2006 – 2009 Proton Satria Neo (Lite & M-Line)
2007 – 2008 Proton Persona
2010 – 2015 Youngman Lotus L5
CamPro CPS and VIM engine The CamPro CPS engine uses a variable valve lift system (Cam Profile Switching system) and a variable length intake manifold (VIM; not to be confused with the stand-alone IAFM used in the 2008 Proton Saga) to boost maximum power and improve the CPS engine's torque curve over the original CamPro engine. The engine's Variable-length Intake Manifold (VIM) switches between a long intake manifold at low engine speeds and a short intake manifold at higher engine speeds. Proton cars use a longer intake manifold to achieve slower air flow, as this was found to promote better mixing with fuel. The short intake manifold allows more air in faster, which is beneficial at high rpm. The Cam Profile Switching (CPS) system uses a tri-lobe camshaft to switch between two different cam profiles. One cam profile provides low valve lift, while the other cam profile has a high valve lift. 
The low valve lift cam profile is used at low to mid engine speeds to maintain idling quality and reduce emissions, while the high lift cam profile is used when the engine is spinning at mid to high engine speeds to improve peak horsepower and torque. Unlike other similar variable valve timing systems such as the Honda VTEC, the Toyota VVT-i and the Mitsubishi MIVEC, which use rocker arm locking pins to change the valve timing, the CPS system uses direct-acting tappets with locking pins to change the valve timing and lift profile. VIM switches from the long to short runner at 4,800 rpm, while the CPS system switches over at 3,800 rpm (4,400 rpm in the Proton Satria Neo CPS). The result is at 6,500 rpm and of torque at 4,500 rpm compared to the non-CPS CamPro's at 6,000 rpm and of torque at 4,000 rpm. Proton claims that there is better response and torque at low engine speeds of between 2,000 and 2,500 rpm. The new CPS engine made its debut in the face-lifted Proton Gen•2 launched in Thailand in 2008, and its first Malaysian appearance in the Proton Waja CamPro 1.6 Premium (CPS). Applications:
2008 – 2010 Proton Gen-2 (H-line)
2008 – 2011 Proton Waja
2009 – 2015 Proton Satria Neo (H-Line)
2009 – 2016 Proton Exora
2010 – 2012 Proton Gen-2 Facelift (M-Line)
CamPro IAFM engine The CamPro IAFM (Intake Air-Fuel Module) is essentially an original CamPro engine equipped with a variable-length intake manifold, developed under a joint fast-track programme, begun in April 2005, by EPMB, Bosch and Proton. However, the IAFM differs from the VIM (Variable Inlet Manifold) of the CamPro CPS engine as follows:
The IAFM is a stand-alone module that can be fitted to an original CamPro engine, whereas the VIM needs to work in conjunction with the CPS system in a CamPro CPS engine.
The IAFM is operated by engine vacuum, while the VIM uses an ECU-controlled solenoid.
The Intake Air-Fuel Module for Proton's CamPro engine debuted in the second-generation Proton Saga, which was launched on 18 January 2008. It was first made known to the public in October 2006, when it was still in its advanced tooling stages. With the IAFM, the 1.3L engine used in the Proton Saga now produces @ 6,500 rpm. The maximum torque is slightly reduced to ; however, the engine has a broader torque range and the noticeable torque dip of the original CamPro engine has been eliminated. The official brochure only quotes the familiar at 6,000 rpm power and at 4,000 rpm torque figures for consistency with other 1.3-litre Proton models. Meanwhile, the 1.6-litre version of the IAFM engine, which debuted in the 2008 Proton Gen-2 M-Line, produces @ 6,500 rpm of power and of torque, and the torque dip around 2,500–3,500 rpm has been eliminated. While the IAFM worked well when new, its parts have not proved durable over the long term. When the solenoid fails, the flap can no longer direct air within the manifold, causing the infamous 'tak, tak, tak' sound, similar to tappet noise, and creating a major vacuum leak in the engine. The second-generation Campro IAFM engine, known as the IAFM+ engine, debuted in the 2011 Proton Saga FLX. The new IAFM+ engine is tuned to be paired with the new CVT gearbox by Punch Powertrain, which requires the maximum operating engine speed to be reduced from 6,500 rpm in the first-generation IAFM engine to 6,000 rpm. 
As a result, the 1.3L IAFM+ engine produces @ 5,750 rpm of horsepower and of torque, while the 1.6L IAFM+ engine produces @ 5,750 rpm of horsepower and of torque. The combination of the new Campro IAFM+ engine with the CVT gearbox results in a 4% and 10% reduction in fuel consumption for urban and highway driving respectively. Applications:
2008 – 2016 Proton Saga
2008 – 2010 Proton Gen-2 (M-line)
2008 – 2016 Proton Persona
2012 – 2018 Proton Preve
Hybrid CamPro engine In March 2007, Proton and Lotus announced a concept model of a Proton Gen-2 powered by a hybrid powerplant that uses the CamPro engine. The concept model was revealed during the 2007 Geneva Motor Show, held from 8 to 18 March 2007. The hybrid powerplant, known as the EVE system (Efficient, Viable, Environmental), uses the same S4PH engine that powers the present gasoline version of the Gen•2, combined with a 30 kW, 144 V electric motor. The main purpose of the hybrid powerplant system is to provide a hybrid system that can be retrofitted to existing models, retaining the same powerplant and eliminating the need to develop a completely different platform, as with the Honda Civic Hybrid. Unlike the IMA (Integrated Motor Assist) technology in the Civic Hybrid, which uses a bulky Ni-MH battery pack, the EVE Hybrid system uses a Li-ion battery pack inside the engine bay. The EVE Hybrid System has three key technologies:
"Micro-hybrid" start-stop system – an integrated starter-alternator switches off the engine automatically when the car stops, for example at a traffic light. The engine restarts automatically when the gas pedal is depressed.
Full parallel hybrid technology – combines the existing S4PH engine with a 30 kW, 144 V electric motor, resulting in higher power (141 bhp combined), higher torque (233 N·m combined), lower emissions (up to 22% carbon dioxide reduction) and better fuel economy (up to 4.6 L/100 km). The system also includes regenerative braking.
Continuously Variable Transmission (CVT) – the CVT provides a continuous range of gear ratios for better efficiency.
The combined power and torque for the powerplant system are as follows:
Max power (gasoline engine only): @ 6,000 rpm
Max torque (gasoline engine only): @ 4,000 rpm
Max power (combined): @ 5,500 rpm
Max torque (combined): @ 1,500 rpm (limited to continuous)
Proton plans to commercialise hybrid vehicles equipped with the EVE Hybrid System in the future. CamPro CFE engine The CamPro CFE engine is the light-pressure intercooled turbocharged version of the 1.6-litre CamPro engine, with a maximum boost pressure of . CFE is an acronym for "Charged Fuel Efficiency". The idea was first revealed by Proton Managing Director Datuk Syed Zainal Abidin on 13 December 2008, in response to the market trend towards small-displacement, forced-induction engines that produce power output equivalent to a larger motor, a concept similar to the Volkswagen TSI twincharger technology and the Ford EcoBoost engine. The finalised engine made its debut at KLIMS 2010. The engine is capable of producing at 5,000 rpm of power and at 2,000–4,000 rpm of torque. To accommodate the increase in engine power, several changes were made to the technical specification. While the engine bore remains at , the stroke is shortened to , compared with in other 1.6L Campro engine variants, resulting in an engine displacement of 1,561 cc. 
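The bore, stroke and displacement figures quoted throughout this section are related by the usual cylinder-volume formula, displacement = π/4 × bore² × stroke × number of cylinders. As a quick check (using illustrative figures chosen only because they reproduce the quoted 1,561 cc; the article's own bore and stroke values are not restated here), a short Python snippet:

```python
import math

def displacement_cc(bore_mm, stroke_mm, cylinders):
    """Total swept volume in cubic centimetres."""
    bore_cm = bore_mm / 10
    stroke_cm = stroke_mm / 10
    per_cylinder = math.pi / 4 * bore_cm ** 2 * stroke_cm
    return per_cylinder * cylinders

# Hypothetical example: a 76 mm bore and 86 mm stroke on four cylinders
# give roughly the 1,561 cc displacement mentioned above.
print(round(displacement_cc(76, 86, 4)))  # -> 1561
```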
The compression ratio is reduced to 8.9:1 from the previous 10:1. A variable valve timing mechanism is also added for the intake valves, but it alters the cam phasing and valve opening timing continuously rather than altering the valve lift at a preset engine speed as in the CPS mechanism. In 2016, a public recall affecting more than 90,000 CamPro CFE-equipped vehicles took place for the oil cooler hose. Together with the recall, service intervals for oil cooler hose replacement were lowered to every 40,000 kilometres. The interval was increased to every 80,000 kilometres following the availability of a higher-quality oil cooler hose in 2018, which replaced the earlier all-rubber component with a part-rubber, part-metal component. Applications:
2012 – 2023 Proton Exora
2012 – 2018 Proton Preve
2013 – 2019 Proton Suprima S
VVT engine The VVT (Variable Valve Timing) engine was unveiled in September 2014 with its first application in the Proton Iriz. The VVT engine has a new block, new pistons and new valves, and incorporates variable valve timing (VVT). Some technology in the new VVT family is shared with the old CamPro, but due to the various changes and modifications made to the CamPro family over the preceding decade, Proton decided not to use the 'CamPro' nameplate after the 2014 revision. However, older models like the Exora, Prevé and Suprima S will continue to use the old 'CamPro' name until it is eventually retired in favour of the upcoming GDi engines. The latest application of the VVT engine in the 2016 Proton Persona, 2017 Proton Iriz and Proton Saga features Proton's ECO Drive Assist program. The system assesses the driver's throttle input, and a green indicator on the instrument cluster lights up when the car is being driven in an economical manner. The 1.3 variant is capable of producing at 5,750 rpm of power and at 4,000 rpm of torque, while the 1.6 variant delivers at 5,750 rpm of power and at 4,000 rpm of torque. While the engine bore for the 1.3 variant is , the stroke is , compared with the 1.6 variant which is . Like the CFE engine, the VVT engines apply variable valve timing to the intake valves. Applications:
2014 – present Proton Iriz
2016 – present Proton Persona (equipped only with 1.6 variant)
2016 – present Proton Saga (equipped only with 1.3 variant)
Future plans Proton is currently planning to develop a new engine family, codenamed "GDi/TGDi", with displacement options including 1.0/1.2 L three-cylinder units, 1.3/1.5 L units in naturally aspirated and turbocharged form, and eventually 2.0 L and 2.3 L units, each available in either naturally aspirated or forced-induction form. The existing CamPro engines, which are limited to 1.3-litre and 1.6-litre options, will reach EOL (end of life) soon after. The 1.3 and 1.5 turbo variants are slated to produce 140 hp/210 Nm and 180 hp/250 Nm respectively. References External links Automobile engines Proton engines Lotus engines 2004 introductions 2004 establishments in Malaysia British inventions Malaysian inventions
Proton CamPro engine
Technology
3,038
28,937,040
https://en.wikipedia.org/wiki/Smoluchowski%20factor
The Smoluchowski factor, also known as von Smoluchowski's f-factor is related to inter-particle interactions. It is named after Marian Smoluchowski. References See also Flocculation Smoluchowski coagulation equation Einstein–Smoluchowski relation Physical chemistry
Smoluchowski factor
Physics,Chemistry
64
4,820,778
https://en.wikipedia.org/wiki/Money%20train
A money train is one or more railcars used to collect cash fare revenue from stations on a subway system and return it to a central location for processing. Such trains typically carried money bags guarded by transit police to deter robberies. On the New York City Subway, a "money train" was first mentioned in 1905, a year after the system opened. These trains were converted from subway cars that had been removed from passenger service. The practice has since been discontinued, with the last service running in January 2006. Two of the cars are preserved by the New York Transit Museum in Brooklyn. The use of a train was necessary because of difficulties in getting to and from stations using over-street transport, and because, since the subway reaches every station, the rail system itself can be used to collect money from ticket machines. The 1995 American crime thriller film Money Train depicts a robbery of such a train. Singapore's Mass Rapid Transit system introduced a money train (cash train) when the system was commissioned in 1987, in which a specially modified Driving Trailer car was coupled to a regular three-car C151 train. The train was used to transfer cash trolleys from stations to a counting facility at Bishan Depot; the increased use of stored-value tickets resulted in its decommissioning in 2007. The Washington Metro system continues to use money trains as of August 2024. The Light Rail in Hong Kong uses money trains, which are regular passenger trains taken out of service, to collect fares from the ticket vending machines. Pay car In Australia, the reverse procedure occurred with the New South Wales Government Railways fleet of pay buses. These small self-powered railcars were used to deliver pay packets containing cash to employees at remote railway stations, as well as to maintenance gangs working on the tracks. This operation remained in service until the 1980s, when it was supplanted by electronic payments. References Fare collection systems New York City Subway fare payment Trains
Money train
Technology
386
54,113,190
https://en.wikipedia.org/wiki/Magnetic%20secure%20transmission
Magnetic secure transmission (MST) is a mobile payment technology in which devices such as smartphones emit a signal that mimics the magnetic stripe on a traditional payment card. Overview MST sends a magnetic signal from the device to the payment terminal's card reader. It emulates swiping a physical card without having to upgrade the terminal's software or hardware to support more advanced technology, such as contactless payments. Hence, in contrast to payments using near-field communication, MST technology is compatible with nearly all payment terminals that possess a magnetic stripe reader. MST is designed to transmit from within of the magnetic card reader. Apart from the physical transmission, there are no changes to the magnetic stripe card system (i.e., reception, processing, information content, and cryptographic protocols). However, because the transmitted information can be dynamic, tokenization is possible. MST was originally developed by LoopPay, which was acquired by Samsung in 2015 and incorporated into its Samsung Pay service. In 2017, LG launched its competing LG Pay service, which uses a similar technology called Wireless Magnetic Communication (WMC). The original MST and WMC mimicked unencrypted magnetic stripe technology in order to be compatible with older credit card terminals. The wireless transmissions were not encrypted and therefore not considered "secure". The Samsung Pay and LG implementations of MST use secure EMV-compatible tokens and are considered to be secure. References Payment cards
Magnetic secure transmission
Technology
304
71,514,894
https://en.wikipedia.org/wiki/Voevodsky%20Institute%20of%20Chemical%20Kinetics%20and%20Combustion
V. V. Voevodsky Institute of Chemical Kinetics and Combustion of the Siberian Branch of the RAS, ICKC SB RAS () is a research institute in Novosibirsk, Russia. It was founded in 1957. History The institute was founded in 1957. Its team was formed of scientists led by A. A. Kovalsky and V. V. Voevodsky. In 2002, the number of employees was 284. Activities The study of combustion mechanisms in gas and condensed phases, the processes of formation and distribution of aerosols etc. The institute has developed methods for high-resolution radiospectroscopy and methods for the filtration combustion of gases. It created aerosol technologies for the protection of crops and forests. Awards The works of the institute staff were awarded the USSR Council of Ministers Prize (1985), the Lenin Prize (1986), two USSR State Prizes (1968, 1988) and the State Prize of the Russian Federation (1994). References External links Институт химической кинетики и горения им. В. В. Воеводского СО РАН. СО РАН. Research institutes in Novosibirsk Chemical research institutes 1957 establishments in the Soviet Union Research institutes established in 1957 Research institutes in the Soviet Union
Voevodsky Institute of Chemical Kinetics and Combustion
Chemistry
294
3,628,041
https://en.wikipedia.org/wiki/Moscow%E2%80%93Washington%20hotline
The Moscow–Washington hotline (formally known in the United States as the Washington–Moscow Direct Communications Link; ) is a system that allows direct communication between the leaders of the United States and the Russian Federation (formerly the Soviet Union). This hotline was established in 1963 and links the Pentagon with the Kremlin (historically, with Soviet Communist Party leadership across the square from the Kremlin itself). Although in popular culture it is known as the "red telephone", the hotline was never a telephone line, and no red phones were used. The first implementation used Teletype equipment, and shifted to fax machines in 1986. Since 2008, the Moscow–Washington hotline has been a secure computer link over which messages are exchanged by a secure form of email. Origins Background Several people came up with the idea for a hotline, including Harvard professor Thomas Schelling, who had worked on nuclear war policy for the Defense Department previously. Schelling credited the pop fiction novel Red Alert (the basis of the film Dr. Strangelove) with making governments more aware of the benefit of direct communication between the superpowers. In addition, Parade editor Jess Gorkin personally badgered 1960 presidential candidates John F. Kennedy and Richard Nixon, and buttonholed the Soviet premier Nikita Khrushchev during a U.S. visit to adopt the idea. During this period Gerard C. Smith, as head of the State Department Policy Planning Staff, proposed direct communication links between Moscow and Washington. Objections from others in the State Department, the U.S. military, and the Kremlin delayed introduction. The 1962 Cuban Missile Crisis made the hotline a priority. During the standoff, official diplomatic messages typically took six hours to deliver; unofficial channels, such as via television network correspondents, had to be used too as they were quicker. The experience of the crisis convinced both sides of the need for better communications. During the crisis, the United States took nearly twelve hours to receive and decode Nikita Khrushchev's 3,000-word-initial settlement message – a dangerously long time. By the time Washington had drafted a reply, a tougher message from Moscow had been received, demanding that U.S. missiles be removed from Turkey. White House advisers thought faster communications could have averted the crisis, and resolved it quickly. The two countries signed the Hot Line Agreement on June 20, 1963 – the first time they formally took action to cut the risk of starting a nuclear war unintentionally. It was used for the first time by U.S. President John F. Kennedy on August 30, 1963. Agreement The "hotline", as it would come to be known, was established after the signing of a "Memorandum of Understanding Regarding the Establishment of a Direct Communications Line" on June 20, 1963, in Geneva, Switzerland, by representatives of the Soviet Union and the United States. Political criticism The Republican Party criticized the hotline in its 1964 national platform; it said the Kennedy administration had "sought accommodations with Communism without adequate safeguards and compensating gains for freedom. It has alienated proven allies by opening a 'hot line' first with a sworn enemy rather than with a proven friend, and in general pursued a risky path such as began at Munich a quarter of a century ago." Technology and procedure The Moscow–Washington hotline was intended for text only; speech might be misinterpreted. 
Leaders wrote in their native language and messages were translated at the receiving end. Teletype The first generation of the hotline used two full-time duplex telegraph circuits. The primary circuit was routed from Washington, D.C. via London, Copenhagen, Stockholm and Helsinki to Moscow. TAT-1, the first submarine transatlantic telephone cable, carried messages from Washington to London. A secondary radio line for back-up and service messages linked Washington and Moscow via Tangier. This network was originally built by Harris Corporation. In July 1963 the United States sent four sets of teleprinters with the Latin alphabet to Moscow for the terminal there. A month later the Soviet equipment, four sets of East German teleprinters with the Cyrillic alphabet made by Siemens, arrived in Washington. The hotline started operations on August 30, 1963. Encryption A Japanese-built device called Electronic Teleprinter Cryptographic Regenerative Repeater Mixer II (ETCRRM II) encrypted the teletype messages using a shared one-time pad. Each country delivered keying tapes used to encode its messages via its embassy abroad. An advantage of the one-time pad was that neither country had to reveal more sensitive encryption methods to the other. Satellite In September 1971, Moscow and Washington decided to upgrade the system. The countries also agreed for the first time when the line should be used. Specifically, they agreed to notify each other immediately in the event of an accidental, unauthorized or unexplained incident involving a nuclear weapon that could increase the risk of nuclear war. Two new satellite communication lines supplemented the terrestrial circuits using two U.S. Intelsat satellites, and two Soviet Molniya II satellites. This arrangement lasted from 1971 to 1978; it made the radio link via Tangier redundant. Facsimile In May 1983, President Ronald Reagan proposed to upgrade the hotline by the addition of high-speed facsimile capability. The Soviet Union and the United States agreed formally to do this on July 17, 1984. According to the agreement, upgrades were to take place through use of Intelsat satellites and modems, facsimile machines, and computers. The facsimile terminals were operational by 1986. The teletype circuits were cut in 1988 after several years of testing and use proved the fax links to be reliable. The Soviets transferred the hotline link to the newer, geostationary Gorizont-class satellites of the Stationar system. In 1988, the US side of the hotline system was located at the National Military Command Center in the Pentagon. Each MOLINK (Moscow Link) team worked an eight-hour shift: a non-commissioned officer looked after the equipment, and a commissioned officer who was fluent in Russian and well-briefed on world affairs was translator. The hotline was tested hourly. U.S. test messages included excerpts of William Shakespeare, Mark Twain, encyclopedias, and a first-aid manual; Soviet tests included passages from the works of Anton Chekhov. MOLINK staffers took special care not to include innuendo or literary imagery that could be misinterpreted, such as passages from Winnie the Pooh, given that a bear is considered the national symbol of Russia. The Soviets also asked, during the Carter administration, that Washington not send routine communications through the hotline. Upon receipt of the message at the NMCC, the message was translated into English, and both the original Russian and the translated English texts are transmitted to the White House Situation Room. 
However, if the message were to indicate "an imminent disaster, such as an accidental nuclear strike", the MOLINK team would telephone the gist of the message to the Situation Room duty officer who would brief the president before a formal translation was complete. Email In 2007, the Moscow–Washington hotline was upgraded; a dedicated computer network links Moscow and Washington. The new system started operations on January 1, 2008. It continues to use the two satellite links but a fiber optic cable replaced the old back-up cable. Commercial software is used for both chat and email: chat to coordinate operations, and email for actual messages. Transmission is nearly instantaneous. Usage The first message transmitted over the hotline was on August 30, 1963. Washington sent Moscow the text: "THE QUICK BROWN FOX JUMPED OVER THE LAZY DOG'S BACK 1234567890". The message was sent in all capital letters, since the equipment did not support lowercase. Later, during testing, the Russian translators sent a message asking their American counterparts, "What does it mean when your people say 'The quick brown fox jumped over the lazy dog'?" The primary link was accidentally cut several times, for example near Copenhagen by a Danish bulldozer operator, and by a Finnish farmer who ploughed it up once. Regular testing of both the primary and backup links took place daily. During the even hours, the US sent test messages to the Soviet Union. In the odd hours, the Soviet Union sent test messages to the US. The line was used during: 1963: Assassination of President Kennedy 1967: Six Day War 1968: Apollo 8 mission progress 1971: War between India and Pakistan 1973: Yom Kippur War 1974: Turkish Invasion of Cyprus 1979: Soviet–Afghan War 1981: Threat of Soviet Invasion of Poland 1982: Israeli Invasion of Lebanon 1991: Gulf War 2001: The 9/11 attacks 2003: Aftermath of Iraq War On October 31, 2016, the Moscow–Washington hotline was used to reinforce Barack Obama's September warning that the U.S. would consider any interference on Election Day a grave matter. Other hotlines with Moscow Another hotline-type mechanism for formal communications between Washington and Moscow are the US Nuclear Risk Reduction Center and Russian National Nuclear Risk Reduction Center, which were initiated by Ronald Reagan and Mikhail Gorbachev in 1985 following the Reykjavik Summit to reduce the risk of nuclear war. The negotiations began in May 1986, and the sides agreed in 1987. The sides established NRRCs in Washington and in Moscow, exchanging arms control and confidence building measures notifications, initially including those required by the agreement on Measures to Reduce the Risk of Outbreak of Nuclear War and the 1972 Agreement on the Prevention of Incidents on and over the High Seas, with their duties expanding over the decades to include notifications covering more than 16 treaties and agreements. In 2012, it was announced that a proposal was being negotiated with Moscow to add cyber warfare to the topics to be discussed on the hotline. Since 2007 there has been a hotline between Beijing and Washington and also Beijing and Moscow. At the beginning of the Russian invasion of Ukraine, the United States and Russia created a deconfliction line to prevent miscalculations or escalation. In November 2022, an anonymous U.S. official told Reuters that the line had only been used once in the war. 
The official said that the line was used to communicate concerns about Russian military operations near Ukrainian infrastructure, but did not elaborate. The official said it was not used when a missile hit Poland. In popular culture In numerous books, movies, video games, etc., the hotline between Washington and Moscow is represented by a red phone, although the real hotline has never been a telephone line. A hotline telephone was depicted in the film Fail-Safe as the "Red 1 / Ultimate 1 Touch phone", and also in Stanley Kubrick's film Dr. Strangelove, both from 1964 and both loosely based on Peter George's Cold War thriller novel Red Alert from 1958. The 1970 science fiction film Colossus: The Forbin Project depicts the hotline as a sophisticated video conference link. In the 1979 film Meteor a direct telephone link is used as the hotline. A more realistic depiction of the Hotline was Tom Clancy's novel The Sum of All Fears from 1991 and its 2002 film adaptation, in which a text-based computer communications system was depicted, resembling the actual Hotline equipment from the 1980s and 1990s. In the novel the isolated and unprepared President and National Security Advisor consistently misinterpret the Russian messages, prompting the Vice President aboard the National Emergency Airborne Command Post to remark "These damned messages over the Hot Line are making things worse instead of better." The protagonist Jack Ryan then communicates information over the Hotline from the Pentagon's NMCC to both country's leaders that defuses the crisis. In the 1990 HBO film By Dawn's Early Light, the White House Situation Room equipment that receives the (translated) hotline message, apparently relayed by the Pentagon-NMCC MOLINK team, is depicted as a teleprinter (and not as a fax machine, the technology already in use at the NMCC itself by that year). A telephone is used in the intro cinematic of the video game Command & Conquer: Red Alert 2. The call is placed by the US president to the Kremlin in the wake of a global Soviet invasion. In "World War Three", a 2005 episode of the British sci-fi television series Doctor Who, the Slitheen await a phone call to plunge the planet into a nuclear holocaust on an actual red telephone, directly pastiching the cold war fears related to the hotline. Political advertising The "red phone" was the centerpiece of television commercials used in the 1984 Democratic primary and 1984 presidential election and the 2008 Democratic primary elections. In 1984, an advertisement made by Bob Beckel and Roy Spence on behalf of candidate Walter Mondale suggested that "The most awesome, powerful responsibility in the world lies in the hand that picks up this phone." The advertisement was intended to raise questions about candidate Gary Hart's readiness for the presidency. The red phone was also featured prominently in an advertisement from that year targeting President Ronald Reagan's Strategic Defense Initiative. In the second ad, the ringing phone goes unanswered while the narrator says, "there will be no time to wake a president – computers will take control." Roy Spence revived the "red phone" idea in 2008 in an advertisement for candidate Hillary Clinton. 
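Illustration (added here, not part of the original article): the one-time-pad scheme used on the early teletype link, described under Encryption above, amounts to combining each message byte with one byte of key tape that is never reused. The sketch below, in Java, shows only the principle; the class name and the randomly generated key are stand-ins for the actual ETCRRM II hardware and the physically delivered keying tapes.

    import java.security.SecureRandom;

    public class OneTimePadSketch {
        // XOR each message byte with the corresponding key-tape byte.
        // Applying the same operation twice with the same key restores the plaintext.
        static byte[] xor(byte[] data, byte[] keyTape) {
            byte[] out = new byte[data.length];
            for (int i = 0; i < data.length; i++) {
                out[i] = (byte) (data[i] ^ keyTape[i]);
            }
            return out;
        }

        public static void main(String[] args) {
            byte[] message = "TEST MESSAGE".getBytes();
            byte[] keyTape = new byte[message.length];      // must be at least as long as the message
            new SecureRandom().nextBytes(keyTape);          // stands in for the delivered key tape

            byte[] cipherText = xor(message, keyTape);      // sending side
            byte[] recovered  = xor(cipherText, keyTape);   // receiving side, same key tape
            System.out.println(new String(recovered));      // prints: TEST MESSAGE
        }
    }

As long as each stretch of key tape is truly random, kept secret, and used only once, the scheme is information-theoretically secure, which is why neither country had to reveal its more sensitive encryption methods to the other.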
See also Islamabad–New Delhi hotline Seoul–Pyongyang hotline Beijing–Washington hotline Notes and references External links "DCL: The Direct Communications Link", Cryptolog, December 1983, declassified internal newsletter of the National Security Agency; five-page illustrated article detailing the political and technical history of the hotline up to the Reagan administration. Top Level Telecommunications: The Washington-Moscow Hot Line Crypto Machines: The Washington-Moscow Hot Line The original Hotline Agreement Texts of 1963, 1971 and 1984 Communication circuits Foreign relations of the Soviet Union History of the foreign relations of the United States Soviet Union–United States relations Cold War history of the United States Cold War history of the Soviet Union 1963 in politics Russia–United States relations Telecommunications-related introductions in 1963 Communications in the Soviet Union Cold War terminology Military communications of the United States Military communications of Russia 1963 establishments in Washington, D.C. 1963 establishments in Russia Hotlines between countries Presidency of John F. Kennedy
Moscow–Washington hotline
Engineering
2,959
19,745,659
https://en.wikipedia.org/wiki/Resolution%20%28algebra%29
In mathematics, and more specifically in homological algebra, a resolution (or left resolution; dually a coresolution or right resolution) is an exact sequence of modules (or, more generally, of objects of an abelian category) that is used to define invariants characterizing the structure of a specific module or object of this category. When, as is usual, arrows are oriented to the right, the sequence is supposed to be infinite to the left for (left) resolutions, and to the right for right resolutions. However, a finite resolution is one where only finitely many of the objects in the sequence are non-zero; it is usually represented by a finite exact sequence in which the leftmost object (for resolutions) or the rightmost object (for coresolutions) is the zero object.

Generally, the objects in the sequence are restricted to have some property P (for example to be free). Thus one speaks of a P resolution. In particular, every module has free resolutions, projective resolutions and flat resolutions, which are left resolutions consisting, respectively, of free modules, projective modules or flat modules. Similarly every module has injective resolutions, which are right resolutions consisting of injective modules.

Resolutions of modules

Definitions
Given a module M over a ring R, a left resolution (or simply resolution) of M is an exact sequence (possibly infinite) of R-modules

\cdots \xrightarrow{d_3} E_2 \xrightarrow{d_2} E_1 \xrightarrow{d_1} E_0 \xrightarrow{\varepsilon} M \longrightarrow 0.

The homomorphisms d_i are called boundary maps. The map ε is called an augmentation map. For succinctness, the resolution above can be written as

E_\bullet \xrightarrow{\varepsilon} M \longrightarrow 0.

The dual notion is that of a right resolution (or coresolution, or simply resolution). Specifically, given a module M over a ring R, a right resolution is a possibly infinite exact sequence of R-modules

0 \longrightarrow M \xrightarrow{\varepsilon} C^0 \xrightarrow{d^0} C^1 \xrightarrow{d^1} C^2 \xrightarrow{d^2} \cdots,

where each Ci is an R-module (it is common to use superscripts on the objects in the resolution and the maps between them to indicate the dual nature of such a resolution). For succinctness, the resolution above can be written as

0 \longrightarrow M \xrightarrow{\varepsilon} C^\bullet.

A (co)resolution is said to be finite if only finitely many of the modules involved are non-zero. The length of a finite resolution is the maximum index n labeling a nonzero module in the finite resolution.

Free, projective, injective, and flat resolutions
In many circumstances conditions are imposed on the modules Ei resolving the given module M. For example, a free resolution of a module M is a left resolution in which all the modules Ei are free R-modules. Likewise, projective and flat resolutions are left resolutions such that all the Ei are projective and flat R-modules, respectively. Injective resolutions are right resolutions whose Ci are all injective modules.

Every R-module possesses a free left resolution, and a fortiori every module also admits projective and flat resolutions. The proof idea is to define E0 to be the free R-module generated by the elements of M, and then E1 to be the free R-module generated by the elements of the kernel of the natural map E0 → M, etc. Dually, every R-module possesses an injective resolution. Projective resolutions (and, more generally, flat resolutions) can be used to compute Tor functors.

A projective resolution of a module M is unique up to a chain homotopy, i.e., given two projective resolutions P0 → M and P1 → M of M there exists a chain homotopy between them.

Resolutions are used to define homological dimensions. The minimal length of a finite projective resolution of a module M is called its projective dimension and denoted pd(M).
For example, a module has projective dimension zero if and only if it is a projective module. If M does not admit a finite projective resolution then the projective dimension is infinite. For example, for a commutative local ring R, the projective dimension is finite if and only if R is regular and in this case it coincides with the Krull dimension of R. Analogously, the injective dimension id(M) and flat dimension fd(M) are defined for modules also. The injective and projective dimensions are used on the category of right R-modules to define a homological dimension for R called the right global dimension of R. Similarly, flat dimension is used to define weak global dimension. The behavior of these dimensions reflects characteristics of the ring. For example, a ring has right global dimension 0 if and only if it is a semisimple ring, and a ring has weak global dimension 0 if and only if it is a von Neumann regular ring. Graded modules and algebras Let M be a graded module over a graded algebra, which is generated over a field by its elements of positive degree. Then M has a free resolution in which the free modules Ei may be graded in such a way that the di and ε are graded linear maps. Among these graded free resolutions, the minimal free resolutions are those for which the number of basis elements of each Ei is minimal. The number of basis elements of each Ei and their degrees are the same for all the minimal free resolutions of a graded module. If I is a homogeneous ideal in a polynomial ring over a field, the Castelnuovo–Mumford regularity of the projective algebraic set defined by I is the minimal integer r such that the degrees of the basis elements of the Ei in a minimal free resolution of I are all lower than r-i. Examples A classic example of a free resolution is given by the Koszul complex of a regular sequence in a local ring or of a homogeneous regular sequence in a graded algebra finitely generated over a field. Let X be an aspherical space, i.e., its universal cover E is contractible. Then every singular (or simplicial) chain complex of E is a free resolution of the module Z not only over the ring Z but also over the group ring Z [π1(X)]. Resolutions in abelian categories The definition of resolutions of an object M in an abelian category A is the same as above, but the Ei and Ci are objects in A, and all maps involved are morphisms in A. The analogous notion of projective and injective modules are projective and injective objects, and, accordingly, projective and injective resolutions. However, such resolutions need not exist in a general abelian category A. If every object of A has a projective (resp. injective) resolution, then A is said to have enough projectives (resp. enough injectives). Even if they do exist, such resolutions are often difficult to work with. For example, as pointed out above, every R-module has an injective resolution, but this resolution is not functorial, i.e., given a homomorphism M → M' , together with injective resolutions there is in general no functorial way of obtaining a map between and . Abelian categories without projective resolutions in general One class of examples of Abelian categories without projective resolutions are the categories of coherent sheaves on a scheme . For example, if is projective space, any coherent sheaf on has a presentation given by an exact sequence The first two terms are not in general projective since for . But, both terms are locally free, and locally flat. 
Both classes of sheaves can be used in place for certain computations, replacing projective resolutions for computing some derived functors. Acyclic resolution In many cases one is not really interested in the objects appearing in a resolution, but in the behavior of the resolution with respect to a given functor. Therefore, in many situations, the notion of acyclic resolutions is used: given a left exact functor F: A → B between two abelian categories, a resolution of an object M of A is called F-acyclic, if the derived functors RiF(En) vanish for all i > 0 and n ≥ 0. Dually, a left resolution is acyclic with respect to a right exact functor if its derived functors vanish on the objects of the resolution. For example, given a R-module M, the tensor product    is a right exact functor Mod(R) → Mod(R). Every flat resolution is acyclic with respect to this functor. A flat resolution is acyclic for the tensor product by every M. Similarly, resolutions that are acyclic for all the functors Hom( ⋅ , M) are the projective resolutions and those that are acyclic for the functors Hom(M,  ⋅ ) are the injective resolutions. Any injective (projective) resolution is F-acyclic for any left exact (right exact, respectively) functor. The importance of acyclic resolutions lies in the fact that the derived functors RiF (of a left exact functor, and likewise LiF of a right exact functor) can be obtained from as the homology of F-acyclic resolutions: given an acyclic resolution of an object M, we have where right hand side is the i-th homology object of the complex This situation applies in many situations. For example, for the constant sheaf R on a differentiable manifold M can be resolved by the sheaves of smooth differential forms: The sheaves are fine sheaves, which are known to be acyclic with respect to the global section functor . Therefore, the sheaf cohomology, which is the derived functor of the global section functor Γ is computed as Similarly Godement resolutions are acyclic with respect to the global sections functor. See also Standard resolution Hilbert–Burch theorem Hilbert's syzygy theorem Free presentation Matrix factorizations (algebra) Notes References Homological algebra Module theory
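As a concrete illustration of the definitions above (an example added here for clarity, not taken from the article): over the ring of integers Z, the module Z/nZ for n ≥ 2 has the finite free resolution

0 \longrightarrow \mathbb{Z} \xrightarrow{\;\cdot n\;} \mathbb{Z} \xrightarrow{\;\pi\;} \mathbb{Z}/n\mathbb{Z} \longrightarrow 0,

where the first map is multiplication by n and π is the canonical projection. The resolution has length 1, and since Z/nZ is not itself projective as a Z-module, its projective dimension is pd(Z/nZ) = 1.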
Resolution (algebra)
Mathematics
2,027
19,634,741
https://en.wikipedia.org/wiki/Out%20of%20position%20%28crash%20testing%29
Out of position (OOP), in crash testing and car accident medical literature, indicates a passenger position which is not the normal upright and forward-facing position. Common examples observed in crashes are an occupant leaning over to reach for the car radio, or an unbelted passenger thrown forward during panic braking. The concept is of interest because small changes in a passenger's position can have profound effects on the actual kinematic response, especially in rear impacts, as shown both in practical testing and in theoretical models.

Risk of injury
Out of position occupants are at increased risk of injury. Even low-speed impacts can cause disc herniation and lumbar fracture in OOP passengers. Airbags can prove fatal to OOP passengers; modulating the airbag folding pattern has been proposed as a method to reduce injuries. Crash testing has shown increased forces acting on the neck and torso when dummies were leaning forward rather than upright; a partial correlation with seat stiffness has been observed, with stiffer seats increasing the force loads on the upper neck.

References

External links
U.S. FMVSS 208 Test Configurations

Transport safety
Out of position (crash testing)
Physics
236
1,591,333
https://en.wikipedia.org/wiki/Java%20Cryptography%20Architecture
In computing, the Java Cryptography Architecture (JCA) is a framework for working with cryptography using the Java programming language. It forms part of the Java security API, and was first introduced in JDK 1.1 in the java.security package. The JCA uses a "provider"-based architecture and contains a set of APIs for various purposes, such as encryption, key generation and management, secure random-number generation, certificate validation, etc. These APIs provide an easy way for developers to integrate security into application code.

See also
Java Cryptography Extension
Bouncy Castle (cryptography)

External links
Official JCA guides: JavaSE6, JavaSE7, JavaSE8, JavaSE9, JavaSE10, JavaSE11

JDK components
Cryptographic software
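A minimal usage sketch (added for illustration, not part of the original article) of the provider-based getInstance pattern described above. The algorithm strings "SHA-256" and "AES" are standard JCA algorithm names; which provider actually supplies the implementation depends on the runtime configuration. Note that the symmetric-key classes used here live in javax.crypto, historically the Java Cryptography Extension.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class JcaSketch {
        public static void main(String[] args) throws Exception {
            // Message digest: the factory method selects an implementation from the installed providers.
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest("hello".getBytes(StandardCharsets.UTF_8));
            System.out.println("SHA-256 digest length: " + digest.length + " bytes");

            // Secure random-number generation.
            byte[] nonce = new byte[16];
            new SecureRandom().nextBytes(nonce);

            // Symmetric key generation.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey key = kg.generateKey();
            System.out.println("Generated " + key.getAlgorithm() + " key of "
                    + key.getEncoded().length * 8 + " bits");
        }
    }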
Java Cryptography Architecture
Mathematics
159
6,881,402
https://en.wikipedia.org/wiki/Submarine%20groundwater%20discharge
Submarine groundwater discharge (SGD) is a hydrological process which commonly occurs in coastal areas. It is described as the submarine inflow of fresh and brackish groundwater from land into the sea. Submarine groundwater discharge is controlled by several forcing mechanisms, which cause a hydraulic gradient between land and sea. Depending on the regional setting, the discharge occurs either as (1) a focused flow along fractures in karst and rocky areas, (2) a dispersed flow in soft sediments, or (3) a recirculation of seawater within marine sediments. Submarine groundwater discharge plays an important role in coastal biogeochemical processes and hydrological cycles, for example in the formation of offshore plankton blooms and the release of nutrients, trace elements and gases. It affects coastal ecosystems and has been used as a freshwater resource by some local communities for millennia.

Forcing mechanisms
In coastal areas the groundwater and seawater flows are driven by a variety of factors. Both types of water can circulate in marine sediments due to tidal pumping, waves, bottom currents or density-driven transport processes. Meteoric freshwater can discharge along confined and unconfined aquifers into the sea, or the opposite process can take place, with seawater intruding into groundwater-charged aquifers. The flow of both fresh and sea water is primarily controlled by the hydraulic gradients between land and sea, the difference in density between the two waters, and the permeability of the sediments.

According to Drabbe and Badon-Ghijben (1888) and Herzberg (1901), the thickness of a freshwater lens below sea level (z) corresponds with the thickness of the freshwater level above sea level (h) as

z = \frac{\rho_f}{\rho_s - \rho_f}\, h,

with z being the thickness between the saltwater–freshwater interface and the sea level, h being the thickness between the top of the freshwater lens and the sea level, ρ_f being the density of freshwater and ρ_s being the density of saltwater. Inserting the densities of freshwater (ρ_f = 1.00 g·cm⁻³) and seawater (ρ_s = 1.025 g·cm⁻³), the equation simplifies to

z = 40\, h.

Together with Darcy's law, the length of a salt wedge from the shoreline into the hinterland can be calculated as

L = \frac{(\rho_s - \rho_f)\, K_f\, m}{\rho_f\, Q},

with K_f being the hydraulic conductivity, m the aquifer thickness and Q the discharge rate. Assuming an isotropic aquifer system, the length of the salt wedge depends solely on the hydraulic conductivity and the aquifer thickness, and is inversely related to the discharge rate. These assumptions are only valid under hydrostatic conditions in the aquifer system. In general the interface between fresh and saline water forms a zone of transition due to diffusion/dispersion or local anisotropy.

Methods
The first study about submarine groundwater discharge was done by Sonrel (1868), who speculated on the risk of submarine springs for sailors. However, until the mid-1990s, SGD remained largely unrecognized by the scientific community because the freshwater discharge was hard to detect and measure. The first elaborated method to study SGD was developed by Moore (1996), who used radium-226 as a tracer for groundwater. Since then several methods and instruments have been developed to detect and quantify discharge rates.

Radium-226
The first study which detected and quantified submarine groundwater discharge on a regional basis was done by Moore (1996) in the South Atlantic Bight off South Carolina.
He measured enhanced radium-226 concentrations within the water column near shore and up to about from the shoreline. Radium-226 is a decay product of thorium-230, which is produced within sediments and supplied by rivers. However, these sources could not explain the high concentrations present in the study area. Moore (1996) hypothesized that submarine groundwater, enriched in radium-226, was responsible for the high concentrations. This hypothesis has been tested numerous times at sites around the world and confirmed at each site. Seepage meter Lee (1977) designed a seepage meter, which consists of a chamber which is connected to a sampling port and a plastic bag. The chamber is inserted into the sediment and water discharging through the sediments is caught within the plastic bag. The change in volume of water which is caught in the plastic bag over time represents the freshwater flux. Pore water profiles According to Schlüter et al. (2004) chloride pore water profiles can be used to investigate submarine groundwater discharge. Chloride can be used as a conservative tracer, as it is enriched in seawater and depleted in groundwater. Three different shapes of chloride pore water profiles reflect three different transport modes within marine sediments. A chloride profile showing constant concentrations with depth indicates that no submarine groundwater is present. A chloride profile with a linear decline indicates a diffusive mixing between groundwater and seawater and a concave shaped chloride profile represents an advective admixture of submarine groundwater from below. Stable isotope ratios in the water molecule may also be used to trace and quantify the sources of a submarine groundwater discharge. See also Wonky hole, freshwater submarine exit points for coral and sediment covered sediment filled old river channels References Fresh water Physical oceanography Biogeochemistry
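To make the Ghijben–Herzberg and salt-wedge relations quoted above concrete, here is a small calculation sketch in Java; the water-table height, hydraulic conductivity, aquifer thickness and discharge rate are assumed illustrative values, not figures from the article.

    public class GhijbenHerzbergSketch {
        public static void main(String[] args) {
            double rhoF = 1.000;   // freshwater density, g/cm^3
            double rhoS = 1.025;   // seawater density, g/cm^3
            double h    = 2.0;     // freshwater table height above sea level, m (assumed)

            // Depth of the freshwater lens below sea level: z = rhoF / (rhoS - rhoF) * h = 40 h.
            double z = rhoF / (rhoS - rhoF) * h;
            System.out.printf("Freshwater lens reaches about %.0f m below sea level%n", z);

            // Salt-wedge length from the Darcy-based relation, with assumed aquifer parameters.
            double kf = 1e-4;      // hydraulic conductivity, m/s
            double m  = 20.0;      // aquifer thickness, m
            double q  = 1e-6;      // freshwater discharge per unit width of coastline, m^2/s
            double L  = (rhoS - rhoF) * kf * m / (rhoF * q);
            System.out.printf("Salt wedge extends about %.0f m inland%n", L);
        }
    }

With these values the lens extends roughly 80 m below sea level and the salt wedge roughly 50 m inland; halving the discharge rate doubles the wedge length, reflecting the inverse dependence noted above.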
Submarine groundwater discharge
Physics,Chemistry,Environmental_science
1,123
31,842,060
https://en.wikipedia.org/wiki/Michaab
The Michaab was an early medical device, invented by Al-Zahrawi, a form of lithotrite which was minimally-invasive. He was able to crush the stone inside the bladder without the need for a surgical incision. It was later modified by Jean Civiale, and was used to perform transurethral lithotripsy, the first known minimally invasive surgery, to crush stones inside the bladder without having to open the abdomen (lithotomy). To remove a calculus the instrument was inserted through the urethra and holes bored in the stone. Afterwards, it was crushed with the same instrument and resulting fragments aspirated or allowed to flow normally with urine. References Medical equipment Urologic procedures
Michaab
Biology
152
31,008,821
https://en.wikipedia.org/wiki/Conveyor%20belt%20furnace
A conveyor belt furnace is a furnace that uses a conveyor or belt to carry process parts or material through a primary heating chamber for rapid thermal processing. It is designed for fast drying and curing of products, and has widespread use in the firing of thick films and in the metallization process of solar cell manufacturing. Other names for the conveyor belt furnace include metallization furnace, belt furnace, atmosphere furnace, infrared furnace and fast fire furnace.

Conveyor furnaces typically adopt a tunnel structure and contain multiple controlled zones, including preheating, binder burn-out, heating, firing, and cooling. A conveyor furnace also features fast thermal response and a uniform, stable temperature distribution. Some can heat-treat parts at temperatures of around 1050 °C. The belt speed of a conveyor furnace can be as high as 6000 mm/min. Products are heated efficiently by infrared radiation (a furnace can also use ceramic heaters or IR lamps) and are dried and fired while passing through the controlled zones, followed by rapid cooling.

Process applications

Thick film processing
After a paste is screened onto a substrate and it settles for 5–15 minutes at room temperature, it undergoes oven drying at 100–150 °C for 10–15 minutes to remove solvents. Firing is then completed in conveyor belt furnaces at temperatures between 500 and 1000 °C.

Crystalline silicon solar cell manufacturing
Electrical contacts are usually formed by screen printing. The firing is done in conveyor belt furnaces at a temperature of about 700 °C for a few minutes. Upon firing, the organic solvents evaporate and the metal powder becomes a conducting path for the electric current.

Thin film solar cell manufacturing
A transparent conducting glass, coated with a doped SnO2 or ITO film, is used as a substrate. A thin film, such as CdS, is then deposited through CSS or CBD techniques. The CdS film is heat treated by a conveyor belt furnace in a reducing atmosphere or in the presence of CdCl2 at 400–500 °C.

Dye-sensitized solar cell (DSSC) manufacturing
TiO2 nanoparticles have been used extensively to increase the interfacial surface area in dye-sensitized solar cells. Nanoparticle films are generally made by screen printing a paste of titania nanocrystals and then sintering the particles together at 450–500 °C in a conveyor belt furnace.

References

Industrial furnaces
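As a rough illustration of what the belt speed quoted above means for processing time (the heated-zone length is an assumed value, not a figure from the article):

    public class BeltFurnaceDwellTimeSketch {
        public static void main(String[] args) {
            double beltSpeedMmPerMin = 6000.0;  // upper end of the belt-speed range given above
            double heatedLengthMm    = 3000.0;  // assumed combined length of the controlled zones

            double dwellMinutes = heatedLengthMm / beltSpeedMmPerMin;
            System.out.printf("Residence time in the heated zones: %.1f min (%.0f s)%n",
                    dwellMinutes, dwellMinutes * 60.0);
            // At 6000 mm/min a 3 m heated section gives only about 30 seconds at temperature,
            // which is consistent with the "fast fire" description of these furnaces.
        }
    }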
Conveyor belt furnace
Chemistry
484
599,215
https://en.wikipedia.org/wiki/Synchrotron
A synchrotron is a particular type of cyclic particle accelerator, descended from the cyclotron, in which the accelerating particle beam travels around a fixed closed-loop path. The strength of the magnetic field which bends the particle beam into its closed path increases with time during the accelerating process, being synchronized to the increasing kinetic energy of the particles. The synchrotron is one of the first accelerator concepts to enable the construction of large-scale facilities, since bending, beam focusing and acceleration can be separated into different components. The most powerful modern particle accelerators use versions of the synchrotron design. The largest synchrotron-type accelerator, also the largest particle accelerator in the world, is the Large Hadron Collider (LHC) near Geneva, Switzerland, built in 2008 by the European Organization for Nuclear Research (CERN). It can accelerate beams of protons to an energy of 7 tera electronvolts (TeV or 1012 eV). The synchrotron principle was invented by Vladimir Veksler in 1944. Edwin McMillan constructed the first electron synchrotron in 1945, arriving at the idea independently, having missed Veksler's publication (which was only available in a Soviet journal, although in English). The first proton synchrotron was designed by Sir Marcus Oliphant and built in 1952. Types Large synchrotrons usually have a linear accelerator (linac) to give the particles an initial acceleration, and a lower energy synchrotron which is sometimes called a booster to increase the energy of the particles before they are injected into the high energy synchrotron ring. Several specialized types of synchrotron machines are used today: A collider is a type in which, instead of the particles striking a stationary target, particles traveling in two countercirculating rings collide head-on, making higher-energy collisions possible. A storage ring is a special type of synchrotron in which the kinetic energy of the particles is kept constant. A synchrotron light source is a combination of different electron accelerator types, including a storage ring in which the desired electromagnetic radiation is generated. This radiation is then used in experimental stations located on different beamlines. Synchrotron light sources in their entirety are sometimes called "synchrotrons", although this is technically incorrect. Principle of operation The synchrotron evolved from the cyclotron, the first cyclic particle accelerator. While a classical cyclotron uses both a constant guiding magnetic field and a constant-frequency electromagnetic field (and is working in classical approximation), its successor, the isochronous cyclotron, works by local variations of the guiding magnetic field, adapting to the increasing relativistic mass of particles during acceleration. In a synchrotron, this adaptation is done by variation of the magnetic field strength in time, rather than in space. For particles that are not close to the speed of light, the frequency of the applied electromagnetic field may also change to follow their non-constant circulation time. By increasing these parameters accordingly as the particles gain energy, their circulation path can be held constant as they are accelerated. This allows the vacuum chamber for the particles to be a large thin torus, rather than a disk as in previous, compact accelerator designs. 
Also, the thin profile of the vacuum chamber allowed for a more efficient use of magnetic fields than in a cyclotron, enabling the cost-effective construction of larger synchrotrons. While the first synchrotrons and storage rings like the Cosmotron and ADA strictly used the toroid shape, the strong focusing principle independently discovered by Ernest Courant et al. and Nicholas Christofilos allowed the complete separation of the accelerator into components with specialized functions along the particle path, shaping the path into a round-cornered polygon. Some important components are given by radio frequency cavities for direct acceleration, dipole magnets (bending magnets) for deflection of particles (to close the path), and quadrupole / sextupole magnets for beam focusing. The combination of time-dependent guiding magnetic fields and the strong focusing principle enabled the design and operation of modern large-scale accelerator facilities like colliders and synchrotron light sources. The straight sections along the closed path in such facilities are not only required for radio frequency cavities, but also for particle detectors (in colliders) and photon generation devices such as wigglers and undulators (in third generation synchrotron light sources). The maximum energy that a cyclic accelerator can impart is typically limited by the maximum strength of the magnetic fields and the minimum radius (maximum curvature) of the particle path. Thus one method for increasing the energy limit is to use superconducting magnets, these not being limited by magnetic saturation. Electron/positron accelerators may also be limited by the emission of synchrotron radiation, resulting in a partial loss of the particle beam's kinetic energy. The limiting beam energy is reached when the energy lost to the lateral acceleration required to maintain the beam path in a circle equals the energy added each cycle. More powerful accelerators are built by using large radius paths and by using more numerous and more powerful microwave cavities. Lighter particles (such as electrons) lose a larger fraction of their energy when deflected. Practically speaking, the energy of electron/positron accelerators is limited by this radiation loss, while this does not play a significant role in the dynamics of proton or ion accelerators. The energy of such accelerators is limited strictly by the strength of magnets and by the cost. Injection procedure Unlike in a cyclotron, synchrotrons are unable to accelerate particles from zero kinetic energy; one of the obvious reasons for this is that its closed particle path would be cut by a device that emits particles. Thus, schemes were developed to inject pre-accelerated particle beams into a synchrotron. The pre-acceleration can be realized by a chain of other accelerator structures like a linac, a microtron or another synchrotron; all of these in turn need to be fed by a particle source comprising a simple high voltage power supply, typically a Cockcroft-Walton generator. Starting from an appropriate initial value determined by the injection energy, the field strength of the dipole magnets is then increased. If the high energy particles are emitted at the end of the acceleration procedure, e.g. to a target or to another accelerator, the field strength is again decreased to injection level, starting a new injection cycle. Depending on the method of magnet control used, the time interval for one cycle can vary substantially between different installations. 
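The limit set by field strength and bending radius discussed above can be made quantitative with the standard magnetic-rigidity relation (the numerical values below are approximate and added here only as an illustration):

p = qB\rho \quad\Longrightarrow\quad pc\,[\mathrm{GeV}] \approx 0.2998\; B\,[\mathrm{T}] \times \rho\,[\mathrm{m}].

For dipole fields of roughly 8.3 T and a bending radius of roughly 2800 m, this gives pc ≈ 0.2998 × 8.3 × 2800 ≈ 7000 GeV, consistent with the 7 TeV proton beams quoted for the LHC earlier in this article.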
In large-scale facilities One of the early large synchrotrons, now retired, is the Bevatron, constructed in 1950 at the Lawrence Berkeley Laboratory. The name of this proton accelerator comes from its power, in the range of 6.3 GeV (then called BeV for billion electron volts; the name predates the adoption of the SI prefix giga-). A number of transuranium elements, unseen in the natural world, were first created with this machine. This site is also the location of one of the first large bubble chambers used to examine the results of the atomic collisions produced here. Another early large synchrotron is the Cosmotron built at Brookhaven National Laboratory which reached 3.3 GeV in 1953. Among the few synchrotrons around the world, 16 are located in the United States. Many of them belong to national laboratories; few are located in universities. As part of colliders Until August 2008, the highest energy collider in the world was the Tevatron, at the Fermi National Accelerator Laboratory, in the United States. It accelerated protons and antiprotons to slightly less than 1 TeV of kinetic energy and collided them together. The Large Hadron Collider (LHC), which has been built at the European Laboratory for High Energy Physics (CERN), has roughly seven times this energy (so proton-proton collisions occur at roughly 14 TeV). It is housed in the 27 km tunnel which formerly housed the Large Electron Positron (LEP) collider, so it will maintain the claim as the largest scientific device ever built. The LHC will also accelerate heavy ions (such as lead) up to an energy of 1.15 PeV. The largest device of this type seriously proposed was the Superconducting Super Collider (SSC), which was to be built in the United States. This design, like others, used superconducting magnets which allow more intense magnetic fields to be created without the limitations of core saturation. While construction was begun, the project was cancelled in 1994, citing excessive budget overruns — this was due to naïve cost estimation and economic management issues rather than any basic engineering flaws. It can also be argued that the end of the Cold War resulted in a change of scientific funding priorities that contributed to its ultimate cancellation. However, the tunnel built for its placement still remains, although empty. While there is still potential for yet more powerful proton and heavy particle cyclic accelerators, it appears that the next step up in electron beam energy must avoid losses due to synchrotron radiation. This will require a return to the linear accelerator, but with devices significantly longer than those currently in use. There is at present a major effort to design and build the International Linear Collider (ILC), which will consist of two opposing linear accelerators, one for electrons and one for positrons. These will collide at a total center of mass energy of 0.5 TeV. As part of synchrotron light sources Synchrotron radiation also has a wide range of applications (see synchrotron light) and many 2nd and 3rd generation synchrotrons have been built especially to harness it. The largest of those 3rd generation synchrotron light sources are the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, the Advanced Photon Source (APS) near Chicago, United States, and SPring-8 in Japan, accelerating electrons up to 6, 7 and 8 GeV, respectively. 
Synchrotrons which are useful for cutting edge research are large machines, costing tens or hundreds of millions of dollars to construct, and each beamline (there may be 20 to 50 at a large synchrotron) costs another two or three million dollars on average. These installations are mostly built by the science funding agencies of governments of developed countries, or by collaborations between several countries in a region, and operated as infrastructure facilities available to scientists from universities and research organisations throughout the country, region, or world. More compact models, however, have been developed, such as the Compact Light Source. Applications Life sciences: protein and large-molecule crystallography LIGA based microfabrication Drug discovery and research X-ray lithography X-ray microtomography Analysing chemicals to determine their composition Observing the reaction of living cells to drugs Inorganic material crystallography and microanalysis Fluorescence studies Semiconductor material analysis and structural studies Geological material analysis Medical imaging Particle therapy to treat some forms of cancer Radiometry: calibration of detectors and radiometric standards See also List of synchrotron radiation facilities Synchrotron radiation Cyclotron radiation Computed X-ray tomography Energy amplifier Superconducting radio frequency Coherent diffraction imaging References External links ESRF (European Synchrotron Radiation Facility) National Synchrotron Radiation Research Center (NSRRC) in Taiwan Elettra Sincrotrone Trieste - Elettra and Fermi lightsources Canadian Light Source Australian Synchrotron French synchrotron Soleil Diamond UK Synchrotron Lightsources.org IAEA database of electron synchrotron and storage rings CERN Large Hadron Collider Synchrotron Light Sources of the World A Miniature Synchrotron: room-size synchrotron offers scientists a new way to perform high-quality x-ray experiments in their own labs, Technology Review, February 4, 2008 Brazilian Synchrotron Light Laboratory Podcast interview with a scientist at the European Synchrotron Radiation Facility Indian SRS Spanish ALBA Light Source The tabletop synchrotron MIRRORCLE SOLARIS synchrotron in Poland Accelerator physics Synchrotron-related techniques Particle accelerators
Synchrotron
Physics
2,567
42,997,514
https://en.wikipedia.org/wiki/Butyriboletus%20abieticola
Butyriboletus abieticola is a pored mushroom in the family Boletaceae. It was originally described in 1975 by mycologist Harry Delbert Thiers as a species of Boletus, but transferred in 2014 to the newly created genus Butyriboletus. See also List of North American boletes References External links abieticola Fungi described in 1975 Fungi of North America Fungus species
Butyriboletus abieticola
Biology
87
4,234,662
https://en.wikipedia.org/wiki/Raymond%20Daudel
Raymond Daudel (2 February 1920 – 20 June 2006) was a French theoretical and quantum chemist. Trained as a physicist, he was an assistant to Irène Joliot-Curie at the Radium Institute. Daudel spent almost the entirety of his career as a professor at the Sorbonne and director of a laboratory of the Centre National de la Recherche Scientifique (CNRS). He is quoted as saying that the latter "was much better because the CNRS was very rich". This allowed Daudel to attract many co-workers from elsewhere in France and internationally.

Raymond Daudel was Officier de la Légion d'honneur and Officier de l'Ordre National du Mérite. He served as President of the European Academy of Arts, Sciences and Humanities in Paris, France. Daudel was a founding member and Honorary President of the International Academy of Quantum Molecular Science.

An academic as well as an author, Raymond Daudel wrote several books, including Quantum Chemistry, originally with R. Lefebvre and C. Moser in 1959 (Interscience Publishers, Inc., New York) and later with G. Leroy, D. Peeters, and M. Sana, published by Wiley in 1983. He was responsible for the organization of the first International Congress in Quantum Chemistry, held in Menton, France, in 1973.

References

20th-century French chemists
1920 births
2006 deaths
Academic staff of the University of Paris
Theoretical chemists
Members of the International Academy of Quantum Molecular Science
Members of the French Academy of Sciences
Officers of the Legion of Honour
Research directors of the French National Centre for Scientific Research
Raymond Daudel
Chemistry
337
43,377,867
https://en.wikipedia.org/wiki/Iknife
An onkoknife, iKnife, or intelligent scalpel is a surgical knife that tests tissue as it contacts it during an operation and immediately gives information as to whether that tissue contains cancer cells. During a surgery this information is given continuously to the surgeon, significantly accelerating biological tissue analysis and enabling identification and removal of cancer cells. Electroknives have been in use since the 1920s and smart knife surgery is not limited only to cancer detection. In clinical studies the iKnife has shown impressive diagnostic accuracy - distinguishing benign ovarian tissue from cancerous tissue (97.4% sensitivity, 100% specificity), breast tumour from normal breast tissue (90.9% sensitivity, 98.8% specificity) and recognises histological features of poor prognostic outcome in colorectal carcinoma. Furthermore, the technology behind iKnife - rapid evaporative ionisation mass spectrometry (REIMS) - can identify Candida yeasts down to species level. Research and development Zoltán Takáts, Ph.D., a Hungarian research chemist associated with Semmelweis University, in Budapest, invented the intelligent surgical knife. He currently is Professor of Analytical Chemistry at Imperial College London (UK). His iKnife has been tested in three hospitals from 2010 through 2012. Following laboratory analysis of tissue samples in 302 patients that were included in a data base, they included 1624 of cancer and 1309 of non-cancer samples. The current pilot version for the iKnife cost the creating Hungarian scientist, MediMass Ltd. (Old Buda based company) participating in the research, colleagues at Imperial College, and the Hungarian government approximately £200 thousand (68 million HUF). According to Takáts, the investments will have been worth it, however, as the device is on a likely path to marketing. The instrument has been acquired by the Massachusetts Waters Corporation for development by MediMass Ltd., which identifies it as substantive innovative technology labelled, "Intelligent late" and "REIMS", according to their press release on 23 July 2014. The business transaction included all MediMass innovation, including patents, software, databases, and human resources related to the technology. Principle of operation History of direct examination of biological tissue by mass spectrometry (MS) Direct examination of biological tissue by mass spectrometry (MS) began in the 1970s, but at that time the next advance in technical conditions did not exist. The method did not provide any useful information on the chemical composition of the samples tested. The first breakthrough came with desorption ionisation methods (secondary ionization mass spectrometry - SIMS, matrix-assisted laser desorption ionization - MALDI) a release said. Using these methods, after appropriate sample preparation, chemical biological tissue imaging analysis may be achieved. From the end of the 1990s, it became apparent that mass spectrometry data in imaging studies showed a high degree of tissue specificity, that tissue histology could determine mass spectral information, and vice versa. In the case of the detected protein and peptide components, tissue-specific expression of the proteins is known commonly. Precise immunohistochemical methods are based on this phenomenon. The mass spectrometer detection, mainly from cell membranes and similar tissue, specifically, of complex lipids from similar tissue, however, yields surprising results. 
Since the distribution of proteins are in good agreement with the distribution patterns obtained by immunohistochemical methods, the distribution of the lipid components of the direct ionization mass spectrometric, previously were relative methods leading to the appearance of a new era in the study of biological specimens. The desorption electrospray ionization (DESI) was the first-MS technique, which allowed non-invasive testing of any objects (or organisms) without sample preparation, regardless of their shape or mechanical properties. Rapid evaporative ionization mass spectrometry During the summer of 2009, rapid evaporative ionization mass spectrometry (REIMS) was described. This is the second generation method. Primarily, lipid components of tissues provide the information, but different metabolite molecules and certain proteins also allow detection. The most important advantage of the specificity of mass spectrometry data is at the histological level, providing the opportunity to identify biological tissue based on chemical composition. The REIMS method is unique, in that, while the above-described mass spectrometry techniques specific to the particular method developed ion sources should be used, but it is difficult in the case of ion source devices used in surgical practice. With the operation of a variety of tissue-cutting tools, such as a diathermy knife, a surgical laser, or an ultrasonic tissue atomizer, an aerosol is formed having a composition characteristic of the tissue cut, which also contains ionized cell constructs. Among them, in terms of using the REIMS method, the intact membrane-forming phospholipids are important, which easily are detectable by mass spectrometry on the one hand, and on the other hand, contain the combination of the characteristics of the particular tissue type. Mass spectrometric analysis is just one implementation of an effective extraction system development that was needed to cut the surgical site at the time of running the generated aerosol mass spectrometer. For this purpose, a so-called Venturi-tube serves, as well as the above-mentioned surgical hand pieces, being modified to smoke the aerosols through them. Analysis of the flue gas in the mass spectrometer is realized instantaneously, within a few tenths of a second, resulting in a tissue-specific phospholipid mass spectra being obtained, allowing a response by the surgeon in less than two seconds. The analysis of the collected spectra is made of special-evaluation software, which was developed for this purpose. The software continuously compares the incoming data during surgery, validates mass spectra stored in a database, assigns the appropriate class, and the result is displayed visually to the surgeon. It also may provide information to the surgeon via an audio signal. It is estimated that the tissue identification accuracy during operation is higher than 92%. Therefore, the method is suitable for use in a surgical environment for carrying out measurements, as well as for being a part of a complex tissue identification system used during surgical tumor removal, and it can assist the surgeon in the operating surgical site with accurate histological mapping. The rapid evaporative ionization mass spectrometry (REIMS) is a novel technique that allows electrosurgery cuts with near real-time characterization of human tissue in vivo analysis through analysis of the vapors released during the process of tissue and aerosols. 
The REIMS technology and electro-surgical procedure adds tissue diagnosis to the intelligent knife iKnife operating principle. See also Instruments used in general surgery References External links Cancer Research UK: An intelligent knife can tell ovarian cancer and healthy tissue apart. Could it make surgery smarter? "Intelligent knife" tells surgeon if tissue is cancerous by Sam Wong Surgical Knife May Sniff Out Cancer By Tanya Lewis, Staff Writer | October 10, 2013 Heath, Nick, The Intelligent knife that helps surgeons sniff out cancer, European Technology, November 26, 2014, distributed in TechRepublic Daily Digest, TechRepublic.com, November 27, 2014 https://web.archive.org/web/20140322224442/http://www.doublexscience.org/iknife-excises-uncertainty-in-tumor-resection/ Surgical instruments Mass spectrometry
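The database-matching step described above, in which an incoming spectrum is compared against reference spectra and assigned a tissue class within a fraction of a second, can be sketched as follows. This is only an illustrative nearest-neighbour comparison by cosine similarity with made-up numbers; it is not the actual MediMass/Waters evaluation software.

    import java.util.Map;

    public class SpectrumMatchSketch {
        // Cosine similarity between two intensity vectors binned on a common m/z axis.
        static double cosine(double[] a, double[] b) {
            double dot = 0, na = 0, nb = 0;
            for (int i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                na  += a[i] * a[i];
                nb  += b[i] * b[i];
            }
            return dot / (Math.sqrt(na) * Math.sqrt(nb));
        }

        // Return the tissue class whose reference spectrum is most similar to the measured one.
        static String classify(double[] measured, Map<String, double[]> references) {
            String best = null;
            double bestScore = -1.0;
            for (Map.Entry<String, double[]> e : references.entrySet()) {
                double score = cosine(measured, e.getValue());
                if (score > bestScore) { bestScore = score; best = e.getKey(); }
            }
            return best;
        }

        public static void main(String[] args) {
            Map<String, double[]> refs = Map.of(
                    "healthy tissue", new double[]{0.9, 0.1, 0.3},
                    "tumour",         new double[]{0.2, 0.8, 0.5});
            double[] measured = {0.25, 0.75, 0.45};   // toy binned spectrum
            System.out.println("Classified as: " + classify(measured, refs));
        }
    }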
Iknife
Physics,Chemistry
1,570
229,623
https://en.wikipedia.org/wiki/List%20of%20animal%20names
In the English language, many animals have different names depending on whether they are male, female, young, domesticated, or in groups. The best-known source of many English words used for collective groupings of animals is The Book of Saint Albans, an essay on hunting published in 1486 and attributed to Juliana Berners. Most terms used here may be found in common dictionaries and general information web sites. Generic terms The terms in this table apply to many or all taxons in a particular biological family, class, or clade. Terms by species or taxon Usage of collective nouns Merriam-Webster writes that most terms of venery fell out of use in the 16th century, including a "murder" for crows. It goes on to say that some of the terms in The Book of Saint Albans were "rather fanciful", explaining that the book extended collective nouns to people of specific professions, such as a "poverty" of pipers. It concludes that for lexicographers, many of these do not satisfy criteria for entry by being "used consistently in running prose" without meriting explanation. Some terms that were listed as commonly used were "herd", "flock", "school", and "swarm". Writing for Audubon, Nicholas Lund says that many such terms are not used in actuality. When he interviewed scientists who specialize in studying specific animals, they had not heard of these terms, such as a "bask" of crocodiles or "wisdom" of wombats, being applied in their fields. Lund noted that the common plural nouns for animals were "flock" for birds and "herd" for cows, conceding that for certain animals in small groups, there was currency in usage such as a "pod" of whales or "gaggle" of geese. See also Animal epithet Lists of animals List of animal sounds wikt:Appendix:Animals, a similar list on English Wiktionary Notes References Further reading Gray, Peter (1970). The encyclopedia of the biological sciences. Van Nostrand Reinhold Company. . Names Animals English nouns
List of animal names
Biology
428
166,152
https://en.wikipedia.org/wiki/Zhuang%20Zhou
Zhuang Zhou (), commonly known as Zhuangzi (; ; literally "Master Zhuang"; also rendered in the Wade–Giles romanization as Chuang Tzu), was an influential Chinese philosopher who lived around the 4th century BCE during the Warring States period, a period of great development in Chinese philosophy, the Hundred Schools of Thought. He is credited with writing—in part or in whole—a work known by his name, the Zhuangzi, which is one of two foundational texts of Taoism, alongside the Tao Te Ching. Life The only account of the life of Zhuangzi is a brief sketch in chapter 63 of Sima Qian's Records of the Grand Historian, and most of the information it contains seems to have simply been drawn from anecdotes in the Zhuangzi itself. In Sima's biography, he is described as a minor official from the town of Meng (in modern Anhui) in the state of Song, living in the time of King Hui of Liang and King Xuan of Qi (late fourth century BC). Sima Qian writes that Zhuangzi was especially influenced by Laozi, and that he turned down a job offer from King Wei of Chu, because he valued his personal freedom. His existence has been questioned by Russell Kirkland, who asserts that "there is no reliable historical data at all" for Zhuang Zhou, and that most of the available information on the Zhuangzi comes from its third-century commentator, Guo Xiang. Writings Zhuangzi is traditionally credited as the author of at least part of the work bearing his name, the Zhuangzi. This work, in its current shape consisting of 33 chapters, is traditionally divided into three parts: the first, known as the "Inner Chapters", consists of the first seven chapters; the second, known as the "Outer Chapters", consist of the next 15 chapters; the last, known as the "Mixed Chapters", consist of the remaining 11 chapters. The meaning of these three names is disputed: according to Guo Xiang, the "Inner Chapters" were written by Zhuangzi, the "Outer Chapters" written by his disciples, and the "Mixed Chapters" by other hands; the other interpretation is that the names refer to the origin of the titles of the chapters—the "Inner Chapters" take their titles from phrases inside the chapter, the "Outer Chapters" from the opening words of the chapters, and the "Mixed Chapters" from a mixture of these two sources. Further study of the text does not provide a clear choice between these alternatives. On the one side, as Martin Palmer points out in the introduction to his translation, two of the three chapters Sima Qian cited in his biography of Zhuangzi, come from the "Outer Chapters" and the third from the "Mixed Chapters". "Neither of these are allowed as authentic Chuang Tzu chapters by certain purists, yet they breathe the very spirit of Chuang Tzu just as much as, for example, the famous 'butterfly passage' of chapter 2." On the other hand, chapter 33 has been often considered as intrusive, being a survey of the major movements during the "Hundred Schools of Thought" with an emphasis on the philosophy of Hui Shi. Further, A.C. Graham and other critics have subjected the text to a stylistic analysis and identified four strains of thought in the book: a) the ideas of Zhuangzi or his disciples; b) a "primitivist" strain of thinking similar to Laozi in chapters 8-10 and the first half of chapter 11; c) a strain very strongly represented in chapters 28-31 which is attributed to the philosophy of Yang Zhu; and d) a fourth strain which may be related to the philosophical school of Huang-Lao. 
In this spirit, Martin Palmer wrote that "trying to read Chuang Tzu sequentially is a mistake. The text is a collection, not a developing argument." Zhuangzi was renowned for his brilliant wordplay and use an original form of gōng'àn (Chinese: 公案) or parables to convey messages. His critiques of Confucian society and historical figures are humorous and at times ironic. See also Dream argument Goblet word Liezi Tao Te Ching Notes Citations References Ames, Roger T. (1991), 'The Mencian Concept of Ren Xing: Does it Mean Human Nature?' in Chinese Texts and Philosophical Contexts, ed. Henry Rosemont, Jr. LaSalle, Ill.: Open Court Press. Ames, Roger T. (1998) ed. Wandering at Ease in the Zhuangzi. Albany: State University of New York Press. Bruya, Brian (translator). (2019). Zhuangzi: The Way of Nature. Princeton: Princeton University Press. . Graham A.C, Chuang-Tzû, the seven inner chapters, Allen & Unwin, London, 1981 Chuang-tzu: The Inner Chapters and other Writings from the Book of Chuang-tzu (London: Unwin Paperbacks, 1986) Hansen, Chad (2003). "The Relatively Happy Fish," Asian Philosophy 13:145-164. Herbjørnsrud, Dag (2018). "A Sea for Fish on Dry Land," the blog of the Journal of History of Ideas. (Google Books) Merton, Thomas. (1969). The Way of Chuang Tzu. New York: New Directions. Waltham, Clae (editor). (1971). Chuang Tzu: Genius of the Absurd. New York: Ace Books. The complete work of Chuang Tzu, Columbia University Press, 1968 External links Zhuangzi Bilingual Chinese-English version (James Legge's translation) - Chinese Text Project The Zhuangzi "Being Boundless", Complete translation of Zhuangzi by Nina Correa Chuang Tzu at Taoism.net, Chuang Tzu's Stories and Teachings - translations by Derek Lin Zhuangzi, The Internet Encyclopedia of Philosophy Zhuangzi, Stanford Encyclopedia of Philosophy Selection from The Zhuangzi, translated by Patricia Ebrey Chuang-tzu at Taopage.org Zhuang Zi, chapter 1 Zhuang Zi, chapter 2 James Legge Complete Translation In English The Legge translation of the complete Chuang Tzu (Zhuangzi) updated 360s BC births 280s BC deaths Year of birth uncertain Year of death uncertain 4th-century BC Chinese people 4th-century BC Chinese philosophers 3rd-century BC Chinese people 3rd-century BC Chinese philosophers Metaphysicians Chinese ethicists Chinese logicians Guqin players People from Bozhou Possibly fictional people from Asia Philosophers from Anhui Philosophers of culture Philosophers of education Philosophers of language Philosophers of logic Philosophers of science Chinese political philosophers Proto-anarchists Proto-evolutionary biologists Social philosophers Taoist immortals Zhou dynasty philosophers Zhou dynasty Taoists
Zhuang Zhou
Biology
1,417
52,023,256
https://en.wikipedia.org/wiki/Red%20Dead%20Redemption%202
Red Dead Redemption 2 is a 2018 action-adventure game developed and published by Rockstar Games. The game is the third entry in the Red Dead series and a prequel to the 2010 game Red Dead Redemption. The story is set in a fictionalized representation of the United States in 1899 and follows the exploits of Arthur Morgan, an outlaw and member of the Van der Linde gang, who must deal with the decline of the Wild West while attempting to survive against government forces, rival gangs, and other adversaries. The game is presented through first- and third-person perspectives, and the player may freely roam its interactive open world. Gameplay elements include shootouts, robberies, hunting, horseback riding, interacting with non-player characters, and maintaining the character's honor rating through moral choices and deeds. A bounty system governs the response of law enforcement and bounty hunters to crimes committed by the player. The game's development lasted over eight years, beginning soon after Red Dead Redemptions release, and it became one of the most expensive video games ever made. Rockstar co-opted all of its studios into one large team to facilitate development. They drew influence from real locations as opposed to film or art, focused on creating an accurate reflection of the time with the game's characters and world. The game was Rockstar's first built specifically for eighth-generation consoles, having tested their technical capabilities while porting Grand Theft Auto V. The game's soundtrack features an original score composed by Woody Jackson and several vocal tracks produced by Daniel Lanois. Development included a crunch schedule of 100-hour weeks, leading to reports of mandatory and unpaid overtime. Red Dead Online, the game's online multiplayer mode, lets up to 32 players engage in a variety of cooperative and competitive game modes. Red Dead Redemption 2 was released for the PlayStation 4 and Xbox One in October 2018, and for Windows and Stadia in November 2019. It broke several records and had the second-biggest launch in the history of entertainment, generating in sales from its opening weekend and exceeding the lifetime sales of Red Dead Redemption in two weeks. The game received critical acclaim, with praise directed at its story, characters, open world, graphics, music, and level of detail, but some criticism at its control scheme and emphasis on realism over player freedom. It won more than 175 Game of the Year awards and received multiple other accolades from awards shows and gaming publications. It is considered one of eighth-generation console gaming's most significant titles and among the greatest video games ever made. It is among the best-selling video games with over 67million copies shipped. Gameplay Red Dead Redemption 2 is a Western-themed action-adventure game. Played from a first- or third-person perspective, the game is set in an open-world environment featuring a fictionalized version of the United States in 1899. It features single-player and online multiplayer components, the latter released under Red Dead Online. For most of the game, the player controls outlaw Arthur Morgan, a member of the Van der Linde gang, as he completes missions—linear scenarios with set objectives—to progress the story; from the epilogue, the player controls Red Dead Redemption protagonist John Marston. Outside of missions, they can freely roam the interactive world. 
They may engage in combat with enemies using melee attacks, firearms, bow and arrow, throwables, or dynamite, and can dual wield weapons. The player can swim as Arthur but not as John. Red Dead Redemption 2s world features different landscapes with occasional travelers, bandits, and wildlife, and urban settlements ranging from farmhouses to towns and cities. Horses are the main forms of transportation, of which there are various breeds with different attributes. The player can steal horses and must train or tame wild horses to use them; to own a horse, they must saddle or stable it. Repeated use of a horse begins a bonding process, increased by leading, petting, cleaning, and feeding it, and the player will acquire advantages as they ride their horse. Stagecoaches and trains can be used to travel; the player can hijack a train or stagecoach by threatening the driver and rob its contents or passengers. The player may witness or partake in random events in the world, including ambushes, crimes, pleas for assistance, ride-by shootings, public executions, and animal attacks. They may be rewarded when helping others. They may partake in side activities, including tasks with companions and strangers, dueling, bounty hunting, searching for collectibles such as rock carvings, and playing poker, blackjack, dominoes, and five finger filet. Hunting animals provides food, income, and materials for crafting items. The choice of weapon and shot placement affect the quality and value of meat and pelt, and the player can skin the animal or carry the carcass, which will rot over time, decrease its value, and attract predators. Some story moments give the player the option to accept or decline additional missions and lightly shape the plot around their choices. They can choose different dialogue trees with non-player characters (NPCs), such as being friendly or insulting. If they choose to kill an NPC, they can loot their corpse. The Honor system measures how the player's actions are perceived: morally-positive choices and deeds like helping strangers, following the law, and sparing opponents in a duel will increase Honor, while negative deeds such as theft and harming innocents will decrease it. Dialogue and outcomes often differ based on Honor level, and attaining milestones grants unique benefits: high Honor provides special outfits and store discounts, while low Honor grants more items from looting. In addition to health and stamina bars, the player has cores, which affect the rate at which their health and stamina regenerate. Freezing or overheating rapidly drains cores, preventable by wearing weather-appropriate clothing. The player can gain or lose weight depending on how much they eat; an underweight character will have less health but more stamina, while an overweight character can better absorb damage but with less stamina. Eating and sleeping replenishes cores. The player can bathe to remain clean and visit a barber to change hairstyles; hair grows realistically over time. Weapons require cleaning to maintain their performance. Using a certain type of gun extensively improves weapon handling, reduces recoil, and increases the rate of reloading. The player can take cover, free aim, and target a person or animal. Individual body parts can be targeted to take down targets without killing them. Weapons consist of pistols, revolvers, repeaters, rifles, shotguns, bows, explosives, lassos, mounted Gatling guns, and melee weapons such as knives and tomahawks. 
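The Honor mechanic described above behaves like a bounded, signed meter whose value unlocks threshold effects. The Python sketch below is purely illustrative; the class, numbers, and thresholds are hypothetical and do not represent Rockstar's actual implementation.

```python
# Illustrative sketch of an Honor-style reputation meter.
# All names, values, and thresholds are hypothetical, not Rockstar's code.

class HonorMeter:
    def __init__(self, value=0, lo=-100, hi=100):
        self.value = value          # current honor, clamped to [lo, hi]
        self.lo, self.hi = lo, hi

    def apply(self, delta):
        """Positive delta for good deeds, negative for bad ones."""
        self.value = max(self.lo, min(self.hi, self.value + delta))

    def perks(self):
        """Threshold-based effects loosely mirroring those described above."""
        if self.value >= 50:
            return ["store discounts", "special outfits"]
        if self.value <= -50:
            return ["more items from looting"]
        return []

honor = HonorMeter()
honor.apply(+10)    # e.g. helping a stranger
honor.apply(-70)    # e.g. harming an innocent
print(honor.value, honor.perks())   # -60 ['more items from looting']
```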
The player can use Dead Eye to slow down time and mark targets. Once the targeting sequence ends, they fire to every marked location in a very short space of time. The Dead Eye system upgrades progressively and grants abilities such as targeting fatal points. When the player commits a crime, witnesses alert the law; the player can stop them to avoid repercussions. Law enforcers investigate once alerted. When the player is caught, a bounty is set on their head; as they commit more crimes, their bounty grows higher and more lawmen will be sent to hunt them. If the player escapes the law, bounty hunters track them down. After committing enough crime, the U.S. Marshals will be sent to the player's location. To escape law enforcement, they can evade the wanted vicinity, hide from pursuers, or kill them. The bounty will remain on their head, lawmen and civilians will be more vigilant, and regions where crimes have been committed will be on lockdown. When caught by lawmen, the player can surrender if they are unarmed and on foot. They can remove their bounty by paying it off or spending time in jail. Synopsis Setting and characters The world of Red Dead Redemption 2 spans five fictitious U.S. states: New Hanover, Ambarino, and Lemoyne are located to the immediate north and east of New Austin and West Elizabeth, which return from Red Dead Redemption. Ambarino is a mountain wilderness, with the largest settlement being the Wapiti Native American reservation; New Hanover encompasses a sweeping valley and woody foothills that feature the cattle town of Valentine, the riverside Van Horn Trading Post, and the coal town of Annesburg; and Lemoyne is composed of bayous and plantations resembling the Southeastern United States, and is home to the Southern town of Rhodes, the Creole village of Lagras, and the former French colony of Saint Denis, analogous to New Orleans. West Elizabeth consists of wide plains, dense forests, and the prosperous port town of Blackwater; the region is expanded from the original game with a northern portion containing the mountain resort town of Strawberry. New Austin is an arid desert region on the border with Mexico and centered on the frontier towns of Armadillo and Tumbleweed, featured in the original game. Parts of New Austin and West Elizabeth were redesigned to reflect the earlier time: Blackwater is under development, while Armadillo is a ghost town as a result of a cholera outbreak. The player controls Arthur Morgan (Roger Clark), an enforcer and veteran member of the Van der Linde gang, led by Dutch van der Linde (Benjamin Byron Davis), a charismatic anarchist who extols personal freedom and decries the encroaching march of modern civilization. The gang includes Dutch's best friend and co-leader Hosea Matthews (Curzon Dobell), John Marston (Rob Wiethoff) and his partner Abigail Roberts (Cali Elizabeth Moore) and son Jack Marston (Marissa Buccianti and Ted Sutherland), the lazy Uncle (John O'Creagh and James McBride), gunslingers Bill Williamson (Steve J. Palmer), Sean MacGuire (Michael Mellamphy), Javier Escuella (Gabriel Sloyer), and Micah Bell (Peter Blomquist), Black-Native American hunter Charles Smith (Noshir Dalal), and housewife-turned-gunslinger Sadie Adler (Alex McKenna). The gang's criminal acts bring them into conflict with wealthy oil magnate Leviticus Cornwall (John Rue), who recruits the Pinkerton Detective Agency, led by Andrew Milton (John Hickok) and his subordinate Edgar Ross (Jim Bentley), to hunt them down. 
The gang encounter Italian Mafia boss Angelo Bronte (Jim Pirri), controversial governor Colonel Alberto Fussar (Alfredo Narciso), and Dutch's nemesis Colm O'Driscoll (Andrew Berg), and become entangled with the warring Gray and Braithwaite families, who are rumored to be hoarding Civil War gold. Arthur helps Rains Fall (Graham Greene) and his son Eagle Flies (Jeremiah Bitsui), both members of the Native American Wapiti tribe whose land is targeted by the U.S. Army. Plot After a botched ferry heist in 1899, the Van der Linde gang are forced to leave their substantial money stash and flee Blackwater. Realizing the progress of civilization is ending the time of outlaws, they decide to gain enough money to escape the law and retire. They rob a train owned by Cornwall, who hires Pinkertons to apprehend them. The gang perform jobs to earn money, as Dutch continually promises the next heist will be their last. Following a shootout with Cornwall's men in Valentine, the gang relocate to Lemoyne, where they work simultaneously for the Grays and Braithwaites in an attempt to turn them against each other. However, the families double-cross them: the Grays kill Sean during an ambush in Rhodes, while the Braithwaites kidnap and sell Jack to Bronte. The gang retaliate and destroy both families before retrieving Jack from Bronte, who offers them leads on work but eventually double-crosses them. Dutch kidnaps and feeds him to an alligator as revenge, which disturbs Arthur. The gang rob a bank in Saint Denis, but the Pinkertons intervene, killing Hosea and arresting John. Dutch, Arthur, Bill, Javier, and Micah escape the city via a ship heading to Cuba. A torrential storm sinks the ship, and the men wash ashore on an island, Guarma, where they become embroiled in a war between tyrannical sugar plantation owner Fussar and the enslaved local population. After helping the revolutionaries kill Fussar, the group secure transport back to the United States and reunite with the rest of the gang, though they are soon assaulted by Pinkertons whom they repel. Dutch, paranoid that a gang member is working as an informant, obsesses over one last heist. He doubts Arthur's loyalty after he disobeys him by liberating John earlier than planned, naming Micah his top lieutenant in Arthur's place. Arthur becomes concerned Dutch is no longer the man he knew, as he is becoming insular, abandons their ideals, and murders Cornwall. Faced with his mortality after being diagnosed with tuberculosis, Arthur reflects on his actions and how to protect the gang, telling John to run away with Abigail and Jack and openly defying Dutch by aiding the local Native American people. Several gang members become disenchanted and leave, while Dutch and Micah arrange one final heist of an Army payroll train. Arthur's faith in Dutch is shattered when he abandons Arthur to the Army, leaves John for dead, and refuses to rescue a kidnapped Abigail. Arthur and Sadie rescue Abigail from Milton, who names Micah as the Pinkertons' informer before Abigail kills him. Arthur returns to camp and openly accuses Micah of betrayal. Dutch, Bill, Javier, and Micah turn on Arthur and a newly returned John, but the standoff is broken when Pinkertons attack. The player can choose to have Arthur aid John's escape by delaying the Pinkertons or return to the camp to recover the gang's money. Micah ambushes Arthur, and Dutch intervenes in their fight. Arthur convinces Dutch to abandon Micah and leave. 
If the player has high honor, Arthur succumbs to his injuries and disease while watching the sunrise; if the player has low honor, Micah executes him. Eight years later, in 1907, John and his family are trying to lead honest lives. They find work at a ranch where John reveals his combat experience against bandits threatening the ranch. Believing John is unwilling to give up his old ways, Abigail leaves with Jack. John takes a loan from the bank to purchase a ranch. He works with Uncle, Sadie, and Charles to build a new home, and proposes to Abigail on her return. Learning Micah is still alive and formed his own gang, John, Sadie, and Charles assault his camp. They find Dutch, who shoots Micah after a tense standoff and leaves in silence, allowing John to kill Micah and claim the gang's Blackwater stash to pay his debt. John marries Abigail and they start a new life on their ranch alongside Jack and Uncle, as Sadie and Charles leave for other pursuits. Mid-credits scenes show the fates of other surviving gang members. Edgar Ross tracks down Micah's killer, which leads him to John's ranch, foreshadowing the events of Red Dead Redemption. Development Preliminary work on Red Dead Redemption 2 began shortly following the release of the original game, Red Dead Redemption (2010). Rockstar San Diego, the studio behind the original game, had a rough outline of the game by mid-2011, and by late 2012, rough scripts of the game had been completed. When Rockstar Games realized a group of distinct studios would not necessarily work, it co-opted all of its studios into one large team, presented simply as Rockstar Games, to facilitate development between 1,600 people; a total of around 2,000 people worked on the game. Analyst estimations place the game's combined development and marketing budget between and , which would make it one of the most expensive video games to develop. While the main theme of the original game was to protect family at all costs, Red Dead Redemption 2 tells the story of a family's breakdown in the Van der Linde gang. The team was interested in exploring the story of why the gang fell apart, as frequently mentioned in the first game. Rockstar's Vice President of Creativity Dan Houser was inspired by film and literature when writing the game, though he avoided contemporary works to avoid being accused of stealing ideas. The team was not specifically inspired by film or art but rather real locations. They sought to create an accurate reflection of the time, with people and locations: the citizens feature a contrast between rich and poor, while the locales contrast between civilization and wilderness. Houser viewed the game as historical fiction, opting to allude to historical events instead of retelling them due to their unpleasantness. Red Dead Redemption 2s recording sessions began in 2013. Rockstar wanted a diverse cast of characters within the Van der Linde gang. The writers put particular focus on the individual stories behind each character, exploring their life before the gang and their reasons for remaining with the group. Several characters were cut from the game during development as their personalities failed to add to the narrative. The actors sometimes improvised some additional lines, but mostly remained faithful to the script. 
The team decided the player would control one character in Red Dead Redemption 2, as opposed to the three protagonists in Rockstar's previous title Grand Theft Auto V (2013), to follow the character more personally and understand how the events impact him. They felt a single character is more appropriate for the narrative structure of a Western. Red Dead Redemption 2 is the first game from Rockstar built specifically for the PlayStation 4 and Xbox One. Rockstar had tested these consoles' technical capabilities when porting Grand Theft Auto V, initially released on the PlayStation 3 and Xbox 360, to them. Once the team had defined what limitations were sustainable, they found the areas requiring the most focus. One of Rockstar's goals with Red Dead Redemption 2s gameplay was to make the player feel as though they are living in a world, instead of playing missions and watching cutscenes. A method used to achieve this was through the gang's moving camp, where the player can interact with other characters. The team ensured the characters maintained the same personality and mood from cutscene to gameplay to make the world feel more alive and realistic. Woody Jackson, who worked with Rockstar on the original game and Grand Theft Auto V, returned to compose Red Dead Redemption 2s original score. Red Dead Redemption 2 has three different types of score: narrative, which is heard during the missions in the game's story; interactive, when the player is roaming the open world or in multiplayer; and environmental, which includes campfire singing songs or a character playing music in the world. The game's music regularly reacts according to the player's decisions in the world. Jackson purchased several instruments from the Wrecking Crew featured on classic cowboy films. In total, over 110 musicians worked on the music for the game. Daniel Lanois produced the original vocal tracks for the game, collaborating with artists such as D'Angelo, Willie Nelson, Rhiannon Giddens, and Josh Homme. Director of music and audio Ivan Pavlovich engaged saxophone player Colin Stetson, experimental band Senyawa, and musician Arca to work on the score. Rockstar Games first teased Red Dead Redemption 2 on October 16–17, 2016, before the official announcement on October 18, 2016. Originally due for release in the second half of 2017, the game was delayed twice: first to Q1/Q2 2018, and later to October 26, 2018. According to Rockstar, the game required extra development time for "polish". To spur pre-order sales, Rockstar collaborated with several retail outlets to provide special edition versions of the game. A companion app, released alongside the game for Android and iOS devices, acts as a second screen wherein the player can view in-game items such as catalogs, journals, and a real-time mini-map. The game was released for Windows on November 5, 2019, and was a launch title for Stadia when the service launched on November 19, 2019. The Windows version has visual and technical improvements. Reception Critical response Red Dead Redemption 2 received "universal acclaim" from critics, according to review aggregator Metacritic. It is one of the highest-rated games on Metacritic, and the highest-rated PlayStation 4 and Xbox One game alongside Rockstar's Grand Theft Auto V. Reviewers praised the story, characters, open world, graphics, music, and level of detail. 
Matt Bertz of Game Informer described the game as "the biggest and most cohesive adventure Rockstar Games has ever created", and GamesRadars David Meikleham felt it "represents the current pinnacle of video game design". Keza MacDonald of The Guardian declared it "a landmark game" and "a new high water-mark for lifelike video game worlds"; IGNs Luke Reilly named it "one of the greatest games of the modern age". Peter Suderman, writing for The New York Times, considered Red Dead Redemption 2 as an example of video games as a work of art, comparing the game's abilities to "[tell] individual stories against the backdrop of national and cultural identity, deconstructing their genres while advancing the form" to the current state of film and television with similar works like The Godfather and The Sopranos. Regarding its narrative, Meikleham of GamesRadar called Red Dead Redemption 2 "perhaps the boldest triple-A game ever made", praising the unpredictability of the narrative and comparing its epilogue to The Last of Us (2013). The Guardians MacDonald praised the story's twists, applauding the writers' ability to feed smaller stories into the overall narrative. Nick Plessas of Electronic Gaming Monthly (EGM) noted the best stories "are to be found in the margins", discovered and written by the player. Game Informers Bertz felt the narrative rarely suffered from repetition, an impressive feat considering the game's scope, though expressed desire for more passive, quiet moments. Conversely, GameSpots Kallie Plagge was frustrated by the predictability later in the narrative though acknowledged its importance to Arthur's story. Alex Navarro of Giant Bomb felt the narrative suffered in its clichéd Native American portrayal and side missions. Some reviewers commented on the game's slow opening hours and its lengthy epilogue. EGMs Plessas found the journey of redemption for Arthur "far more redeeming" than John's in Red Dead Redemption, noting his sins heightened his sympathy for the character. Conversely, Eurogamers Martin Robinson considered Arthur less compelling than Marston, resulting in a confusing narrative. GameSpots Plagge felt the new characters contributed to the story's quality and Mike Williams of USgamer wrote they "feel like actual people" due to their varied personalities. IGNs Reilly praised the cultural variety and avoidance of caricatures, and Giant Bombs Navarro noted the characters possess humanity often lacking in other Rockstar games, particularly in the thoughtful portrayal of Arthur's internal conflicts. MacDonald of The Guardian found the performances led to more believable characters. Polygons Chris Plante found the political commentary shone when focusing on the Braithwaite and Gray families but considered the portrayal of Native American characters insensitive and confusing. Eirik Gumeny, writing for Polygon, praised the realistic and unfiltered depiction of tuberculosis, including the misguided and hostile reactions from others. Several critics considered Red Dead Redemption 2s open world among the greatest in video games; EGMs Plessas said it "pushes industry boundaries in both size and detail", and The Guardians MacDonald praised the imitation of real American landscapes. IGNs Reilly considered the world "broader, more beautiful, and more varied" than its predecessor's, due in part to how each environment feels alive. GameSpots Plagge felt compelled to explore the open world due to its variety, reactivity, and surprises. 
GamesRadars Meikleham called Red Dead Redemption 2 "the best looking video game of all time" with some of the most impressive lighting and weather systems, and USgamers Williams considered it one of the best-looking on PlayStation 4 and Xbox One. IGNs Reilly praised the lighting engine, facial animation, and level of granular detail. Game Informers Bertz lauded the attention to detail and found the world felt more alive due to "an unrivaled dynamic weather system, ambient sound effects, and the most ambitious ecology of flora and fauna ever seen in games". Several reviewers lauded the level of detail in all aspects of gameplay—EGMs Plessas felt the attention to detail led to deeper immersion—though some found the sheer amount of realism restricted opportunities and unnecessarily prolonged some animations. IGNs Reilly felt Arthur's movement did not feel cumbersome despite being "heavier" than Grand Theft Auto Vs protagonists, and found the intimate battles more exciting. Polygons Plante considered the conversation options limited but still an improvement over the violence of other action games. Eurogamers Robinson voiced frustration at the lack of freedom in some story missions. Some reviewers criticized the controls and found its button layout and user interface inconsistent and confusing. Red Dead Redemption 2s Windows release received "universal acclaim" according to Metacritic; it is one of the highest-rated PC games. PCGamesNs Sam White thought the graphics improvements made the open world "[look] the best it ever has". Destructoids Carter praised the addition of the Photo Mode. Sam Machkovech of Ars Technica felt the cutscenes animations did not scale well to higher frame rates but considered the gameplay far superior to console. Rock, Paper, Shotguns Matthew Castle lauded the adapted controls, particularly when painting targets in Dead Eye, though felt they took longer to familiarize oneself with. PC Gamers James Davenport found the first-person perspective superior on the Windows version due to the responsiveness of the mouse but noted the game crashed several times; Jeuxvideo.coms Jean-Kléber Lauret echoed similar criticisms, observing the graphical and technical enhancements required advanced hardware. Polygons Samit Sarkar criticized the port's technical issues and declared it unplayable at the time. One week after release, PCMags Tony Polanco said the technical issues had been mostly solved. Accolades Red Dead Redemption 2 won over 175 Game of the Year awards, receiving wins at the Australian Games Awards, Brazil Game Awards, Fun & Serious Game Festival, and Italian Video Game Awards, and from outlets such as 4Players, AusGamers, Complex, Digital Trends, Edge, Electronic Gaming Monthly, Gamereactor, GameSpot, The Guardian, Hot Press, news.com.au, The Telegraph, USgamer, and Vulture. On Metacritic, Red Dead Redemption 2 was the highest-rated game of 2018. The game was named among the best games of the 2010s by Entertainment.ie, The Hollywood Reporter, Metacritic, National Post, NME, Stuff, Thrillist, VG247, and Wired UK. At The Game Awards 2018, the game received eight nominations and won four awards: Best Audio Design, Best Narrative, Best Score/Music, and Best Performance for Clark as Arthur. At the 6th SXSW Gaming Awards, Red Dead Redemption 2 was named Trending Game of the Year and won Excellence in SFX and Technical Achievement. The game earned eight nominations at the 22nd Annual D.I.C.E. 
Awards, seven at the 19th Game Developers Choice Awards, and six at the 15th British Academy Games Awards. Sales Since the previous installment in the series was among the highest-reviewed and best-selling games of the seventh generation of video game consoles, many analysts believed Red Dead Redemption 2 would be one of the highest-selling games of 2018 and have a great effect on other game sales during the fourth quarter. After the game's announcement in October 2016, analyst Ben Schacter of Macquarie Research estimated it would sell 12 million copies in its first quarter, while analysts at Cowen and Company gave a "conservative" estimate of 15 million sales. In July 2018, industry analyst Mat Piscatella predicted Red Dead Redemption 2 would be the best-selling game of 2018, outselling other blockbuster titles such as Battlefield V, Call of Duty: Black Ops 4, and Fallout 76; some industry commentators noted frequent franchises like Assassin's Creed and Call of Duty were launching their 2018 entries—Odyssey and Black Ops 4, respectively—earlier than usual, predicting an avoidance of competition with Red Dead Redemption 2. Shortly before release in October 2018, Schacter estimated the game would sell 15 million copies in its first quarter, though noted investor expectations were at 20 million copies; Michael Pachter of Wedbush Securities predicted 25 million. Michael Olson of Piper Jaffray projected revenue between and in the first three days, while Doug Creutz of Cowen Inc. estimated between and . Red Dead Redemption 2 had the largest opening weekend in entertainment history, making over in revenue in three days, and over 17 million copies shipped in two weeks, exceeding the lifetime sales of Red Dead Redemption. Additionally, Red Dead Redemption 2 was the second-highest-grossing entertainment launch (behind Grand Theft Auto V) and set records for largest pre-orders, first-day sales, and three-day sales on the PlayStation Network. The share price for Rockstar's parent company, Take-Two Interactive, rose nine percent in the week after release. VentureBeat's Dean Takahashi noted the game likely broke even in its first week and, based on analyst estimates, would begin to earn a profit by December 2018. The game shipped 23 million copies in 2018, generating in revenue, and sales reached 29 million in 2019, 36 million in 2020, 43 million in 2021, 50 million in 2022, 61 million in 2023, and over 67 million by June 2024. By dollar sales, it was the best-selling game of the latter half of the 2010s, and the decade's seventh-best-selling overall. It is among the best-selling video games. In the United States, Red Dead Redemption 2 was the second-best-selling game of October 2018, behind Call of Duty: Black Ops 4. It was the nation's best-selling game in November, third-best-selling in December, and overall best-selling of the year. In 2019, it maintained its placement in the nation's top charts, and was the twelfth-best-selling game of the year. It remained in the charts for the first half of 2020. In the United Kingdom, Red Dead Redemption 2 was the best-selling retail game in its first week of release and the second-fastest-selling game of 2018 (behind FIFA 19). The opening week physical sales doubled its predecessor's, with 68 percent of sales from the PlayStation 4 version. Red Dead Redemption 2 was the third-fastest-selling non-FIFA game released in its generation, behind Call of Duty: Black Ops III and Call of Duty: Advanced Warfare.
In the United Kingdom, it was the second-best-selling game in 2018, fifth in 2019, eleventh in 2020, sixth in 2021, and ninth in 2022. Within its first week in Japan, the PlayStation 4 version sold 132,984 copies, placing it at number one on the all-format video game sales chart. In Australia, it was the best-selling game of 2018, and the fifteenth-best-selling of 2020. Worldwide, the Windows version sold 406,000 copies upon launch in November 2019, doubling to over one million after its release on Steam the following month. Red Dead Online The online multiplayer component to Red Dead Redemption 2, titled Red Dead Online, was released as a public beta on November 27, 2018, to players who owned a special edition of the base game, and then progressively opened to all owners. Players customize a character and are free to explore the environment alone or in a group. The game world features events in which up to 32 players can partake individually or with a posse group. As players complete activities throughout the game world, they receive experience points to raise their characters in rank and receive bonuses, thereby progressing in the game. Though Red Dead Online and Red Dead Redemption 2 share assets and gameplay, Rockstar viewed them as separate products with independent trajectories, reflected in its decision to launch the multiplayer title separately. Player progression in the public beta carried over when the beta ended on May 15, 2019. A standalone client for Red Dead Online was released on December 1, 2020, for PlayStation 4, Windows, and Xbox One. Post-release content was added to the game through free title updates. In July 2022, Rockstar announced Red Dead Online would not receive more major updates, instead focusing on smaller missions and the expansion of existing modes as development resources were withdrawn to focus on Grand Theft Auto VI. Controversies In February 2018, online technology publication Trusted Reviews published an article leaking several features due to be included in Red Dead Redemption 2, including a first-person perspective, and a battle royale mode in Red Dead Online. The information was obtained from a leaked document in August 2017, but the site had hesitated to publish the article as the claims were "unsubstantiated" until promotional material validated its legitimacy; the document was sent to other sites at the time. In November 2018, Trusted Reviews replaced the article with an apology, noting the "information was confidential" and should not have been published. In a settlement with Take-Two Interactive, Trusted Reviews agreed to donate () to charities chosen by Take-Two; Rockstar directed the funds be donated to the American Indian College Fund, the American Prairie Reserve, and the First Nations Development Institute. Neither Trusted Reviews nor Take-Two indicated any specific laws had been violated. Several journalists recognized the uniqueness of successful legal action against media outlets; Seth Barton of MCV/Develop called the outcome "an incredible development for games industry journalism" and felt it would result in hesitancy to leak information regarding Rockstar in future. Kotakus Keza MacDonald similarly described the events as "extraordinary" as it likely meant Take-Two argued the information was a trade secret and Trusted Reviews was unable to use a public interest defense; she added "it might prove to be influential" and prevent publications from leaking information in the future, even if obtained legally. 
Prior to the game's release, Dan Houser stated the team had been working 100-hour weeks "several times in 2018". Many sources interpreted this statement as "crunch time" for the entire development staff of the game, comparable to similar accusations made by wives of Rockstar San Diego employees in regards to the development of the game's predecessor. The following day, Rockstar clarified in a statement the work duration mentioned by Houser only affected the senior writing staff for Red Dead Redemption 2, and the duration had only been the case for three weeks during the entire development. Houser added the company would never expect or force any employee to work as long as was stated, and those staying late at the development studios were powered by their passion for the project. However, other Rockstar employees argued Houser's statements did not give an accurate picture of the "crunch-time culture" at the company many of its employees worked under, which included "mandatory" overtime and years-long periods of crunch. Due to the salary-based nature of employment contracts, many employees were not compensated for their overtime work and instead depended on year-end bonus payments that hinged on the sales performance of the game. Nonetheless, a sentiment echoed across many employee statements was the observation that working conditions had somewhat improved since development on the original Red Dead Redemption. By April 2020, several employees reported the company had made significant changes as a result of the publicity surrounding the work culture, and many were cautiously optimistic about Rockstar's future. In November 2018, YouTuber Shirrako posted several videos of his player character murdering a female suffragette NPC, including feeding her to an alligator and dropping her down a mineshaft. Critics noted the majority of comments on the videos were sexist and misogynistic. Shirrako claimed the actions were apolitical and he did not support the sexist comments but did not wish to censor them. Matt Leonard of GameRevolution called Shirrako's response "plain bullshit", noting he continued to post similar videos encouraging the same behavior. In response, YouTube suspended the channel for violation of their community guidelines, citing its graphic nature for shock purposes and for promoting violence. Shirrako protested the decision, claiming it was hypocritical as in-game violence against men did not receive the same response. YouTube restored the channel and designated an age restriction to the suffragette videos, commenting "the reviewer will be educated on this outcome and on how to avoid repeating this mistake". Some critics questioned if Rockstar was partly to blame for the behavior, as the game does not limit attacks on the suffragette as it does other characters, such as children; scholars Kristine Jørgensen and Torill Elvira Mortensen, writing in Games and Culture, acknowledged this concern, but recognized the responsibility ultimately lay with the player, and limiting attacks could be interpreted as both a political statement from Rockstar and a restriction on the player's freedom of expression. Writing for Public History Weekly, Moritz Hoffman noted the incident reflects a newer issue of open world games: granting freedom without penalties promotes disinhibition. In The Journal of the Gilded Age and Progressive Era, scholars Hilary Jane Locke and Thomas Mackay wrote it "points to a sharp contrast between the game's portrayal of Progressive Era politics ... 
and how some players have responded to its depictions thereof". Securitas AB, the parent company of the modern-day Pinkerton agency, issued a cease and desist notice to Take-Two Interactive on December 13, 2018, asserting Red Dead Redemption 2s use of the Pinkerton name and badge imagery was against their trademark and demanded royalties for each copy of the game sold or they would take legal action. Take-Two filed a complaint against Securitas on January 11, 2019, maintaining the Pinkerton name was strongly associated with the Wild West, and its use of the term did not infringe on the Pinkerton trademark. Take-Two sought a summary judgment to declare the use of Pinkerton in the game as allowed fair use. Game Informers Javy Gwaltney agreed with Take-Two's claims, questioning why Securitas had not targeted other works depicting the Pinkerton agency in the past; he felt "the company likely just wants a cut of [the game's] profits". In response to Take-Two's complaint, Pinkerton president Jack Zahran described the game's portrayal of Pinkertons as "baseless" and "inaccurate", noting Pinkerton employees would "have to explain to their young game players why Red Dead Redemption 2 encourages people to murder Pinkertons", but hoped the companies could come to an "amicable solution". By April 2019, Securitas withdrew its claims and Take-Two moved to withdraw its complaint. Legacy Critics agreed Red Dead Redemption 2 was among the best games of the eighth generation of video game consoles. GQs White described it as "a generation-defining release", and VG247s McKeand named it "a benchmark for other open world games to aspire to". IGN ranked the game as the third-best Xbox One game and eleventh-best PC and PlayStation 4 game. In November 2020, TechRadar listed it among the greatest games of the eighth generation; editor Gerald Lynch felt it set the bar for believable open world games. In December, GamesRadar+ ranked it the fifth-best game of the generation, noting it had already begun to influence the open-world and role-playing genres. Since its release, Red Dead Redemption 2 has been cited as one of the greatest video games ever made. In March 2019, Popular Mechanics ranked it 24th on its list of greatest games. In October, IGN added Red Dead Redemption 2 to its list of top 100 video games, ranked 62nd in 2019 and promoted to 8th in 2021; editor Luke Reilly praised its "uncompromising detail" and wrote it "stands shoulder-to-shoulder with Grand Theft Auto V as one of gaming's greatest open-world achievements". In July 2020, Dylan Haas of Mashable considered the game his second favorite of all time, citing its realism, world, characters, and narrative. In November 2021, GamesRadar+ ranked it 28th on its list of top 50 games, describing it as "one of the best sandbox games ever made". In April 2022, GamingBolts Ravi Sinha ranked Red Dead Redemption 2 the second-best game of all time, citing its characters, narrative, attention to detail, and visual fidelity, naming it Rockstar's "finest work". In September, USA Today ranked it 21st on its list of best games, praising Arthur as "one of the most likable protagonists in games" and describing the world as "the real star of the show". In May 2023, over 200 developers, journalists, and content creators surveyed by GQ ranked Red Dead Redemption 2 the 15th-best game; GQs Sam White and Robert Leedham called it "perhaps the greatest flex in video game history" which set a "benchmark for cinematic storytelling and attention to detail". 
Producer Eiji Aonuma said open world games like Red Dead Redemption 2 inspired developers of The Legend of Zelda: Tears of the Kingdom (2023). Red Dead Redemption 2 was referenced several times in the South Park episodes "Time to Get Cereal" and "Nobody Got Cereal?" in November 2018. Game footage was used in the first music video for "Old Town Road" by Lil Nas X in December, which scholars saw as the impact of the game's influence on Western culture and country music. In April 2022, Joe Meizies won Virtual Photographer of the Year at the London Games Festival for his virtual photography in Red Dead Redemption 2. Tombstone Redemption, a fan event organized by Kenney Palkow, was held in Tombstone, Arizona, on July 29–30, 2023, with an estimated 10,000 attendees, including fourteen cast members from the series. Tombstone was redressed to resemble the in-game Blackwater. The event returned the following year as Black Hills Redemption, held in Deadwood, South Dakota, on June 21–23, with twenty actors present as guests. In July 2021, a study published by the University of Exeter and Truro and Penwith College found Red Dead Redemption 2 players had an increased understanding of ecology and animal behavior; players were able to identify three more animals on average than other gamers. In late 2021, University of Tennessee professor Tore Olsson started teaching "Red Dead America", a class about United States history in the late nineteenth and early twentieth centuries, including the frontier myth, Jim Crow laws, settler colonialism, and women's suffrage, inspired by the lack of academic discourse surrounding the game's history. Olsson found the class attracted larger enrolments than other history subjects. He published a book about the topic, titled Red Dead's History, in August 2024; the audiobook is narrated by Roger Clark. Notes References External links 2018 video games Action-adventure games Cultural depictions of the Mafia Euphoria (software) games Fiction about bank robbery Fiction about infectious diseases Fiction about train robbery The Game Awards winners Game Developers Choice Award winners Golden Joystick Award winners Ku Klux Klan in popular culture Multiplayer and single-player video games Open-world video games PlayStation 4 games PlayStation 4 Pro enhanced games Revisionist Westerns Rockstar Advanced Game Engine games Rockstar Games games Stadia games Take-Two Interactive games Video game prequels Video games developed in Canada Video games developed in India Video games developed in the United Kingdom Video games developed in the United States Video games produced by Dan Houser Video games scored by Woody Jackson Video games set in 1899 Video games set in 1907 Video games set in the American frontier Video games set in the Caribbean Video games set on fictional islands Video games with time manipulation Video games written by Dan Houser Western (genre) video games Windows games Works about atonement Xbox One games Xbox One X enhanced games
Red Dead Redemption 2
Biology
9,150
76,998,238
https://en.wikipedia.org/wiki/Consent%20or%20pay
Consent-or-pay, also called pay-or-okay, is a compliance tactic used by certain companies, most notably Meta, to drive up the rates at which users consent to data processing under the European Union's General Data Protection Regulation (GDPR). It consists of presenting the user with a tracking consent notice, but only allowing a binary choice: either the user consents to the data processing, or they are required to pay to use the service, which is otherwise free to use if data processing is consented to. The tactic has been criticised by privacy advocates and non-governmental organisations such as NOYB and Wikimedia Europe, which claim that it is illegal under the GDPR. On 17 April 2024, the European Data Protection Board released a non-binding opinion stating that in most cases, consent-or-pay models do not constitute valid consent within the meaning of the GDPR. Background Under the GDPR, the processing of a natural person's personal data is only allowed under six lawful bases: consent, contractual necessity, legal obligation under EU or member state law, public interest, protection of vital interest of an individual, and the processor's legitimate interest. When the GDPR first came into force in 2018, Meta justified its processing of personal data by claiming that its terms of use constitute a contract under which the user consented to the processing of personal data. However, this was challenged by Max Schrems, an Austrian privacy activist, who successfully argued that contractual necessity was not a valid basis of data processing when it comes to personalised advertising. In response to this ruling, Meta changed its lawful basis for personal data processing from contractual necessity to legitimate interest, which was also found not to be a valid basis. Meta then changed its lawful basis to consent, but chose to implement it in a way where users who consented to personalised advertising could use the service for free, while those who did not were required to pay a monthly subscription fee to continue using the service. Critics of this consent model have called it "pay-or-okay", claiming that the monthly fee is disproportional and that users are not able to withdraw their consent to tracking as easily as it is given, which the GDPR requires. Massimiliano Gelmi, a data protection lawyer at NOYB, has stated that "The law is clear, withdrawing consent must be as easy as giving it in the first place. It is painfully obvious that paying €251,88 per year to withdraw consent is not as easy as clicking an 'Okay' button to accept the tracking." On 17 April 2024, the European Data Protection Board released a non-binding opinion stating that in most cases, consent-or-pay models do not constitute valid consent within the meaning of the GDPR. European Commission investigation On 1 July 2024, the European Commission announced that it had opened an investigation against Meta under the provisions of the Digital Markets Act (DMA), with the preliminary findings claiming that Meta's approach was not in compliance with the DMA, an assertion that Meta has disputed. Other users Although Meta has faced most of the scrutiny and criticism regarding the use of consent-or-pay, other companies have also utilised the tactic. 
The Austrian Data Protection Authority (Datenschutzbehörde) has found that Der Standard, a German-language newspaper, has acted unlawfully by using consent-or-pay on its site, while others, including Der Spiegel, Die Zeit, Heise, the Frankfurter Allgemeine Zeitung, the Kronen Zeitung, and T-Online, have been accused of doing the same. References Information privacy Data protection
Consent or pay
Engineering
758
6,596
https://en.wikipedia.org/wiki/Computer%20vision
Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. "Understanding" in this context signifies the transformation of visual images (the input to the retina) into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. The scientific discipline of computer vision is concerned with the theory behind artificial systems that extract information from images. Image data can take many forms, such as video sequences, views from multiple cameras, multi-dimensional data from a 3D scanner, 3D point clouds from LiDaR sensors, or medical scanning devices. The technological discipline of computer vision seeks to apply its theories and models to the construction of computer vision systems. Subdisciplines of computer vision include scene reconstruction, object detection, event detection, activity recognition, video tracking, object recognition, 3D pose estimation, learning, indexing, motion estimation, visual servoing, 3D scene modeling, and image restoration. Definition Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. "Computer vision is concerned with the automatic extraction, analysis, and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding." As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems. Machine vision refers to a systems engineering discipline, especially in the context of factory automation. In more recent times, the terms computer vision and machine vision have converged to a greater degree. History In the late 1960s, computer vision began at universities that were pioneering artificial intelligence. It was meant to mimic the human visual system as a stepping stone to endowing robots with intelligent behavior. In 1966, it was believed that this could be achieved through an undergraduate summer project, by attaching a camera to a computer and having it "describe what it saw". What distinguished computer vision from the prevalent field of digital image processing at that time was a desire to extract three-dimensional structure from images with the goal of achieving full scene understanding. Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, non-polyhedral and polyhedral modeling, representation of objects as interconnections of smaller structures, optical flow, and motion estimation. 
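Edge extraction, listed above among the foundations laid in the 1970s, reduces to convolving the image with small derivative filters and taking the gradient magnitude. The following NumPy sketch is a minimal, generic illustration using Sobel-style kernels on an invented toy image, not a reproduction of any particular historical system.

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude map of a 2-D grayscale image using Sobel-style kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                         # vertical gradient
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(patch * kx)
            gy[y, x] = np.sum(patch * ky)
    return np.hypot(gx, gy)

# Toy image: a bright square on a dark background gives strong responses along its border.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = sobel_edges(img)
print(edges[8, 16], edges[16, 16])  # strong response on the boundary, zero in the interior
```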
The next decade saw studies based on more rigorous mathematical analysis and quantitative aspects of computer vision. These include the concept of scale-space, the inference of shape from various cues such as shading, texture and focus, and contour models known as snakes. Researchers also realized that many of these mathematical concepts could be treated within the same optimization framework as regularization and Markov random fields. By the 1990s, some of the previous research topics became more active than others. Research in projective 3-D reconstructions led to better understanding of camera calibration. With the advent of optimization methods for camera calibration, it was realized that a lot of the ideas were already explored in bundle adjustment theory from the field of photogrammetry. This led to methods for sparse 3-D reconstructions of scenes from multiple images. Progress was made on the dense stereo correspondence problem and further multi-view stereo techniques. At the same time, variations of graph cut were used to solve image segmentation. This decade also marked the first time statistical learning techniques were used in practice to recognize faces in images (see Eigenface). Toward the end of the 1990s, a significant change came about with the increased interaction between the fields of computer graphics and computer vision. This included image-based rendering, image morphing, view interpolation, panoramic image stitching and early light-field rendering. Recent work has seen the resurgence of feature-based methods used in conjunction with machine learning techniques and complex optimization frameworks. The advancement of Deep Learning techniques has brought further life to the field of computer vision. The accuracy of deep learning algorithms on several benchmark computer vision data sets for tasks ranging from classification, segmentation and optical flow has surpassed prior methods. Related fields Solid-state physics Solid-state physics is another field that is closely related to computer vision. Most computer vision systems rely on image sensors, which detect electromagnetic radiation, which is typically in the form of either visible, infrared or ultraviolet light. The sensors are designed using quantum physics. The process by which light interacts with surfaces is explained using physics. Physics explains the behavior of optics which are a core part of most imaging systems. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Also, various measurement problems in physics can be addressed using computer vision, for example, motion in fluids. Neurobiology Neurobiology has greatly influenced the development of computer vision algorithms. Over the last century, there has been an extensive study of eyes, neurons, and brain structures devoted to the processing of visual stimuli in both humans and various animals. This has led to a coarse yet convoluted description of how natural vision systems operate in order to solve certain vision-related tasks. These results have led to a sub-field within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems at different levels of complexity. Also, some of the learning-based methods developed within computer vision (e.g. neural net and deep learning based image and feature analysis and classification) have their background in neurobiology. 
The Neocognitron, a neural network developed in the 1970s by Kunihiko Fukushima, is an early example of computer vision taking direct inspiration from neurobiology, specifically the primary visual cortex. Some strands of computer vision research are closely related to the study of biological vision—indeed, just as many strands of AI research are closely tied with research into human intelligence and the use of stored knowledge to interpret, integrate, and utilize visual information. The field of biological vision studies and models the physiological processes behind visual perception in humans and other animals. Computer vision, on the other hand, develops and describes the algorithms implemented in software and hardware behind artificial vision systems. An interdisciplinary exchange between biological and computer vision has proven fruitful for both fields. Signal processing Yet another field related to computer vision is signal processing. Many methods for processing one-variable signals, typically temporal signals, can be extended in a natural way to the processing of two-variable signals or multi-variable signals in computer vision. However, because of the specific nature of images, there are many methods developed within computer vision that have no counterpart in the processing of one-variable signals. Together with the multi-dimensionality of the signal, this defines a subfield in signal processing as a part of computer vision. Robotic navigation Robot navigation sometimes deals with autonomous path planning or deliberation for robotic systems to navigate through an environment. A detailed understanding of these environments is required to navigate through them. Information about the environment could be provided by a computer vision system, acting as a vision sensor and providing high-level information about the environment and the robot Visual computing Other fields Besides the above-mentioned views on computer vision, many of the related research topics can also be studied from a purely mathematical point of view. For example, many methods in computer vision are based on statistics, optimization or geometry. Finally, a significant part of the field is devoted to the implementation aspect of computer vision; how existing methods can be realized in various combinations of software and hardware, or how these methods can be modified in order to gain processing speed without losing too much performance. Computer vision is also used in fashion eCommerce, inventory management, patent search, furniture, and the beauty industry. Distinctions The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques that are used and developed in these fields are similar, something which can be interpreted as there is only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences, and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented. 
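As a rough, concrete illustration of the distinction drawn in the characterizations that follow (an image-to-image operation versus an image-to-description one), consider the minimal NumPy sketch below; the toy image and the 0.5 threshold are arbitrary choices for illustration only.

```python
import numpy as np

img = np.zeros((16, 16))
img[4:12, 4:12] = 0.6            # toy grayscale image with one bright region

# Image processing: a pixel-wise transform whose output is another image (contrast stretch).
stretched = (img - img.min()) / (img.max() - img.min())

# Computer-vision-style output: a symbolic description derived from the image.
bright = stretched > 0.5
description = {
    "contains_bright_region": bool(bright.any()),
    "bright_area_fraction": float(bright.mean()),
}
print(description)               # {'contains_bright_region': True, 'bright_area_fraction': 0.25}
```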
In image processing, the input is an image and the output is an image as well, whereas in computer vision, an image or a video is taken as an input and the output could be an enhanced image, an understanding of the content of an image or even behavior of a computer system based on such understanding. Computer graphics produces image data from 3D models, and computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines, e.g., as explored in augmented reality. The following characterizations appear relevant but should not be taken as universally accepted: Image processing and image analysis tend to focus on 2D images, how to transform one image to another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither requires assumptions nor produces interpretations about the image content. Computer vision includes 3D analysis from 2D images. This analyzes the 3D scene projected onto one or several images, e.g., how to reconstruct structure or other information about the 3D scene from one or several images. Computer vision often relies on more or less complex assumptions about the scene depicted in an image. Machine vision is the process of applying a range of technologies and methods to provide imaging-based automatic inspection, process control, and robot guidance in industrial applications. Machine vision tends to focus on applications, mainly in manufacturing, e.g., vision-based robots and systems for vision-based inspection, measurement, or picking (such as bin picking). This implies that image sensor technologies and control theory often are integrated with the processing of image data to control a robot and that real-time processing is emphasized by means of efficient implementations in hardware and software. It also implies that external conditions such as lighting can be and are often more controlled in machine vision than they are in general computer vision, which can enable the use of different algorithms. There is also a field called imaging which primarily focuses on the process of producing images, but sometimes also deals with the processing and analysis of images. For example, medical imaging includes substantial work on the analysis of image data in medical applications. Progress in convolutional neural networks (CNNs) has improved the accurate detection of disease in medical images, particularly in cardiology, pathology, dermatology, and radiology. Finally, pattern recognition is a field that uses various methods to extract information from signals in general, mainly based on statistical approaches and artificial neural networks. A significant part of this field is devoted to applying these methods to image data. Photogrammetry also overlaps with computer vision, e.g., stereophotogrammetry vs. computer stereo vision. Applications Applications range from tasks such as industrial machine vision systems which, say, inspect bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them. The computer vision and machine vision fields have significant overlap. Computer vision covers the core technology of automated image analysis which is used in many fields. 
Machine vision usually refers to a process of combining automated image analysis with other methods and technologies to provide automated inspection and robot guidance in industrial applications. In many computer-vision applications, computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Examples of applications of computer vision include systems for: Automatic inspection, e.g., in manufacturing applications; Assisting humans in identification tasks, e.g., a species identification system; Controlling processes, e.g., an industrial robot; Detecting events, e.g., for visual surveillance or people counting, e.g., in the restaurant industry; Interaction, e.g., as the input to a device for computer-human interaction; Monitoring agricultural crops, e.g., an open-source vision transformer model has been developed to help farmers automatically detect strawberry diseases with 98.4% accuracy; Modeling objects or environments, e.g., medical image analysis or topographical modeling; Navigation, e.g., by an autonomous vehicle or mobile robot; Organizing information, e.g., for indexing databases of images and image sequences; Tracking surfaces or planes in 3D coordinates for allowing Augmented Reality experiences. Medicine One of the most prominent application fields is medical computer vision, or medical image processing, characterized by the extraction of information from image data to diagnose a patient. An example of this is the detection of tumours, arteriosclerosis or other malign changes, and a variety of dental pathologies; measurements of organ dimensions, blood flow, etc. are another example. It also supports medical research by providing new information: e.g., about the structure of the brain or the quality of medical treatments. Applications of computer vision in the medical area also include enhancement of images interpreted by humans—ultrasonic images or X-ray images, for example—to reduce the influence of noise. Machine vision A second application area in computer vision is in industry, sometimes called machine vision, where information is extracted for the purpose of supporting a production process. One example is quality control, where details or final products are automatically inspected in order to find defects. One of the most prevalent fields for such inspection is the semiconductor wafer industry, in which every single wafer is measured and inspected for inaccuracies or defects to prevent unusable computer chips from reaching the market. Another example is measurement of the position and orientation of details to be picked up by a robot arm. Machine vision is also heavily used in agricultural processes to remove undesirable foodstuff from bulk material, a process called optical sorting. Military Military applications are probably one of the largest areas of computer vision. The obvious examples are the detection of enemy soldiers or vehicles and missile guidance. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data. Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene that can be used to support strategic decisions.
In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability. Autonomous vehicles One of the newer application areas is autonomous vehicles, which include submersibles, land-based vehicles (small robots with wheels, cars, or trucks), aerial vehicles, and unmanned aerial vehicles (UAV). The level of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles where computer-vision-based systems support a driver or a pilot in various situations. Fully autonomous vehicles typically use computer vision for navigation, e.g., for knowing where they are or mapping their environment (SLAM), and for detecting obstacles. It can also be used for detecting certain task-specific events, e.g., a UAV looking for forest fires. Examples of supporting systems are obstacle warning systems in cars, cameras and LiDAR sensors in vehicles, and systems for autonomous landing of aircraft. Several car manufacturers have demonstrated systems for autonomous driving of cars. There are ample examples of military autonomous vehicles ranging from advanced missiles to UAVs for recon missions or missile guidance. Space exploration is already being carried out with autonomous vehicles using computer vision, e.g., NASA's Curiosity and CNSA's Yutu-2 rover. Tactile feedback Materials such as rubber and silicone are being used to create sensors that allow for applications such as detecting microundulations and calibrating robotic hands. Rubber can be used to create a mold that can be placed over a finger; inside this mold are multiple strain gauges. The finger mold and sensors can then be placed on top of a small sheet of rubber containing an array of rubber pins. A user can then wear the finger mold and trace a surface. A computer can then read the data from the strain gauges and measure whether one or more of the pins are being pushed upward. If a pin is being pushed upward, the computer can recognize this as an imperfection in the surface. This sort of technology is useful in order to receive accurate data on imperfections on a very large surface. Another variation of this finger mold sensor is a sensor that contains a camera suspended in silicone. The silicone forms a dome around the outside of the camera, and embedded in the silicone are equally spaced point markers. These cameras can then be placed on devices such as robotic hands in order to allow the computer to receive highly accurate tactile data. Other application areas include: Support of visual effects creation for cinema and broadcast, e.g., camera tracking (match moving). Surveillance. Driver drowsiness detection Tracking and counting organisms in the biological sciences Typical tasks Each of the application areas described above employs a range of computer vision tasks; more or less well-defined measurement problems or processing problems, which can be solved using a variety of methods. Some examples of typical computer vision tasks are presented below. Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action.
This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Recognition The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. Different varieties of recognition problem are described in the literature. Object recognition (also called object classification) – one or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene. Blippar, Google Goggles, and LikeThat provide stand-alone programs that illustrate this functionality. Identification – an individual instance of an object is recognized. Examples include identification of a specific person's face or fingerprint, identification of handwritten digits, or the identification of a specific vehicle. Detection – the image data are scanned for specific objects along with their locations. Examples include the detection of an obstacle in the car's field of view and possible abnormal cells or tissues in medical images or the detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data which can be further analyzed by more computationally demanding techniques to produce a correct interpretation. Currently, the best algorithms for such tasks are based on convolutional neural networks. An illustration of their capabilities is given by the ImageNet Large Scale Visual Recognition Challenge; this is a benchmark in object classification and detection, with millions of images and 1000 object classes used in the competition. Performance of convolutional neural networks on the ImageNet tests is now close to that of humans. The best algorithms still struggle with objects that are small or thin, such as a small ant on the stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters (an increasingly common phenomenon with modern digital cameras). By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained classes, such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this with ease. Several specialized tasks based on recognition exist, such as: Content-based image retrieval – finding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative to a target image (give me all images similar to image X) by utilizing reverse image search techniques, or in terms of high-level search criteria given as text input (give me all images which contain many houses, are taken during winter and have no cars in them). Pose estimation – estimating the position or orientation of a specific object relative to the camera. An example application for this technique would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly line situation or picking parts from a bin.
Optical character recognition (OCR) – identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g. ASCII). A related task is reading of 2D codes such as data matrix and QR codes. Facial recognition – a technology that enables the matching of faces in digital images or video frames to a face database, which is now widely used for mobile phone facelock, smart door locking, etc. Emotion recognition – a subset of facial recognition, emotion recognition refers to the process of classifying human emotions. Psychologists caution, however, that internal emotions cannot be reliably detected from faces. Shape Recognition Technology (SRT) – used in people counter systems to differentiate human beings (head and shoulder patterns) from objects. Human activity recognition – deals with recognizing the activity from a series of video frames, such as whether the person is picking up an object or walking. Motion analysis Several tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene, or even of the camera that produces the images. Examples of such tasks are: Egomotion – determining the 3D rigid motion (rotation and translation) of the camera from an image sequence produced by the camera. Tracking – following the movements of a (usually) smaller set of interest points or objects (e.g., vehicles, objects, humans or other organisms) in the image sequence. This has vast industry applications as most high-running machinery can be monitored in this way. Optical flow – determining, for each point in the image, how that point is moving relative to the image plane, i.e., its apparent motion. This motion is a result of both how the corresponding 3D point is moving in the scene and how the camera is moving relative to the scene. Scene reconstruction Given one or (typically) more images of a scene, or a video, scene reconstruction aims at computing a 3D model of the scene. In the simplest case, the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model. The advent of 3D imaging that does not require motion or scanning, together with related processing algorithms, is enabling rapid advances in this field. Grid-based 3D sensing can be used to acquire 3D images from multiple angles. Algorithms are now available to stitch multiple 3D images together into point clouds and 3D models. Image restoration Image restoration comes into the picture when the original image is degraded or damaged by external factors such as incorrect lens positioning, transmission interference, low lighting, or motion blur, collectively referred to as noise. When the images are degraded or damaged, the information to be extracted from them also gets damaged. Therefore, we need to recover or restore the image as it was intended to be. The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest possible approach to noise removal is to apply various types of filters, such as low-pass filters or median filters. More sophisticated methods assume a model of how the local image structures look to distinguish them from noise. By first analyzing the image data in terms of the local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained compared to the simpler approaches.
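As a concrete illustration of the simple filtering approaches mentioned above, the following sketch applies a Gaussian (low-pass) filter and a median filter to a synthetic noisy image using NumPy and SciPy. It is a minimal, hypothetical example rather than a prescribed restoration pipeline; the image size, noise levels, and filter parameters are arbitrary demonstration values.

```python
import numpy as np
from scipy import ndimage

# Create a synthetic "clean" image: a bright square on a dark background.
clean = np.zeros((64, 64), dtype=float)
clean[16:48, 16:48] = 1.0

rng = np.random.default_rng(0)

# Simulate two common kinds of degradation.
gaussian_noisy = clean + rng.normal(0.0, 0.2, clean.shape)    # broadband sensor noise
salt_pepper = clean.copy()
mask = rng.random(clean.shape) < 0.05                         # 5% corrupted pixels
salt_pepper[mask] = rng.choice([0.0, 1.0], size=mask.sum())   # impulse noise

# A low-pass (Gaussian) filter suppresses sensor noise at the cost of blurring edges.
smoothed = ndimage.gaussian_filter(gaussian_noisy, sigma=1.5)

# A median filter handles impulse ("salt-and-pepper") noise well
# and preserves edges better than linear smoothing.
despeckled = ndimage.median_filter(salt_pepper, size=3)

# Crude quality check: mean absolute error against the clean image.
for name, img in [("gaussian_filter", smoothed), ("median_filter", despeckled)]:
    print(name, float(np.abs(img - clean).mean()))
```

The more sophisticated, structure-aware methods described above would instead adapt the filtering locally, using detected lines or edges, rather than applying one global filter.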
An example in this field is inpainting. System methods The organization of a computer vision system is highly application-dependent. Some systems are stand-alone applications that solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or if some part of it can be learned or modified during operation. Many functions are unique to the application. There are, however, typical functions that are found in many computer vision systems. Image acquisition – A digital image is produced by one or several image sensors, which, besides various types of light-sensitive cameras, include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands (gray images or colour images) but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or magnetic resonance imaging. Pre-processing – Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data in order to ensure that it satisfies certain assumptions implied by the method. Examples are: Re-sampling to ensure that the image coordinate system is correct. Noise reduction to ensure that sensor noise does not introduce false information. Contrast enhancement to ensure that relevant information can be detected. Scale space representation to enhance image structures at locally appropriate scales. Feature extraction – Image features at various levels of complexity are extracted from the image data. Typical examples of such features are: Lines, edges and ridges. Localized interest points such as corners, blobs or points. More complex features may be related to texture, shape, or motion. Detection/segmentation – At some point in the processing, a decision is made about which image points or regions of the image are relevant for further processing. Examples are: Selection of a specific set of interest points. Segmentation of one or multiple image regions that contain a specific object of interest. Segmentation of image into nested scene architecture comprising foreground, object groups, single objects or salient object parts (also referred to as spatial-taxon scene hierarchy), while the visual salience is often implemented as spatial and temporal attention. Segmentation or co-segmentation of one or multiple videos into a series of per-frame foreground masks while maintaining its temporal semantic continuity. High-level processing – At this step, the input is typically a small set of data, for example, a set of points or an image region, which is assumed to contain a specific object. The remaining processing deals with, for example: Verification that the data satisfies model-based and application-specific assumptions. Estimation of application-specific parameters, such as object pose or object size. Image recognition – classifying a detected object into different categories. Image registration – comparing and combining two different views of the same object. 
Decision making Making the final decision required for the application, for example: Pass/fail on automatic inspection applications. Match/no-match in recognition applications. Flag for further human review in medical, military, security and recognition applications. A minimal code sketch showing how these stages can fit together is given below, after the hardware overview. Image-understanding systems Image-understanding systems (IUS) include three levels of abstraction as follows: low level includes image primitives such as edges, texture elements, or regions; intermediate level includes boundaries, surfaces and volumes; and high level includes objects, scenes, or events. Many of these requirements are entirely topics for further research. The representational requirements in the designing of IUS for these levels are: representation of prototypical concepts, concept organization, spatial knowledge, temporal knowledge, scaling, and description by comparison and differentiation. While inference refers to the process of deriving new, not explicitly represented facts from currently known facts, control refers to the process that selects which of the many inference, search, and matching techniques should be applied at a particular stage of processing. Inference and control requirements for IUS are: search and hypothesis activation, matching and hypothesis testing, generation and use of expectations, change and focus of attention, certainty and strength of belief, inference and goal satisfaction. Hardware There are many kinds of computer vision systems; however, all of them contain these basic elements: a power source, at least one image acquisition device (camera, CCD sensor, etc.), a processor, and control and communication cables or some kind of wireless interconnection mechanism. In addition, a practical vision system contains software, as well as a display in order to monitor the system. Vision systems for indoor spaces, such as most industrial ones, contain an illumination system and may be placed in a controlled environment. Furthermore, a completed system includes many accessories, such as camera supports, cables, and connectors. Most computer vision systems use visible-light cameras passively viewing a scene at frame rates of at most 60 frames per second (usually far slower). A few computer vision systems use image-acquisition hardware with active illumination or something other than visible light or both, such as structured-light 3D scanners, thermographic cameras, hyperspectral imagers, radar imaging, lidar scanners, magnetic resonance imagers, side-scan sonar, synthetic aperture sonar, etc. Such hardware captures "images" that are then often processed using the same computer vision algorithms used to process visible-light images. While traditional broadcast and consumer video systems operate at a rate of 30 frames per second, advances in digital signal processing and consumer graphics hardware have made high-speed image acquisition, processing, and display possible for real-time systems on the order of hundreds to thousands of frames per second. For applications in robotics, fast, real-time video systems are critically important and often can simplify the processing needed for certain algorithms. When combined with a high-speed projector, fast image acquisition allows 3D measurement and feature tracking to be realized. Egocentric vision systems are composed of a wearable camera that automatically takes pictures from a first-person perspective. As of 2016, vision processing units are emerging as a new class of processors to complement CPUs and graphics processing units (GPUs) in this role.
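The sketch below strings the typical stages described above (acquisition, pre-processing, feature extraction, detection/segmentation, high-level processing, decision making) into a toy pass/fail inspection function using OpenCV. It is only an assumed arrangement for illustration, not a reference implementation; the file name, blur kernel, Canny thresholds, and area limit are placeholder values.

```python
import cv2

def inspect(path: str, max_defect_area: float = 50.0) -> bool:
    """Toy pass/fail inspection illustrating a typical processing chain."""
    # Image acquisition: here, simply loading a frame from disk.
    image = cv2.imread(path)
    if image is None:
        raise FileNotFoundError(path)

    # Pre-processing: grayscale conversion and noise reduction.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)

    # Feature extraction: edges as low-level image features.
    edges = cv2.Canny(denoised, 50, 150)

    # Detection/segmentation: group edge pixels into candidate regions.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # High-level processing and decision making: large regions count as defects.
    defects = [c for c in contours if cv2.contourArea(c) > max_defect_area]
    return len(defects) == 0  # True = pass, False = fail

if __name__ == "__main__":
    # "part.png" is a hypothetical input image of a manufactured part.
    print("pass" if inspect("part.png") else "fail")
```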
See also Chessboard detection Computational imaging Computational photography Computer audition Egocentric vision Machine vision glossary Space mapping Teknomo–Fernandez algorithm Vision science Visual agnosia Visual perception Visual system Lists Outline of computer vision List of emerging technologies Outline of artificial intelligence References Further reading External links USC Iris computer vision conference list Computer vision papers on the web – a complete list of papers of the most relevant computer vision conferences. Computer Vision Online – news, source code, datasets and job offers related to computer vision CVonline – Bob Fisher's Compendium of Computer Vision. British Machine Vision Association – supporting computer vision research within the UK via the BMVC and MIUA conferences, Annals of the BMVA (open-source journal), BMVA Summer School and one-day meetings Computer Vision Container, Joe Hoeller GitHub: Widely adopted open-source container for GPU accelerated computer vision applications. Used by researchers, universities, private companies, as well as the U.S. Gov't. Image processing Packaging machinery Articles containing video clips
Computer vision
Engineering
6,599
28,756
https://en.wikipedia.org/wiki/Stereochemistry
Stereochemistry, a subdiscipline of chemistry, studies the spatial arrangement of atoms that form the structure of molecules and their manipulation. The study of stereochemistry focuses on the relationships between stereoisomers, which are defined as having the same molecular formula and sequence of bonded atoms (constitution) but differing in the geometric positioning of the atoms in space. For this reason, it is also known as 3D chemistry—the prefix "stereo-" means "three-dimensionality". Stereochemistry applies to all kinds of compounds and ions, organic and inorganic species alike. Stereochemistry affects biological, physical, and supramolecular chemistry. Stereochemistry also concerns how spatial arrangement influences the reactivity of the molecules in question (dynamic stereochemistry). History In 1815, Jean-Baptiste Biot's observation of optical activity marked the beginning of the history of organic stereochemistry. He observed that organic molecules were able to rotate the plane of polarized light in a solution or in the gaseous phase. Despite Biot's discoveries, Louis Pasteur is commonly described as the first stereochemist, having observed in 1842 that salts of tartaric acid collected from wine production vessels could rotate the plane of polarized light, but that salts from other sources did not. This was the only physical property that differed between the two types of tartrate salts, which is due to optical isomerism. In 1874, Jacobus Henricus van 't Hoff and Joseph Le Bel explained optical activity in terms of the tetrahedral arrangement of the atoms bound to carbon. Kekulé explored tetrahedral models earlier, in 1862, but never published his work; Emanuele Paternò probably knew of these but was the first to draw and discuss three-dimensional structures, such as of 1,2-dibromoethane in the Giornale di Scienze Naturali ed Economiche in 1869. The term "chiral" was introduced by Lord Kelvin in 1904. Arthur Robertson Cushny, a Scottish pharmacologist, first provided a clear example in 1908 of a bioactivity difference between enantiomers of a chiral molecule, viz. (−)-adrenaline being twice as potent as the (±)-form as a vasoconstrictor, and in 1926 laid the foundation for chiral pharmacology/stereo-pharmacology (biological relations of optically isomeric substances). Later, in 1966, the Cahn-Ingold-Prelog nomenclature or Sequence rule was devised to assign absolute configuration to a stereogenic/chiral center (R- and S- notation) and extended to be applied across olefinic bonds (E- and Z- notation). Significance Cahn–Ingold–Prelog priority rules are part of a system for describing a molecule's stereochemistry. They rank the atoms around a stereocenter in a standard way, allowing unambiguous descriptions of their relative positions in the molecule. A Fischer projection is a simplified way to depict the stereochemistry around a stereocenter. Thalidomide example Stereochemistry has important applications in the field of medicine, particularly pharmaceuticals. An often cited example of the importance of stereochemistry relates to the thalidomide disaster. Thalidomide is a pharmaceutical drug, first prepared in 1957 in Germany, prescribed for treating morning sickness in pregnant women. The drug was discovered to be teratogenic, causing serious damage to early embryonic growth and development, leading to limb deformation in babies. Several proposed mechanisms of teratogenicity involve different biological functions for the (R)- and (S)-thalidomide enantiomers.
In the human body, however, thalidomide undergoes racemization: even if only one of the two enantiomers is administered as a drug, the other enantiomer is produced as a result of metabolism. Accordingly, it is incorrect to state that one stereoisomer is safe while the other is teratogenic. Thalidomide is currently used for the treatment of other diseases, notably cancer and leprosy. Strict regulations and controls have been implemented to avoid its use by pregnant women and prevent developmental deformities. This disaster was a driving force behind requiring strict testing of drugs before making them available to the public. In yet another example, the drug ibuprofen can exist as (R)- and (S)-isomers. Only the (S)-ibuprofen is active in reducing inflammation and pain. Types Atropisomers Atropisomerism derives from the inability to rotate about a bond, such as due to steric hindrance between functional groups on two sp2-hybridized carbon atoms. Usually atropisomers are chiral, and as such they are a form of axial chirality. Atropisomerism can be described as a form of conformational isomerism. Cis-Trans isomers Cis-trans isomers are often associated with alkene double bonds; examples are cis-pent-2-ene and trans-pent-2-ene. The more general E/Z nomenclature refers to the same concept as cis/trans isomerism and is especially useful for more complex compounds, for example (Z)-1-bromo-1,2-dichloroethene and (E)-1-bromo-1,2-dichloroethene. Diastereomers Diastereomers are stereoisomers that are not mirror images of each other. A common example of diastereomerism is when two compounds differ from each other by the (R)/(S) absolute configuration at some, but not all, corresponding stereocenters. Epimers are diastereomers that differ at exactly one such position. cis/trans isomerism is another type of diastereomeric relationship. Enantiomers Enantiomers are pairs of stereoisomers that are non-superposable mirror images of each other. Each member of the pair has the opposite (R)/(S) configuration at every corresponding stereocenter. Epimers Epimers are a subcategory of diastereomers that differ in absolute configuration at only one corresponding stereocenter. They are commonly found in sugar chemistry, where two sugars can differ by the configuration of a single carbon atom. Example: D-glucose and D-galactose are epimers, differing only at the C-4 position in their structure (see sugar numbering). See also Alkane stereochemistry Chiral resolution, which often involves crystallization Chirality (chemistry) (R/S, d/l) Chiral switch Skeletal formula, which describes how stereochemistry is denoted in skeletal formulae. Solid-state chemistry VSEPR theory Nuclear Overhauser effect, a method in nuclear magnetic resonance spectroscopy (NMR) employed to elucidate the stereochemistry of organic molecules References Chemistry Jacobus Henricus van 't Hoff 1874 in science
Stereochemistry
Physics,Chemistry
1,492
56,291,446
https://en.wikipedia.org/wiki/Geobacter%20argillaceus
Geobacter argillaceus is a non-spore-forming and motile bacterium from the genus Geobacter which has been isolated from kaolin clay. References Bacteria described in 2007 Thermodesulfobacteriota
Geobacter argillaceus
Biology
49
77,327,804
https://en.wikipedia.org/wiki/NGC%207716
NGC 7716 is an intermediate spiral galaxy located in the constellation Pisces. Its speed relative to the cosmic microwave background is 2,201 ± 26 km/s, which corresponds to a Hubble distance of 32.5 ± 2.3 Mpc (∼106 million ly). NGC 7716 was discovered by British astronomer John Herschel in 1831. The luminosity class of NGC 7716 is II and it has a broad HI line. According to the SIMBAD database, NGC 7716 is a candidate galaxy for the active galaxy classification. To date, twelve non-redshift measurements give a distance of 32.442 ± 5.854 Mpc (∼106 million ly), which is within the Hubble distance values. NGC 7716 group NGC 7716 is a member of a galaxy group of the same name. NGC 7716 group contains 5 members. The other galaxies in this group are NGC 7714, NGC 7715, UGC 12690 and UGC 12709. See also List of NGC objects (7001–7840) External links NGC 7716 at NASA/IPAC NGC 7716 at SIMBAD NGC 7716 at LEDA References 7716 Discoveries by John Herschel Unbarred spiral galaxies Active galaxies 12702 71883 Pisces (constellation)
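As a worked check of the quoted figures, the Hubble distance follows from Hubble's law, d = v / H0. The value of the Hubble constant adopted by the source is not stated; the one implied by the numbers above is roughly 67.7 km/s/Mpc, which is the assumption used below.

```latex
d \;\approx\; \frac{v}{H_0}
  \;=\; \frac{2201\ \mathrm{km\,s^{-1}}}{67.7\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}
  \;\approx\; 32.5\ \mathrm{Mpc}
  \;\approx\; 1.06\times 10^{8}\ \mathrm{ly}.
```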
NGC 7716
Astronomy
270
52,856,410
https://en.wikipedia.org/wiki/V%C3%A5g
A våg (plural våger) or vog is an old Scandinavian unit of mass. The standardized landsvåg, which was introduced in Norway with the new system of weights and measures in 1875, corresponded to three bismerpund. The våg was used in Eastern Norway, Western Norway, and Northern Norway, but it varied in weight. Previously, it was often reckoned as 72 marks. In Sunnmøre the våg was equivalent to three lispund, but in Sunnhordland it was reckoned as three spann or 90 marks. References Further reading Språkrådet: Åtte potter rømme, fire merker smør – Om gammalt mål og gammal vekt I. Språknytt 4 (2006). Units of mass Obsolete units of measurement
Våg
Physics,Mathematics
179
31,471,329
https://en.wikipedia.org/wiki/Frequency-dependent%20foraging%20by%20pollinators
Frequency-dependent foraging is defined as the tendency of an individual to selectively forage on a certain species or morph based on its relative frequency within a population. Specifically for pollinators, this refers to the tendency to visit a particular floral morph or plant species based on its frequency within the local plant community, even if nectar rewards are equivalent amongst different morphs. Pollinators that forage in a frequency-dependent manner will exhibit flower constancy for a certain morph, but the preferred floral type will be dependent on its frequency. Additionally, frequency-dependent foraging differs from density-dependent foraging as the latter considers the absolute number of certain morphs per unit area as a factor influencing pollinator choice. Although density of a morph will be related to its frequency, common morphs are still preferred when overall plant densities are high. Background Floral traits, such as corolla color, flower shape, size and scent, appear to have evolved primarily for the purpose of attracting pollinators and many pollinators have learned to associate these floral signals with the reward that is present there. As pollinators are essential in the process of pollen transfer (and therefore, reproductive success) of many angiosperms, visitation behavior will impose frequency-dependent selection on the flower morphs that they visit. If pollinators selectively visit a particular morph, this will cause this morph to increase in frequency, and may ultimately lead to the fixation of this phenotype, known as directional selection. Alternatively, if rare morphs are preferred, this should promote phenotypic diversity, known as balancing or stabilizing selection. Interest in frequency-dependent selection dates back to the time of Charles Darwin, who predicted that insects should demonstrate flower constancy and puzzled over the occurrence of deceptive orchid species. This phenomenon received little attention until the 1970s when Donald Levin suggested that one of the most important factors determining pollinator visitation behavior is the floral trait's frequency in the population relative to other floral elements. Since this time, attention has focussed on understanding how obligately pollinated, unrewarding species can persist as they offer pollinators no incentive to visit. Much less research has been conducted on frequency-dependent foraging on rewarding species, but experiments using bumblebees have illustrated that frequency likely plays a role in reproductive success of flowering plants. Experimental evidence Researchers studying frequency-dependent visitation behavior seek to understand if pollinator preference is strong enough to induce fixation of traits or to maintain floral polymorphisms observed in natural populations. Laboratory experiments use artificial flowers to test how pollinator preference varies with frequency. Typical experiments use two or more colored discs or artificial flowers (to represent flower morphs) that are arranged in various patterns and frequencies. It is predicted that if pollinators do not exhibit frequency-dependent foraging, morph preference will not correlate with the relative frequency of that morph. Instead, this preference may depend on some frequency-independent quality, such as an innate attraction toward a certain color. 
Bumblebees Laboratory experiments Frequency-dependent foraging has most often been observed and studied in bumblebees (Bombus) as they tend to forage for long periods of time without becoming satiated, making them ideal experimental subjects. Simple experiments using two morphs have revealed that after visiting many flowers (more than 100) bumblebees tend to prefer to visit the common morph when rewards associated with both morphs are equal. This pattern is consistent for a variety of nectar concentrations. An exception to this pattern occurs when one morph contains variable amounts of nectar. This reward variability tends to cause the strength of the observed frequency dependence to decrease. However, when both rare and common morphs are unrewarding, bumblebees tend to reverse their behavioral pattern and demonstrate rare morph preference. Even though these experiments demonstrate that bumblebees forage in a frequency-dependent manner, the strength of this response can be asymmetric for different colors. For example, experiments using blue and yellow discs to represent corolla colors demonstrated that, although bumblebees preferentially foraged on the most common morph when rewards were present, the threshold for switching to the common morph was different for both colors. Bumblebees exhibit an innate preference for blue corollas, as this color is very conspicuous to bees against green-colored backgrounds. It was observed that in order for bees to switch from blue flowers to yellow, the yellow-to-blue ratio had to be much higher than the ratio of blue-to-yellow flowers that were required for the opposite switch. In other words, bees would forage on blue flowers until morphs of this colour reached relatively lower frequencies compared to yellow flowers. However, this preference for blue was not as pronounced when both morphs contained high levels of nectar. Therefore, frequency-dependent preferences must be considered along-with frequency-independent preferences to truly understand the visitation behavior of pollinators. Additionally, when density of equally rewarding color morphs were manipulated, bumblebees still preferred to forage on the common morphs, even at high densities. Field experiments Experiments conducted in the field have yielded mixed results. Some studies have demonstrated that bumblebees prefer the relatively common corolla color, but in other studies there did not appear to be any observable pattern of bee visitation behavior. This discrepancy between laboratory and field studies may be due to the fact that laboratory studies use highly contrasting corolla colors and it is likely that color polymorphisms in the wild are not this distinct, making frequency-dependence weaker in natural settings. Additionally, in natural populations multiple traits that are attractive to pollinators may be genetically correlated with one another (pleiotropy), so looking at pollinator response to a single trait in isolation may not be appropriate under these circumstances. Also, frequency-dependent foraging is not apparent until many flowers have been visited (more than 100). Therefore, considering morph frequency within localized patches of flowers in natural settings may not be sufficient. Instead, morph frequency may need to be calculated over large spatial ranges to determine the extent to which pollinators are foraging in a frequency-dependent manner. 
Other insects Although studies of frequency-dependent foraging in other pollinator groups seems to be rare, at least one study has demonstrated that butterflies prefer to visit common corolla shapes. This observation was based on reduced seed set of rare morphs in field studies. Mechanisms Positive frequency-dependent foraging Foraging on common morphs will be beneficial if these common morphs are associated with a higher reward than rare morphs. However, if rare morphs have similar nectar quality, skipping over these equally rewarding flowers appears to be inconsistent with optimal foraging theory. Several hypotheses have been proposed to suggest how this visitation pattern is maintained. Search image hypothesis It has been observed that predators tend to select the most common morph in a population or species. The search image hypothesis proposes that an individual's sensory system becomes better able to detect a specific prey phenotype after recent experience with that same phenotype. It is clear that plant-pollinator interactions differ from predator-prey relationships, as it is beneficial to both the plant and animal for the pollinator to locate the plant. However, it has been suggested that cognitive constraints on short-term memory capabilities may limit pollinators from identifying and handling more than one floral type at a time, making plant-pollinator relationships theoretically similar to predator-prey relationships in regards to the ability to identify food sources. Although plant traits that have evolved to attract pollinators are not cryptic, corolla colors can be more or less conspicuous with the background and pollinators that are more efficient at detecting a particular morph will minimize their search time. Studies have demonstrated that the degree of frequency-dependence increases with the number of flowers visited, which suggests this is a learned response that develops gradually. Search rate hypotheses Alternative mechanisms, such as the optimal search rate hypothesis and the stare duration hypothesis both propose that there is a tradeoff between search time and the probability of detecting prey. It has been demonstrated that when both density and frequency were manipulated, the strength of the preference for the common morph does not weaken with increased overall density, even when colors that are not innately preferred are the common morph. These results are consistent with both of these search time hypotheses, as bees tend to decrease their speed travelling between flowers when density is high, and therefore, may be more efficient at recognizing less conspicuous yellow flowers at lower speeds. Switching attention hypothesis Studies on other organisms have provided evidence that foraging can occur in long runs, but this preference develops after only visiting a few morphs. When presented with two equally rewarding morphs, it has been demonstrated that an organism may select to exclusively forage on one morph for a variable amount of time, and then switch to the alternative morph and repetitively forage on this morph. Under this switching attention hypothesis, selectively foraging on the common morph can occur without invoking a learned response, as the probability of visiting a particular morph first increases as the relative frequency of that morph increases. In other words, it is likely pollinators will select common morphs first due to chance since they are more common and will continue to forage on these morphs during foraging bouts. 
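The switching attention idea described above can be sketched with a toy simulation: a forager picks a morph in proportion to its frequency in the population (so the common morph is more likely to be encountered first) and then stays on that morph for a fixed run before re-sampling. This is a hypothetical illustration written for this text, not a model taken from the cited studies; the morph frequency, run length, and visit count are arbitrary.

```python
import random

def simulate_visits(common_freq=0.8, n_visits=1000, run_length=10, seed=1):
    """Toy model: chance first encounter, then an exclusive run on that morph."""
    random.seed(seed)
    visits = {"common": 0, "rare": 0}
    current, remaining = None, 0
    for _ in range(n_visits):
        if remaining == 0:
            # Re-sample a morph in proportion to its frequency in the population.
            current = "common" if random.random() < common_freq else "rare"
            remaining = run_length
        visits[current] += 1
        remaining -= 1
    return visits

# Most runs land on the common morph purely by chance, without any learning,
# so long exclusive bouts on the common morph emerge from frequency alone.
print(simulate_visits())
```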
Negative frequency-dependent foraging Pollinators appear to forage in a negative frequency-dependent manner when flowers do not provide nectar rewards, likely to avoid unrewarding morphs. This behaviour results in disassortative mating between different morph types. However, it seems likely that deceptive species would have low reproductive success as pollinators would learn to avoid areas where only unrewarding species are present. Naive pollinators One hypothesis as to how unrewarding species can persist in the population is that they only receive visits from naive pollinators. As pollinators do not appear to be able to distinguish between rewarding and unrewarding flowers prior to landing, they need to make test visits so they can learn to avoid particular morph types. When a preferred rewarding morph type becomes locally depleted, pollinators may be initially attracted to unrewarding morphs if these morphs exploit signals that are innately attractive or closely mimic rewarding species. However, under this hypothesis, the pollinator should learn to associate this morph with no reward and consequently avoid it on future foraging bouts. Negative frequency-dependent selection A different hypothesis does not assume that only naive pollinators visit deceptive species. Instead, the negative reinforcement associated with visiting an unrewarding flower is assumed to be stored in short-term memory. This causes the pollinator to go to a different morph type on its next visit. In other words, if deceptive species were to occur at a low enough frequency that pollinators do not encounter them very often, it is unlikely they will have the opportunity to relocate this information to their long-term memory. Studies have shown that the number of flowers of an unrewarding morph type that are sampled depends on the frequency of those morphs within a population. For example, many species of obligately animal-pollinated, deceptive orchids that co-occur with rewarding flowers are only reproductively successful when they occur at low frequencies. These two hypotheses are not mutually exclusive, in that morph populations that are visited by naive pollinators are also likely to be found at low frequencies relative to rewarding morphs. Implications Regardless of the mechanism, pollinators foraging in a frequency-dependent manner on common morphs will lead to assortative mating between similar phenotypes. Additionally, rare morphs may be at a disadvantage if reproductive success is correlated with number of pollinator visits, and this may lead to higher rates of selfing and ultimately inbreeding depression, in self-compatible plants. The potential for a decrease in genetic diversity due to assortative mating can have negative implications. Climate change In response to climate change, plants may begin to flower earlier in the season due to regional aridification and a rise in mean global temperature. However, reproductive success of flowering plants that are obligately pollinated ultimately depends on a corresponding change in the timing of pollinator visitors. The earliest bloomers of any species will be rare since the majority of conspecific plants have not yet flowered. Since many pollinators prefer to forage on common phenotypes, the flowers that bloom earliest in the season may be skipped. This may lead to a constraint on plant flowering evolution and the inability of flowering plants to adapt to changing environmental conditions. 
Hybrid zones Additionally, positive frequency-dependent foraging may help maintain hybrid zones between closely related species. Hybrid zones generally contain a wide variety of phenotypes, including novel or extremely rare morphs. Since certain pollinators tend to prefer common morphs, there is a low probability that they will visit rare morphs in the hybrid zone, thus keeping gene flow between species relatively low. See also Flower constancy Pollination Frequency-dependent selection References Pollination Behavioral ecology
Frequency-dependent foraging by pollinators
Biology
2,694
77,421,081
https://en.wikipedia.org/wiki/NGC%205394
NGC 5394 is a barred spiral galaxy located in the constellation Canes Venatici. Its speed relative to the cosmic microwave background is 3,639 ± 14 km/s, which corresponds to a Hubble distance of 53.7 ± 3.8 Mpc (∼175 million ly). NGC 5394 was discovered by German-British astronomer William Herschel in 1787. The luminosity class of NGC 5394 is II and it has a broad HI line. It also contains regions of ionized hydrogen. It is also a Luminous Infrared Galaxy (LIRG). To date, one non-redshift-based measurement gives a distance of approximately 32.900 Mpc (∼107 million ly). This value is far outside the Hubble distance values. Note that the NASA/IPAC database uses the average of independent measurements, when they exist, to calculate the diameter of a galaxy. One supernova has been observed in NGC 5394: SN 2020aaxs (type Ib, mag. 17). Arp 84 NGC 5394 and NGC 5395 are a pair of gravitationally interacting galaxies that appear in Halton Arp's Atlas of Peculiar Galaxies under the designation Arp 84. Arp noted that NGC 5395 is a spiral with a high surface luminosity companion at the end of one of its arms. NGC 5395 group According to A.M. Garcia, NGC 5394 is part of a group of galaxies that has at least five members, the NGC 5395 group. The other galaxies are NGC 5341, NGC 5351, NGC 5395 and UGC 8806. See also NGC 646, another interacting galaxy with a similar shape List of NGC objects (5001–6000) External links NGC 5394 at NASA/IPAC NGC 5394 at SIMBAD NGC 5394 at LEDA NGC 5394 at SEDS NGC 5394 (DSS2) at WikiSky NGC 5394 (SDSS) at WikiSky NGC 5394 (GALEX) at WikiSky References Barred spiral galaxies Interacting galaxies 084 Canes Venatici Discoveries by William Herschel 049739 08898 IRAS catalogue objects 5394
NGC 5394
Astronomy
474
23,617,480
https://en.wikipedia.org/wiki/Salicylate%20poisoning
Salicylate poisoning, also known as aspirin poisoning, is the acute or chronic poisoning with a salicylate such as aspirin. The classic symptoms are ringing in the ears, nausea, abdominal pain, and a fast breathing rate. Early on, these may be subtle, while larger doses may result in fever. Complications can include swelling of the brain or lungs, seizures, low blood sugar, or cardiac arrest. While usually due to aspirin, other possible causes include oil of wintergreen and bismuth subsalicylate. Excess doses can be either on purpose or accidental. Small amounts of oil of wintergreen can be toxic. Diagnosis is generally based on repeated blood tests measuring aspirin levels and blood gases. While a type of graph has been created to try to assist with diagnosis, its general use is not recommended. In overdose maximum blood levels may not occur for more than 12 hours. Efforts to prevent poisoning include child-resistant packaging and a lower number of pills per package. Treatment may include activated charcoal, intravenous sodium bicarbonate with dextrose and potassium chloride, and dialysis. Giving dextrose may be useful even if the blood sugar is normal. Dialysis is recommended in those with kidney failure, decreased level of consciousness, blood pH less than 7.2, or high blood salicylate levels. If a person requires intubation, a fast respiratory rate may be required. The toxic effects of salicylates have been described since at least 1877. In 2004, more than 20,000 cases with 43 deaths were reported in the United States. About 1% of those with an acute overdose die, while chronic overdoses may have severe outcomes. Older people are at higher risks of toxicity for any given dose. Signs and symptoms Salicylate toxicity has potentially serious consequences, sometimes leading to significant morbidity and death. Patients with mild intoxication frequently have nausea and vomiting, abdominal pain, lethargy, ringing in the ears, and dizziness. More significant signs and symptoms occur in more severe poisonings and include high body temperature, fast breathing rate, respiratory alkalosis, metabolic acidosis, low blood potassium, low blood glucose, hallucinations, confusion, seizure, cerebral edema, and coma. The most common cause of death following an aspirin overdose is cardiopulmonary arrest usually due to pulmonary edema. High doses of salicylate can cause salicylate-induced tinnitus. Severity The severity of toxicity depends on the amount of aspirin taken. Pathophysiology High levels of salicylates stimulate peripheral chemoreceptors and the central respiratory centers in the medulla causing increased ventilation and respiratory alkalosis. The increased pH secondary to hyperventilation with respiratory alkalosis causes an increase in lipolysis and ketogenesis which causes the production of lactate and organic keto-acids (such as beta-hydroxybutyrate). The accumulation of these organic acids can cause an acidosis with an increased anion gap as well as a decreased buffering capacity of the body. Salicylate toxicity also causes an uncoupling of oxidative phosphorylation and a decrease in citric acid cycle activity in the mitochondria. This decrease in aerobic production of adenosine triphosphate (ATP) is accompanied by an increase in anaerobic production of ATP through glycolysis which leads to glycogen depletion and hypoglycemia. 
The inefficient ATP production through anaerobic metabolism causes the body to shift to a catabolism-predominant mode of energy production, which consists of increased oxygen consumption, increased heat production (often manifesting as sweating), liver glycogen utilization and increased carbon dioxide production. This increased catabolism accompanied by hyperventilation can lead to severe insensible water losses, dehydration and hypernatremia. Acute aspirin or salicylate overdose or poisoning can cause an initial respiratory alkalosis, though metabolic acidosis ensues thereafter. The acid-base, fluid, and electrolyte abnormalities observed in salicylate toxicity can be grouped into three broad phases: Phase I is characterized by hyperventilation resulting from direct respiratory center stimulation, leading to respiratory alkalosis and compensatory alkaluria. Potassium and sodium bicarbonate are excreted in the urine. This phase may last as long as 12 hours. Phase II is characterized by paradoxical aciduria in the presence of continued respiratory alkalosis; it occurs when sufficient potassium has been lost through the kidneys. This phase may begin within hours and may last 12–24 hours. Phase III is characterized by dehydration, hypokalemia, and progressive metabolic acidosis. This phase may begin 4–6 hours after ingestion in a young infant or 24 hours or more after ingestion in an adolescent or adult. Diagnosis The acutely toxic dose of aspirin is generally considered greater than 150 mg per kg of body mass. Moderate toxicity occurs at doses up to 300 mg/kg, severe toxicity occurs between 300 and 500 mg/kg, and a potentially lethal dose is greater than 500 mg/kg. Chronic toxicity may occur following doses of 100 mg/kg per day for two or more days. Monitoring of biochemical parameters such as electrolytes and solutes, liver and kidney function, urinalysis, and complete blood count is undertaken along with frequent checking of salicylate and blood sugar levels. Arterial blood gas assessments typically find respiratory alkalosis early in the course of the overdose due to hyperstimulation of the respiratory center, and may be the only finding in a mild overdose. An anion-gap metabolic acidosis occurs later in the course of the overdose, especially if it is a moderate to severe overdose, due to the increase in protons (acidic contents) in the blood. The diagnosis of poisoning usually involves measurement of plasma salicylate, the active metabolite of aspirin, by automated spectrophotometric methods. Plasma salicylate levels generally range from 30–100 mg/L (3–10 mg/dL) after usual therapeutic doses, 50–300 mg/L in patients taking high doses, and 700–1400 mg/L following acute overdose. Patients may undergo repeated testing until their peak plasma salicylate level can be estimated. Optimally, plasma levels should be assessed four hours after ingestion and then every two hours after that to allow calculation of the maximum level, which can then be used as a guide to the degree of toxicity expected. Patients may also be treated according to their individual symptoms. Prevention Efforts to prevent poisoning include child-resistant packaging and a lower number of pills per package. Treatment There is no antidote for salicylate poisoning. Initial treatment of an overdose involves resuscitation measures such as maintaining an adequate airway and adequate circulation followed by gastric decontamination by administering activated charcoal, which adsorbs the salicylate in the gastrointestinal tract.
Stomach pumping is no longer routinely used in the treatment of poisonings, but is sometimes considered if the patient has ingested a potentially lethal amount less than one hour before presentation. Inducing vomiting with syrup of ipecac is not recommended. Repeated doses of activated charcoal have been proposed to be beneficial in cases of salicylate poisoning, especially in ingestion of enteric coated and extended release salicylic acid formulations which are able to remain in the gastrointestinal (GI) tract for longer periods of time. Repeated doses of activated charcoal are also useful to re-adsorb salicylates in the GI tract that may have desorbed from the previous administration of activated charcoal. The initial dose of activated charcoal is most useful if given within 2 hours of initial ingestion. Contraindications to the use of activated charcoal include altered mental status (due to the risk of aspiration), GI bleeding (often due to salicylates) or poor gastric motility. Whole bowel irrigation using the laxative polyethylene glycol can be useful to induce the gastrointestinal elimination of salicylates, particularly if there is partial or diminished response to activated charcoal. Alkalinization of the urine and plasma, by giving a bolus of sodium bicarbonate then adding sodium bicarbonate to maintenance fluids, is an effective method to increase the clearance of salicylates from the body. Alkalinization of the urine causes salicylates to be trapped in renal tubules in their ionized form and then readily excreted in the urine. Alkalinization of the urine increases urinary salicylate excretion by 18 fold. Alkalinization of the plasma decreases the lipid soluble form of salicylates facilitating movement out of the central nervous system. Oral sodium bicarbonate is contra-indicated in salicylate toxicity as it can cause dissociation of salicylate tablets in the GI tract and subsequent increased absorption. Intravenous fluids Intravenous fluids containing dextrose such as dextrose 5% in water (D5W) are recommended to keep a urinary output between 1 and 1.5 millilitres per kilogram per hour. Sodium bicarbonate is given in a significant aspirin overdose (salicylate level greater than 35 mg/dL 6 hours after ingestion) regardless of the serum pH, as it enhances elimination of aspirin in the urine. It is given until a urine pH between 7.5 and 8.0 is achieved. Dialysis Hemodialysis can be used to enhance the removal of salicylate from the blood, usually in those who are severely poisoned. Examples of severe poisoning include people with high salicylate blood levels: 7.25 mmol/L (100 mg/dL) in acute ingestions or 40 mg/dL in chronic ingestions, significant neurotoxicity (agitation, coma, convulsions), kidney failure, pulmonary edema, or cardiovascular instability. Hemodialysis also has the advantage of restoring electrolyte and acid-base abnormalities while removing salicylate. Salicylic acid has a small size (low molecular mass), has a low volume of distribution (is more water soluble), has low tissue binding and is largely free (and not protein bound) at toxic levels in the body; all of which make it easily removable from the body by hemodialysis. 
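The salicylate concentrations in this article are quoted both in mass units (mg/dL or mg/L) and molar units (mmol/L). The conversion uses the molar mass of salicylic acid, about 138 g/mol; as a worked arithmetic check (not an additional clinical criterion), the 100 mg/dL threshold mentioned above corresponds to:

```latex
100\ \mathrm{mg/dL} = 1000\ \mathrm{mg/L}, \qquad
\frac{1000\ \mathrm{mg/L}}{138.12\ \mathrm{g/mol}} \approx 7.2\ \mathrm{mmol/L}.
```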
Indication for dialysis: Salicylate level higher than 90 mg/dL (6.5 mmol/L) Severe acid–base imbalance Severe cardiac toxicity Acute respiratory distress syndrome Cerebral involvement / neurological signs and symptoms Rising serum salicylate level despite alkalinization/multidose activated charcoal, or people in which standard approaches to treatment have failed Unable to tolerate fluids with fluid overload Epidemiology Acute salicylate toxicity usually occurs after an intentional ingestion by younger adults, often with a history of psychiatric disease or previous overdose, whereas chronic toxicity usually occurs in older adults who experience inadvertent overdose while ingesting salicylates therapeutically over longer periods of time. During the latter part of the 20th century, the number of poisonings from salicylates declined, mainly because of the increased popularity of other over-the-counter analgesics such as paracetamol (acetaminophen). Fifty-two deaths involving single-ingredient aspirin were reported in the United States in 2000; however, in all but three of these cases, the reason for the ingestion of lethal doses was intentional—predominantly suicidal. History Aspirin poisoning has been cited as a possible driver of the high mortality rate during the 1918 flu pandemic, which killed 50 to 100 million people. See also NSAID hypersensitivity reactions Reye syndrome Salicylate sensitivity References External links Aspirin Poisoning by drugs, medicaments and biological substances Wikipedia medicine articles ready to translate Wikipedia emergency medicine articles ready to translate
Salicylate poisoning
Environmental_science
2,522
12,536,291
https://en.wikipedia.org/wiki/Electrical%20Engineering%20Students%27%20European%20Association
The Electrical Engineering STudents' European assoCiation (EESTEC) is a nonprofit apolitical and non-governmental student organization for Electrical Engineering and Computer Science (EECS) students at universities, institutes and schools of technology in Europe awarding an engineering degree. As of March 2020, there were 48 current locations in EESTEC from 24 countries, although several other locations were active in EESTEC over the years. As a pre-professional organization, EESTEC puts a strong emphasis on the development of a general skillset, with soft-skill growth added to the mastery of the academic and professional skillset of the field. The organization aims to promote and develop international contacts and the exchange of ideas among EECS students through professional workshops, cultural student exchanges and publications. EESTEC was founded in 1986 in Eindhoven, Netherlands. The official seat moved several times until finally moving to Zurich, Switzerland in 2021, its current location. History Pre-EESTEC Discussions about the need for an international organization for electrical engineering students are dated back to 1958, when students from France and West Germany met in an attempt to form such a group. It was not, however, until five years later that a European association, called EURIELEC, was formed. Despite the success in its early years, EURIELEC was dissolved in 1972. Various attempts were made over the next twelve years, but no organization was able to form a sustainable structure to reconnect electrical engineering students in Europe. In 1984, the boards of three Dutch student guilds from ETV (Delft), Thor (Eindhoven) and Scintilla (Enschede), decided to try and reignite the interest of other European student associations in renewing the international student activities. They exchanged ideas with professional organizations such as IEEE, EUREL and SEFI, and in January 1985 wrote the first letter to all former EURIELEC member universities, inviting them to take part in a new international annual conference for electrical engineering students, which they later named EESTIC (Electrical Engineering STudents International Conference). The early years The inaugural gathering was held in Eindhoven, The Netherlands, between April 27 and May 3, 1986, and was attended by 50 students from 33 different cities in 17 different countries (Austria, Belgium, Czechoslovakia, England, Finland, France, Hungary, Italy, The Netherlands, Norway, Poland, Portugal, Spain, Sweden, Switzerland, West Germany and Yugoslavia). A meeting during a visit to the Peace Palace in The Hague on the last day of the conference is considered to be the founding of EESTEC. The delegates agreed on a list of 14 clauses, describing the structure and function of the newly formed organization. The official seat was assigned to Ghent, Belgium, and the concept of NatComs (National Committees) was introduced, as a single point of contact for each country. The resolutions included a change of the name to EESTEC (Electrical Engineering STudents European Conference), as with very minor exceptions, the involved countries were European. The first EESTEC newsletter was printed later that year. The following years saw a growth in the organization, as countries like Greece, Denmark, Bulgaria, Israel, Malta, GDR, Russia, and Romania were accepted and became active. Successful conferences induced a surge in student exchanges and workshops that filled the year with activity. 
Some key changes were made to the statutes during the 4th conference in Budapest in 1989. Although the EESTEC acronym was kept, the full name was changed to Electrical Engineering STudents European association to reflect the year-round activities. The annual meeting was also renamed from "Conference" to "Congress", and the first international board was elected, consisting of a chairman (Peter Zijlema from The Netherlands) and one vice-chairman (Pawel Karlowski from Poland). The structure of the international board was changed a year later in Zurich, as the second board was elected, with Filip Van Den Bossche from Belgium as chairman, and two vice-chairmen: Sigmar Lampe from West Germany, and Peter Stieger from Austria. Two new members for the third international board were elected during the following Congress in Vienna in 1991: Zsolt Berend from Hungary as chairman, and Yoed Nehoran from Israel as vice-chairman. Sigmar Lampe was reconfirmed and kept his vice-chairman position from the prior year. Just before the close of the Vienna Congress, the official seat of EESTEC was moved to Budapest, Hungary. Other noteworthy milestones EESTEC was officially incorporated in 1995, and the official seat was moved from Budapest to Zurich to facilitate international financial transactions, though it was moved again seven years later from Zurich to Delft, as part of an attempt to obtain financial support from the European Union. Also in 1995, the NatComs were eliminated, giving each LC (Local Committee) direct representation at the EESTEC activities, and its own levels of engagement. Alumni relations functions were added to the organization in 1998. Official collaboration with IEEE began in 2003, when a memorandum of understanding regarding joint international events was signed between the two organizations. Although the logo prepared for the conference in Nova Gorica, Yugoslavia was adopted as the EESTEC logo back in 1987, it was only recognized as the organization's official logo in 1996. In 2007, the EESTEC logo was recolored and simplified, removing the background grid and centering the S, to make the official logo what it is today. Hosts of the EESTEC congress EESTEC locations Aim The primary aim of EESTEC is to promote and develop international contacts between students and professionals. The exchange of ideas and experience among Electrical Engineering and Computer Science students is made possible through the activities of the association. EESTEC is also promoting international career and job opportunities for students. Based on Strategic Plan 2018 – 2023, EESTEC's aim is: The aim is to promote and develop international contacts and the exchange of ideas among the students of EECS. The Association shall try to achieve its aim through the following principal activities: Professional workshops on topics in the field of EECS Cultural student exchanges Publication and distribution of articles on technical subjects Other activities directed at achieving the aim Organizational structure EESTEC is composed of the following bodies: General Assembly The general assembly (GA) is the convergence of all congress participants, with the right to make governing decisions regarding EESTEC. It serves as the supreme decision-making body of the association and convenes at least once during the Congress, where each attending LC is granted one vote. The Board of the Association The EESTEC Board takes care of the administration of EESTEC throughout the year and is elected by the GA. 
There are five positions on the board: Chairperson, Treasurer, Vice-Chairperson for External Affairs, Vice-Chairperson for Internal Affairs, and Vice-Chairperson for Administrative Affairs. The concept of a Board of the Association was first introduced during the annual congress in 1989, with only a chairperson and a vice-chairperson. The composition of the board kept evolving through the years, first adding a second vice-chairperson (1990), then a treasurer (1995), and finally a webmaster (2001), a position which was subsequently transformed into the vice-chairperson for publications & administration (2004). In 2016, based on the archetypes of each position, the position of vice-chairperson for publications & administration was replaced by the position of Vice-Chairperson for Administrative Affairs. Supervisory Board The Supervisory Board (SB) is an independent advisory committee responsible for overseeing the work and financial actions taken by the Board. The Supervisory Board reports its observations to the Congress general assembly. The term Supervisory Board was introduced during the Autumn Congress in Essen in 2019; the previous term was Oversight Committee. International Bureau The international bureau (IB) serves as the record keeper of EESTEC and is responsible for archiving all the material data of the organization. Spring and Autumn Congress Organizing Committee The congress organizing committee (COC) is appointed at the end of a Congress and is tasked with organizing the following Congress, including hosting the attendees, presiding over the general meetings, and issuing the congress guide and congress report. There is a Spring Congress Organizing Committee (SCOC) and an Autumn Congress Organizing Committee (ACOC); at each Congress, the organizing committee of the next one is elected by the GA. Membership As a supraorganization, membership in EESTEC is open to electrical engineering and computer science student groups in universities, institutes, and schools of technology. Most of the groups seeking membership are existing student organizations in such programs, yet in many cases students form a new group specifically to gain membership in EESTEC. There are three graduated levels of engagement in EESTEC: Observer is a status granted by the Board of the Association to a student group that is interested in EESTEC. It is the first step in joining EESTEC as an official member. JLC (Junior Local Committee) status can be granted by the Board of the Association after the Observer is legally recognized by its university. LC (Local Committee) status, full membership of EESTEC, is reached when a JLC successfully organizes at least one official EESTEC Event. This level requires continuous activity in the organization. Activities Workshops The Electrical Engineering and Computer Science aspects of EESTEC are expressed through workshops, the most important activity of the association. During a project week, lectures are presented by specialists from industry and universities. Discussions are also held in small group sessions. Topics for workshops are mainly chosen from technologies in Electrical Engineering and Computer Science, economics or soft skills. An EESTEC Workshop is a professional event combined with social activities. Exchanges During an exchange, a student can visit another city for a week. During these multilateral meetings the participants gain awareness of foreign university life, industry and cultural aspects.
Lykeion Lykeion was an EESTEC online portal connecting students, companies, and universities directly. Its aim was to give students the opportunity to easily search for internships, jobs, and Bachelor, Master and PhD study programs. Training EESTEC has its own training system aimed at developing the skills of its members. For this, members are trained to become EESTEC Trainers. ECM (EESTEC Chairpersons' Meeting) ECM was a meeting where the chairpersons of all LCs, JLCs, and Observers participated to share experiences, contribute to the future development of EESTEC, and be coached on how to run a Local Committee. ECM lasted five days, offering working sessions and discussions about organizing events and cooperation at the international level, and had around 50 participants. The first was held in Istanbul in 2006 and the last in Sarajevo in 2015; ECM was succeeded by the Autumn Congress. Congress EESTEC has an annual meeting with approximately 110–150 Electrical Engineering and Computer Science students representing their local EESTEC groups. It is the most important event of the year. Its main purpose is the discussion of current internal and external affairs, as well as the goals and plans for the upcoming year. An important part of the Congress is the election of the new board and bodies of the organization. Aside from the general meeting, workshops and training sessions are also held. The first Autumn Congress was held in Belgrade in 2016, with the idea of working more closely on the education of members by providing as many working sessions as possible. From 2017, when the first Spring Congress was organized in Ljubljana, the Spring Congress became the larger of the two, with the election of the new Board as its main item. The Board-in-Office would follow the work of the Board-Elect from Spring until the end of the Autumn Congress. Open Day Since 2009 an important part of Congress has been the Open Day event. It is a fair where students represent their own city and give information on the Master and PhD study programs at their university. Companies have their own booths with information on the jobs, internships, and technology they are offering. See also EUREL (Convention of National Associations of Electrical Engineers of Europe) IEEE (Institute of Electrical and Electronics Engineers) EURIELEC (European Association of Electrical Engineering Students) Notes References External links Electrical engineering organizations Engineering education European student organizations Student organizations established in 1986 1986 establishments in the Netherlands
Electrical Engineering Students' European Association
Engineering
2,496
43,178,976
https://en.wikipedia.org/wiki/Lanthanum%20aluminate
Lanthanum aluminate is an inorganic compound with the formula LaAlO3, often abbreviated as LAO. It is an optically transparent ceramic oxide with a distorted perovskite structure. Properties Crystalline LaAlO3 has a relatively high relative dielectric constant of ~25. LAO's crystal structure is a rhombohedral distorted perovskite with a pseudocubic lattice parameter of 3.787 angstroms at room temperature (although one source claims the lattice parameter is 3.82). Polished single crystal LAO surfaces show twin defects visible to the naked eye. Uses Epitaxial thin films Epitaxially grown thin films of LAO can serve various purposes for correlated electrons heterostructures and devices. LAO is sometimes used as an epitaxial insulator between two conductive layers. Epitaxial LAO films can be grown by several methods, most commonly by pulsed laser deposition (PLD) and molecular beam epitaxy (MBE). LAO-STO interfaces The most important and common use for epitaxial LAO is at the lanthanum aluminate-strontium titanate interface. In 2004, it was discovered that when 4 or more unit cells of LAO are epitaxially grown on strontium titanate (SrTiO3, STO), a conductive 2-dimensional layer is formed at their interface. Individually, LaAlO3 and SrTiO3 are non-magnetic insulators, yet LaAlO3/SrTiO3 interfaces exhibit electrical conductivity, superconductivity, ferromagnetism, large negative in-plane magnetoresistance, and giant persistent photoconductivity. The study of how these properties emerge at the LaAlO3/SrTiO3 interface is a growing area of research in condensed matter physics. Substrates Single crystals of lanthanum aluminate are commercially available as a substrate for the epitaxial growth of perovskites, and particularly for cuprate superconductors. Non-epitaxial thin films Thin films of lanthanum aluminate were considered as candidate materials for high-k dielectrics in the early-mid 2000s. Despite their attractive relative dielectric constant of ~25, they were not stable enough in contact with silicon at the relevant temperatures (~1000 °C). See also Lanthanum aluminate-strontium titanate interface High-k dielectrics LSAT (oxide) Perovskite structure References Lanthanum compounds Aluminates Inorganic compounds Perovskites
Lanthanum aluminate
Chemistry
533
72,048
https://en.wikipedia.org/wiki/X-ray%20fluorescence
X-ray fluorescence (XRF) is the emission of characteristic "secondary" (or fluorescent) X-rays from a material that has been excited by being bombarded with high-energy X-rays or gamma rays. The phenomenon is widely used for elemental analysis and chemical analysis, particularly in the investigation of metals, glass, ceramics and building materials, and for research in geochemistry, forensic science, archaeology and art objects such as paintings. Underlying physics When materials are exposed to short-wavelength X-rays or to gamma rays, ionization of their component atoms may take place. Ionization consists of the ejection of one or more electrons from the atom, and may occur if the atom is exposed to radiation with an energy greater than its ionization energy. X-rays and gamma rays can be energetic enough to expel tightly held electrons from the inner orbitals of the atom. The removal of an electron in this way makes the electronic structure of the atom unstable, and electrons in higher orbitals "fall" into the lower orbital to fill the hole left behind. In falling, energy is released in the form of a photon, the energy of which is equal to the energy difference of the two orbitals involved. Thus, the material emits radiation, which has energy characteristic of the atoms present. The term fluorescence is applied to phenomena in which the absorption of radiation of a specific energy results in the re-emission of radiation of a different energy (generally lower). Characteristic radiation Each element has electronic orbitals of characteristic energy. Following removal of an inner electron by an energetic photon provided by a primary radiation source, an electron from an outer shell drops into its place. There are a limited number of ways in which this can happen, as shown in Figure 1. The main transitions are given names: an L→K transition is traditionally called Kα, an M→K transition is called Kβ, an M→L transition is called Lα, and so on. Each of these transitions yields a fluorescent photon with a characteristic energy equal to the difference in energy of the initial and final orbital. The wavelength of this fluorescent radiation can be calculated from Planck's Law: λ = hc/E, where h is the Planck constant, c is the speed of light and E is the energy of the emitted photon. The fluorescent radiation can be analysed either by sorting the energies of the photons (energy-dispersive analysis) or by separating the wavelengths of the radiation (wavelength-dispersive analysis). Once sorted, the intensity of each characteristic radiation is directly related to the amount of each element in the material. This is the basis of a powerful technique in analytical chemistry. Figure 2 shows the typical form of the sharp fluorescent spectral lines obtained in the wavelength-dispersive method (see Moseley's law). Primary radiation sources In order to excite the atoms, a source of radiation is required, with sufficient energy to expel tightly held inner electrons. Conventional X-ray generators, based on electron bombardment of a heavy metal (e.g. tungsten or rhodium) target, are most commonly used, because their output can readily be "tuned" for the application, and because higher power can be deployed relative to other techniques. X-ray generators in the range 20–60 kV are used, which allow excitation of a broad range of atoms. The continuous spectrum consists of "bremsstrahlung" radiation: radiation produced when high-energy electrons passing through the tube are progressively decelerated by the material of the tube anode (the "target"). A typical tube output spectrum is shown in Figure 3.
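As a numerical aside on the relations above (a sketch only: the Planck relation is the one given in the text, the tube voltages are those quoted, and the 8 keV characteristic-line energy is an assumed round value), the short-wavelength cut-off of the tube's bremsstrahlung continuum and the wavelength of any characteristic line both follow from E = hc/λ:

# Sketch: photon energy to wavelength via E = h*c/lambda, applied to (a) the
# short-wavelength (Duane-Hunt) limit of an X-ray tube, where the photon
# energy cannot exceed e*V, and (b) an assumed 8 keV characteristic line.
H = 6.626e-34          # Planck constant, J*s
C = 2.998e8            # speed of light, m/s
E_CHARGE = 1.602e-19   # elementary charge, C

def wavelength_nm(energy_keV):
    return H * C / (energy_keV * 1e3 * E_CHARGE) * 1e9

for kilovolts in (20, 60):             # tube voltages quoted in the text
    print(f"{kilovolts} kV tube: continuum cut-off ~{wavelength_nm(kilovolts):.3f} nm")
print(f"8.0 keV line: wavelength ~{wavelength_nm(8.0):.3f} nm")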
For portable XRF spectrometers, a copper target is usually bombarded with high-energy electrons, which are produced either by laser impact or by pyroelectric crystals. Alternatively, gamma ray sources, based on radioactive isotopes (such as 109Cd, 57Co, 55Fe, 238Pu and 241Am) can be used without the need for an elaborate power supply, allowing for easier use in small, portable instruments. When the energy source is a synchrotron or the X-rays are focused by an optic like a polycapillary, the X-ray beam can be very small and very intense. As a result, atomic information on the sub-micrometer scale can be obtained. Dispersion In energy-dispersive analysis, the fluorescent X-rays emitted by the material sample are directed into a solid-state detector which produces a "continuous" distribution of pulses, the voltages of which are proportional to the incoming photon energies. This signal is processed by a multichannel analyzer (MCA) which produces an accumulating digital spectrum that can be processed to obtain analytical data. In wavelength-dispersive analysis, the fluorescent X-rays emitted by the sample are directed into a diffraction grating-based monochromator. The diffraction grating used is usually a single crystal. By varying the angle of incidence and take-off on the crystal, a small X-ray wavelength range can be selected. The wavelength obtained is given by Bragg's law: nλ = 2d sin(θ), where d is the spacing of atomic layers parallel to the crystal surface, θ is the angle of incidence on the crystal and n is the order of reflection. Detection In energy-dispersive analysis, dispersion and detection are a single operation, as already mentioned above. Proportional counters or various types of solid-state detectors (PIN diode, Si(Li), Ge(Li), silicon drift detector SDD) are used. They all share the same detection principle: An incoming X-ray photon ionizes a large number of detector atoms with the amount of charge produced being proportional to the energy of the incoming photon. The charge is then collected and the process repeats itself for the next photon. Detector speed is obviously critical, as all charge carriers measured have to come from the same photon to measure the photon energy correctly (peak length discrimination is used to eliminate events that seem to have been produced by two X-ray photons arriving almost simultaneously). The spectrum is then built up by dividing the energy spectrum into discrete bins and counting the number of pulses registered within each energy bin. EDXRF detector types vary in resolution, speed and the means of cooling (a low number of free charge carriers is critical in the solid state detectors): proportional counters with resolutions of several hundred eV cover the low end of the performance spectrum, followed by PIN diode detectors, while the Si(Li), Ge(Li) and SDDs occupy the high end of the performance scale. In wavelength-dispersive analysis, the single-wavelength radiation produced by the monochromator is passed into a chamber containing a gas that is ionized by the X-ray photons. A central electrode is charged at (typically) +1700 V with respect to the conducting chamber walls, and each photon triggers a pulse-like cascade of current across this field. The signal is amplified and transformed into an accumulating digital count. These counts are then processed to obtain analytical data. X-ray intensity The fluorescence process is inefficient, and the secondary radiation is much weaker than the primary beam.
Furthermore, the secondary radiation from lighter elements is of relatively low energy (long wavelength) and has low penetrating power, and is severely attenuated if the beam passes through air for any distance. Because of this, for high-performance analysis, the path from tube to sample to detector is maintained under vacuum (around 10 Pa residual pressure). This means in practice that most of the working parts of the instrument have to be located in a large vacuum chamber. The problems of maintaining moving parts in vacuum, and of rapidly introducing and withdrawing the sample without losing vacuum, pose major challenges for the design of the instrument. For less demanding applications, or when the sample is damaged by a vacuum (e.g. a volatile sample), a helium-swept X-ray chamber can be substituted, with some loss of low-Z (Z = atomic number) intensities. Chemical analysis The use of a primary X-ray beam to excite fluorescent radiation from the sample was first proposed by Glocker and Schreiber in 1928. Today, the method is used as a non-destructive analytical technique, and as a process control tool in many extractive and processing industries. In principle, the lightest element that can be analysed is beryllium (Z = 4), but due to instrumental limitations and low X-ray yields for the light elements, it is often difficult to quantify elements lighter than sodium (Z = 11), unless background corrections and very comprehensive inter-element corrections are made. Energy dispersive spectrometry In energy-dispersive spectrometers (EDX or EDS), the detector allows the determination of the energy of the photon when it is detected. Detectors historically have been based on silicon semiconductors, in the form of lithium-drifted silicon crystals, or high-purity silicon wafers. Si(Li) detectors These consist essentially of a 3–5 mm thick silicon junction type p-i-n diode (same as PIN diode) with a bias of −1000 V across it. The lithium-drifted centre part forms the non-conducting i-layer, where Li compensates the residual acceptors which would otherwise make the layer p-type. When an X-ray photon passes through, it causes a swarm of electron-hole pairs to form, and this causes a voltage pulse. To obtain sufficiently low conductivity, the detector must be maintained at low temperature, and liquid-nitrogen cooling must be used for the best resolution. With some loss of resolution, the much more convenient Peltier cooling can be employed. Wafer detectors More recently, high-purity silicon wafers with low conductivity have become routinely available. Cooled by the Peltier effect, this provides a cheap and convenient detector, although the liquid nitrogen cooled Si(Li) detector still has the best resolution (i.e. ability to distinguish different photon energies). Amplifiers The pulses generated by the detector are processed by pulse-shaping amplifiers. It takes time for the amplifier to shape the pulse for optimum resolution, and there is therefore a trade-off between resolution and count-rate: long processing time for good resolution results in "pulse pile-up" in which the pulses from successive photons overlap. Multi-photon events are, however, typically more drawn out in time (photons did not arrive exactly at the same time) than single photon events and pulse-length discrimination can thus be used to filter most of these out. Even so, a small number of pile-up peaks will remain and pile-up correction should be built into the software in applications that require trace analysis. 
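A rough way to put numbers on the resolution versus count-rate trade-off just described (a sketch under the common assumption of Poisson-distributed photon arrivals; the shaping times and count rates below are illustrative, not from the source):

# Sketch: probability that at least one further photon arrives within the
# pulse-shaping time of a given photon, i.e. the fraction of events affected
# by pile-up, assuming Poisson-distributed arrivals.
import math

def pileup_fraction(count_rate_per_s, shaping_time_s):
    return 1.0 - math.exp(-count_rate_per_s * shaping_time_s)

for shaping_us in (1, 10, 25):          # illustrative shaping times
    for rate in (1e4, 1e5):             # illustrative input count rates (cps)
        frac = pileup_fraction(rate, shaping_us * 1e-6)
        print(f"{shaping_us:>2} us shaping, {rate:.0e} cps: {frac:.1%} piled up")
# Longer shaping (better resolution) or higher count rates mean more pile-up,
# which is why the input rate is kept low enough in practice.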
To make the most efficient use of the detector, the tube current should be reduced to keep multi-photon events (before discrimination) at a reasonable level, e.g. 5–20%. Processing Considerable computer power is dedicated to correcting for pulse-pile up and for extraction of data from poorly resolved spectra. These elaborate correction processes tend to be based on empirical relationships that may change with time, so that continuous vigilance is required in order to obtain chemical data of adequate precision. Digital pulse processors are widely used in high performance nuclear instrumentation. They are able to effectively reduce pile-up and base line shifts, allowing for easier processing. A low pass filter is integrated, improving the signal to noise ratio. The Digital Pulse Processor requires a significant amount of energy to run, but it provides precise results. Usage EDX spectrometers are different from WDX spectrometers in that they are smaller, simpler in design and have fewer engineered parts, however the accuracy and resolution of EDX spectrometers are lower than for WDX. EDX spectrometers can also use miniature X-ray tubes or gamma sources, which makes them cheaper and allows miniaturization and portability. This type of instrument is commonly used for portable quality control screening applications, such as testing toys for lead (Pb) content, sorting scrap metals, and measuring the lead content of residential paint. On the other hand, the low resolution and problems with low count rate and long dead-time makes them inferior for high-precision analysis. They are, however, very effective for high-speed, multi-elemental analysis. Field Portable XRF analysers currently on the market weigh less than 2 kg, and have limits of detection on the order of 2 parts per million of lead (Pb) in pure sand. Using a Scanning Electron Microscope and using EDX, studies have been broadened to organic based samples such as biological samples and polymers. Wavelength dispersive spectrometry In wavelength dispersive spectrometers (WDX or WDS), the photons are separated by diffraction on a single crystal before being detected. Although wavelength dispersive spectrometers are occasionally used to scan a wide range of wavelengths, producing a spectrum plot as in EDS, they are usually set up to make measurements only at the wavelength of the emission lines of the elements of interest. This is achieved in two different ways: "Simultaneous" spectrometers have a number of "channels" dedicated to analysis of a single element, each consisting of a fixed-geometry crystal monochromator, a detector, and processing electronics. This allows a number of elements to be measured simultaneously, and in the case of high-powered instruments, complete high-precision analyses can be obtained in under 30 s. Another advantage of this arrangement is that the fixed-geometry monochromators have no continuously moving parts, and so are very reliable. Reliability is important in production environments where instruments are expected to work without interruption for months at a time. Disadvantages of simultaneous spectrometers include relatively high cost for complex analyses, since each channel used is expensive. The number of elements that can be measured is limited to 15–20, because of space limitations on the number of monochromators that can be crowded around the fluorescing sample. 
The need to accommodate multiple monochromators means that a rather open arrangement around the sample is required, leading to relatively long tube-sample-crystal distances, which leads to lower detected intensities and more scattering. The instrument is inflexible, because if a new element is to be measured, a new measurement channel has to be bought and installed. "Sequential" spectrometers have a single variable-geometry monochromator (but usually with an arrangement for selecting from a choice of crystals), a single detector assembly (but usually with more than one detector arranged in tandem), and a single electronic pack. The instrument is programmed to move through a sequence of wavelengths, in each case selecting the appropriate X-ray tube power, the appropriate crystal, and the appropriate detector arrangement. The length of the measurement program is essentially unlimited, so this arrangement is very flexible. Because there is only one monochromator, the tube-sample-crystal distances can be kept very short, resulting in minimal loss of detected intensity. The obvious disadvantage is relatively long analysis time, particularly when many elements are being analysed, not only because the elements are measured in sequence, but also because a certain amount of time is taken in readjusting the monochromator geometry between measurements. Furthermore, the frenzied activity of the monochromator during an analysis program is a challenge for mechanical reliability. However, modern sequential instruments can achieve reliability almost as good as that of simultaneous instruments, even in continuous-usage applications. Sample preparation In order to keep the geometry of the tube-sample-detector assembly constant, the sample is normally prepared as a flat disc, typically of diameter 20–50 mm. This is located at a standardized, small distance from the tube window. Because the X-ray intensity follows an inverse-square law, the tolerances for this placement and for the flatness of the surface must be very tight in order to maintain a repeatable X-ray flux. Ways of obtaining sample discs vary: metals may be machined to shape, minerals may be finely ground and pressed into a tablet, and glasses may be cast to the required shape. A further reason for obtaining a flat and representative sample surface is that the secondary X-rays from lighter elements often only emit from the top few micrometres of the sample. In order to further reduce the effect of surface irregularities, the sample is usually spun at 5–20 rpm. It is necessary to ensure that the sample is sufficiently thick to absorb the entire primary beam. For higher-Z materials, a few millimetres thickness is adequate, but for a light-element matrix such as coal, a thickness of 30–40 mm is needed. Monochromators The common feature of monochromators is the maintenance of a symmetrical geometry between the sample, the crystal and the detector. In this geometry the Bragg diffraction condition is obtained. The X-ray emission lines are very narrow (see figure 2), so the angles must be defined with considerable precision. This is achieved in two ways: Flat crystal with Söller collimators A Söller collimator is a stack of parallel metal plates, spaced a few tenths of a millimeter apart. To improve angular resolution, one must lengthen the collimator, and/or reduce the plate spacing. 
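A back-of-the-envelope sketch of that geometry (the plate spacings and lengths below are assumed for illustration): the angular acceptance of a Söller collimator is roughly the plate spacing divided by the plate length.

# Sketch: approximate angular divergence passed by a Soller collimator,
# taken as about spacing/length (small-angle approximation).
import math

def divergence_deg(plate_spacing_mm, plate_length_mm):
    return math.degrees(math.atan(plate_spacing_mm / plate_length_mm))

for spacing_mm, length_mm in ((0.5, 50.0), (0.25, 50.0), (0.25, 100.0)):
    print(f"spacing {spacing_mm} mm, length {length_mm} mm: "
          f"~{divergence_deg(spacing_mm, length_mm):.2f} degrees")
# Halving the spacing or doubling the length halves the divergence, at the
# cost of transmitted intensity - the trade-off described in the text.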
This arrangement has the advantage of simplicity and relatively low cost, but the collimators reduce intensity and increase scattering, and reduce the area of sample and crystal that can be "seen". The simplicity of the geometry is especially useful for variable-geometry monochromators. Curved crystal with slits The Rowland circle geometry ensures that the slits are both in focus, but in order for the Bragg condition to be met at all points, the crystal must first be bent to a radius of 2R (where R is the radius of the Rowland circle), then ground to a radius of R. This arrangement allows higher intensities (typically 8-fold) with higher resolution (typically 4-fold) and lower background. However, the mechanics of keeping Rowland circle geometry in a variable-angle monochromator is extremely difficult. In the case of fixed-angle monochromators (for use in simultaneous spectrometers), crystals bent to a logarithmic spiral shape give the best focusing performance. The manufacture of curved crystals to acceptable tolerances increases their price considerably. Crystal materials An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices (h, k, l), and let their spacing be noted by d. William Lawrence Bragg proposed a model in which the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively (constructive interference) when the angle between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ.(Fig.7) The desirable characteristics of a diffraction crystal are: High diffraction intensity High dispersion Narrow diffracted peak width High peak-to-background Absence of interfering elements Low thermal coefficient of expansion Stability in air and on exposure to X-rays Ready availability Low cost Crystals with simple structures tend to give the best diffraction performance. Crystals containing heavy atoms can diffract well, but also fluoresce more in the higher energy region, causing interference. Crystals that are water-soluble, volatile or organic tend to give poor stability. Commonly used crystal materials include LiF (lithium fluoride), ADP (ammonium dihydrogen phosphate), Ge (germanium), Si (silicon), graphite, InSb (indium antimonide), PE (tetrakis-(hydroxymethyl)-methane, also known as pentaerythritol), KAP (potassium hydrogen phthalate), RbAP (rubidium hydrogen phthalate) and TlAP (thallium(I) hydrogen phthalate). In addition, there is an increasing use of "layered synthetic microstructures" (LSMs), which are "sandwich" structured materials comprising successive thick layers of low atomic number matrix, and monatomic layers of a heavy element. These can in principle be custom-manufactured to diffract any desired long wavelength, and are used extensively for elements in the range Li to Mg. In scientific methods that use X-ray/neutron or electron diffraction the before mentioned planes of a diffraction can be doubled to display higher order reflections. The given planes, resulting from Miller indices, can be calculated for a single crystal. The resulting values for h, k and l are then called Laue indices. 
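To make the plane-spacing idea concrete, here is a small sketch: the cubic d-spacing formula d = a/√(h² + k² + l²) is standard, the germanium lattice constant of about 0.5658 nm is an assumed reference value, and Bragg's law nλ = 2d sin θ is applied with n = 1 to an assumed 0.155 nm line.

# Sketch: d-spacing of (hkl) planes in a cubic crystal and the corresponding
# first-order Bragg angle for a given wavelength.
import math

A_GE_NM = 0.5658      # germanium lattice constant, assumed for illustration

def d_spacing_nm(h, k, l, a_nm=A_GE_NM):
    return a_nm / math.sqrt(h * h + k * k + l * l)

def bragg_angle_deg(wavelength_nm, d_nm):
    return math.degrees(math.asin(wavelength_nm / (2.0 * d_nm)))

WAVELENGTH_NM = 0.155   # assumed example line (~8 keV)
for hkl in ((1, 1, 1), (3, 3, 3), (4, 4, 4)):
    d = d_spacing_nm(*hkl)
    theta = bragg_angle_deg(WAVELENGTH_NM, d)
    print(f"Ge{''.join(map(str, hkl))}: d = {d:.4f} nm, Bragg angle ~{theta:.1f} deg")
# Higher-order reflections (smaller d) diffract the same wavelength at larger
# angles, or equivalently reach higher photon energies at a fixed angle.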
A single crystal can therefore be used in several reflection configurations, each reflecting a different energy range. The germanium Ge111 crystal, for example, can also be used as Ge333, Ge444 and higher orders. For that reason, the indices used for a particular experimental setup are always noted after the crystal material (e.g. Ge111, Ge444). Note that the Ge222 configuration is forbidden by diffraction rules stating that all allowed reflections must have all odd or all even Miller indices that, combined, result in 4n, where n is the order of reflection. Elemental analysis lines The spectral lines used for elemental analysis of chemicals are selected on the basis of intensity, accessibility by the instrument, and lack of line overlaps. Typical lines used, and their wavelengths, are as follows: Other lines are often used, depending on the type of sample and equipment available. Structural analysis lines X-ray diffraction (XRD) is still the most used method for structural analysis of chemical compounds. Yet, with increasing detail on the relation between Kβ-line spectra and the surrounding chemical environment of the ionized metal atom, measurements of the so-called valence-to-core (V2C) energy region become increasingly viable. Scientists noted that after ionization of a 3d transition metal atom, the Kβ-line intensities and energies shift with the oxidation state of the metal and with the species of ligand(s). Spin states in a compound tend to affect this kind of measurement. This means that, by careful study of these spectral lines, one can obtain several crucial pieces of information from a sample, especially if there are reference compounds that have been studied in detail and can be used to make out differences. The information collected from this kind of measurement includes: Oxidation state of the central metal atom in a compound (shifts of the Kβ mainline in low-spin complexes) Spin states of transition metal complexes (general shape of the Kβ mainlines) Structural electronic configuration around the central metal atom (intensity, broadening, tailing and piloting of the valence-to-core Kβ lines) These measurements are mostly done at synchrotron facilities, although a number of so-called "in-lab" spectrometers have been developed and used for pre-beamtime (time at a synchrotron) measurements. Detectors Detectors used for wavelength dispersive spectrometry need to have high pulse processing speeds in order to cope with the very high photon count rates that can be obtained. In addition, they need sufficient energy resolution to allow filtering-out of background noise and spurious photons from the primary beam or from crystal fluorescence. There are four common types of detector: gas flow proportional counters sealed gas detectors scintillation counters semiconductor detectors Gas flow proportional counters are used mainly for detection of longer wavelengths. Gas flows through them continuously. Where there are multiple detectors, the gas is passed through them in series, then led to waste. The gas is usually 90% argon, 10% methane ("P10"), although the argon may be replaced with neon or helium where very long wavelengths (over 5 nm) are to be detected. The argon is ionised by incoming X-ray photons, and the electric field multiplies this charge into a measurable pulse. The methane suppresses the formation of fluorescent photons caused by recombination of the argon ions with stray electrons. The anode wire is typically tungsten or nichrome of 20–60 μm diameter.
Since the pulse strength obtained is essentially proportional to the ratio of the detector chamber diameter to the wire diameter, a fine wire is needed, but it must also be strong enough to be maintained under tension so that it remains precisely straight and concentric with the detector. The window needs to be conductive, thin enough to transmit the X-rays effectively, but thick and strong enough to minimize diffusion of the detector gas into the high vacuum of the monochromator chamber. Materials often used are beryllium metal, aluminised PET film and aluminised polypropylene. Ultra-thin windows (down to 1 μm) for use with low-penetration long wavelengths are very expensive. The pulses are sorted electronically by "pulse height selection" in order to isolate those pulses deriving from the secondary X-ray photons being counted. Sealed gas detectors are similar to the gas flow proportional counter, except that the gas does not flow through it. The gas is usually krypton or xenon at a few atmospheres pressure. They are applied usually to wavelengths in the 0.15–0.6 nm range. They are applicable in principle to longer wavelengths, but are limited by the problem of manufacturing a thin window capable of withstanding the high pressure difference. Scintillation counters consist of a scintillating crystal (typically of sodium iodide doped with thallium) attached to a photomultiplier. The crystal produces a group of scintillations for each photon absorbed, the number being proportional to the photon energy. This translates into a pulse from the photomultiplier of voltage proportional to the photon energy. The crystal must be protected with a relatively thick aluminium/beryllium foil window, which limits the use of the detector to wavelengths below 0.25 nm. Scintillation counters are often connected in series with a gas flow proportional counter: the latter is provided with an outlet window opposite the inlet, to which the scintillation counter is attached. This arrangement is particularly used in sequential spectrometers. Semiconductor detectors can be used in theory, and their applications are increasing as their technology improves, but historically their use for WDX has been restricted by their slow response (see EDX). Extracting analytical results At first sight, the translation of X-ray photon count-rates into elemental concentrations would appear to be straightforward: WDX separates the X-ray lines efficiently, and the rate of generation of secondary photons is proportional to the element concentration. However, the number of photons leaving the sample is also affected by the physical properties of the sample: so-called "matrix effects". These fall broadly into three categories: X-ray absorption X-ray enhancement sample macroscopic effects All elements absorb X-rays to some extent. Each element has a characteristic absorption spectrum which consists of a "saw-tooth" succession of fringes, each step-change of which has wavelength close to an emission line of the element. Absorption attenuates the secondary X-rays leaving the sample. For example, the mass absorption coefficient of silicon at the wavelength of the aluminium Kα line is 50 m2/kg, whereas that of iron is 377 m2/kg. This means that fluorescent X-rays generated by a given concentration of aluminium in a matrix of iron are absorbed about seven times more (that is 377/50) compared with the fluorescent X-rays generated by the same concentration of aluminium, but in a silicon matrix. 
That would lead to about one seventh of the count rate, once the X-rays are detected. Fortunately, mass absorption coefficients are well known and can be calculated. However, to calculate the absorption for a multi-element sample, the composition must be known. For analysis of an unknown sample, an iterative procedure is therefore used. To derive the mass absorption accurately, data for the concentration of elements not measured by XRF may be needed, and various strategies are employed to estimate these. As an example, in cement analysis, the concentration of oxygen (which is not measured) is calculated by assuming that all other elements are present as standard oxides. Enhancement occurs where the secondary X-rays emitted by a heavier element are sufficiently energetic to stimulate additional secondary emission from a lighter element. This phenomenon can also be modelled, and corrections can be made provided that the full matrix composition can be deduced. Sample macroscopic effects consist of effects of inhomogeneities of the sample, and unrepresentative conditions at its surface. Samples are ideally homogeneous and isotropic, but they often deviate from this ideal. Mixtures of multiple crystalline components in mineral powders can result in absorption effects that deviate from those calculable from theory. When a powder is pressed into a tablet, the finer minerals concentrate at the surface. Spherical grains tend to migrate to the surface more than do angular grains. In machined metals, the softer components of an alloy tend to smear across the surface. Considerable care and ingenuity are required to minimize these effects. Because they are artifacts of the method of sample preparation, these effects can not be compensated by theoretical corrections, and must be "calibrated in". This means that the calibration materials and the unknowns must be compositionally and mechanically similar, and a given calibration is applicable only to a limited range of materials. Glasses most closely approach the ideal of homogeneity and isotropy, and for accurate work, minerals are usually prepared by dissolving them in a borate glass, and casting them into a flat disc or "bead". Prepared in this form, a virtually universal calibration is applicable. Further corrections that are often employed include background correction and line overlap correction. The background signal in an XRF spectrum derives primarily from scattering of primary beam photons by the sample surface. Scattering varies with the sample mass absorption, being greatest when mean atomic number is low. When measuring trace amounts of an element, or when measuring on a variable light matrix, background correction becomes necessary. This is really only feasible on a sequential spectrometer. Line overlap is a common problem, bearing in mind that the spectrum of a complex mineral can contain several hundred measurable lines. Sometimes it can be overcome by measuring a less-intense, but overlap-free line, but in certain instances a correction is inevitable. For instance, the Kα is the only usable line for measuring sodium, and it overlaps the zinc Lβ (L2-M4) line. Thus zinc, if present, must be analysed in order to properly correct the sodium value. Other spectroscopic methods using the same principle It is also possible to create a characteristic secondary X-ray emission using other incident radiation to excite the sample: electron beam: electron microprobe; ion beam: particle induced X-ray emission (PIXE). 
When irradiated by an X-ray beam, the sample also emits other radiations that can be used for analysis: electrons ejected by the photoelectric effect: X-ray photoelectron spectroscopy (XPS), also called electron spectroscopy for chemical analysis (ESCA). The de-excitation also ejects Auger electrons, but Auger electron spectroscopy (AES) normally uses an electron beam as the probe. Confocal microscopy X-ray fluorescence imaging is a newer technique that allows control over depth, in addition to horizontal and vertical aiming, for example when analysing buried layers in a painting. Instrument qualification A 2001 review addresses the application of portable instrumentation from QA/QC perspectives. It provides a guide to the development of a set of SOPs if regulatory compliance guidelines are not available. See also , resonant fluorescence of gamma rays Notes References Beckhoff, B., Kanngießer, B., Langhoff, N., Wedell, R., Wolff, H., Handbook of Practical X-Ray Fluorescence Analysis, Springer, 2006. Bertin, E. P., Principles and Practice of X-ray Spectrometric Analysis, Kluwer Academic / Plenum Publishers. Buhrke, V. E., Jenkins, R., Smith, D. K., A Practical Guide for the Preparation of Specimens for XRF and XRD Analysis, Wiley, 1998. Jenkins, R., X-ray Fluorescence Spectrometry, Wiley. Jenkins, R., De Vries, J. L., Practical X-ray Spectrometry, Springer-Verlag, 1973. Jenkins, R., Gould, R. W., Gedcke, D., Quantitative X-ray Spectrometry, Marcel Dekker. Van Grieken, R. E., Markowicz, A. A., Handbook of X-Ray Spectrometry, 2nd ed., Marcel Dekker Inc.: New York, 2002, Vol. 29. External links Atomic physics Molecular physics X-ray spectroscopy Fluoro Scientific techniques Fluorescence
X-ray fluorescence
Physics,Chemistry
6,883
9,678,616
https://en.wikipedia.org/wiki/Octamer%20transcription%20factor
Octamer transcription factors are a family of transcription factors which bind to the "ATTTGCAT" DNA sequence. Their DNA-binding domain is a POU domain. There are eight Octamer proteins in humans (Oct1–11), which have been renamed according to the different classes of POU domain. Octamer-3/4, also known as POU5F1, is one of the Yamanaka factors, which are critical for the maintenance and self-renewal of embryonic stem cells. On the other hand, Oct-1 and Oct-2 are widely expressed in adult tissues. Oct-7, 8 and 9, also known as "brain factors", are predominantly expressed in the central nervous system during embryonic development. Oct-6 expression is confined to embryonic stem cells and the developing nervous system and skin, while Oct-11 is also involved in skin differentiation. Human Oct proteins: Oct-1, Oct-2, Oct-3/4, Oct-6, Oct-7, Oct-8, Oct-9 and Oct-11. References External links POU-domain proteins Protein families
Octamer transcription factor
Biology
234
20,812,383
https://en.wikipedia.org/wiki/Escucha%20Formation
The Escucha Formation is a geological formation in La Rioja and Teruel provinces of northeastern Spain whose strata date back to the late Aptian to middle Albian stages of the Early Cretaceous. Dinosaur remains are among the fossils that have been recovered from the formation. The approximately thick formation underlies the Utrillas Formation and overlies Castrillo de la Reina, Benassal & Oliete Formations. The Escucha Formation comprises siltstones, mudstones, sandstones, coal, siltstones and amber, in which several fossil insects were found. The formation was deposited in a variety of continental to paralic (deltaic) environments. Fossil content The Escucha Formation has provided the following fossils, among others: Dinosaurs Other dinosaurs Allosauroidea indet. Iguanodontia indet. Titanosauriformes indet. Reptiles Aragochersis lignitesta Toremys cassiopeia Trachydermochelys sp. Testudines indet. Anteophthalmosuchus escuchae Goniopholididae indet. Hulkepholis plotos Fish Chondrichthyes indet. Osteichthyes indet. Crustaceans Cretagourretia salasi Joeranina tausi Insects Other arthropods Alavia neli Archaelagonops alavensis Mesozygiella dunlopi Cretogarypinus zaragozai Ithioreolpium alavensis Alavesiaphis margaritae Hispanocader lisae Alavametra popovi Iberovelia quisquilia Glaesivelia pulcherrima Gymnopollisthrips maior G. minor Manicapsocidus enigmaticus Archaeatropos alavensis Preempheria antiqua Empheropsocus arilloi E. margineglabrus Morazatermes krishnai Cantabritermes simplex Autrigonoforceps iberica Hispanelcana alavensis H. arilloi H. lopezvallei Libanophron sugaar Hippocoon basajauni Burmaphron iratxoak B. jentilak B. sorginak Tagsmiphron olentzero Elasmophron mari Ectenobythus iberiensis Liztor pilosus Cretepyris martini Zophepyris alavaensis Ampulicomorpha perialla Microcostaphron parvus Archephedrus stolamissus Protorhyssalopsis perrichoti Spathiopteryx alavarommopsis Serphites lamiak Aposerphites angustus Archaeromma hispanicum Galloromma alavaensis Alavaromma orchamum Tithonoscelio resinalis Bruescelio platycephalus Electroteleiopsis hebdomas Alavascelio delvallei Amissascelio temporarius Perimoscelio tyrbastes Perimoscelio confector Proterosceliopsis masneri Juxtascelio interitus Iberopria perialla Iberoevania roblesi Cretevania alonsoi Iberomaimetsha nihtmara Iberomaimetsha rasnitsyni Valaa delclosi Megalava truncata Eosyntexis parva Cretasonoma corinformibus Penarhytus tenebris Prosolierius parvus Cretakarenni hispanicus Rhizophtoma longus Darwinylus marcosi Mediumiuga sinespinis Archiaustroconops alavensis Leptoconops (Leptoconops) zherikhini Gerontodacus skalskii Chimeromyia alava Chimeromyina concilia Euliphora grimaldii Alavesia subiasi Tethepomima holomma Tethepomyia buruhandi Lysistrata emerita Hegalari minor Hegalari antzinako Alavamanota hispanica Allocotocera xavieri Cretohaplusia ortunoi Eltxo cretaceus Helius (Helius) alavensis Helius (Helius) spiralis Espanoderus barbarae Alavaraphidia imperterrita Baissoptera cretaceoelectra Cretokatianna bucculenta Sphyrotheciscus senectus Archeallacma dolichopoda Katiannasminthurus xenopygus Pseudosminthurides stoechus Burmisotoma spinulifera Protoisotoma autrigoniensis Proisotoma communis Proleptochelia tenuissima Electrotanais monolithus Alavatanais carabe Alavatanais margulisae Eurotanais terminator Spinomegops arcanus Strieremaeus minguezae Iberofoveopsis miguelesi Hispanothrips utrillensis Aragonitermes teruelensis Aragonimantis aenigma Mymaropsis turolensis Serphites silban Galloromma turolensis Alavaromma orchamum 
Cretevania rubusensis Cretevania alcalai Cretevania montoyai Actenobius magneoculus Arra legalovi Leptoconops (Leptoconops) zherikhini Atriculicoides sanjusti Atriculicoides hispanicus Gerontodacus skalskii Microphorites utrillensis Litoleptis fossilis Burmazelmira grimaldii Aragomantispa lacerata Spinomegops aragonensis Ametroproctus valeriae Cretaceobodes martinezae Trhypochthonius lopezvallei Invertebrates Ostracoda indet. Bivalvia indet. Gastropoda indet. Flora Pinophyta Angiospermae indet. Correlation See also List of dinosaur-bearing rock formations Monte Grande Formation, Albian formation of the Cantabrian Basin Caranguejeira Conglomerate, Aptian to Cenomanian formation of the Lusitanian Basin Baltic, Burmese, Dominican and Mexican amber References Bibliography Further reading L. Alcalá, E. Espílez, L. Mampel, J. I. Kirkland, M. Ortiga, D. Rubio, A. González, D. Ayala, A. Cobos, R. Royo-Torres, F. Gascó and M. D. Pesquero. 2012. A new Lower Cretaceous vertebrate bonebed near Arino (Tereul, Aragon, Spain); found and managed in a joint collaboration between a mining company and a palaeontological park. Geoheritage 4:275-286 J. I. Canudo, A. Cobos, C. Martín-Closas, X. Murelaga, X. Pereda-Suberbiola, R. Royo-Torres, J. I. Ruiz-Omeñaca and L. M. Sender. 2005. Sobre la presencia de dinosaurios ornitópodos en la Formación Escucha (Cretácico Inferior, Albiense): redescubierto “Iguanodon” en Utrillas (Teruel) [On the presence of ornithopod dinosaurs in the Escucha Formation (Lower Cretaceous, Albian): redescribing “Iguanodon” in Utrillas (Teruel)]. Fundamental 6:51-56 E. Peñalver, D. A. Grimaldi, and X. Delclos. 2006. Early Cretaceous spider web with its prey. Science 312:1761 Geologic formations of Spain Cretaceous Spain Lower Cretaceous Series of Europe Albian Stage Aptian Stage Siltstone formations Mudstone formations Sandstone formations Coal formations Coal in Spain Formations Deltaic deposits Paleontology in Spain Formations Formations Formations
Escucha Formation
Physics
1,673
15,340,084
https://en.wikipedia.org/wiki/Time%20formatting%20and%20storage%20bugs
In computer science, data type limitations and software bugs can cause errors in time and date calculation or display. These are most commonly manifestations of arithmetic overflow, but can also be the result of other issues. The best-known consequence of this type is the Y2K problem, but many other milestone dates or times exist that have caused or will cause problems depending on various programming deficiencies. Year 1975 On 5 January 1975, the 12-bit field that had been used for dates in the TOPS-10 operating system for DEC PDP-10 computers overflowed, in a bug known as "DATE75". The field value was calculated by taking the number of years since 1964, multiplying by 12, adding the number of months since January, multiplying by 31, and adding the number of days since the start of the month; putting the maximum 12-bit value of 4,095 into this scheme gives 4 January 1975, which is therefore the latest encodable date. The "DATE-75" patch pushed the last encodable date to 1 February 2052, making the overflow date 2 February 2052, by using 3 spare bits from other fields in the file system's metadata, but this sometimes caused problems with software that used those bits for its own purposes. Some software may have supported using one additional bit for the date but had issues with additional bits, which could have resulted in some bugs on 9 January 1986. Year 1978 The Digital Equipment Corporation OS/8 operating system for the PDP-8 computer used only three bits for the year, representing the years 1970 to 1977. This was recognized when the COS-310 operating system was developed, and dates were recorded differently. Year 1993 Multiple Sierra Entertainment games released for the Classic Mac OS started to freeze when running on 18 September 1993. An issue in the Mac version of Sierra's Creative Interpreter (Mac SCI) would cause the game to "lock up" when attempting to handle a delay, due to an arithmetic overflow. Mac SCI determined how long a delay should last by taking the current time in seconds since 1 January 1904 (the Macintosh epoch) and dividing it by 12 hours. The division was performed by the Motorola 68000, which would not carry it out if an overflow was detected, but Mac SCI continued on regardless as if the division had occurred, eventually resulting in a delay of one second being treated as a delay of 18 hours, and so on. Sierra released a patch called MCDATE that resolved the problem for almost 14 years. Year 1997 In Apollo Computer's Domain/OS operating system, absolute time was stored as a signed 48-bit integer representing the number of 4-microsecond units since 1 January 1980. This value overflowed on 2 November 1997, rendering unpatched systems unusable. Year 1999 In the last few months before the year 2000, two other date-related milestones occurred that received less publicity than the then-impending Y2K problem. First GPS rollover GPS dates are expressed as a week number and a day-of-week number, with the week number transmitted as a ten-bit value. This means that every 1,024 weeks (about 19.6 years) after Sunday 6 January 1980 (the GPS epoch), the date resets again to that date; this happened for the first time at 23:59:47 on 21 August 1999, the second time at 23:59:42 UTC on 6 April 2019, and will happen again on 20 November 2038. To address this concern, modernised GPS navigation messages use a 13-bit field, which only repeats every 8,192 weeks (157 years), and will not return to zero until the year 2137.
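Receivers cope with the ten-bit week counter by combining the broadcast week with an approximate current date known from elsewhere (for instance a firmware build date). The following Python sketch illustrates one common way of doing this; it is an illustration only, with function and variable names of our own choosing, and is not taken from any particular receiver firmware:

from datetime import date, timedelta

GPS_EPOCH = date(1980, 1, 6)  # Sunday, the start of GPS week 0

def resolve_gps_week(broadcast_week, approx_today):
    # Map a 10-bit broadcast week (0-1023) to a full week count, assuming the
    # true date lies within roughly +/-512 weeks of the receiver's rough date.
    approx_week = (approx_today - GPS_EPOCH).days // 7
    era = (approx_week - broadcast_week + 512) // 1024
    return broadcast_week + 1024 * era

def gps_week_to_date(full_week, day_of_week=0):
    return GPS_EPOCH + timedelta(weeks=full_week, days=day_of_week)

# A receiver that only knows "it is roughly the start of 2024" still resolves
# broadcast week 248 (third 1,024-week era) to the correct Sunday:
print(gps_week_to_date(resolve_gps_week(248, date(2024, 1, 1))))  # 2024-01-07

A receiver whose built-in reference date is more than about ten years stale can still pick the wrong era, which is essentially what happened to the devices affected by the 1999 and 2019 rollovers.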
9/9/99 Many legacy programs or data sets used "9/9/99" as a rogue value to indicate either an unresolved date or as a terminator to indicate no further data was in the set. This caused many systems to crash upon the arrival of the actual date this represents: 9 September 1999. Year 2000 Two-digit year representations The term year 2000 problem, or simply Y2K, refers to potential computer errors related to the formatting and storage of calendar data for dates in and after the year 2000. Many programs represented four-digit years with only the final two digits, making the year 2000 indistinguishable from 1900. Computer systems' inability to distinguish dates correctly had the potential to bring down worldwide infrastructures for computer reliant industries. For applications required to calculate the birth year (or another past year), such an algorithm has long been used to overcome the year 1900 problem, but it has failed to recognise people over 100 years old. Year 2001 Systems that used a string of nine digits to record the time as seconds since the Unix epoch had issues reporting times beyond the one-billionth second after the epoch on 9 September 2001 at 01:46:40 (the "billennium"). Problems were not widespread. Year 2007 Sierra Entertainment games for the Classic Mac OS that were patched with the MCDATE program or released afterwards with the patch built in would begin to freeze on 28 May 2007. As with the Year 1993 problem, this was due to an issue in the Mac SCI when attempting to use the date to determine how long a delay should last. Programs with the MCDATE patch freeze because the Mac SCI takes the current number of seconds since the Macintosh epoch of 1 January 1904, subtracts 432,000,000 seconds from that, and then divides by 12 hours through the Motorola 68000, to then determine how long delays should last. On 28 May 2007, the Motorola 68000 again does not divide due to overflow protection, which the Mac SCI ignores. Year 2010 Some systems had problems once the year rolled over to 2010. This was dubbed by some in the media as the "Y2K+10" or "Y2.01k" problem. The main source of problems was confusion between hexadecimal number encoding and BCD encodings of numbers. The numbers 0 through 9 are encoded in both hexadecimal and BCD as 00 through 09. But the decimal number 10 is encoded in hexadecimal as 0A and in BCD as 10. Thus a BCD 10 interpreted as a hexadecimal encoding erroneously represents the decimal number 16. For example, the SMS protocol uses BCD encoding for dates, so some mobile phone software incorrectly reported dates of messages as 2016 instead of 2010. Windows Mobile was the first software reported to have been affected by this glitch; in some cases WM6 changed the date of any incoming SMS message sent after 1 January 2010, from the year 2010 to 2016. Other systems affected include EFTPOS terminals, and the PlayStation 3 (except the Slim model). Sony's PlayStation 3 incorrectly treated 2010 as a leap year, so the non-existent 29 February 2010, was shown on 1 March 2010, causing a program error. The most important such glitch occurred in Germany, where upwards of 20 million bank cards became unusable, and with Citibank Belgium, whose digipass customer identification chips stopped working. Year 2011 Taiwan officially uses the Minguo calendar, which considers the Gregorian year 1912 to be its year 1. Thus, the Gregorian year 2011 is the ROC year 100, its first 3-digit year. This causes the year to appear to be 1911 (Year 0) if 2-digit representations are used. 
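The hexadecimal/BCD confusion behind the Year 2010 problems described above can be reproduced in a few lines. The following Python sketch is purely illustrative (the helper names are ours, not from any affected product): it encodes the two-digit year 10 as packed BCD, then decodes it both correctly and with the buggy assumption that the byte holds a plain binary number:

def bcd_encode(n):
    # Pack a two-digit decimal number as BCD: one decimal digit per 4-bit nibble.
    assert 0 <= n <= 99
    return ((n // 10) << 4) | (n % 10)

def bcd_decode(b):
    # Correct decoding of a packed-BCD byte.
    return 10 * (b >> 4) + (b & 0x0F)

year_byte = bcd_encode(10)              # the year 2010 stored as BCD
print(hex(year_byte))                   # 0x10
print(2000 + bcd_decode(year_byte))     # 2010 (correct interpretation)
print(2000 + year_byte)                 # 2016 (BCD byte misread as binary: the 2010 bug)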
Year 2013 The Deep Impact space probe lost communication with Earth on 11 August 2013, because of a time-tagging problem; the date was stored as an unsigned 32-bit integer counting the number of tenth-seconds since 1 January 2000. Year 2019 Second GPS rollover In 2019, the second GPS week number rollover occurred. Meade computerized telescopes with GPS, like the LX200GPS, could no longer find their location and thus could not align themselves or locate stellar objects. Meade released firmware version 4.2k with a fix, but it also introduced many new bugs; version 4.2l (lowercase L, often confused with uppercase I) was then released to fix that, but had more inexplicable changes. A third party, StarPatch, released a hacked version of firmware version 4.2g at no cost to fix these issues. Japanese calendar transition On 30 April 2019, Emperor Akihito of Japan abdicated in favor of his son Naruhito. As years in Japan are traditionally referred to by era names that correspond to the reign of each emperor, this resulted in a new era name, Reiwa, following Naruhito's accession to the throne the following day. Because the previous emperor, Hirohito, died 7 January 1989, and Akihito's reign mostly corresponded with the rise in the use of computers, most software had not been tested to ensure correct behavior on an era change, while testing was further complicated by the fact that the new era name was not revealed until 1 April 2019. Therefore, errors were expected from software that did not anticipate a new era. Year 2020 The video games WWE 2K20 and Star Wars Jedi: Fallen Order both crashed on 1 January 2020, when the year rolled over. The glitches could only be circumvented by resetting the year back to 2019 until a patch was released. Additionally, Crystal Reports 8.5 would fail to generate specific reports starting in 2020. Parkeon parking meters in New York City and other locations were unable to accept credit cards as a form of payment starting in 2020. A workaround was implemented, but required each meter to be individually updated; in New York, the meters were not expected to be fixed until 9 January. In Poland, 5,000 cash registers stopped printing the date properly. Suunto sport smart watches displayed weekdays incorrectly, offset by two days (e.g. FRI rather than WED, SAT rather than THU). For Suunto Spartan model watches, the bug was fixed with firmware release 2.8.32. Classic Mac OS The control panel in Classic Mac OS versions 6, 7, and 8 only allows the date to be set as high as 31 December 2019, although the system is able to continue to advance time beyond that date. Microsoft Schedule+ The first version of Microsoft Schedule+, as bundled with version 3.0 of the Microsoft Mail email client, refuses to work with the year 2020 or beyond, because the program was designed to operate within a 100-year time window ranging from 1920 to 2019. As a result, the date can only be set as high as 31 December 2019. Year 2021 Samsung users reported that phones running the One UI 3.0 update or Android 11 lost access to battery and charging statistics starting in 2021. Affected devices would not report usage statistics, leaving those sections blank. Year 2022 Dates stored in the format yymmddHHMM and converted to a signed 32-bit integer overflowed on 1 January 2022, since the encoding of that date, 2,201,010,000, exceeds the maximum 32-bit signed value of 2,147,483,647.
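A short Python sketch makes the Year 2022 overflow above concrete. The packing function is illustrative only (it is not the code of any specific affected product), but it shows why the first minute of 2022 is the first timestamp in this format that no longer fits in a signed 32-bit integer:

from datetime import datetime

INT32_MAX = 2**31 - 1  # 2,147,483,647

def yymmddhhmm(dt):
    # Pack a timestamp as the decimal number yymmddHHMM (two-digit year).
    return ((dt.year % 100) * 10**8 + dt.month * 10**6 +
            dt.day * 10**4 + dt.hour * 100 + dt.minute)

print(yymmddhhmm(datetime(2021, 12, 31, 23, 59)))          # 2112312359, still fits
print(yymmddhhmm(datetime(2022, 1, 1, 0, 0)))              # 2201010000
print(yymmddhhmm(datetime(2022, 1, 1, 0, 0)) > INT32_MAX)  # True: overflow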
Notably affected were the update numbers of the malware-scanning component of Microsoft Exchange, which appear to be used for a mathematical check to determine the latest update. Honda and Acura cars manufactured between 2004 and 2012 containing GPS navigation systems incorrectly displayed the year as 2002. This problem was due to an overflow related to the GPS epoch. The issue was resolved on August 17, 2022. Year 2024 Payment card readers at petrol pumps in New Zealand could not handle the leap day and so were unable to properly dispense fuel. The video games EA Sports WRC and Theatrhythm Final Bar Line also suffered issues related to the leap year, with the former crashing when trying to load the game and the latter claiming that the save data was corrupted. Both games only worked properly once the system date was set forward to the following day, March 1, 2024. In December 2024, a 30-year-old bug was found in all versions of HCL Notes. When a server was started on or after December 13, 2024, an overflow would prevent the mail router from loading its configuration, so no mail was delivered. Patches were released the next day for all supported versions. Year 2025 In Japan, some older computer systems using the Japanese calendar that have not been updated still count years according to the Shōwa era. The year 2025 corresponds in those systems to Shōwa 100, which can cause problems if the software assumes two digits for the year. In Spain, all Talgo AVRIL class trains stopped operating on January 1, 2025 due to a date handling bug in the battery charging module, causing delays and cancellations as passengers were relocated to other rolling stock. A bugfix was deployed by the next day, restoring regular service. Year 2028 Some systems store their year as a single-byte offset from 1900, which gives a range of 255 (8 bits) and allows dates up to 2155 to be safely represented. However, not all systems use an unsigned byte: some have been mistakenly coded with a signed byte, which only allows a range of 127 years, meaning that the date field in the software will be incorrect after 2027 and can cause unpredictable behaviour. Several pieces of optical-disc software that operate using the ISO 9660 format are affected by this. During the late 1970s, on Data General Nova and Eclipse systems, the World Computer Corporation (doing credit union applications) created a date format with a 16-bit date field, which used seven bits for the year, four bits for the month, and five bits for the day. This allowed dates to be directly comparable using unsigned functions. Some systems, including HP 3000, still use this format, although a patch has been developed by outside consultants. Year 2032 Palm OS uses both signed integers with the 1970 epoch and unsigned integers with the 1904 epoch for different system functions, such as the system clock and file dates (see PDB format). While this should result in Palm OS being susceptible to the 2038 problem, Palm OS also uses a 7-bit field for storing the year value, with a different epoch counting from 1904, resulting in a maximum year of 2031 (1904 + 127). Year 2036 The Network Time Protocol has an overflow issue related to the Year 2038 problem, which manifests itself at 06:28:16 UTC on 7 February 2036, rather than 2038. The 64-bit timestamps used by NTP consist of a 32-bit part for seconds and a 32-bit part for the fractional second, giving NTP a time scale that rolls over every 2^32 seconds (about 136 years) and a theoretical resolution of 2^-32 second (about 233 picoseconds).
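The NTP rollover described above follows directly from the timestamp layout. The Python sketch below models the 64-bit format in a simplified way (it is not the reference ntpd implementation): the upper 32 bits hold seconds since 1 January 1900 and the lower 32 bits hold the binary fraction of a second, so the seconds field wraps after 2^32 seconds unless the receiver supplies the era number out of band:

from datetime import datetime, timedelta

NTP_EPOCH = datetime(1900, 1, 1)

def to_ntp(dt):
    # Pack a datetime as a 64-bit NTP timestamp: 32-bit seconds, 32-bit fraction.
    delta = (dt - NTP_EPOCH).total_seconds()
    seconds = int(delta)
    fraction = int((delta - seconds) * 2**32)
    return ((seconds & 0xFFFFFFFF) << 32) | (fraction & 0xFFFFFFFF)

def from_ntp(ts, era=0):
    # Unpack, supplying the era (the number of 2**32-second rollovers so far).
    seconds = (ts >> 32) + era * 2**32
    fraction = (ts & 0xFFFFFFFF) / 2**32
    return NTP_EPOCH + timedelta(seconds=seconds + fraction)

rollover = NTP_EPOCH + timedelta(seconds=2**32)
print(rollover)                            # 2036-02-07 06:28:16, end of NTP era 0
print(from_ntp(to_ntp(rollover)))          # wraps back to 1900-01-01 without era info
print(from_ntp(to_ntp(rollover), era=1))   # 2036-02-07 06:28:16 once the era is known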
NTP uses an epoch of 1 January 1900. The first rollover occurs in 2036, prior to the UNIX year 2038 problem. Year 2038 Unix time rollover The original implementation of the Unix operating system stored system time as a 32-bit signed integer representing the number of seconds past the Unix epoch (1 January 1970, 00:00:00 UTC). This value will roll over after 19 January 2038, 03:14:07 UTC. This problem has been addressed in most modern Unix and Unix-like operating systems by storing system time as a 64-bit signed integer, although individual applications, protocols, and file formats must be changed as well. Windows C runtime library Like the Unix time rollover issue, the 32-bit version of gmtime in the C runtime libraries on Windows has a similar problem. This problem has already manifested in Oracle's Access Manager version 10.1.4.3 for Windows. The Identity Console component sets a cookie containing UI preferences with an expiry of 500,000,000 seconds in the future (approximately 16 years). This is beyond 19 January 2038, and so it throws an exception for certain search activities after 02:20:48 UTC on 17 March 2022, because the gmtime_r() call cannot convert the number provided to a date to write to the cookie. Despite the age of the software (released 18 June 2009), Oracle issued patch number 33983548 on 6 April 2022. Third GPS rollover The third GPS week number rollover will occur on 20 November 2038, at 23:59:37 UTC. Year 2040 All Apple Mac computers store time in their real-time clocks (RTCs) and HFS filesystems as an unsigned 32-bit number of seconds since 00:00:00 on 1 January 1904. After 06:28:15 on 6 February 2040 (i.e. 2^32 − 1 seconds from the epoch), this will wrap around to 1904; furthermore, HFS+, formerly the default format for most Apple software, is also affected. The replacement Apple File System resolves this issue. ProDOS for the Apple II computers only supports two-digit year numbers. To avoid Y2K issues, Apple issued a technical note stating that the year number was to represent 1940–2039. Software for the platform may incorrectly display dates beginning in 2040, though a third-party effort is underway to update ProDOS and application software to support years up to 4095. Year 2042 On 18 September 2042, the Time of Day Clock (TODC) on the S/370 IBM mainframe and its successors, including the current zSeries, will roll over. Older TODCs were implemented as a 64-bit count of 2^-12 microsecond (0.244 ns) units, and the standard base was 1 January 1900, UT. In July 1999 the extended TODC clock was announced, which extended the clock to the right (that is, the extended bits are less significant than the original bits). The actual resolution depends on the model, but the format is consistent, and will, therefore, roll over after 2^52 microseconds. The TODC value is accessible to user mode programs and is often used for timing and for generating unique IDs for events. While IBM has defined and implemented a longer (128-bit) hardware format on recent machines, which extends the timer on both ends by at least 8 additional bits, many programs continue to rely on the 64-bit format, which remains as an accessible subset of the longer timer. Year 2048 The capacity planning logic in the ERP system SAP S/4HANA supports only finish dates up to 19 January 2048 (24,855 days from 1 January 1980, corresponding to 2^31 seconds rounded down to full days). This concerns e.g. the production, maintenance and inspection planning.
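The 2038 boundary for signed 32-bit Unix time can be checked with ordinary arithmetic. The Python sketch below emulates a two's-complement 32-bit time_t (an illustration; real C implementations wrap implicitly in hardware) and shows the well-known wrap from the last valid second back to 1901:

from datetime import datetime, timedelta, timezone

UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def as_int32(n):
    # Reinterpret an integer as a two's-complement signed 32-bit value.
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

last_ok = 2**31 - 1
print(UNIX_EPOCH + timedelta(seconds=last_ok))                # 2038-01-19 03:14:07+00:00
print(UNIX_EPOCH + timedelta(seconds=as_int32(last_ok + 1)))  # 1901-12-13 20:45:52+00:00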
Year 2069 According to the Single UNIX Specification, when parsing a two-digit year with the %y conversion of strptime(), "values in the range [69,99] shall refer to years 1969 to 1999 inclusive and values in the range [00,68] shall refer to years 2000 to 2068 inclusive", meaning that, when parsed by strptime(), the two-digit year "69" would be interpreted as 1969 rather than 2069. Year 2079 Days 32,768 and 65,536 Programs that store dates as the number of days since an arbitrary date (or epoch) are vulnerable to roll-over or wrap-around effects if the values are not wide enough to allow the date values to span the time range expected for the application. Signed 16-bit binary values roll over after 32,768 (2^15) days from the epoch date, producing negative values. Some mainframe systems experienced software failures because they had encoded dates as the number of days since 1 January 1900, which produced unexpected negative day numbers on the roll-over date of 18 September 1989. Similarly, unsigned 16-bit binary day counts overflow after 65,536 (2^16) days, at which point they are truncated to zero values. For software using an epoch of 1 January 1900, this will occur on 6 June 2079. Year 2080 Some (if not all) Nokia phones that run Series 40 (such as the Nokia X2-00) only support dates up to 31 December 2079, and thus will be unable to display dates after this. One workaround is to use the year 1996, 2024 or 2052 in lieu of 2080 (as compatible leap years) to display the correct day of the week, date and month on the main screen. Systems storing the year internally only as a two-digit value 00..99, like many RTCs, may roll over from 31 December 2079 to the IBM PC and DOS epoch of 1980-01-01. Year 2100 DOS and Windows file date API and conversion functions (such as INT 21h/AH=2Ah) officially support dates only up to 31 December 2099 (even though the underlying FAT filesystem would theoretically support dates up to 2107). Hence, DOS-based operating systems, as well as applications that convert other formats to the FAT/DOS format, may show unexpected behavior starting 1 January 2100. Likewise, the Nintendo DS and GameCube, as well as the Sony PlayStation 4, only allow users to set dates up to the year 2099. In the case of the Nintendo DS, the system will not advance time beyond 31 December 2099, whereas the GameCube and PS4 will still roll over into 2100 and beyond, even though users of those game consoles cannot manually input the date and time that far out. 2100 is not a leap year Another problem will emerge at the end of 28 February 2100, since 2100 is not a leap year. As many common implementations of the leap year algorithm are incomplete or simplified, they may erroneously assume 2100 to be a leap year, causing the date to roll over from 28 February 2100 to 29 February 2100 instead of 1 March 2100. The DS3231 hardware RTC has the 2100 year problem because it uses only two digits to store the year. Year 2106 Many existing file formats, communications protocols, and application interfaces employ a variant of the Unix date format, storing the number of seconds since the Unix Epoch (midnight UTC, 1 January 1970) as an unsigned 32-bit binary integer. This value will roll over on 7 February 2106 at 06:28:16 UTC. That is, at this time the number of seconds since 1 January 1970 is FFFF FFFF in hex. This storage representation problem is independent of programs that internally store and operate on system times as 64-bit signed integer values.
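The 16-bit day-counter overflows in the "Days 32,768 and 65,536" discussion above can be reproduced as follows. This Python sketch assumes the convention implied by the dates quoted there, namely that 1 January 1900 is day 1 of the count (with a day-0 convention the results shift by one day):

from datetime import date, timedelta

DAY_ONE = date(1900, 1, 1)  # assumed to be day 1 of the count

def day_number_to_date(n):
    return DAY_ONE + timedelta(days=n - 1)

print(day_number_to_date(2**15))  # 1989-09-18: a signed 16-bit day counter goes negative next
print(day_number_to_date(2**16))  # 2079-06-06: an unsigned 16-bit day counter wraps to zero next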
Year 2108 The date timestamps stored in FAT filesystems, originally introduced with 86-DOS 0.42 in 1981 and carried over into MS-DOS, PC DOS, DR-DOS etc., will overflow at the end of 31 December 2107. The last modification date stamp (and, with DELWATCH 2.0+, also the file deletion date stamp, and since DOS 7.0+ optionally also the last access date stamp and creation date stamp) is stored in the directory entry with the year represented as an unsigned seven-bit number (0–127), relative to 1980, and is thereby unable to indicate any dates in the year 2108 and beyond. The API functions defined to retrieve these dates officially only support dates up to 31 December 2099. This will also affect the ZIP archive file format, as it uses FAT file modification timestamps internally. Year 2137 GPS dates are expressed as a week number and a day-of-week number, with the week number initially using a ten-bit value and modernised GPS navigation messages using a 13-bit field. Ten-bit systems would roll over every 1,024 weeks (about 19.6 years) after Sunday 6 January 1980 (the GPS epoch), and 13-bit systems roll over every 8,192 weeks. Thirteen-bit systems will roll over to zero in 2137. Year 2248 RISC OS stores dates as centiseconds (hundredths of a second) since 1 January 1900 in five bytes (40 bits). These timestamps are used internally and exposed in file metadata (load and exec addresses). This value will overflow on 3 June 2248 at 06:57:57.75 UTC. Year 2262 Some high-resolution timekeeping systems count nanoseconds since the Unix epoch (1 January 1970) using a 64-bit signed integer, which will overflow on 11 April 2262 at 23:47:16 UTC. The Go programming language's API is one example. Other examples include the Timestamp object in Python pandas, the chrono class in C++ when set to nanosecond precision, and the QEMU timers. Year 2286 Systems that use a string of length 10 characters to record Unix time may have problems reporting times beyond 20 November 2286, at 17:46:39 UTC, ten billion seconds after the Unix epoch. Year 2446 In ext4, the default file system for many Linux distributions, the bottom two bits of {a,c,m}time_extra are used to extend the {a,c,m}time fields, deferring the year 2038 problem to the year 2446. Within this "extra" 32-bit field, the lower two bits are used to extend the seconds field to a signed 34-bit integer; the upper 30 bits are used to provide nanosecond timestamp accuracy. Therefore, timestamps will not overflow until May 2446. Years 4000, 8000, etc. On time scales of thousands of years, the Gregorian calendar falls behind the astronomical seasons. This is because the Earth's speed of rotation is gradually slowing down, which makes each day slightly longer over time (see tidal acceleration and leap second) while the year maintains a more uniform duration. In the 19th century, Sir John Herschel proposed a modification to the Gregorian calendar with 969 leap days every 4,000 years, instead of the 970 leap days that the Gregorian calendar would insert over the same period. This would reduce the average year to 365.24225 days. Herschel's proposal would make the year 4000, and multiples thereof, common instead of leap. While this modification has often been proposed since, it has never been officially adopted. While most software (including Excel, JavaScript and R) currently recognizes 4000 and 8000 as leap years (as they are divisible by 400), SAS has adopted the "4000 year rule". Thus, with the current software, date conversions between SAS and other software will go out of sync after 28 February 4000.
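The differing leap-year rules mentioned above (the naive divisible-by-four shortcut behind the 2100 problem, the standard Gregorian rule, and Herschel's proposed 4,000-year modification adopted by SAS) are easy to compare side by side. A Python sketch, with illustrative function names of our own:

def is_leap_naive(year):
    # The oversimplified rule responsible for the year 2100 problem.
    return year % 4 == 0

def is_leap_gregorian(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_leap_herschel(year):
    # Gregorian rule plus the "4000-year rule": years divisible by 4,000 are common.
    return is_leap_gregorian(year) and year % 4000 != 0

for y in (2000, 2100, 4000, 8000):
    print(y, is_leap_naive(y), is_leap_gregorian(y), is_leap_herschel(y))
# 2000: True True True | 2100: True False False | 4000: True True False | 8000: True True False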
Year 4501 Microsoft Outlook uses the date 1 January 4501 as a placeholder for "none" or "empty". Year 10,000 The year 10,000 will be the first Gregorian year with five digits. All future years that are powers of 10, as well as dates before the 10th millennium BC, face similar encoding problems. Examples This problem can be seen in the spreadsheet program Microsoft Excel as of 2023, which stores dates as the number of days since 31 December 1899 (day 1 is 1 January 1900), with a fictional leap day in 1900 if using the default 1900 date system. Alternatively, if using the 1904 date system, the date is stored as the number of days since 1 January 1904 (day 1 is 2 January 1904), and there is no leap year problem. The maximum supported date for calculation is 31 December 9999. Years 29,228 and 30,828 In the C# programming language, or any language that uses .NET, the DateTime structure stores absolute timestamps as the number of tenth-microseconds (10^-7 s, known as "ticks") since midnight UTC on 1 January 1 AD in the proleptic Gregorian calendar, which will overflow a signed 64-bit integer on 14 September 29,228 at 02:48:05.4775808 UTC. Many of Microsoft's applications and services have 100-nanosecond resolution for timekeeping, which will all face similar issues, such as Power Automate's TIME data type and the TimeSpan parameter in various Windows PowerShell commands. However, dates past 31 December 9999 at 23:59:59.9999999 UTC are considered "unsupported" and time operations will intentionally fail when calculations would otherwise result in dates later than 9999. Similarly, in the Windows operating systems, the FILETIME structure stores the number of 100-nanosecond ticks since 1 January 1601 as a signed 64-bit integer. This value will overflow on 14 September 30,828 at 02:48:05 UTC, after which Windows will not accept dates beyond this day and will display "invalid system time" errors in NTFS. Years 32,768 and 65,536 Programs that process years as 16-bit values may encounter problems dealing with either the year 32,768 or 65,536, depending on whether the value is treated as a signed or unsigned integer. For the year 32,768 problem, years after 32,767 may be interpreted as negative numbers, beginning with −32,768, which may be displayed as 32,768 BC. The year 65,536 problem is more likely to manifest itself by the year 65,536 showing up as year 0. Year 33,658 Static library archives built by the ar Unix command store timestamps as an ASCII string containing a decimal number of seconds past the Unix epoch (1 January 1970, 00:00:00 UTC), with a limit of 12 ASCII characters. This value will roll over at 01:46:40 UTC on a day in the year 33,658, one trillion (10^12) seconds after the Unix epoch. Year 100,000 The year 100,000 will be the first Gregorian year with six digits. Year 275,760 JavaScript's Date API stores dates as the number of milliseconds since 1 January 1970. Dates have a range of ±100,000,000 days from the epoch, meaning that programs written in JavaScript using the Date API cannot store dates past 13 September, AD 275,760. Year 292,277,026,596 Systems that store Unix time in seconds using signed 64-bit integers will theoretically be able to represent dates and times up until 15:30:08 UTC on Sunday, 4 December, AD 292,277,026,596. This year is so far in the future (well beyond the likely lifespan of the Earth, the Sun, humanity, and even past some predictions of the lifetime of the universe) that it is mainly referenced as a matter of theoretical interest, a joke, or an indication that earlier versions of the problem, like the year 2038 problem, cannot be truly "solved" forever.
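The far-future limits listed above can be cross-checked with rough arithmetic, since Python's datetime type itself cannot represent years beyond 9999. The sketch below divides each counter's capacity by the average Gregorian year of 365.2425 days, so the results are approximations good to within about a year:

SECONDS_PER_YEAR = 365.2425 * 86400  # average Gregorian year in seconds

def approx_overflow_year(max_units, seconds_per_unit, epoch_year):
    return epoch_year + max_units * seconds_per_unit / SECONDS_PER_YEAR

print(approx_overflow_year(10**12, 1, 1970))         # ~33,658: 12-digit ar timestamps
print(approx_overflow_year(10**8 * 86400, 1, 1970))  # ~275,760: JavaScript's +/-100,000,000-day range
print(approx_overflow_year(2**63, 1e-7, 1))          # ~29,228: .NET DateTime ticks
print(approx_overflow_year(2**63, 1, 1970))          # ~292,277,026,596: signed 64-bit Unix time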
Relative time overflow Microsoft In Microsoft Windows 7, Windows Server 2003, Windows Server 2008, and Windows Vista, TCP connection start information was stored in hundredths of a second, using a 32-bit unsigned integer, which caused TCP connections to fail after 497 days. Microsoft Windows 95 and Windows 98 had a problem with rollovers in a virtual device driver, VTDAPI.VXD, which used unsigned 32-bit integers to measure system runtime in milliseconds; this value would overflow after 49.7 days, causing systems to freeze. Until version 6.0, Microsoft's .NET platform had a bug that caused threadpool hill-climbing to fail periodically after 49.7 days due to an overflow while handling milliseconds since startup. Boeing The Boeing 787 aircraft has had at least two software issues related to time storage. In 2015, an error was reported where system runtime was stored in hundredths of a second, using a signed 32-bit integer; this value would overflow after 248 days, after which the onboard generator control systems would crash, causing the aircraft to lose power. In 2020, the Federal Aviation Administration issued an airworthiness directive requiring 787 operators to power their aircraft down completely before reaching 51 days of uptime, since otherwise systems will begin to display misleading data. Arduino The Arduino platform provides relative time via the millis() function. This function returns an unsigned 32-bit integer representing "milliseconds since startup", which will roll over every 49 days. By default, this is the only timing source available in the platform and programs need to take special care to handle rollovers. Internally, millis() is based on counting timer interrupts. Certain powersave modes disable interrupts and therefore stop the counter from advancing during sleep. Historic year problems Also for historic years there might be problems when handling historic events, for example: Year zero problem Year 1000 problem Year 1582 problem Year 1900 problem See also Software bug Heisenbug Long Now Foundation References
Time formatting and storage bugs
Physics
6,695
26,482,400
https://en.wikipedia.org/wiki/Tedatioxetine
Tedatioxetine (developmental code name Lu AA24530) is an experimental antidepressant that was discovered by scientists at Lundbeck; in 2007 Lundbeck and Takeda entered into a partnership that included tedatioxetine but was focused on another, more advanced Lundbeck drug candidate, vortioxetine. Tedatioxetine is reported to act as a triple reuptake inhibitor (serotonin > norepinephrine > dopamine) and 5-HT2A, 5-HT2C, 5-HT3, and α1A-adrenergic receptor antagonist. As of 2009, it was in phase II clinical trials for major depressive disorder, but there have been no updates since then, and as of August 2013 it was no longer displayed on Lundbeck's product pipeline. On May 10, 2016, all work on tedatioxetine stopped. A Chinese patent shows that there has been interest in this compound outside of Lundbeck. See also MIN-117 TGBA01AD Vilazodone Vortioxetine References External links Lu AA24530 shows positive results in major depressive disorder phase II study 4-Piperidinyl compounds Abandoned drugs Antidepressants Thioethers
Tedatioxetine
Chemistry
266
12,176,442
https://en.wikipedia.org/wiki/Chiapan%20climbing%20rat
The Chiapan climbing rat (Tylomys bullaris) is a species of rodent in the family Cricetidae. It is found only in Mexico. The species is known from only one location in Tuxtla Gutiérrez, Chiapas. The habitat in the region is being converted to agricultural and urban use, which is likely causing critical declines in numbers of T. bullaris. References Citations Sources Musser, G. G. and M. D. Carleton. 2005. Superfamily Muroidea. pp. 894–1531 in Mammal Species of the World a Taxonomic and Geographic Reference. D. E. Wilson and D. M. Reeder eds. Johns Hopkins University Press, Baltimore. Tylomys EDGE species Mammals described in 1901 Taxonomy articles created by Polbot
Chiapan climbing rat
Biology
160
3,196,182
https://en.wikipedia.org/wiki/Scavenger%20%28chemistry%29
A scavenger in chemistry is a chemical substance added to a mixture in order to remove or deactivate impurities and unwanted reaction products, for example oxygen, to make sure that they will not cause any unfavorable reactions. Scavengers are used in a wide range of settings: In atmospheric chemistry, the most common scavenger is the hydroxyl radical, a short-lived radical produced photolytically in the atmosphere. It is the most important oxidant for carbon monoxide, methane and other hydrocarbons, sulfur dioxide, hydrogen sulfide, and most other contaminants, removing them from the atmosphere. In molecular laser isotope separation, methane is used as a scavenger gas for fluorine atoms. Hydrazine and ascorbic acid are used as oxygen-scavenging corrosion inhibitors. Tocopherol and naringenin are bioactive free radical scavengers that act as antioxidants; synthetic catalytic scavengers are their synthetic counterparts. Organotin compounds are used in polymer manufacture as hydrochloric acid scavengers. Oxygen scavengers or oxygen absorbers are small sachets or self-adhesive labels that are placed inside modified-atmosphere packs to help extend product life (notably for cooked meats) and help improve product appearance. They work by absorbing any oxygen left in the pack through oxidation of the iron powder contained in the sachet or label. Glutathione in the body scavenges oxidizing free radicals and peroxides and, as a thiol nucleophile, attacks dangerous alkylating electrophiles, which may be exogenous toxins or produced in the course of metabolism (e.g. NAPQI from paracetamol). References Process chemicals
Scavenger (chemistry)
Chemistry
363
41,550,219
https://en.wikipedia.org/wiki/Politics%20of%20the%20International%20Space%20Station
The politics of the International Space Station have been affected by superpower rivalries, international treaties, and funding arrangements. The Cold War was an early factor, overtaken in recent years by the United States' distrust of China. The station has an international crew, with the use of their time, and that of equipment on the station, being governed by treaties between participant nations. Usage of crew and hardware There is no fixed percentage of ownership for the whole space station. Rather, Article 5 of the IGA sets forth that each partner shall retain jurisdiction and control over the elements it registers and over personnel in or on the Space Station who are its nationals. Therefore, for each ISS module only one partner retains sole ownership. Still, the agreements to use the space station facilities are more complex. The station is composed of two sides: the Russian Orbital Segment (ROS) and U.S. Orbital Segment (USOS). Russian Orbital Segment (mostly Russian ownership, except the Zarya module) Zarya: first component of the Space Station, storage, USSR/Russia-built, U.S.-funded (hence U.S.-owned) Zvezda: the functional centre of the Russian portion, living quarters, Russia-owned Poisk: airlock, docking, Russia-owned Rassvet: storage, docking Nauka: multipurpose laboratory Prichal: docking, Russia-owned Pirs (Deorbited) U.S. Orbital Segment (mixed U.S. and international ownership) Columbus laboratory: 51% for ESA, 46.7% for NASA and 2.3% for CSA. Kibō laboratory: Japanese module, 51% for JAXA, 46.7% for NASA and 2.3% for CSA. Destiny laboratory: 97.7% for NASA and 2.3% for CSA. Crew time, electrical power and rights to purchase supporting services (such as data upload & download and communications) are divided 76.6% for NASA, 12.8% for JAXA, 8.3% for ESA, and 2.3% for CSA. History In 1972, a milestone was reached in co-operation between the United States and the Soviet Union in space with the Apollo-Soyuz Test Project. The project occurred during a period of détente between the two superpowers, and led in July 1975 to Soyuz 19 docking with an Apollo spacecraft. From 1978 to 1987, the USSR's Interkosmos program included allied Warsaw Pact countries and countries which were not Soviet allies, such as India, Syria, and France, in crewed and uncrewed missions to Space stations Salyut 6 and 7. In 1986, the USSR extended its co-operation to a dozen countries in the Mir program. From 1994 to 1998, NASA Space Shuttles and crew visited Mir in the Shuttle–Mir program. In 1998, assembly of the space station began. On 28 January 1998, the Space Station Intergovernmental Agreement (IGA) was signed. This governs ownership of modules, station usage by participant nations, and responsibilities for station resupply. The signatories were the United States of America, Russia, Japan, Canada, and eleven member states of the European Space Agency (Belgium, Denmark, France, Germany, Italy, The Netherlands, Norway, Spain, Sweden, Switzerland, and the United Kingdom). With the exception of the United Kingdom, all of the signatories went on to contribute to the Space Station project. A second layer of agreements was then achieved, four memoranda of understanding between NASA and ESA, CSA, RKA and JAXA. These agreements are then further split, such as for the contractual obligations between nations, and trading of partners' rights and obligations. Use of the Russian Orbital Segment is also negotiated at this level. 
In 2010, the ESA announced that European countries which were not already part of the program would be allowed access to the station in a three-year trial period. In March 2012, a meeting in Quebec City between the leaders of the space agencies of Canada, Japan, Russia, the United States, and involved European nations resulted in a renewed pledge to maintain the space station until at least 2020. NASA reports to be still committed to the principles of the mission but also to use the station in new ways, which were not elaborated. CSA President Steve MacLean stated his belief that the station's Canadarm will continue to function properly until 2028, alluding to Canada's likely extension of its involvement beyond 2020. On 28 March 2015, Russian sources announced that Roscosmos and NASA had agreed to collaborate on the development of a replacement for the current ISS. Igor Komarov, the head of Russia's Roscosmos, made the announcement with NASA administrator Charles Bolden at his side. In a statement provided to SpaceNews on 28 March, NASA spokesman David Weaver said the agency appreciated the Russian commitment to extending the ISS, but did not confirm any plans for a future space station. On 30 September 2015, Boeing's contract with NASA as prime contractor for the ISS was extended to 30 September 2020. Part of Boeing's services under the contract related to extending the station's primary structural hardware past 2020 to the end of 2028. There have also been suggestions in the commercial space industry that the station could be converted to commercial operations after it is retired by government entities. In July 2018, the Space Frontier Act of 2018 was intended to extend operations of the ISS to 2030. This bill was unanimously approved in the Senate, but failed to pass in the U.S. House. In September 2018, the Leading Human Spaceflight Act was introduced with the intent to extend operations of the ISS to 2030, and was confirmed in December 2018. On 12 April 2021, at a meeting with Russian President Vladimir Putin, then-Deputy Prime Minister Yury Borisov announced he had decided that Russia might withdraw from the ISS programme in 2025. According to Russian authorities, the timeframe of the station’s operations has expired and its condition leaves much to be desired. In January 2022, NASA announced a planned date of January 2031 to de-orbit the ISS and direct any remnants into a remote area of the South Pacific Ocean. On 24 February 2022, NASA said that American and Russian astronauts currently aboard the ISS would continue normal operations despite the 2022 Russian invasion of Ukraine. British Prime Minister Boris Johnson commented on the current status of cooperation, saying "I have been broadly in favour of continuing artistic and scientific collaboration, but in the current circumstances it's hard to see how even those can continue as normal." On the same day, Roscosmos Director General Dmitry Rogozin insinuated that Russian withdrawal could cause the International Space Station to de-orbit due to lack of reboost capabilities, writing in a series of tweets, "If you block cooperation with us, who will save the ISS from an unguided de-orbit to impact on the territory of the US or Europe? There's also the chance of impact of the 500-ton construction in India or China. Do you want to threaten them with such a prospect? The ISS doesn't fly over Russia, so all the risk is yours. Are you ready for it?" 
(The last claim is not true, as the ISS' orbital inclination of 51.66° allows it to overfly the latitude of Saratov.) Rogozin later tweeted that normal relations between ISS partners could only be restored once sanctions have been lifted, and indicated that Roscosmos would submit proposals to the Russian government on ending cooperation. NASA stated that, if necessary, US corporation Northrop Grumman has offered a reboost capability that would keep the ISS in orbit. On 26 July 2022, Borisov, who had become head of Roscosmos, submitted to Putin his plans for withdrawal from the programme after 2024. However, Robyn Gatens, the NASA official in charge of space station operations, responded that NASA had not received any formal notices from Roscosmos concerning withdrawal plans. The United States Congress, in its CHIPS and Science Act signed by President Joe Biden on 9 August, approved extending NASA's funding for the ISS through 2030. On 21 September 2022, Borisov stated that Russia was "highly likely" to continue to participate in the ISS programme until 2028, stating that it would be challenging to start up crewed space missions after such a long pause. By nation Brazil Brazil joined the ISS as a partner of the United States and this included a contract with NASA to supply hardware to the Space Station. In return, NASA would provide Brazil with access to NASA ISS facilities on-orbit, as well as a flight opportunity for one Brazilian astronaut during the course of the ISS program. However, due to cost issues, the subcontractor Embraer was unable to provide the promised ExPrESS pallet, and Brazil left the program in 2007. Regardless, the first Brazilian astronaut, Marcos Pontes, was sent to ISS in April 2006 for a short stay during the Expedition 13 where he realized the Missão Centenário. This was Brazil's first space traveler and he returned to Earth safely. Pontes trained on the Space Shuttle and Soyuz, but ended up going up with the Russians, although he did work at the U.S. Johnson Space Center after returning to Earth. China China is not an ISS partner, and no Chinese nationals have been aboard. China has its own contemporary human space program, China Manned Space Program, and has carried out co-operation and exchanges with countries such as Russia and Germany in human and robotic space projects. China launched its first experimental space station, Tiangong 1, in September 2011, and has officially initiated the permanently crewed Chinese space station project since 2021. In 2007, Chinese vice-minister of science and technology Li Xueyong said that China would like to participate in the ISS. In 2010, ESA Director-General Jean-Jacques Dordain stated his agency was ready to propose to the other 4 partners that China be invited to join the partnership, but that this needs to be a collective decision by all the current partners. While ESA is open to China's inclusion, the US is against it. US concerns over the transfer of technology that could be used for military purposes echo similar concerns over Russia's participation prior to its membership. Concerns over Russian involvement were overcome and NASA became solely dependent upon Russian crew capsules when its shuttles were grounded after the Columbia accident in 2003, and again after its retirement in 2011. The Chinese government has voiced a belief that international exchanges and co-operation in the field of aerospace engineering should be intensified on the basis of mutual benefit, peaceful use and common development. 
China's crewed Shenzhou spacecraft use an APAS docking system, developed after a 1994–1995 deal for the transfer of Russian Soyuz spacecraft technology. Included in the agreement was training, provision of Soyuz capsules, life support systems, docking systems, and space suits. American observers comment that Shenzhou spacecraft could dock at the ISS if it became politically feasible, whilst Chinese engineers say work would still be required on the rendezvous system. Shenzhou 7 passed within about 45 kilometres of the ISS. American co-operation with China in space is limited, though efforts have been made by both sides to improve relations. In 2011, new American legislation further strengthened legal barriers to co-operation, preventing NASA from co-operating with China or Chinese-owned companies, and even from spending funds to host Chinese visitors at NASA facilities, unless specifically authorized by new laws. At the same time, China, Europe and Russia have a co-operative relationship in several space exploration projects. Between 2007 and 2011, the space agencies of Europe, Russia and China carried out the ground-based preparations in the Mars500 project, which complement the ISS-based preparations for a human mission to Mars. On 28 April 2021, China launched the first part of its Tiangong space station, whose construction was planned as a series of 11 launches. This first element, the Tianhe core module, was launched from the Wenchang Space Launch Site on a Long March 5B rocket and contains only living quarters for crew members; completing the station was planned to require 10 additional launches during 2021 and 2022. India In spring 2025, Indian astronaut Shubhanshu Shukla will pilot the Axiom-4 mission to mark India's first human presence on the ISS. This mission is part of a bilateral initiative between India and the US, aiming to prepare for India's Gaganyaan programme with rigorous training and scientific experiments aboard the ISS. ISRO chairman K. Sivan announced in 2019 that India will not join the International Space Station programme and will instead build a 20 tonne space station on its own. Its completion is expected to be in 2035. South Korea South Korea sent an astronaut to the International Space Station in cooperation with Russia, is on the verge of achieving its own orbital launch capability (soon to be followed by a lunar orbiter), and plans to strengthen the commercialisation of space assets, particularly satellite imagery. Italy Italy has a contract with NASA to provide services to the station, and also takes part in the program directly via its membership in ESA. Japan Japan is an International Space Station partner and contributes the Japanese Experiment Module "Kibo". It has also provided cargo transport on the HTV cargo vehicle and is currently developing its successor, HTV-X, with consideration of utilizing a variant for future lunar gateway missions. See also Outer Space Treaty Space advocacy Space law Space policy References International Space Station Astropolitics
Politics of the International Space Station
Astronomy
2,761
2,976,061
https://en.wikipedia.org/wiki/Harnack%27s%20inequality
In mathematics, Harnack's inequality is an inequality relating the values of a positive harmonic function at two points, introduced by A. Harnack. Harnack's inequality is used to prove Harnack's theorem about the convergence of sequences of harmonic functions. J. Serrin and J. Moser generalized Harnack's inequality to solutions of elliptic or parabolic partial differential equations. Such results can be used to show the interior regularity of weak solutions. Perelman's solution of the Poincaré conjecture uses a version of the Harnack inequality, found by R. Hamilton, for the Ricci flow. The statement Harnack's inequality applies to a non-negative function f defined on a closed ball in R^n with radius R and centre x_0. It states that, if f is continuous on the closed ball and harmonic on its interior, then for every point x with |x − x_0| = r < R, \frac{1 - r/R}{(1 + r/R)^{n-1}} f(x_0) \le f(x) \le \frac{1 + r/R}{(1 - r/R)^{n-1}} f(x_0). In the plane R^2 (n = 2) the inequality can be written: \frac{R - r}{R + r} f(x_0) \le f(x) \le \frac{R + r}{R - r} f(x_0). For general domains \Omega in R^n the inequality can be stated as follows: If \omega is a bounded domain with \bar{\omega} \subset \Omega, then there is a constant C such that \sup_{x \in \omega} f(x) \le C \inf_{x \in \omega} f(x) for every twice differentiable, harmonic and nonnegative function f on \Omega. The constant C is independent of f; it depends only on the domains \Omega and \omega. Proof of Harnack's inequality in a ball By Poisson's formula, f(x) = \frac{R^2 - r^2}{\omega_{n-1} R} \int_{|y - x_0| = R} \frac{f(y)}{|x - y|^n} \, dy, where \omega_{n-1} is the area of the unit sphere in R^n and r = |x − x_0|. Since the kernel in the integrand satisfies \frac{R - r}{(R + r)^{n-1}} \le \frac{R^2 - r^2}{|x - y|^n} \le \frac{R + r}{(R - r)^{n-1}}, Harnack's inequality follows by substituting this inequality in the above integral and using the fact that the average of a harmonic function over a sphere equals its value at the center of the sphere: f(x_0) = \frac{1}{\omega_{n-1} R^{n-1}} \int_{|y - x_0| = R} f(y) \, dy. Elliptic partial differential equations For elliptic partial differential equations, Harnack's inequality states that the supremum of a positive solution in some connected open region is bounded by some constant times the infimum, possibly with an added term containing a functional norm of the data: \sup u \le C \left( \inf u + \lVert f \rVert \right), where f denotes the data of the equation. The constant depends on the ellipticity of the equation and the connected open region. Parabolic partial differential equations There is a version of Harnack's inequality for linear parabolic PDEs such as the heat equation. Let \mathcal{M} be a smooth (bounded) domain in R^n and consider the linear elliptic operator L u = \sum_{i,j=1}^n a_{ij}(t,x) \frac{\partial^2 u}{\partial x_i \partial x_j} + \sum_{i=1}^n b_i(t,x) \frac{\partial u}{\partial x_i} + c(t,x)\, u with smooth and bounded coefficients and a positive definite matrix (a_{ij}). Suppose that u(t,x) is a solution of \frac{\partial u}{\partial t} - L u = 0 in (0,T) \times \mathcal{M} such that u \ge 0. Let K be compactly contained in \mathcal{M} and choose \tau \in (0,T). Then there exists a constant C > 0 (depending only on K, \tau, T, and the coefficients of L) such that, for each t \in (\tau, T), \sup_K u(t - \tau, \cdot) \le C \inf_K u(t, \cdot). See also Harnack's theorem References Kassmann, Moritz (2007), "Harnack Inequalities: An Introduction", Boundary Value Problems 2007:081415, doi:10.1155/2007/81415, MR 2291922. L. C. Evans (1998), Partial Differential Equations, American Mathematical Society, USA; for elliptic PDEs see Theorem 5, p. 334, and for parabolic PDEs see Theorem 10, p. 370. Harmonic functions Inequalities
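As an illustrative special case (not part of the original statement above): taking n = 2 and r = R/2 in the ball inequality gives \frac{1 - 1/2}{1 + 1/2} f(x_0) \le f(x) \le \frac{1 + 1/2}{1 - 1/2} f(x_0), that is, \tfrac{1}{3} f(x_0) \le f(x) \le 3 f(x_0), so on the circle of radius R/2 a positive harmonic function can differ from its value at the centre by at most a factor of 3.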
Harnack's inequality
Mathematics
621
35,610,975
https://en.wikipedia.org/wiki/Coordination%20cage
Coordination cages are three-dimensional ordered structures in solution that act as hosts in host–guest chemistry. They are self-assembled in solution from organometallic precursors, and often rely solely on noncovalent interactions rather than covalent bonds. Coordinate bonds are useful in such supramolecular self-assembly because of their versatile geometries. However, there is controversy over calling coordinate bonds noncovalent, as they are typically strong bonds and have covalent character. The combination of a coordination cage and a guest is a type of inclusion compound. Coordination complexes can be used as "nano-laboratories" for synthesis, and to isolate interesting intermediates. The inclusion complexes of a guest inside a coordination cage show intriguing chemistry as well; often, the properties of the cage will change depending on the guest. Coordination complexes are molecular moieties, so they are distinct from clathrates and metal-organic frameworks. History Chemists have long been interested in mimicking chemical processes in nature. Coordination cages quickly became a hot topic as they can be made by self-assembly, a tool of chemistry in nature. The conceptualization of a closed-surface molecule capable of incorporating a guest was described by Donald Cram in 1985. Early cages were synthesized from bottom-up. Makoto Fujita introduced self-assembling cages, which are less tedious to prepare. These cages arise from the condensation of square planar complexes using polypodal ligands. Approaches to assembly There are five main methodologies to create coordination cages. In directional bonding, also called edge-directed self-assembly, polyhedra are designed using a stoichiometric ratio of ligand to metal precursor. The symmetry interaction method involves combining naked metal ions with multibranched chelating ligands. This results in highly symmetric cages. The molecular paneling method, also called the face-directed method, was the method developed by Fujita. Here, rigid ligands act as 'panels' and coordination complexes join them together to create the shape. In the figure at left, the yellow triangles represent panel ligands, and the blue dots are metal complexes. The ligands of the complex itself helps enforce the final geometry. In the weak link method, a hemilabile ligand is used: a weak metal-heteroatom bond is the 'weak link.' The formation of the complexes is driven by favorable π-π interactions between the spacers and the ligands, as well as the chelation of the metal. The metals used in the assembly must be available to perform further in the final structure, without compromising the cage structure. The initial structure is referred to as 'condensed.' In the condensed structure, the weak M-X bond can be selectively replaced by introducing an ancillary ligand with a higher binding affinity, leading to an open cage structure. In the figure to the right, the M is the metal, the orange ellipses are ligands, and the A is the ancillary ligand. For the dimetallic building block method, two pieces are needed: the metal dimer and its nonlinking ligands, and linking ligands. The nonlinking ligands need to be relatively nonlabile, and not too bulky; amidinates, for instance, work well. The linking ligands are either equatorial or axial: equatorial ligands are small polycarboxylato anions, and axial linkers are usually rigid aromatic structures. Axial and equatorial ligands may be used separately or in combination, depending on the desired cage structure. 
Classification Many varieties of coordination cages exist. In general, coordination cages are either homoleptic or heteroleptic. That is, they assembled either from a single type of ligand or multiple types. Generic coordination cages are often classified just as coordination complexes, with a MxLy formula. Heteroleptic complexes typically form more complex geometries, as illustrated with the following cages: [M16(Lp-Ph)24]32+ and [M12(μ-Lp-Ph)12(μ3-Lmes)4](BF4)24. The former cage is assembled from a 2:3 ratio of metal (M) and ligand (L), where the metal can be copper, zinc, or cadmium. This cage is homoleptic and assembles into a hexadecanuclear framework. The second cage is assembled from a 4:1:4 ratio of MBF4, the ligand Lp-Ph and the ligand Lmes. This cage is heteroleptic and assembles into a dodecanuclear cuboctohedral framework. Four of the triangular faces of this shape are occupied by Lmes, which acts as a triply bridging ligand. The twelve remaining edges are spanned with the edge ligands, Lp-Ph. Ligands are the building blocks of coordination cages, and the choice and ratio of ligands determine the final structure. Due to their highly symmetrical nature, coordination cages are also often referred to by their geometry. The geometry of high-symmetry cages is often that of Platonic or Archimedean solids; sometimes cages are casually referred to by their geometries. Of the named categories of coordination cages, cavitand cages and metalloprisms are some of the more common. Cavitand cages Cavitand cages are formed by linking bowl-shaped organic molecules called cavitands. The two "bowls" are linked with organometallic complexes. In order for a cavitand cage to efficiently self-assemble, the following requirements must be met: The cavitand scaffold must be rigid, the incoming metal complex must impose cis geometry, and there must be enough preorganization in the structure such that the entropic barrier to create the cage can be overcome. The complexes used to assemble cavitand cages are square planar with one η2 ligand; this helps enforce the final geometry. Without cis geometry, only small oligomers will form. Self-assembly also requires a ligand exchange; weakly bound ions such as BF4- and PF6- promote assembly because they leave the complex so it can bind with the nitriles on the rest of the structure. Metalloprisms Metalloprisms are another common type of coordination cage. They can be assembled from planar modules linked with column-like ligands. One illustrative synthesis starts with [(η6-p-cymene)6Ru6(μ3-tpt-κN)2(μ-C6HRO4- κO)3]6+ using the linker of 2,4,6-tri(pyridine-4-yl)-1,3,5-triazine (tpt). Various guest molecules have been encapsulated in the hydrophobic cavity of metallaprisms. A few examples of guests are bioconjugate derivatives, metal complexes, and nitroaromatics. Keplerates Keplerates are cages that are similar to edge-transistive {Cu2} MOFs with A4X3 stoichiometry. In fact, they can be thought of as metal-organic polyhedra. These cages are quite different than the types previously discussed as they are much larger, and contain many cavities. Complexes with large diameters can be desirable as target guest molecules are becoming more large and complex. These cages have multiple shells, like an onion. Secondary building units such as dinuclear {Cu2} acetate species are used as building blocks. In the cage above, the outer shell is a cuboctohedron; its structure comes from two adjacent benzoate moieties from the m-BTEB ligand. 
The third benzoate is attached to the inner shell. The {Cu2} units in the inner sphere adapt several different orientations. The labile complexes in the inner sphere allow binding of large target guests on the nanometer scale. Building a complex of this size that is still soluble is a challenge. Interactions Coordination cages are used to study guest-guest and host–guest interactions and reactions. In some instance, planar aromatic molecules stack inside of metalloprisms, as can be observed by UV-visible spectroscopy. Metal-metal interactions can also be observed. Mixed valence species have also been trapped inside of coordination cages. References Coordination chemistry Supramolecular chemistry
Coordination cage
Chemistry,Materials_science
1,728
4,043,313
https://en.wikipedia.org/wiki/Noweb
Noweb, stylised in lowercase as noweb, is a literate programming tool, created in 1989–1999 by Norman Ramsey, and designed to be simple, easily extensible and language independent. As in WEB and CWEB, the main components of Noweb are two programs: "notangle", which extracts 'machine' source code from the source texts, and "noweave", which produces nicely-formatted printable documentation. Noweb supports TeX, LaTeX, HTML, and troff back ends and works with any programming language. Besides simplicity, this is the main advantage over WEB, which needs different versions to support programming languages other than Pascal. (Thus the necessity of CWEB, which supports C and similar languages.) Noweb's input A Noweb input text contains program source code interleaved with documentation. It consists of so-called chunks that are either documentation chunks or code chunks. A documentation chunk begins with a line that starts with an at sign (@) followed by a space or newline. A documentation chunk has no name. Documentation chunks normally contain LaTeX, but Noweb is also used with HTML, plain TeX, and troff. Code chunks are named. A code chunk begins with <<chunk name>>= on a line by itself. The double left angle bracket (<<) must be in the first column. Each chunk is terminated by the beginning of another chunk. If the first line in the file does not mark the beginning of a chunk, it is assumed to be the first line of a documentation chunk. Code chunks aren't treated specially by Noweb's tools—they may be placed in any order and, when needed, they are just concatenated. Chunk references in code are dereferenced and the whole requested source code is extracted. Example of a simple Noweb program This is an example of a "hello world" program with documentation: \section{Hello world} Today I awoke and decided to write some code, so I started to write Hello World in \textsf C. <<hello.c>>= /* <<license>> */ #include <stdio.h> int main(int argc, char *argv[]) { printf("Hello World!\n"); return 0; } @ \noindent \ldots then I did the same in PHP. <<hello.php>>= <?php /* <<license>> */ echo "Hello world!\n"; ?> @ \section{License} Later the same day some lawyer reminded me about licenses. So, here it is: <<license>>= This work is placed in the public domain. Assuming that the above code is placed in a file named 'hello.nw', the command to extract the human-readable document in HTML format is: noweave -filter l2h -index -html hello.nw | htmltoc > hello.html ... and in LaTeX format: noweave -index -latex hello.nw > hello.tex To extract machine source code: notangle -Rhello.c hello.nw > hello.c notangle -Rhello.php hello.nw > hello.php Compatibility Noweb defines a specific file format and a file is likely to interleave three different formats (Noweb, LaTeX and the language used for the software). This is not recognised by other software development tools and consequently using Noweb excludes the use of UML or code documentation tools. See also WEB CWEB Notes External links Norman Ramsey's home page notangle online man page noweb.py – an open-source noweb clone written in Python noweb.php – noweb clone in PHP Free documentation generators Source code documentation formats Literate programming Troff
Noweb
Mathematics
806
74,635,951
https://en.wikipedia.org/wiki/Aspiration%20window
An aspiration window is a heuristic used in conjunction with alpha-beta pruning to reduce search time in combinatorial games by supplying a narrow window (or range) around an estimated score. Use of an aspiration window allows alpha-beta search to compete, in terms of efficiency, with other pruning algorithms. Alpha-beta pruning achieves its performance through cutoffs within its search window. Aspiration windows exploit this by supplying a smaller initial window, which increases the number of cutoffs and therefore the efficiency. However, due to search instability, the true score may fall outside the window. This forces a costly re-search that can penalize performance. Despite this, popular engines such as Stockfish still use aspiration windows. The score estimate around which the window is centred is usually supplied by the previous iteration of iterative deepening. See also Principal variation search References Sources Game artificial intelligence
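The fail-low/fail-high re-search described above can be shown in a short sketch. The following Java fragment is a minimal illustration only, not code from Stockfish or any other engine; the Board placeholder, the alphaBeta stub, and the margin of 50 score units are assumptions made for the example.

interface Board {} // placeholder position type for the sketch

final class AspirationSearch {
    static final int INFINITY = 1_000_000;
    static final int MARGIN = 50; // assumed initial half-width of the window

    // Assumed to behave like a conventional fail-soft alpha-beta search:
    // exact score inside (alpha, beta), a bound otherwise. Stubbed here.
    static int alphaBeta(Board b, int depth, int alpha, int beta) {
        return 0;
    }

    static int iterativeDeepening(Board b, int maxDepth) {
        int guess = 0; // the previous iteration's score seeds the next window
        for (int depth = 1; depth <= maxDepth; depth++) {
            int alpha = guess - MARGIN;
            int beta = guess + MARGIN;
            while (true) {
                int score = alphaBeta(b, depth, alpha, beta);
                if (score <= alpha) {
                    alpha = -INFINITY;      // fail low: reopen the lower bound and re-search
                } else if (score >= beta) {
                    beta = INFINITY;        // fail high: reopen the upper bound and re-search
                } else {
                    guess = score;          // score landed inside the window: accept it
                    break;
                }
            }
        }
        return guess;
    }
}

In practice engines usually widen the window gradually rather than jumping straight to the full range, but the control flow is the same.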
Aspiration window
Mathematics
193
5,936,242
https://en.wikipedia.org/wiki/SQLJ
SQLJ is a working title for efforts to combine Java and SQL. It was a common effort started around 1997 by engineers from IBM, Oracle, Compaq, Informix, Sybase, Cloudscape and Sun Microsystems. It consists of three parts: 0, 1, and 2. Part 0 describes the embedding of SQL statements into Java programs. SQLJ part 0 is the basis for part 10 of the SQL:1999 standard, also known as SQL Object Language Bindings (SQL/OLB). SQLJ parts 1 and 2 describe the converse possibility of using Java classes (routines and types) from SQL statements. Parts 1 and 2 are the basis for part 13 of the SQL standard, SQL Routines and Types Using the Java Programming Language (SQL/JRT). "SQLJ" is commonly used to refer to just SQLJ part 0, usually when it is contrasted with other means of embedding SQL in Java, like JDBC. ANSI and ISO standards SQLJ part 0: ANSI X3.135.10-1998, "Database Language SQL—Part 10: Object Language Bindings (SQL/OLB)" SQLJ part 1: ANSI NCITS 331.1-1999, "SQLJ—Part 1: SQL Routines Using the Java Programming Language" SQLJ part 2: ANSI NCITS 331.2-2000, "SQLJ—Part 2: SQL Types Using the Java Programming Language" Part 0 was updated for JDBC 2.0 compatibility and ratified by ISO in 2000. The last two parts were combined when submitted to ISO. Part 2 was substantially rewritten for the ISO submission because the ANSI version was not formal enough for a specification, being closer to the style of a user manual. The combined version was ratified in 2002. ISO/IEC 9075-10:2000, Information technology—Database languages—SQL—Part 10: Object Language Bindings (SQL/OLB) ISO/IEC 9075-13:2002, Information technology—Database languages—SQL—Part 13: SQL Routines and Types Using the Java Programming Language (SQL/JRT). SQLJ part 0 The SQLJ part 0 specification originated largely with Oracle, which also provided the first reference implementation. In the following, SQLJ is used as a synonym for SQLJ part 0. Whereas JDBC provides an API, SQLJ consists of a language extension. Thus programs containing SQLJ must be run through a preprocessor (the SQLJ translator) before they can be compiled. Advantages Some advantages of SQLJ over JDBC include: SQLJ commands tend to be shorter than equivalent JDBC programs. SQL syntax can be checked at compile time. The returned query results can also be checked strictly. The preprocessor may generate static SQL, which can perform better than dynamic SQL because the query plan is created at compile time, stored in the database, and reused at runtime. Static SQL can guarantee access plan stability. IBM DB2 supports the use of static SQL in SQLJ programs. Disadvantages SQLJ requires a preprocessing step. Many IDEs do not have SQLJ support. SQLJ lacks support for most of the common persistence frameworks, such as Hibernate. Oracle 18c (12.2) has desupported SQLJ in the database. Examples The following examples compare SQLJ syntax with JDBC usage. See also Embedded SQL Language Integrated Query (LINQ) References Further reading Connie Tsui, Considering SQLJ for Your DB2 V8 Java Applications, IBM developerworks, 13 Feb 2003 Owen Cline, Develop your applications using SQLJ, IBM developerworks, 16 Dec 2004 External links IBM Redbook: DB2 for z/OS and OS/390: Ready for Java Oracle SQLJ Developers Guide Database Connectivity Database APIs SQL data access
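The original comparison table is not reproduced here; the fragment below is a hand-written sketch of the general contrast. The table name, column names, and host variables are illustrative assumptions, and the SQLJ line requires the SQLJ translator to compile.

// SQLJ (part 0): the query is embedded directly and can be checked by the translator.
// :id and :name are Java host variables; the table and column names are made up.
String name;
#sql { SELECT last_name INTO :name FROM employees WHERE id = :id };

// Roughly equivalent JDBC code; conn is an existing java.sql.Connection,
// with java.sql.PreparedStatement and java.sql.ResultSet imported.
String name = null;
try (PreparedStatement ps = conn.prepareStatement(
        "SELECT last_name FROM employees WHERE id = ?")) {
    ps.setInt(1, id);
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            name = rs.getString(1);
        }
    }
}

The SQLJ form is shorter and its SQL text can be validated before run time, while the JDBC form builds and checks the statement dynamically.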
SQLJ
Technology
769
53,436,793
https://en.wikipedia.org/wiki/65%20Ursae%20Majoris
65 Ursae Majoris, abbreviated as 65 UMa, is a star system in the constellation of Ursa Major. With a combined apparent magnitude of about 6.5, it is at the limit of human eyesight and is just barely visible to the naked eye in ideal conditions. It is about 760 light years away from Earth. 65 Ursae Majoris is a sextuple star system. It contains six stars in a hierarchical orbit where each star orbits its inner stars. Such systems are uncommon, with only a few sextuple stars known. Higher-multiplicity star systems are uncommon because they are less stable than their simpler counterparts, and often decay into smaller systems. Multiplicity Hierarchy of orbits in the 65 Ursae Majoris system The central pair of stars, 65 Ursae Majoris Aa1 and Aa2, are both A-type main-sequence stars. These are relatively bright, white-colored stars that typically have masses from to . They have relatively low masses for A-type main sequence stars and have spectral types of A7V. Its orbital period is 1.73 days. The innermost binary pair 65 Ursae Majoris Aa is orbited by another star, designated 65 Ursae Majoris Ab. It is a spectroscopic binary: while the pair cannot be resolved, periodic Doppler shifts in their spectra indicate that there must be orbital motion. 65 Ursae Majoris Ab orbits the inner pair with a period of 641 days (1.76 years) and an eccentricity of 0.169. 65 Ursae Majoris B orbits the three inner stars every 118 years. It is separated from the triple by , and an astrometric orbit has been calculated. 65 Ursae Majoris C and D are common proper motion companions and are separated and respectively from the central system. 65 Ursae Majoris D also appears to be a chemically peculiar star with higher amounts of chromium, strontium, and europium than normal. Because of its unusual composition, determination of its stellar parameters is difficult; the effective temperature of this star may be 9,300 or , with the radius and the surface gravity determined for the star dependent on the effective temperature. Speckle interferometry results have resolved 65 Ursae Majoris D into two components separated by but this has not been confirmed by other observers. The two stars resolved differ in brightness by about two magnitudes. An orbit for two stars has been estimated to have a period of about 79 years. Variability 65 Ursae Majoris A is a variable star with the variable star designation DN Ursae Majoris. The pair Aa1 and Aa2 form an eclipsing binary as they periodically pass in front of each other while orbiting. The primary and secondary eclipses are almost identical and the apparent magnitude of the system varies between 6.55 and 6.65 twice during each orbit of 1.73 days. The brightness variation is very small because the non-eclipsing component Ab is the brightest of the three stars and contributes 80% of the visible light. Distance Trigonometric parallax measurements made by the Hipparcos spacecraft put the 65 Ursae Majoris ABC system at a distance of about 690 light years (210 parsecs), and component D at about 1,000 light years (300 parsecs). The dynamical parallax determined from the calculated orbits of the stars gives a distance of . Gaia has published measurements for the AB system and for component D, but they are both highly uncertain. Gaia Early Data Release 3 includes a somewhat more reliable measurement for the parallax of component C at , implying a distance of about 740 light years. 
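The distances quoted above follow directly from trigonometric parallax: the distance in parsecs is the reciprocal of the parallax in arcseconds. The short Java sketch below performs that conversion; the 4.4-milliarcsecond input is an illustrative figure roughly consistent with a 740-light-year distance, not the published Gaia measurement.

public final class ParallaxDistance {
    static final double LIGHT_YEARS_PER_PARSEC = 3.2616;

    // Trigonometric parallax: d [pc] = 1 / p [arcsec], so d = 1000 / p when p is in mas.
    static double parsecsFromMas(double parallaxMas) {
        return 1000.0 / parallaxMas;
    }

    public static void main(String[] args) {
        double pMas = 4.4; // illustrative parallax in milliarcseconds
        double pc = parsecsFromMas(pMas);
        System.out.printf("p = %.1f mas -> %.0f pc (about %.0f light years)%n",
                pMas, pc, pc * LIGHT_YEARS_PER_PARSEC);
    }
}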
See also Castor, another multiple star system with six stars Zeta Phoenicis, a multiple star system including an eclipsing binary References Spectroscopic binaries Algol variables 6 058112 103483 4560 Durchmusterung objects Ursae Majoris, DN Ursae Majoris, 65 A-type main-sequence stars Ursa Major Ap stars
65 Ursae Majoris
Astronomy
841
26,473,795
https://en.wikipedia.org/wiki/Rocio%20viral%20encephalitis
Rocio viral encephalitis is an epidemic flaviviral disease of humans first observed in São Paulo State, Brazil, in 1975. Low-level enzootic transmission is likely continuing in the epidemic zone, and with increased deforestation and population expansion, additional epidemics caused by Rocio virus are highly probable. If migratory species of birds are, or become involved in, the virus transmission cycle, the competency of a wide variety of mosquito species for transmitting Rocio virus experimentally suggest that the virus may become more widely distributed. The encephalitis outbreak in the western hemisphere caused by West Nile virus, a related flavivirus, highlights the potential for arboviruses to cause severe problems far from their source enzootic foci. The causative Rocio virus belongs to the genus Flavivirus (the same genus as the Zika virus) in family Flaviviridae and is closely related serologically to Ilhéus, St. Louis encephalitis, Japanese encephalitis and Murray Valley encephalitis viruses. Outbreaks During 1975 and 1976, Rocio virus was responsible for several epidemics of meningoencephalitis in coastal communities in southern São Paulo, Brazil. The outbreaks affected over 1,000 people and killed about 10% of those infected, but apparently responded well to treatment for viral encephalitides. The disease progresses rapidly after onset, with patients dying within 5 days of symptoms first appearing. The disease first presents with fever, headache, vomiting, and conjunctivitis, then progresses to neurological symptoms (confusion, disorientation, etc.) and muscle weakness; about one-third of cases enter a coma, and a third of those patients die, although supportive care such as intensive nursing and symptomatic treatment might reduce the case fatality rate to 4%. Survivors show neurological and psychological after-effects (sequelae) in about 20% of cases. Reservoirs and vectors Neither the epidemic nor the epizootic cycles of Rocio virus have been defined, but field and laboratory studies indicate the probable involvement of birds as a virus reservoir and mosquitoes as vectors. About 25% of wild birds in the epidemic zone tested during the time of the outbreaks were found to have antibodies to flaviviruses, with Rocio virus the most reactive antigen. Strains of Rocio virus were isolated from the blood of a rufous-collared sparrow, Zonotrichia capensis. Rocio virus was also isolated from sentinel mice exposed in a suspended cage, suggesting that a flying arthropod was the probable vector. Experimental studies with Colorado House sparrows, Passer domesticus, have indicated that the population tested was not a good amplification host for Rocio virus. Psorophora ferox was the only mosquito species directly implicated in transmission through detection of virus in specimens collected at the outbreak site, but species of Culex (Melanoconion), Coquillettidia chrysonotum, Mansonia indubitans, Ochlerotatus scapularis, Ochlerotatus serratus, and other mosquitoes in Tribes Culicini, Anophelini and Sabethini were also present in those collections. Studies with mosquitoes from the epidemic zone after the outbreak showed that Psorophora ferox and Ochlerotatus scapularis could be classified as potential vectors, but Ochlerotatus serratus was relatively insusceptible. 
Field investigations in the late 1970s and 1980s showed that Ochlerotatus scapularis, Ochlerotatus serratus and species of Culex (Melanoconion) were the predominant mosquitoes in the epidemic zone, and that Ochlerotatus scapularis was the most common and abundant mosquito in human settlements and human-made environments. Outside the epidemic zone, laboratory studies have shown that Culex tarsalis mosquitoes from Arizona and Culex pipiens pipiens mosquitoes from Illinois were relatively efficient experimental vectors; Tennessee Culex pipiens subspecies and Argentina Culex pipiens quinquefasciatus were moderately efficient experimental vectors; Louisiana Psorophora ferox, and Culex nigripalpus and Culex opisthopus from Florida were relatively inefficient experimental vectors. Psorophora ferox and Aedes scapularis were shown to be susceptible to per os infection with Rocio virus and could transmit the virus by bite following an incubation period, whereas infection rates in Ochlerotatus serratus did not exceed 36% and an ID50 could not be calculated for this species so it is unlikely to be an epidemiologically important vector of Rocio virus. References External links Flaviviruses Viral encephalitis Hemorrhagic fevers Insect-borne diseases Biological agents
Rocio viral encephalitis
Biology,Environmental_science
1,009
3,209,299
https://en.wikipedia.org/wiki/List%20of%20social%20software
This is a list of notable social software: selected examples of social software products and services that facilitate a variety of forms of social human contact. Blogs Apache Roller Blogger IBM Lotus Connections Posterous Telligent Community Tumblr Typepad WordPress Xanga Clipping Diigo Evernote Instant messaging Comparison of instant messaging clients IBM Lotus Sametime Live Communications Server 2003 Live Communications Server 2005 Microsoft Lync Server Internet forums Comparison of Internet forum software Internet Relay Chat (IRC) Internet Relay Chat eLearning Massively multiplayer online games Media sharing blip.tv Dailymotion Flickr Ipernity Metacafe Putfile SmugMug Tangle Vimeo YouTube Zooomr IBM Lotus Connections Media cataloging Online dating Web directories Social bookmarking Web widgets AddThis AddToAny ShareThis Social bookmark link generator Websites Enterprise software Altova MetaTeam IBM Lotus Connections Jumper 2.0 Enterprise Social cataloging aNobii Goodreads Knowledge Plaza Librarything Readgeek Shelfari KartMe Social citations BibSonomy CiteULike Connotea Jumper 2.0 Enterprise Knowledge Plaza Mendeley refbase Zotero Social evolutionary computation Knowledge iN Quora Yahoo! Answers Social login Social networks Social search Jumper 2.0 Knowledge Plaza Social customer support software Virtual worlds Active Worlds Google Lively (now defunct) Kaneva Second Life There Meez Wikis References Social software Social software
List of social software
Technology
283
14,005,652
https://en.wikipedia.org/wiki/Fringe%20%28trim%29
A fringe is an ornamental textile trim applied to an edge of a textile item, such as drapery, a flag, or epaulettes. Fringe originated as a way of preventing a cut piece of fabric from unraveling when a hem was not used. Several strands of weft threads would be removed, and the remaining warp threads would be twisted or braided together to prevent unraveling. In modern fabrics, fringe is more commonly made separately and sewn on. Modern "add-on" fringe may consist of wool, silk, linen, or narrow strips of leather. The use of fringe is ancient, and early fringes were generally made of unspun wool (rather than spun or twisted threads). There are many types of fringe. Particularly in Western Europe, as wealth and luxury items proliferated during the Renaissance, types of fringe began to assume commonly accepted names. Styles of fringes were clearly defined in England by at least 1688. Types of fringe include: Bullion fringe is a twisted yarn which generally contains threads of silver or gold. The name derives from bullion hose, which had a twisted element at the top that resembled this type of fringe. Modern bullion fringe varies widely in texture and width, but generally is only in length. Campaign fringe, from the French word campane (meaning "bell"), consists of small, bell-like tassels on the end. Thread fringe consists of untwisted and unbraided loose warp threads. References Further reading Pegler, Martin M. The Dictionary of Interior Design. Fairchild Publications: 1983. Decorative ropework Notions (sewing) Parts of clothing Western wear
Fringe (trim)
Technology
336
1,906,523
https://en.wikipedia.org/wiki/Carl%20St%C3%B8rmer
Fredrik Carl Mülertz Størmer (3 September 1874 – 13 August 1957) was a Norwegian mathematician and astrophysicist. In mathematics, he is known for his work in number theory, including the calculation of π and Størmer's theorem on consecutive smooth numbers. In physics, he is known for studying the movement of charged particles in the magnetosphere and the formation of aurorae, and for his book on these subjects, From the Depths of Space to the Heart of the Atom. He worked for many years as a professor of mathematics at the University of Oslo in Norway. A crater on the far side of the Moon is named after him. Personal life and career Størmer was born on 3 September 1874 in Skien, the only child of the pharmacist Georg Ludvig Størmer (1842–1930) and Elisabeth Amalie Johanne Henriette Mülertz (1844–1916). His uncle was the entrepreneur and inventor Henrik Christian Fredrik Størmer. Størmer studied mathematics at the Royal Frederick University in Kristiania, Norway (now the University of Oslo, in Oslo) from 1892 to 1897, earning the rank of candidatus realium in 1898. He then studied with Picard, Poincaré, Painlevé, Jordan, Darboux, and Goursat at the Sorbonne in Paris from 1898 to 1900. He returned to Kristiania in 1900 as a research fellow in mathematics, visited the University of Göttingen in 1902, and returned to Kristiania in 1903, where he was appointed as a professor of mathematics, a position he held for 43 years. After he received a permanent position in Kristiania, Størmer published his subsequent writings under a shortened version of his name, Carl Størmer. In 1918, he was elected as the first president of the newly formed Norwegian Mathematical Society. He participated regularly in Scandinavian mathematical congresses, and was president of the 1936 International Congress of Mathematicians in Oslo (from 1924 the new name of Kristiania). Størmer was also affiliated with the Institute of Theoretical Astrophysics at the University of Oslo, which was founded in 1934. He died on 13 August 1957, at Blindern. He was also an amateur street photographer, beginning in his student days. In the years 1893–1897, he documented daily life on the streets of Oslo using a miniature C. P. Stirn spy camera. Between 1942 and 1943, he shared a small portion of his works with the public. However, it was not until he was 70 years old that he organized an exhibition showcasing all his historical photographs of celebrities that he had taken over the years. For instance, it included one of Henrik Ibsen strolling down Karl Johans gate, the main road in Oslo. Most of these can now be viewed in Norway's Digitalt Museum. He was also a supervisory council member of the insurance company Forsikringsselskapet Norden. In February 1900 he married the consul's daughter Ada Clauson (1877–1973), with whom he eventually had five children. Their son Leif Størmer became a professor of historical geology at the University of Oslo. His daughter Henny married the landowner Carl Otto Løvenskiold. Carl Størmer is also the grandfather of the mathematician Erling Størmer. Mathematical research Størmer's first mathematical publication, published when he was a beginning student at the age of 18, concerned trigonometric series generalizing the Taylor expansion of the arcsine function. He revisited this problem a few years later. Next, he systematically investigated Machin-like formulae by which the number π may be represented as a rational combination of the so-called "Gregory numbers" of the form arctan(1/n). 
Machin's original formula, π/4 = 4 arctan(1/5) − arctan(1/239), is of this type, and Størmer showed that there were three other ways of representing π as a rational combination of two Gregory numbers. He then investigated combinations of three Gregory numbers, and found 102 representations of π of this form, but was unable to determine whether there might be additional solutions of this type. These representations led to fast algorithms for computing numerical approximations of π. In particular, a four-term representation found by Størmer was used in a record-setting calculation of π to 1,241,100,000,000 decimal digits in 2002 by Yasumasa Kanada. Størmer is also noted for the Størmer numbers, which arose from the decomposition of Gregory numbers in Størmer's work. Størmer's theorem, which he proved in 1897, shows that, for any finite set of prime numbers, there are only finitely many pairs of consecutive integers having only the numbers from that set as their prime factors. In addition, Størmer described an algorithm for finding all such pairs. The superparticular ratios generated by these consecutive pairs are of particular importance in music theory. Størmer proved this theorem by reducing the problem to a finite set of Pell equations, and the theorem itself can also be interpreted as describing the possible factorizations of solutions to Pell's equation. Chapman quotes Louis Mordell as saying "His result is very pretty, and there are many applications of it." Additional subjects of Størmer's mathematical research included Lie groups, the gamma function, and Diophantine approximation of algebraic numbers and of the transcendental numbers arising from elliptic functions. From 1905 Størmer was an editor of the journal Acta Mathematica, and he was also an editor of the posthumously published mathematical works of Niels Henrik Abel and Sophus Lie. Astrophysical research From 1903, when Størmer first observed Kristian Birkeland's experimental attempts to explain the aurora borealis, he was fascinated by aurorae and related phenomena. His first work on the subject attempted to model mathematically the paths taken by charged particles perturbed by the influence of a magnetized sphere, and Størmer eventually published over 48 papers on the motion of charged particles. By modeling the problem using differential equations and polar coordinates, Størmer was able to show that the radius of curvature of any particle's path is proportional to the square of its distance from the sphere's center. To solve the resulting differential equations numerically, he used Verlet integration, which is therefore also known as Störmer's method. Ernst Brüche and Willard Harrison Bennett experimentally verified Størmer's predicted particle motions; Bennett called his experimental apparatus the "Störmertron" in honor of Størmer. Størmer's calculations showed that small variations in the trajectories of particles approaching the Earth would be magnified by the effects of the Earth's magnetic field, explaining the convoluted shapes of aurorae. Størmer also considered the possibility that particles might be trapped within the geomagnetic field, and worked out the orbits of these trapped particles. Størmer's work on this subject applies to what are today called the magnetospheric ring current and Van Allen radiation belts. As well as modeling these phenomena mathematically, Størmer took many photographs of aurorae from 20 different observatories across Norway. 
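The Störmer method mentioned above (today usually called Störmer–Verlet integration) is simple to state in code. The Java sketch below applies it to a generic second-order equation x'' = a(x); the harmonic oscillator used in main is only a stand-in test problem, not Størmer's far more involved equations for charged-particle motion, and all names are illustrative.

import java.util.function.DoubleUnaryOperator;

// Störmer–Verlet update for x'' = a(x): x_{n+1} = 2*x_n - x_{n-1} + a(x_n)*dt^2
public final class StormerVerlet {
    static double[] integrate(DoubleUnaryOperator accel,
                              double x0, double v0, double dt, int steps) {
        double[] x = new double[steps + 1];
        x[0] = x0;
        // The two-step scheme needs a second starting value; a Taylor step supplies it.
        x[1] = x0 + v0 * dt + 0.5 * accel.applyAsDouble(x0) * dt * dt;
        for (int n = 1; n < steps; n++) {
            x[n + 1] = 2.0 * x[n] - x[n - 1] + accel.applyAsDouble(x[n]) * dt * dt;
        }
        return x;
    }

    public static void main(String[] args) {
        // Test problem: a(x) = -x, starting at x = 1 with zero velocity; exact solution is cos(t).
        double[] traj = integrate(xi -> -xi, 1.0, 0.0, 0.01, 1000);
        System.out.printf("x(t=10) = %.4f, exact cos(10) = %.4f%n", traj[1000], Math.cos(10.0));
    }
}

A recurrence of this form, applied step by step to the particle's position under the appropriate acceleration, is the scheme the article refers to as Störmer's method.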
He measured their heights and latitudes by triangulation from multiple observatories, and showed that the aurora are typically as high as 100 kilometers above ground. He classified them by their shapes, and discovered in 1926 the "solar-illuminated aurora", a phenomenon that can occur at twilight when the upper parts of an aurora are lit by the sun; these aurorae can be as high as 1000 km above ground. Størmer's book, From the Depths of Space to the Heart of the Atom, describing his work in this area, was translated into five different languages from the original Norwegian. A second book, The Polar Aurora (Oxford Press, 1955), contains both his experimental work on aurorae and his mathematical attempts to model them. In his review of this book, Canadian astronomer John F. Heard calls Størmer "the acknowledged authority" on aurorae. Heard writes, "The Polar Aurora will undoubtedly remain for many years a standard reference book; it belongs on the desk of anyone whose work or interest is involved with aurorae." Other astrophysical phenomena investigated by Størmer include pulsations of the earth's magnetic field, echoing in radio transmissions, nacreous clouds and noctilucent clouds, zodiacal light, meteor trails, the solar corona and solar vortices, and cosmic rays. Awards and honors Størmer was a Foreign Member of the Royal Society (ForMemRS) and a Corresponding Member of the French Academy of Sciences. He was also a member of the Norwegian Academy of Science and Letters from 1900. He was given honorary degrees by Oxford University (in 1947), the University of Copenhagen (1951), and the Sorbonne (1953), and in 1922 the French Academy awarded him their Janssen Medal. Three times Størmer was a plenary speaker in the International Congress of Mathematicians (1908 in Rome, 1924 in Toronto, and 1936 in Oslo); he was an invited speaker of the ICM in 1920 in Strasbourg and in 1932 in Zurich. In 1971, the crater Störmer on the far side of the Moon was named after him. In 1902, Størmer was decorated with King Oscar II's Medal of Merit in gold. He was also decorated as a Knight, First Order of the Order of St. Olav in 1939. He was upgraded to Grand Cross of the Order of St. Olav in 1954. References 1874 births 1957 deaths People from Skien University of Oslo alumni University of Paris alumni Academic staff of the University of Oslo Norwegian astronomers Norwegian mathematicians Norwegian physicists Number theorists 20th-century astronomers Members of the Norwegian Academy of Science and Letters Foreign members of the Royal Society Members of the French Academy of Sciences Presidents of the Norwegian Mathematical Society
Carl Størmer
Mathematics
2,044
14,806,503
https://en.wikipedia.org/wiki/Blocking%20%28computing%29
In computing, a process that is blocked is waiting for some event, such as a resource becoming available or the completion of an I/O operation. Once the event the process is waiting on ("is blocked on") occurs, the process is advanced from the blocked state to a ready state, such as runnable. In a multitasking computer system, individual tasks, or threads of execution, must share the resources of the system. Shared resources include the CPU, the network and network interfaces, memory, and disk. When one task is using a resource, it is generally not possible, or desirable, for another task to access it. The techniques of mutual exclusion are used to prevent this concurrent use. When the other task is blocked, it is unable to execute until the first task has finished using the shared resource. Programming languages and scheduling algorithms are designed to minimize the overall effect of blocking. A process that blocks may prevent local work-tasks from progressing. In this case, blocking is often seen as undesirable. However, such work-tasks may instead have been assigned to independent processes, where halting one has little to no effect on the others, since scheduling will continue. An example is "blocking on a channel", where passively waiting for the other side (i.e. no polling or spin loop) is part of the semantics of channels. Correctly engineered, any of these may be used to implement reactive systems. Deadlock means that processes pathologically wait for each other in a circle; as such, it is a distinct condition and not directly associated with blocking. See also Concurrent computing Data dependency Non-blocking algorithm Race condition Scheduling (computing) References Computing terminology Inter-process communication Input/output Scheduling (computing)
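As a concrete illustration of blocking, in the Java sketch below the consumer thread blocks inside take() until another thread makes an element available; this is a generic example built on the standard java.util.concurrent API, not code from any particular system.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public final class BlockingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

        Thread consumer = new Thread(() -> {
            try {
                // take() blocks: the thread leaves the running state until
                // another thread inserts an element into the queue.
                String msg = queue.take();
                System.out.println("Received: " + msg);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        Thread.sleep(500);   // by now the consumer is blocked, waiting on the empty queue
        queue.put("event");  // the event occurs; the consumer becomes runnable again
        consumer.join();
    }
}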
Blocking (computing)
Technology
349
68,003,371
https://en.wikipedia.org/wiki/C/2010%20U3%20%28Boattini%29
C/2010 U3 (Boattini) is the hyperbolic comet with the longest observation arc; it took around a million years to complete half an orbit, falling inward from its furthest distance in the Oort cloud. It was discovered on 31 October 2010 by Andrea Boattini in images taken with the Mount Lemmon Survey's 1.5-m reflector. The perihelion point is outside of the inner Solar System. The comet has an observation arc of 15 years, allowing a very good estimate of the inbound (original) and outbound (future) orbits. The orbit of a long-period comet is properly obtained when the osculating orbit is computed at an epoch after the comet has left the planetary region and is calculated with respect to the center of mass of the Solar System. Inbound JPL Horizons shows an epoch 1950 barycentric orbital period of 2.2 million years with aphelion of from the Sun. Hui et al. (2019) give a similar inbound orbital period of 2 million years. Outbound With an epoch of 2050, JPL Horizons shows a period of approximately 350,000 years and an aphelion distance of . The generic JPL Small-Body Database browser uses a near-perihelion epoch of 2017-Jun-01, which is before the comet left the planetary region and makes the highly eccentric aphelion point inaccurate, since it does not account for any planetary perturbations after that epoch. The heliocentric JPL Small-Body Database solution also does not account for the combined mass of the Sun+Jupiter. Precovery images from November 2005, when the comet was already active at a large distance from the Sun, are known. The comet was seen to outburst in 2009 and 2017. The coma and tail consist of dust grains about 20 μm in diameter ejected at less than . Supervolatiles such as CO and CO2 can generate activity when a comet is this far from the Sun. References Non-periodic comets 20101031 Oort cloud
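The periods and aphelion distances quoted above are related by Kepler's third law: for a body orbiting the Sun, the period in years equals the semi-major axis in astronomical units raised to the power 3/2. The Java sketch below shows the arithmetic; the 17,000 AU input is simply the semi-major axis implied by the quoted 2.2-million-year inbound period, used here as a worked check rather than a figure from the cited orbit solutions.

public final class KeplerPeriod {
    // Kepler's third law for a small body orbiting the Sun: P [years] = a^(3/2) with a in AU.
    static double periodYears(double semiMajorAxisAu) {
        return Math.pow(semiMajorAxisAu, 1.5);
    }

    public static void main(String[] args) {
        double a = 17_000.0;           // semi-major axis in AU (illustrative)
        double period = periodYears(a);
        double aphelion = 2.0 * a;     // for an eccentricity close to 1, aphelion is close to 2a
        System.out.printf("a = %.0f AU -> P = %.2e years, aphelion ~ %.0f AU%n",
                a, period, aphelion);
    }
}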
C/2010 U3 (Boattini)
Astronomy
401
8,209,995
https://en.wikipedia.org/wiki/Mazut
Mazut () is a low-quality heavy fuel oil, used in power plants and similar applications in Iran and some countries of the former Soviet Union. In the West, through fluid catalytic cracking, mazut is distilled into diesel and other light distillates. Mazut may be used for heating houses in some parts of the former USSR and in countries of the Far East that do not have the facilities to blend or break it down into more conventional petro-chemicals. Mazut is burned in Iran to compensate for the shortage of natural gas but it has caused environmental problems, such as huge amounts of air pollution in big cities such as Tehran. Mazut-100 is a fuel oil that is manufactured to GOST specifications, for example, GOST 10585-75 (not active) or GOST 10585-2013 (active as per December 2019). Mazut is almost exclusively manufactured in Russia, Kazakhstan, Azerbaijan, and Turkmenistan. This product is typically used for larger boilers in producing steam, since the energy value is high. The most important factor when grading this fuel is the sulfur content, which can mostly be affected by the source feedstock. For shipment purposes, this product is considered a "dirty oil" product, and because viscosity drastically affects whether it is able to be pumped, shipping has unique requirements. Much like No. 6 fuel oil (Bunker C), mazut is a refinery residual product, that is, products left over after gasoline, diesel, and other light distillates are distilled from crude oil except, unlike bunker fuel, mazut is produced from much lower grade feedstocks. Different types of Mazut-100 The main difference between the different types of Mazut-100 is the content of sulphur. The grades are represented by these sulfuric levels: "Very low sulphur" is mazut with a sulphur content of 0.5% "Low sulphur" is a mazut with a sulphur content of 0.5–1.0% "Normal sulphur" is a mazut with a sulphur content of 1.0–2.0% "High sulphur" is a mazut with a sulphur content of 2.0–3.5% Very-low-sulphur mazut is generally made from the lowest-sulfur crude feedstocks. It has a very limited volume to be exported because: The number of producers in Russia is limited. Refineries that produce this are generally owned by the largest domestic oil companies, such as Lukoil and Rosneft, etc. In Russia and the CIS countries a minimum of 50% from the total produced volume is sold only to domestic consumers in Russia and the CIS. Most of the remainder amount is reserved by state quotas for state-controlled companies abroad. The remaining volume available for export is sold according to state quotas, via state auctions, accessible only to Russian domestic companies. Low- to high-sulfur mazut is available from Russia and other CIS countries (Kazakhstan, Azerbaijan, Turkmenistan). The technical specifications are represented in the same way, according to the Russian GOST 10585-99. The Russian origin mazut demands higher prices. References Oils Petroleum products Petroleum in the Soviet Union
Mazut
Chemistry
692
4,604,270
https://en.wikipedia.org/wiki/Chemistry%20Development%20Kit
The Chemistry Development Kit (CDK) is computer software, a library in the programming language Java, for chemoinformatics and bioinformatics. It is available for Windows, Linux, Unix, and macOS. It is free and open-source software distributed under the GNU Lesser General Public License (LGPL) 2.0. History The CDK was created by Christoph Steinbeck, Egon Willighagen and Dan Gezelter, then developers of Jmol and JChemPaint, to provide a common code base, on 27–29 September 2000 at the University of Notre Dame. The first source code release was made on 11 May 2011. Since then more than 100 people have contributed to the project, leading to a rich set of functions, as given below. Between 2004 and 2007, CDK News was the project's newsletter of which all articles are available from a public archive. Due to an unsteady rate of contributions, the newsletter was put on hold. Later, unit testing, code quality checking, and Javadoc validation was introduced. Rajarshi Guha developed a nightly build system, named Nightly, which is still operating at Uppsala University. In 2012, the project became a support of the InChI Trust, to encourage continued development. The library uses JNI-InChI to generate International Chemical Identifiers (InChIs). In April 2013, John Mayfield (né May) joined the ranks of release managers of the CDK, to handle the development branch. Library The CDK is a library, instead of a user program. However, it has been integrated into various environments to make its functions available. CDK is currently used in several applications, including the programming language R, CDK-Taverna (a Taverna workbench plugin), Bioclipse, PaDEL, and Cinfony. Also, CDK extensions exist for Konstanz Information Miner (KNIME) and for Excel, called LICSS (). In 2008, bits of GPL-licensed code were removed from the library. While those code bits were independent from the main CDK library, and no copylefting was involved, to reduce confusions among users, the ChemoJava project was instantiated. Major features Chemoinformatics 2D molecule editor and generator 3D geometry generation ring finding substructure search using exact structures and Smiles arbitrary target specification (SMARTS) like query language QSAR descriptor calculation fingerprint calculation, including the ECFP and FCFP fingerprints force field calculations many input-output chemical file formats, including simplified molecular-input line-entry system (SMILES), Chemical Markup Language (CML), and chemical table file (MDL) structure generators International Chemical Identifier support, via JNI-InChI Bioinformatics protein active site detection cognate ligand detection metabolite identification pathway databases 2D and 3D protein descriptors General Python wrapper; see Cinfony Ruby wrapper active user community See also Bioclipse – an Eclipse–RCP based chemo-bioinformatics workbench Blue Obelisk JChemPaint – Java 2D molecule editor, applet and application Jmol – Java 3D renderer, applet and application JOELib – Java version of Open Babel, OELib List of free and open-source software packages List of software for molecular mechanics modeling References External links CDK Wiki – the community wiki Planet CDK - a blog planet CDK Depict OpenScience.org Bioinformatics software Chemistry software for Linux Computational chemistry software Free chemistry software Free software programmed in Java (programming language)
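As an illustration of how the library is used from Java, the fragment below parses a SMILES string into an atom container and reports simple counts. The class and package names follow CDK's commonly documented API (SmilesParser, SilentChemObjectBuilder, IAtomContainer), but exact packages and signatures vary between CDK versions and should be checked against the release in use; this is a sketch, not authoritative usage.

import org.openscience.cdk.exception.InvalidSmilesException;
import org.openscience.cdk.interfaces.IAtomContainer;
import org.openscience.cdk.silent.SilentChemObjectBuilder;
import org.openscience.cdk.smiles.SmilesParser;

public final class CdkExample {
    public static void main(String[] args) throws InvalidSmilesException {
        // Parse a SMILES string (caffeine) into a CDK molecule object.
        SmilesParser parser = new SmilesParser(SilentChemObjectBuilder.getInstance());
        IAtomContainer mol = parser.parseSmiles("Cn1cnc2c1c(=O)n(C)c(=O)n2C");

        // Heavy-atom and bond counts; implicit hydrogens are not counted as atoms.
        System.out.println("Atoms: " + mol.getAtomCount());
        System.out.println("Bonds: " + mol.getBondCount());
    }
}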
Chemistry Development Kit
Chemistry,Biology
754
648,134
https://en.wikipedia.org/wiki/Virtual%20environment
A virtual environment is a networked application that allows a user to interact with both the computing environment and the work of other users. Email, chat, and web-based document sharing applications are all examples of virtual environments. Simply put, it is a networked common operating space. Once the fidelity of the virtual environment is such that it "creates a psychological state in which the individual perceives himself or herself as existing within the virtual environment" then the virtual environment (VE) has progressed into the realm of immersive virtual environments (IVEs). Types of Virtual Environments Collaborative Virtual Environments (CVEs): These environments support real-time interaction between multiple users, often represented by avatars. Popular platforms include Second Life, Slack, and Zoom, which are used for collaboration in education and remote work. CVEs foster teamwork by simulating shared spaces for communication and resource sharing. Immersive Virtual Environments (IVEs): IVEs use VR headsets and motion tracking to create highly realistic environments. IVEs are applied in fields such as healthcare for surgical training, in the military for simulation-based training, and in psychotherapy to treat anxiety disorders through virtual exposure therapy. Gaming and entertainment industries also heavily employ IVEs for fully immersive experiences. Augmented Virtual Environments (AVEs): AVEs combine virtual reality (VR) with augmented reality (AR) elements, where users can see and interact with virtual objects superimposed on the real world. Devices such as Microsoft HoloLens and Google Glass are examples of AVEs, offering applications in industrial training, remote assistance, and collaborative design. Applications of Virtual Environments Education: VEs are revolutionizing education through virtual classrooms and labs, allowing remote students to engage in interactive learning experiences. Platforms like Google Classroom, and Blackboard provide tools for resource sharing, assessments, and live interactions . Virtual Meetings and Remote Work: VEs have transformed business operations, especially with the increased demand for remote work. Tools such as Zoom, Microsoft Teams, and Cisco Webex enable virtual meetings, allowing real-time collaboration across geographical boundaries. These platforms incorporate features like screen sharing and document collaboration. Training Simulations: VEs are critical in fields that require practical training in controlled environments. For example, flight simulators allow pilots to practice flight maneuvers in a safe virtual space, while medical professionals use virtual surgical simulators to improve their skills . Technological Components 3D Graphics: VEs rely on 3D graphics to create interactive, lifelike spaces. Engines like Unity and Unreal Engine are widely used to develop these environments, simulating physical attributes such as texture, depth, and lighting . Networked Communication: Real-time communication and interaction between users depend on stable, high-speed networks. Peer-to-peer (P2P) and client-server architectures are commonly used to synchronize data between users in CVEs and IVEs . Haptic Feedback: To enhance immersion, VEs often incorporate haptic feedback technology, which provides tactile responses to users interacting with virtual objects. This is particularly useful in IVEs where physical sensations can simulate real-world interactions. 
Future of Virtual Environments The next generation of virtual environments will likely see advancements in AI-driven avatars, full-body motion tracking, and enhanced haptic feedback. These innovations will further blur the line between physical and virtual spaces, offering more realistic and immersive experiences. VEs are expected to continue impacting fields such as healthcare, education, and entertainment, with widespread adoption of VR and AR technologies. Notes References Blascovich, J. (2002). Social Influence within Immersive Virtual Environments. In R. Schroeder (Ed.), The Social Life of Avatars: Presence and Interaction in Shared Virtual Environments (pp. 127-145). London: Springer. Fox, J., (2009). Virtual Reality: A Survival Guide for the Social Scientist. Arena, D., & Bailenson, J. N. Journal of Media Psychology, 21(3), 95-113. Huang, R., (2020). The Impact of VR and AR on STEM Education. Ritzhaupt, A. D., Sommer, M., & Zhu, J. Educational Technology Research and Development, 68, 179–180. Virtualization Human–computer interaction
Virtual environment
Technology,Engineering
872
28,846,141
https://en.wikipedia.org/wiki/HTC%207%20Pro
The HTC 7 Pro (also known as the HTC Arrive) is a business class smartphone, part of the HTC 7 series of Internet-enabled, Windows Phone smartphones designed and marketed by HTC Corporation. It is the successor of the HTC Touch Pro2 with a left-side slide-out QWERTY keyboard, with tilting screen. The CDMA variation of the HTC 7 Pro, known on Sprint Nextel as the HTC Arrive, became available on the Sprint Nextel CDMA network on 20 March 2011. It is also available on US Cellular and Alltel Wireless under its original name of the HTC 7 Pro. The phone initially comes with the 7.0.7389.0 firmware version, which includes "NoDo" update (March Update) that has features and fixes, which include improvements to functions like copy & paste, faster apps and games, better Marketplace search, Wi-Fi, Outlook, Facebook integration, camera, audio, and other performance improvements. See also Windows Phone References Windows Phone devices HTC smartphones Mobile phones introduced in 2011 Mobile phones with an integrated hardware keyboard Discontinued flagship smartphones pl:HTC 7 Pro
HTC 7 Pro
Technology
241
2,383,266
https://en.wikipedia.org/wiki/Apo2.7
Apo2.7 is a protein confined to the mitochondrial membrane. It can be detected during early stages of apoptosis. It can be used to detect apoptosis via flow cytometry. References Apoptosis Proteins
Apo2.7
Chemistry
49
14,410,501
https://en.wikipedia.org/wiki/CELSR2
Cadherin EGF LAG seven-pass G-type receptor 2 is a protein that in humans is encoded by the CELSR2 gene. The protein encoded by this gene is a member of the flamingo subfamily, part of the cadherin superfamily. The flamingo subfamily consists of nonclassic-type cadherins; a subpopulation that does not interact with catenins. The flamingo cadherins are located at the plasma membrane and have nine cadherin domains, seven epidermal growth factor-like repeats and two laminin A G-type repeats in their ectodomain. They also have seven transmembrane domains, a characteristic unique to this subfamily. It is postulated that these proteins are receptors involved in contact-mediated communication, with cadherin domains acting as homophilic binding regions and the EGF-like domains involved in cell adhesion and receptor-ligand interactions. The specific function of this particular member has not been determined. See also Flamingo (protein) References Further reading External links Adhesion G protein-coupled receptors G protein-coupled receptors
CELSR2
Chemistry
229
37,007,779
https://en.wikipedia.org/wiki/1971%20Aral%20smallpox%20incident
The Aral smallpox incident was a 30 July 1971 outbreak of the viral disease which occurred as a result of a field test at a Soviet biological weapons (BW) facility on an island in the Aral Sea. The incident sickened ten people, of whom three died, and came to widespread public notice only in 2002. Background In 1954, an existing biological weapons test site originally constructed on Vozrozhdeniya Island in the Aral Sea in 1948 was greatly expanded by the Soviet Ministry of Defence, including to the neighboring Komsomolskiy Island, and named Aralsk-7. A field scientific research laboratory to conduct biological experiments was expanded, and the town of Kantubek was constructed to house employees and scientists. Bio-agents tested there included Bacillus anthracis, Coxiella burnetii, Francisella tularensis, Brucella suis, Rickettsia prowazekii, Variola major (smallpox), Yersinia pestis, botulinum toxin, and Venezuelan equine encephalitis virus. (By 1960, the Soviet biological weapons program also included numerous other research and operational facilities throughout the country.) Aralsk-7 had a history of association with mass deaths of fish, various regional plague outbreaks, a saiga antelope die-off, and individual cases of infectious disease among visitors to Vozrozhdeniya Island. The incident According to Soviet General Pyotr Burgasov (Peter Burgasov), field testing of 400 grams of smallpox at Aralsk-7 caused an outbreak on 30 July 1971. Burgasov, former Chief Sanitary Physician of the Soviet Army, former Soviet Vice-Minister of Health and a senior researcher within the Soviet BW program, described the incident: On Vozrozhdeniya Island in the Aral Sea, the strongest recipes of smallpox were tested. Suddenly I was informed that there were mysterious cases of mortalities in Aralsk (Aral). A research ship [the Lev Berg] of the Aral fleet came to within 15 km of the island (it was forbidden to come any closer than 40 km). The lab technician of this ship took samples of plankton twice a day from the top deck. The smallpox formulation—400 gr. of which was exploded on the island—"got her" and she became infected. After returning home to Aralsk, she infected several people including children. All of them died. I suspected the reason for this and called the Chief of General Staff of Ministry of Defense and requested to forbid the stop of the Alma-Ata-Moscow train in Aralsk. As a result, the epidemic around the country was prevented. I called [future Soviet General Secretary Yuri] Andropov, who at that time was Chief of KGB, and informed him of the exclusive recipe of smallpox obtained on Vozrazhdenie Island. There is a contending belief that the disease actually spread to the Lev Berg from Uyaly or Komsomolsk-on-Ustyurt, two cities in what is now Uzbekistan where the ship docked. The incident caused ten individuals to contract smallpox and three unvaccinated individuals (a woman and two children) died from the haemorrhagic form of the disease. One crew member of the Lev Berg contracted smallpox as the ship passed within 15 km (9 miles) of the island. This crew member became ill on 6 August with fever, headache, and myalgia. The ship then landed in the port city of Aral on 11 August. The ill crew member returned to her home, and she developed a cough and temperature exceeding 38.9 °C (102 °F). Her physician prescribed antibiotics and aspirin. 
Although she was previously vaccinated for smallpox, a rash subsequently appeared on her back, face, and scalp; her fever subsided; and she recovered by 15 August. On 27 August this patient's 9-year-old brother developed a rash and fever, his pediatrician prescribed tetracycline and aspirin, and he recovered. During the following three weeks, eight additional cases of fever and rash occurred in Aral. Five adults ranging in age from 23 to 60, and three children (4 and 9 months old, and a 5-year-old) were diagnosed with smallpox both clinically and by laboratory testing. These children and the 23-year-old were previously unvaccinated. The two youngest children and the 23-year-old subsequently developed the haemorrhagic form of smallpox and died. The remaining individuals had previously been vaccinated, and all recovered after having an attenuated form of the disease. The high ratio of haemorrhagic smallpox cases in this outbreak, combined with the rate of infectivity and the testimony of General Burgasov, has led to the understanding that an enhanced weaponized strain of smallpox virus was released from Aralsk-7 in 1971. Response A massive public health response to the smallpox cases in Aral ensued once the disease was recognized. In less than two weeks, approximately 50,000 residents of Aral were vaccinated. Household quarantine of potentially exposed individuals was enacted, and hundreds were isolated in a makeshift facility at the edge of the city. All traffic in and out of the city was stopped, and approximately 5.000 sq. meter (54.000 sq. ft.) of living space and 18 metric tons of household goods were incinerated by health officials. References This article also contains information that originally came from US Government publications and websites and is in the public domain. Biological warfare Non-combat military accidents 1971 in the Soviet Union 1971 industrial disasters 1971 disasters in the Soviet Union 1971 health disasters Smallpox epidemics Health disasters in Russia Soviet cover-ups Health in the Soviet Union Aral Sea Smallpox eradication July 1971 events in Asia Soviet biological weapons program
1971 Aral smallpox incident
Biology
1,209
18,766,868
https://en.wikipedia.org/wiki/ELB-139
ELB-139 (LS-191,811) is an anxiolytic drug with a novel chemical structure, which is used in scientific research. It has similar effects to benzodiazepine drugs, but is structurally distinct and so is classed as a nonbenzodiazepine anxiolytic. ELB-139 is a subtype-selective partial agonist at GABAA receptors, with highest affinity for the α3 subtype, but highest efficacy at α1 and α2. It has primarily anxiolytic and anticonvulsant effects, but produces little sedation or ataxia, and has also been demonstrated in rats to increase serotonin levels in the striatum and prefrontal cortex without affecting dopamine levels. It has been proposed as a possible candidate for a novel non-sedating anxiolytic or anticonvulsant drug for use in humans. The sponsor registered a clinical trial on ClinicalTrials.gov for the treatment of anxiety associated with panic disorder, but the results have not been reported. It was developed by Arzneimittelwerk Dresden in the 1990s. References Anxiolytics 4-Chlorophenyl compounds Imidazolines Ureas Lactams 1-Piperidinyl compounds GABAA receptor positive allosteric modulators
ELB-139
Chemistry
281
54,278,545
https://en.wikipedia.org/wiki/Skeletocutis%20subodora
Skeletocutis subodora is a species of poroid crust fungus in the family Polyporaceae. It was described as a new species by mycologists Josef Vlasák and Leif Ryvarden in 2012. The type specimen was collected in the Crater Lake visitor's centre in Oregon, United States, where it was growing on a log of Douglas fir. It is named after its similarity to Skeletocutis odora, from which it differs in microscopic characteristics, including its thick subiculum, non-allantoid (sausage-shaped) spores, large cystidioles, and monomitic flesh. References Fungi described in 2012 Fungi of the United States subodora Taxa named by Leif Ryvarden Fungi without expected TNC conservation status Fungus species
Skeletocutis subodora
Biology
164
1,636,593
https://en.wikipedia.org/wiki/Collaboratory
A collaboratory, as defined by William Wulf in 1989, is a “center without walls, in which the nation’s researchers can perform their research without regard to physical location, interacting with colleagues, accessing instrumentation, sharing data and computational resources, [and] accessing information in digital libraries” (Wulf, 1989). Bly (1998) refines the definition to “a system which combines the interests of the scientific community at large with those of the computer science and engineering community to create integrated, tool-oriented computing and communication systems to support scientific collaboration” (Bly, 1998, p. 31). Rosenberg (1991) considers a collaboratory as being an experimental and empirical research environment in which scientists work and communicate with each other to design systems, participate in collaborative science, and conduct experiments to evaluate and improve systems. A simplified form of these definitions would describe the collaboratory as being an environment where participants make use of computing and communication technologies to access shared instruments and data, as well as to communicate with others. However, a wide-ranging definition is provided by Cogburn (2003) who states that “a collaboratory is more than an elaborate collection of information and communications technologies; it is a new networked organizational form that also includes social processes; collaboration techniques; formal and informal communication; and agreement on norms, principles, values, and rules” (Cogburn, 2003, p. 86). This concept has a lot in common with the notions of Interlock research, Information Routing Group and Interlock diagrams introduced in 1984. Other meaning The word “collaboratory” is also used to describe an open space, creative process where a group of people work together to generate solutions to complex problems. This meaning of the word originates from the visioning work of a large group of people – including scholars, artists, consultant, students, activists, and other professionals – who worked together on the 50+20 initiative aiming at transforming management education. In this context, by fusing two elements, “collaboration” and “laboratory”, the word “collaboratory” suggests the construction of a space where people explore collaborative innovations. It is, as defined by Dr. Katrin Muff, “an open space for all stakeholders where action learning and action research join forces, and students, educators, and researchers work with members of all facets of society to address current dilemmas.” The concept of the collaboratory as a creative group process and its application are further developed in the book “The Collaboratory: A co-creative stakeholder engagement process for solving complex problems”. Examples of collaboratory events are provided on the website of the Collaboratory community as well as by Business School Lausanne- a Swiss business school that has adopted the collaboratory method to harness collective intelligence. Background Problems of geographic separation are especially present in large research projects. The time and cost for traveling, the difficulties in keeping contact with other scientists, the control of experimental apparatus, the distribution of information, and the large number of participants in a research project are just a few of the issues researchers are faced with. Therefore, collaboratories have been put into operation in response to these concerns and restrictions. 
However, development and implementation have proved to be far from inexpensive. From 1992 to 2000, financial budgets for scientific research and development of collaboratories ranged from US$447,000 to US$10,890,000 and the total use ranged from 17 to 215 users per collaboratory (Sonnenwald, 2003). Particularly high costs occurred when software packages were not available for purchase and direct integration into the collaboratory or when requirements and expectations were not met. Chin and Lansing (2004) state that the research and development of scientific collaboratories had, thus far, a tool-centric approach. The main goal was to provide tools for shared access and manipulation of specific software systems or scientific instruments. Such an emphasis on tools was necessary in the early development years of scientific collaboratories due to the lack of basic collaboration tools (e.g. text chat, synchronous audio or videoconferencing) to support rudimentary levels of communication and interaction. Today, however, such tools are available in off-the-shelf software packages such as Microsoft NetMeeting, IBM Lotus Sametime, Mbone Videoconferencing (Chin and Lansing, 2004). Therefore, the design of collaboratories may now move beyond developing general communication mechanisms to evaluating and supporting the very nature of collaboration in the scientific context (Chin & Lansing, 2004). The evolution of the collaboratory As stated in Chapter 4 of the 50+20 "Management Education for the World" book, "the term collaboratory was first introduced in the late 1980s to address problems of geographic separation in large research projects related to travel time and cost, difficulties in keeping contact with other scientists, control of experimental apparatus, distribution of information, and the large number of participants. In their first decade of use, collaboratories were seen as complex and expensive information and communication technology (ICT) solutions supporting 15 to 200 users per project, with budgets ranging from 0.5 to 10 million USD. At that time, collaboratories were designed from an ICT perspective to serve the interests of the scientific community with tool-oriented computing requirements, creating an environment that enabled systems design and participation in collaborative science and experiments. The introduction of a user-centered approach provided a first evolutionary step in the design philosophy of the collaboratory, allowing rapid prototyping and development cycles. Over the past decade the concept of the collaboratory expanded beyond that of an elaborate ICT solution, evolving into a “new networked organizational form that also includes social processes, collaboration techniques, formal and informal communication, and agreement on norms, principles, values, and rules”. The collaboratory shifted from being a tool-centric to a data-centric approach, enabling data sharing beyond a common repository for storing and retrieving shared data sets. These developments have led to the evolution of the collaboratory towards globally distributed knowledge work that produces intangible goods and services capable of being both developed and distributed around the world using traditional ICT networks. Initially, the collaboratory was used in scientific research projects with variable degrees of success. In recent years, collaboratory models have been applied to areas beyond scientific research and the national context. 
The wide acceptance of collaborative technologies in many parts of the world opens promising opportunities for international cooperation in critical areas where societal stakeholders are unable to work out solutions in isolation, providing a platform for large multidisciplinary teams to work on complex global challenges. The emergence of open-source technology transformed the collaboratory into its next evolution. The term open-source was adopted by a group of people in the free software movement in Palo Alto in 1998 in reaction to the source code release of the Netscape Navigator browser. Beyond providing a pragmatic methodology for free distribution and access to an end product's design and implementation details, open-source represents a paradigm shift in the philosophy of collaboration. The collaboratory has proven to be a viable solution for the creation of a virtual organization. Increasingly, however, there is a need to expand this virtual space into the real world. We propose another paradigm shift, moving the collaboratory beyond its existing ICT framework to a methodology of collaboration beyond the tool- and data-centric approaches, and towards an issue-centered approach that is transdisciplinary in nature." Characteristics and considerations A distinctive characteristic of collaboratories is that they focus on data collection and analysis. Hence the interest in applying collaborative technologies to support data sharing as opposed to tool sharing. Chin and Lansing (2004) explore the shift of collaboratory development from traditional tool-centric approaches to more data-centric ones, to effectively support data sharing. This means more than just providing a common repository for storing and retrieving shared data sets. Collaboration, Chin and Lansing (2004) state, is driven both by the need to share data and to share knowledge about data. Shared data is only useful if sufficient context is provided about the data such that collaborators may comprehend and effectively apply it. It is therefore imperative, according to Chin and Lansing (2004), to know and understand how data sets relate to aspects of the overall data space, applications, experiments, projects, and the scientific community, identifying critical features or properties, among which are: General data set properties (owner, creation date, size, format); Experimental properties (conditions of the scientific experiment that generated that data); Data provenance (relationship with previous versions); Integration (relationship of data subsets within the full data set); Analysis and interpretation (notes, experiences, interpretations, and knowledge produced); Scientific organization (scientific classification or hierarchy); Task (research task that generated or applies the data set); Experimental process (relationship of data and tasks to the overall process); User community (application of data set to different users). Henline (1998) argues that communication about experimental data is another important characteristic of a collaboratory. By focusing attention on the dynamics of information exchange, the study of the Zebrafish Information Network Project (Henline, 1998) concluded that the key challenges in creating a collaboratory may be social rather than technical. “A successful system must respect existing social conventions while encouraging the development of analogous mechanisms within the new electronic forum” (Henline, 1998, p. 69). 
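The data-set context properties listed above can be pictured as a single metadata record that travels with each shared data set. The following is a minimal illustrative sketch in Python; it is not code from Chin and Lansing (2004) or from any actual collaboratory, and all class and field names are assumptions chosen only to mirror the list above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical metadata record for a shared data set; each field corresponds
# to one of the context properties Chin and Lansing (2004) enumerate.
@dataclass
class DataSetRecord:
    owner: str                       # general properties
    creation_date: str
    size_bytes: int
    data_format: str
    experimental_conditions: dict    # conditions that generated the data
    parent_version: Optional[str]    # data provenance: previous version, if any
    subsets: List[str] = field(default_factory=list)         # integration
    analysis_notes: List[str] = field(default_factory=list)  # interpretation
    classification: str = ""         # scientific organization / hierarchy
    originating_task: str = ""       # research task that generated the data
    process_step: str = ""           # place in the overall experimental process
    user_communities: List[str] = field(default_factory=list)

# A collaborator reading this record can judge how the data were produced and
# where they fit before deciding whether to reuse them.
record = DataSetRecord(
    owner="jdoe", creation_date="2004-05-17", size_bytes=2_048_000,
    data_format="csv",
    experimental_conditions={"instrument": "mass spectrometer", "temperature_C": 21},
    parent_version="v2",
)
```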
Similar observations were made in the Computer-supported collaborative learning (CSCL) case study (Cogburn, 2003). The author (Cogburn, 2003) investigated a collaboratory established for researchers in education and other related domains from the United States of America and southern Africa. The main finding was that there were important intellectual contributions on both sides, although the context was that of a developed country working together with a developing one, and there were social as well as cultural barriers. He further develops the idea that a successful CSCL would need to draw the best lessons learned on both sides in computer-mediated communication (CMC) and computer-supported cooperative work (CSCW). Sonnenwald (2003) conducted seventeen interviews with scientists and revealed important considerations. Scientists expect a collaboratory to “support their strategic plans; facilitate management of the scientific process; have a positive or neutral impact on scientific outcomes; provide advantages and disadvantages for scientific task execution; and provide personal conveniences when collaborating across distances” (Sonnenwald, 2003, p. 68). Many scientists looked at the collaboratory as a means to achieve strategic goals that were organizational and personal in nature. Other scientists anticipated that the scientific process would speed up when they had access to the collaboratory. Design philosophy Finholt (1995), based on the case studies of the Upper Atmospheric Research Collaboratory (UARC) and the Medical Collaboratory, establishes a design philosophy: a collaboratory project must be dedicated to a user-centered design (UCD) approach. This means a commitment to develop software in programming environments that allow rapid prototyping and rapid development cycles (Finholt, 1995). A consequence of user-centered design in the collaboratory is that the system developers must be able to distinguish when a particular system or modification has a positive impact on users’ work practices. An important part of obtaining this understanding is producing an accurate picture of how work is done prior to the introduction of technology. Finholt (1995) explains that behavioral scientists had the task of understanding the actual work settings for which new information technologies were developed. The goal of a user-centered design effort was to inject those observations back into the design process to provide a baseline for evaluating future changes and to illuminate productive directions for prototype development (Finholt, 1995). A similar viewpoint is expressed by Cogburn (2003), who relates the collaboratory to globally distributed knowledge work, stating that human-computer interaction (HCI) and user-centered design (UCD) principles are critical for organizations to take advantage of the opportunities of globalization and the emergence of an Information society. He (Cogburn, 2003) refers to distributed knowledge work as being a set of “economic activities that produce intangible goods and services […], capable of being both developed and distributed around the world using the global information and communication networks” (Cogburn, 2003, p. 81). Through the use of these global information and communications networks, organizations are able to take part in globally disarticulated production, which means they can locate their research and development facilities almost anywhere in the world, and engineers can collaborate across time zones, institutions and national boundaries. 
Evaluation Meeting expectations is a factor that influences the adoption of innovations, including scientific collaboratories. Some of the collaboratories implemented thus far have not been entirely successful. The Waterfall Glen collaboratory of the Mathematics and Computer Science Division of Argonne National Laboratory (Henline, 1998) is an illustrative example. This collaboratory had its share of problems. There were occasional technical and social disasters, but most importantly it did not meet all of the collaboration and interaction requirements. The vast majority of the evaluations performed thus far concentrate mainly on usage statistics (e.g. total number of members, hours of use, amount of data communicated) or on the immediate role in the production of traditional scientific outcomes (e.g. publications and patents). Sonnenwald (2003), however, argues that we should rather look for longer-term and intangible measures such as new and continued relationships among scientists, and the subsequent, longer-term creation of new knowledge. Regardless of the criteria used for evaluation, we must focus on understanding the expectations and requirements defined for a collaboratory. Without such understanding, a collaboratory runs the risk of not being adopted. Success factors Olson, Teasley, Bietz, and Cogburn (2002) identify some of the success factors of a collaboratory. They are: collaboration readiness, collaboration infrastructure readiness, and collaboration technology readiness. Collaboration readiness is the most basic prerequisite for an effective collaboratory, according to Olson, Teasley, Bietz, and Cogburn (2002). Often the critical component of collaboration readiness is based on the concept of “working together in order to achieve a science goal” (Olson, Teasley, Bietz, & Cogburn, 2002, p. 46). Incentives to collaborate, shared principles of collaboration, and experience with the elements of collaboration are also crucial. Successful interaction between users requires a certain amount of common ground. Interactions require a high degree of trust or negotiation, especially when they involve areas where there is a cultural difference. “Ethical norms tend to be culturally specific, and negotiations about ethical issues require high levels of trust” (Olson, Teasley, Bietz, & Cogburn, 2002, p. 49). When analyzing collaboration infrastructure readiness, Olson, Teasley, Bietz, and Cogburn (2002) state that modern collaboration tools require adequate infrastructure to operate properly. Many off-the-shelf applications will run effectively only on state-of-the-art workstations. An important piece of the infrastructure is the technical support necessary to ensure version control, to get participants registered, and to recover in case of disaster. Communications cost is another element which can be critical for collaboration infrastructure readiness (Olson, Teasley, Bietz, & Cogburn, 2002). Pricing structures for network connectivity can affect the choices that users will make and therefore have an effect on the collaboratory's final design and implementation. Collaboration technology readiness, according to Olson, Teasley, Bietz, and Cogburn (2002), refers to the fact that collaboration does not involve only technology and infrastructure, but also requires a considerable investment in training. Thus, it is essential to assess the state of technology readiness in the community to ensure success. 
If the level is too primitive, more training is required to bring the users’ knowledge up to date. Examples Biological Sciences Collaboratory A comprehensively described example of a collaboratory, the Biological Sciences Collaboratory (BSC) at the Pacific Northwest National Laboratory (Chin & Lansing, 2004), enables the sharing and analysis of biological data through metadata capture, electronic laboratory notebooks, data organization views, data provenance tracking, analysis notes, task management, and scientific workflow management. BSC supports various data formats, has data translation capabilities, and can interact and exchange data with other sources (external databases, for example). It offers subscription capabilities (to allow certain individuals to access data) and verification of identities, establishes and manages permissions and privileges, and has data encryption capabilities (to ensure secure data transmission) as part of its security package. BSC also provides a data provenance tool and a data organization tool. These tools display the historical lineage of a data set as a hierarchical tree. From this tree view, the scientist may select a particular node (or an entire branch) to access a specific version of the data set (Chin & Lansing, 2004). The task management provided by BSC allows users to define and track tasks related to a specific experiment or project. Tasks can have deadlines assigned, levels of priority, and dependencies. Tasks can also be queried and various reports produced. Related to task management, BSC provides workflow management to capture, manage, and supply standard paths of analyses. The scientific workflow may be viewed as a process template that captures and semi-automates the steps of an analysis process and its encompassing data sets and tools (Chin & Lansing, 2004). BSC provides project collaboration by allowing scientists to define and manage the members of their group. Security and authentication mechanisms are therefore applied to limit access to project data and applications. A monitoring capability allows members to identify other members that are online working on the project (Chin & Lansing, 2004). BSC offers community collaboration capabilities: scientists may publish their data sets to a larger community through the data portal. Notifications are in place for scientists interested in a particular set of data: when that data changes, the scientists are notified via email (Chin & Lansing, 2004). Diesel Combustion Collaboratory Pancerella, Rahn, and Yang (1999) analyzed the Diesel Combustion Collaboratory (DCC), which was a problem-solving environment for combustion research. The main goal of the DCC was to make information exchange among combustion researchers more efficient. Researchers would collaborate over the Internet using various DCC tools. These tools included “a distributed execution management system for running combustion models on widely distributed computers (distributed computing), including supercomputers; web accessible data archiving capabilities for sharing graphical experimental or modeling data; electronic notebooks and shared workspaces for facilitating collaboration; visualization of combustion data; and videoconferencing and data conferencing among researchers at remote sites” (Pancerella, Rahn, & Yang, 1999, p. 1). 
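To make the first of the quoted DCC tools concrete, a distributed execution manager can be reduced to the idea of dispatching model runs to whichever remote machine is least loaded and archiving the results where collaborators can reach them. The sketch below is a hypothetical illustration of that idea only; the class, method, and host names are assumptions and do not describe the DCC's actual software.

```python
from dataclasses import dataclass
import itertools

@dataclass
class ModelRun:
    run_id: int
    model: str          # e.g. an ignition model (illustrative)
    parameters: dict

class ExecutionManager:
    def __init__(self, hosts):
        self.queues = {host: [] for host in hosts}  # pending runs per host
        self.archive = {}                           # run_id -> result record
        self._ids = itertools.count(1)

    def submit(self, model, parameters):
        # Dispatch the run to the host with the shortest queue.
        run = ModelRun(next(self._ids), model, parameters)
        host = min(self.queues, key=lambda h: len(self.queues[h]))
        self.queues[host].append(run)
        return run.run_id, host

    def record_result(self, run_id, result):
        # Results land in a shared archive so remote collaborators can view them.
        self.archive[run_id] = result

manager = ExecutionManager(["workstation-a", "cluster-b", "supercomputer-c"])
run_id, host = manager.submit("ignition-model", {"temperature_K": 900})
manager.record_result(run_id, {"ignition_delay_ms": 2.4})
print(run_id, host, manager.archive[run_id])
```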
The collaboratory design team defined the requirements to be (Pancerella, Rahn, & Yang, 1999): Ability to share graphical data easily; Ability to discuss modeling strategies and exchange model descriptions; Archiving of collaborative information; Ability to run combustion models at widely separated locations; Ability to analyze experimental data and modeling results in a web-accessible format; Videoconferencing and group meeting capabilities. Each of these requirements had to be met securely and efficiently across the Internet. Resource availability was a major concern because many of the chemistry simulations could run for hours or even days on high-end workstations and produce kilobytes to megabytes of data. These data sets had to be visualized using simultaneous 2-D plots of multiple variables (Pancerella, Rahn, & Yang, 1999). The deployment of the DCC was done in a phased approach. The first phase was based on iterative development, testing, and deployment of individual collaboratory tools. Once collaboratory team members had adequately tested each new tool, it was deployed to combustion researchers. The deployment of the infrastructure (videoconferencing tools, multicast routing capabilities, and data archives) was done in parallel (Pancerella, Rahn, & Yang, 1999). The next phase was to implement full security in the collaboratory. The primary focus was on two-way synchronous and multi-way asynchronous collaborations (Pancerella, Rahn, & Yang, 1999). The challenge was to balance the increased access to data that was needed with the security requirements. The final phase was the broadening of the target research to multiple projects including a broader range of collaborators. The collaboratory team found that the highest impact was perceived by the geographically separated scientists who truly depended on each other to achieve their goals. One of the team's major challenges was to overcome the technological and social barriers in order to meet all of the objectives (Pancerella, Rahn, & Yang, 1999). User openness and low-maintenance security are hard to achieve in collaboratories; therefore, user feedback and evaluation are constantly required. Other collaboratories Other collaboratories that have been implemented and can be further investigated are: Marine Biological Laboratory (MBL), an international center for research and education in biology, biomedicine and ecology; Biological Collaborative Research Environment (BioCoRE) developed at the University of Illinois at Urbana–Champaign – a collaboration tool for biologists (Chin and Lansing, 2004); The CTQ Collaboratory, a virtual community of teacher leaders and those who value teacher leadership, run by the Center for Teaching Quality, a national education nonprofit (Berry, Byrd, & Wieder, 2013); HASTAC (Humanities, Arts, Science, and Technology Alliance and Collaboratory), founded in 2002 by Cathy N. Davidson, then Vice Provost for Interdisciplinary Studies at Duke University, and David Theo Goldberg, Director of the University of California Humanities Research Institute (UCHRI), after contacting scholars across the humanities (including digital humanities), social sciences, media studies, the arts, and technology sectors who shared these convictions and wanted to envision a new kind of organization (an academic social network) that would allow anyone to join and would allow any member of the community to contribute. 
They began working with a team of developers at Stanford University to code and design a participatory, community site, originally a display website and a Wiki for open contribution and as a community-based publishing and networking platform. Molecular Interactive Collaborative Environment (MICE) developed at the San Diego Supercomputer Center – provides collaborative access and manipulation of complex, three-dimensional molecular models as captured in various scientific visualization programs (Chin and Lansing, 2004); Molecular Modeling Collaboratory (MMC) developed at University of California, San Francisco – allows remote biologists to share and interactively manipulate three-dimensional molecular models in applications such as drug design and protein engineering (Chin and Lansing, 2004); Collaboratory for Microscopic Digital Anatomy (CMDA) – a computational environment to provide biomedical scientists remote access to a specialized research electron microscope (Henline, 1998); The Collaboratory for Strategic Partnerships and Applied Research at Messiah College - an organization of Christian students, educators, and professionals affiliated with Messiah College, aspiring to fulfill Biblical mandates to foster justice, empower the poor, reconcile adversaries, and care for the earth, in the context of academic engagement. Waterfall Glen – a multi-user object-oriented (MOO) collaboratory at Argonne National Laboratory (Henline, 1998); The International Personality Item Pool (IPIP) – a scientific collaboratory for the development of advanced measures of personality and other individual differences (Henline, 1998); TANGO – a set of collaborative applications for education and distance learning, command and control, health care, and computer steering (Henline, 1998). Special consideration should be attributed to TANGO (Henline, 1998) because it is a step forward in implementing collaboratories, as it has distance learning and health care as main domains of operation. Henline (1998) mentions that the collaboratory has been successfully used to implement applications for distance learning, command and control center, telemedical bridge, and a remote consulting tool suite. Collaborative architecture and Interactive architecture, the work of Adam Somlai-Fischer and Usman Haque. The Internet & Society Collaboratory supported by Google in Germany Summary To date, most collaboratories have been applied largely in scientific research projects, with various degrees of success and failure. Recently, however, collaboratory models have been applied to additional areas of scientific research in both national and international contexts. As a result, a substantial knowledge base has emerged helping us in understanding their development and application in science and industry (Cogburn, 2003). Extending the collaboratory concept to include both social and behavioral research as well as more scientists from the developing world could potentially strengthen the concept and provide opportunities of learning more about the social and technical factors that support a distributed knowledge network (Cogburn, 2003). The use of collaborative technologies to support geographically distributed scientific research is gaining wide acceptance in many parts of the world. Such collaboratories hold great promise for international cooperation in critical areas of scientific research and not only. 
As the frontiers of knowledge are pushed back the problems get more and more difficult, often requiring large multidisciplinary teams to make progress. The collaboratory is emerging as a viable solution, using communication and computing technologies to relax the constraints of distance and time, creating an instance of a virtual organization. The collaboratory is both an opportunity with very useful properties, but also a challenge to human organizational practices (Olson, 2002). See also Information and communication technologies Human–computer interaction User-centered design Participatory design Footnotes References Berry, B., Byrd, A., & Wieder, A. (2013). Teacherpreneurs: Innovative teachers who lead but don't leave. San Francisco: Jossey-Bass. Bly, S. (1998). Special section on collaboratories, Interactions, 5(3), 31, New York: ACM Press. Bos, N., Zimmerman, A., Olson, J., Yew, J., Yerkie, J., Dahl, E. and Olson, G. (2007), From Shared Databases to Communities of Practice: A Taxonomy of Collaboratories. Journal of Computer-Mediated Communication, 12: 652–672. Chin, G., Jr., & Lansing, C. S. (2004). Capturing and supporting contexts for scientific data sharing via the biological sciences collaboratory, Proceedings of the 2004 ACM conference on computer supported cooperative work, 409-418, New York: ACM Press. Cogburn, D. L. (2003). HCI in the so-called developing world: what's in it for everyone, Interactions, 10(2), 80-87, New York: ACM Press. Cosley, D., Frankowsky, D., Kiesler, S., Terveen, L., & Riedl, J. (2005). How oversight improves member-maintained communities, Proceedings of the SIGCHI conference on Human factors in computing systems, 11-20. Finholt, T. A. (1995). Evaluation of electronic work: research on collaboratories at the University of Michigan, ACM SIGOIS Bulletin, 16(2), 49–51. Finholt, T.A. Collaboratories. (2002). In B. Cronin (Ed.), Annual Review of Information Science and Technology (pp. 74–107), 36. Washington, D.C.: American Society for Information Science. Finholt, T.A., & Olson, G.M. (1997). From laboratories to collaboratories: A new organizational form for scientific collaboration. Psychological Science, 8, 28-36. Henline, P. (1998). Eight collaboratory summaries, Interactions, 5(3), 66–72, New York: ACM Press. Olson, G.M. (2004). Collaboratories. In W.S. Bainbridge (Ed.), Encyclopedia of Human-Computer Interaction. Great Barrington, MA: Berkshire Publishing. Olson, G.M., Teasley, S., Bietz, M. J., & Cogburn, D. L. (2002). Collaboratories to support distributed science: the example of international HIV/AIDS research, Proceedings of the 2002 annual research conference of the South African institute of computer scientists and information technologists on enablement through technology, 44–51. Olson, G.M., Zimmerman, A., & Bos, N. (Eds.) (2008). Scientific collaboration on the Internet. Cambridge, MA: MIT Press. Pancerella, C.M., Rahn, L. A., Yang, C. L. (1999). The diesel combustion collaboratory: combustion researchers collaborating over the internet, Proceedings of the 1999 ACM/IEEE conference on supercomputing, New York: ACM Press. Rosenberg, L. C. (1991). Update on National Science Foundation funding of the “collaboratory”, Communications of the ACM, 34(12), 83, New York: ACM Press. Sonnenwald, D.H. (2003). Expectations for a scientific collaboratory: A case study, Proceedings of the 2003 international ACM SIGGROUP conference on supporting group work, 68–74, New York: ACM Press. Sonnenwald, D.H., Whitton, M.C., & Maglaughlin, K.L. (2003). 
Scientific collaboratories: evaluating their potential, Interactions, 10(4), 9–10, New York: ACM Press. Wulf, W. (1989, March). The national collaboratory. In Towards a national collaboratory. Unpublished report of a National Science Foundation invitational workshop, Rockefeller University, New York. Wulf, W. (1993) The collaboratory opportunity. Science, 261, 854-855. 1989 introductions Technology systems Collaboration Laboratories
Collaboratory
Technology,Engineering
6,154
28,384,440
https://en.wikipedia.org/wiki/International%20Center%20for%20Technology%20Assessment
The International Center for Technology Assessment (ICTA) is a U.S. non-profit bi-partisan organization, based in Washington, D.C. ICTA was formed in 1994. Its executive director is Andrew Kimbrell. Its sister organization is the Center for Food Safety. In 2004, ICTA took an active part in Monsanto Canada Inc. v. Schmeiser, a leading Supreme Court of Canada case on patent rights for biotechnology. The case involved Percy Schmeiser, a Saskatchewan canola farmer. Intervening on Schmeiser’s behalf were a consortium of six non-government organizations, among which was the International Center for Technology Assessment. Schmeiser lost the case. In 2006, Friends of the Earth and ICTA filed a formal petition with the Food and Drug Administration urging better monitoring and regulation of cosmetic and toiletry products containing nanoparticles, and stating that they would sue the FDA if it did not take adequate action in 180 days. See also Implications of nanotechnology Nanotoxicology Regulation of nanotechnology Environmental implications of nanotechnology Health implications of nanotechnology Genetically modified food controversies Polly the sheep Institute on Biotechnology and the Human Future References Bibliography Principles for the Oversight of Nanotechnologies and Nanomaterials (PDF file) ICTA - January 31, 2008 External links International Center For Technology Assessment Dust-Up: The Great Nanotech Debate, Op-Ed series, Los Angeles Times, (Feb 25-29, 2008). Center for Corporate Policy Military Nanotechnology Applications ASU's Center on Nanotechnology and Society UCSB's Center on Nanotechnology and Society The Nanoethics Group Center for Responsible Nanotechnology The NanoEthicsBank Environmental organizations based in Washington, D.C. Nanotechnology institutions
International Center for Technology Assessment
Materials_science
368
31,399,039
https://en.wikipedia.org/wiki/Plastin
Plastin is part of a family of actin-bundling proteins, specifically the α-actinin family of actin-binding proteins, which are found in many lifeforms, from humans and other animals to plants and yeasts. These proteins cross-link actin filaments into bundles for various cellular purposes. Members of the plastin family include: LCP1 PLS1 PLS3 Structure The structure of plastin has been evolutionarily maintained in the organisms that utilize this protein, which include humans and lower eukaryotes. Plastin structures are known for their EF-hand Ca2+-binding and actin-binding domains, which assist in assembling actin into higher-order bundles. Plastins have two actin-binding domains (ABDs) in each polypeptide, and each ABD contains two 125-residue calponin-homology (CH) domains. This structure allows plastins to cross-link actin filaments into bundles in order to perform various tasks. Function Plastin, along with other actin-binding proteins, helps stabilize and rearrange the organization of the actin cytoskeleton in response to external stimuli, cell migration, and cell adhesion. The EF-hand Ca2+-binding domains are important for the function of plastin, as its activities are regulated by Ca2+. In mammals, three isoforms of plastin have been identified: L-plastin, found mostly within hematopoietic cells; T-plastin, found in cells of solid tissues; and lastly, I-plastin, expressed specifically in the small intestine, colon, and kidneys. Each of these isoforms has its own role, dependent on the cell type, in regulating the actin cytoskeleton. L-plastin L-plastin (leukocyte plastin, LPL, Plastin-2, LCP1) is expressed in hematopoietic cells and in various types of leukocytes (i.e., T- and B-lymphocytes). It helps defend against foreign pathogens by supporting phagocytosis, and it contributes to the motility of T-cells so that these cells are capable of activating during an immune response. L-plastin is the only one of the three isoforms known to undergo phosphorylation during leukocyte activation via interleukin-1, interleukin-2, and phorbol myristate acetate. This, and the fact that plastins are Ca2+-dependent, allows L-plastin to signal to leukocytes for rapid responses to stimuli. As a result, organisms or cells that lack L-plastin have a more difficult time responding to external stimuli that require immune activation. T-plastin Also written PLS3 or Plastin-3, T-plastin is primarily found in solid tissues, within cells that are capable of replication (i.e., epithelial and mesenchymal cells). T-plastin is needed for cells to carry out proliferation and migration, as well as for membranes to protrude from one cell toward another and bridge the distance within the extracellular matrix (ECM). These proteins strengthen the actin-cytoskeleton network to accomplish cell migration and distribution. Without T-plastin, cells that migrate using the protrusion method would not be able to overcome the membrane tension that protrusion requires. I-plastin I-plastin (intestinal plastin, plastin-1, PLS1) is localized to intestinal epithelial cells, specifically in the intestinal brush border microvilli. I-plastin helps stabilize the intestinal brush border microvilli and their function. Without I-plastin, decreased transepithelial resistance, increased cellular turnover, and increased sensitivity to specific intestinal diseases have been observed. 
Phenotypically, microvilli in intestines lacking this protein were found to be shorter and constricted, to lack pronounced rootlets, and to show increased fragility. References EF-hand-containing proteins Protein families Human proteins
Plastin
Biology
934
62,317,758
https://en.wikipedia.org/wiki/Project%20Nightingale
Project Nightingale is a data storage and processing project by Google Cloud and Ascension, a Catholic health care system comprising a chain of 2,600 hospitals, doctors' offices and other related facilities, in 21 states, with tens of millions of patient records available for processing health care data. Ascension is one of the largest health-care systems in the United States with comprehensive and specific health care information of millions who are part of its system. The project is Google's attempt to gain a foothold into the healthcare industry on a large scale. Amazon, Microsoft and Apple Inc. are also actively advancing into health care, but none of their business arrangements are equal in scope to Project Nightingale. History In early 2019, Ascension began talks with Google about developing health aggregation software to store and search medical records. The two companies signed a Health Insurance Portability and Accountability Act (HIPAA) business associate agreement, which would allow Ascension to transfer patient data to Google Cloud, and would bar Google from using this data for purposes other than providing services to Ascension. Google first mentioned its project with Ascension in a July 2019 earnings call, which said the partnership was meant to "improve the healthcare experience and outcomes." The Wall Street Journal first reported on "Project Nightingale" on November 11, 2019, writing that doctors and patients had not been notified of the project and that 150 Google employees had access to patient data. Google Health chief David Feinberg responded to the report in a blog post, saying all employees with access to protected health information went through medical ethics training and were approved by Ascension. The project raised privacy fears because of Google's involvement in other privacy controversies, like DeepMind's medical data-sharing controversy and a lawsuit against Google and the University of Chicago Medical Center for allegedly processing identifying medical records. Google Cloud executive Tariq Shaukat wrote that patient data gathered from the project "cannot and will not be combined with any Google consumer data." Types of data The data sharing includes patient names and their dates of birth, along with doctor diagnoses, lab results, and hospitalization records, amounting to access to complete electronic health records. Also included in the data sharing are addresses of the patient, family members, allergies, immunizations, radiology scans, medications, and medical conditions. After the patient checks in to the doctor's office, or hospital, or senior center - the doctor and nurse examination results are entered into a computer and uploaded to Google's cloud servers. At this point, the system is then used to suggest treatment plans, recommend replacement or removal of a doctor from the patient's health-care team, and administer policies on narcotics. Ascension, the company sharing data with Google, may also vary their billing according to treatment or procedures. Investigations Soon after The Wall Street Journal reported on Project Nightingale, The Guardian published an account from an anonymous whistleblower who worked on Project Nightingale. This person who raised concerns that patients could not opt in or out of having their records stored on Google's servers, and that the project may not be HIPAA compliant. The United States Department of Health and Human Services (HHS) launched an inquiry into Google's partnership with Ascension. 
The investigation will be run by HHS's Office for Civil Rights. Director Roger Severino said his office "would like to learn more information about this mass collection of individuals' medical records with respect to the implications for patient privacy under [the Health Insurance Portability and Accountability Act of 1996 or HIPAA]." See also Google Health References External links Our Partnership with Ascension, Google Cloud blog post and FAQ 2019 establishments in the United States 2019 controversies in the United States Healthcare in the United States Catholic health care Code names Google Cloud Electronic health records Medical controversies in the United States Projects established in 2019 Databases in the United States
Project Nightingale
Technology
771
3,206,350
https://en.wikipedia.org/wiki/Viridiplantae
Viridiplantae (; kingdom Plantae sensu stricto) is a clade of around 450,000–500,000 species of eukaryotic organisms, most of which obtain their energy by photosynthesis. The green plants are chloroplast-bearing autotrophs that play important primary production roles in both terrestrial and aquatic ecosystems. They include green algae, which are primarily aquatic, and the land plants (embryophytes, Plantae sensu strictissimo), which emerged within freshwater green algae. Green algae traditionally excludes the land plants, rendering them a paraphyletic group, however it is cladistically accurate to think of land plants as a special clade of green algae that evolved to thrive on dry land. Since the realization that the embryophytes emerged from within the green algae, some authors are starting to include them. Viridiplantae species all have cells with cellulose in their cell walls, and primary chloroplasts derived from endosymbiosis with cyanobacteria that contain chlorophylls a and b and lack phycobilins. Corroborating this, a basal phagotroph Archaeplastida group has been found in the Rhodelphydia. In some classification systems, the group has been treated as a kingdom, under various names, e.g. Viridiplantae, Chlorobionta, or simply Plantae, the latter expanding the traditional plant kingdom of embryophytes to include the green algae. Adl et al., who produced a classification for all eukaryotes in 2005, introduced the name Chloroplastida for this group, reflecting the group having primary chloroplasts. They rejected the name Viridiplantae on the grounds that some of the species are not plants as understood traditionally. Together with Rhodophyta, glaucophytes and other basal groups, Viridiplantae belong to a larger clade called Archaeplastida which in itself is sometimes described as Plantae sensu lato. Evolution Taxonomy Leliaert et al, 2012 propose the following simplified taxonomy of the Viridiplantae. Viridiplantae Chlorophyta core chlorophytes Ulvophyceae Cladophorales Dasycladales Bryopsidales Trentepohliales Ulvales-Ulotrichales Oltmannsiellopsidales Chlorophyceae Oedogoniales Chaetophorales Chaetopeltidales Chlamydomonadales Sphaeropleales Trebouxiophyceae Chlorellales Oocystaceae Microthamniales Trebouxiales Prasiola clade Chlorodendrophyceae Chlorodendrales Pedinophyceae prasinophytes (paraphyletic) Pyramimonadales Mamiellophyceae Pycnococcaceae Nephroselmidophyceae Prasinococcales Palmophyllales Streptophyta Charophytes Mesostigmatophyceae Mesostigmatales Chlorokybales Klebsormidiophyceae Phragmoplastophyta Charophyceae Coleochaetophyceae Zygnematophyceae Embryophyta (land plants) Phylogeny In 2019, a phylogeny based on genomes and transcriptomes from 1,153 plant species was proposed. The placing of algal groups is supported by phylogenies based on genomes from the Mesostigmatophyceae and Chlorokybophyceae that have since been sequenced. Both the "chlorophyte algae" and the "streptophyte algae" are treated as paraphyletic (vertical bars beside phylogenetic tree diagram) in this analysis. The classification of Bryophyta is supported both by Puttick et al. 2018, and by phylogenies involving the hornwort genomes that have also since been sequenced. Ancestrally, the green algae were flagellates. References Biological classification Subkingdoms Taxa named by Thomas Cavalier-Smith
Viridiplantae
Biology
883
64,982,850
https://en.wikipedia.org/wiki/Energy%20poverty%20and%20gender
Energy poverty is defined as a lack of access to affordable, sustainable energy services. Geographically, it is unevenly distributed between developing and developed countries. In 2019, an estimated 770 million people had no access to electricity, with approximately 95% of them in Asia and sub-Saharan Africa. In developing countries, poor women and girls living in rural areas are significantly affected by energy poverty, because they are usually responsible for providing the primary energy for households. In developed countries, older women living alone are most affected by energy poverty due to low incomes and the high cost of energy services. Even though energy access is an important climate change adaptation tool, especially for maintaining health (i.e. access to air conditioning, information etc.), a systematic review published in 2019 found that research does not account for these effects on vulnerable populations such as women. Energy poverty has a disproportionate impact on women. Without access to other energy sources, 13% of the global population is compelled to collect wood for fuel. Within this population, women and girls contribute more than 85% of the work involved in gathering wood for fuel. In developing countries Domestic responsibilities In developing countries, energy poverty has significant gender characteristics. Approximately 70% of the 1.3 billion people living in poverty in developing countries are women. Women living in rural areas are usually responsible for housework, including gathering fuel and water, cooking, farming etc. Studies in India indicate that rural women provide approximately 92% of the total household energy supply, and 85% of their energy for cooking is provided by biomass from forests or fields. Health impacts from energy consumption Energy poverty in rural households causes health problems for women and children. One health problem is caused by indoor air pollution from traditional stoves. One study predicted that cooking with biomass will lead to 1.5 million deaths per year by 2030. Other health risks are caused by the heavy workload of collecting fuel and exposure to malnutrition. Meanwhile, the scarcity of fuel makes women less likely to use fuel for boiling water, which might increase the risk of water-borne diseases. The powering of medical equipment and tools, the storage of blood and vaccines, and the performance of basic health procedures after dark all depend on a reliable energy supply. An unreliable energy supply prevents patient care at night, especially for pregnant women during delivery and those undergoing emergency caesarean sections at night. All of these contribute to 95% of maternal mortality in sub-Saharan Africa. Time poverty Energy poverty further affects women by putting them into a situation of "time poverty", which refers to the lack of time for resting, leisure, working outside the home, getting an education etc. It is the consequence of spending a long time gathering fuel to supply domestic energy use. The forest degradation caused by climate change might exacerbate the current problem. Participation in decision-making Energy poverty and gender also manifest in the area of decision-making and participation within the household. Studies have shown that in rural areas in developing countries, men usually have more power in making decisions about purchasing energy devices or new technologies. 
This is because men and women have different and distinct perceptions of energy needs. Excluding women from participation in public discussion and the decision-making process is likely to lead to failure in addressing the effects of energy poverty on women. Energy poverty and education Energy poverty affects teaching and learning. Lack of access to energy reduces children's performance and attendance. Example In sub-Saharan African countries, energy poverty is especially challenging due to the high cost of extending grid electricity to scattered rural settlements. For example, in Tanzania, energy poverty affects the livelihood of the majority, with only 15.5% of the population having access to electricity. The lack of electricity leads to the absence of efficient energy services such as cooking and lighting. Hence the basic capabilities for development, such as education, health and transportation, are restricted. In the face of energy poverty, the burden of supplying household energy falls disproportionately on women rather than men. A case study in Tanzania examines the impact of a women-oriented solar lighting social enterprise project on health, education, livelihood and gender equality. The results indicate that increasing the accessibility of energy services for women could contribute to empowering women and children and to local families' development. In developed countries In developed countries, lone and older women are affected disproportionately by energy poverty. There are more women living alone than men because of their relatively longer life expectancy. Those older women usually have smaller pensions to support themselves, because they worked mostly inside the house. The rise in energy cost affects the affordability of heating and cooling services at home. Data from the UK Office for National Statistics indicates that women have higher Excess Winter Mortality (EWM) than men, and that EWM increased from 8.2% to 12.4% between 2012 and 2013 among women under 65. Furthermore, increasing energy prices, relatively low incomes, and energy-inefficient houses together contribute to energy poverty in developed countries. Components There are gender gaps in the energy labor market, energy-related education and decision-making processes in developed countries. In the European Union, men dominate the energy sector, making up 77.9% of the workforce. Studies show that this under-representation is attributed to the following reasons: lack of necessary skills caused by the energy education gap, the perception of energy sectors as stereotypically male domains, and lack of opportunities for women working in energy sectors. The gendered energy education gap is related to traditional images of ‘feminine’ or ‘masculine’ subjects as well as the lack of mentoring programs engaging female students to study science subjects such as energy. Women are also under-represented in the decision-making process in energy sectors in developed countries. A study conducted in Germany, Sweden and Spain found no female staff working in management groups or as board members in the 295 energy companies investigated in 2010. A similar situation is observed in the public energy sector, with 82.7% of high-level positions occupied by men, though the situation is better in Nordic countries than in Mediterranean countries. These gender gaps contribute to "gender blindness" in the energy policies of developed countries. Example Caitlin Robinson (2019) conducted a study on gender and energy poverty in England. 
Using socio-spatial analysis, she argued that energy poverty could increase gendered vulnerabilities. Five dimensions of gendered socio-spatial energy vulnerability are examined, including exclusion from a productive economy; unpaid reproductive, caring or domestic roles; coping and helping others to cope; susceptibility to physiological and mental health impacts; and lack of social protection during a life course. The results indicated that energy poverty is connected with economic and social activities and health, but that the more complex effects of energy vulnerability and gender should be analyzed at the household level, since they are relatively individual. Responses Some research indicates that investing in low-emission energy technologies can increase access to modern energy services, which will benefit women living in energy poverty. Low-emission technologies are believed to be able to free poor women from fuel collection and drudgery, protect them from the air pollution caused by burning biomass, and enable them to have time for education, participating in public discussion etc. Other research argues that a purely technological approach is not enough, and suggests engaging local women in the decision-making process for locally appropriate energy programs. Pueyo & Maestre (2019) further studied whether men and women benefit differently from electrification. The results indicate that electrification benefits women in accessing paid work, but not as much as men. Women still have relatively lower-quality work after electrification. Policies that address gender mainstreaming are suggested to consider both women's existing domestic work and their access to profitable activities, hence empowering them for long-term development. References Energy Gender Gender equality
Energy poverty and gender
Physics,Biology
1,584
46,499,192
https://en.wikipedia.org/wiki/Richard%20J.%20Bolte%20Sr.%20Award
The Richard J. Bolte Sr. Award recognizes "outstanding contributions by a leader who provides products or services vital to the continuing growth and development of the chemical and molecular sciences community". The medal is presented annually under the sponsorship of the Science History Institute (formerly the Chemical Heritage Foundation) at its annual Heritage Day. The inaugural award was presented to Richard J. Bolte, Sr., founder and chairman of BDP International, in 2006, as the Award for Supporting Industries. It was renamed the Richard J. Bolte Sr. Award for Supporting Industries in 2007. Recipients The award is given yearly and was first presented in 2006. Steven Holland, 2020 Frederick Frank, 2019 W. Graham Richards, 2018 Peter Young, 2017 Roy T. Eddleman, 2016 Abdul Aziz Bin Abdullah Al Zamil, 2015 Atsushi Horiba, 2014 Alan Walton, 2013 G. Steven Burrill, 2012 Lawrence B. Evans, 2011 C. Berdon Lawrence, 2010 David and Alice Schwartz, 2009 Jerry M. Sudarsky, 2008 Eugene Garfield, 2007 Richard J. Bolte, Sr., 2006 Photo Gallery See also List of chemistry awards References Chemistry awards
Richard J. Bolte Sr. Award
Technology
236
25,886,202
https://en.wikipedia.org/wiki/Conversion%20between%20Julian%20and%20Gregorian%20calendars
The tables below list equivalent dates in the Julian and Gregorian calendars. Years are given in astronomical year numbering. Conventions Within these tables, January 1 is always the first day of the year. The Gregorian calendar did not exist before October 15, 1582. Gregorian dates before that are proleptic, that is, using the Gregorian rules to reckon backward from October 15, 1582. Years are given in astronomical year numbering. Augustus corrected errors in the observance of leap years by omitting leap days until AD 8. Julian calendar dates before March AD 4 are proleptic, and do not necessarily match the dates actually observed in the Roman Empire. Conversion table This table is taken from the book by the Nautical almanac offices of the United Kingdom and United States originally published in 1961. Using the tables Dates near leap days that are observed in the Julian calendar but not in the Gregorian are listed in the table. Dates near the adoption date in some countries are also listed. For dates not listed, see below. The usual rules of algebraic addition and subtraction apply; adding a negative number is the same as subtracting the absolute value, and subtracting a negative number is the same as adding the absolute value. If conversion takes you past a February 29 that exists only in the Julian calendar, then February 29 is counted in the difference. Years affected are those which divide by 100 without remainder but do not divide by 400 without remainder (e.g., 1900 and 2100 but not 2000). No guidance is provided about conversion of dates before March 5, -500, or after February 29, 2100 (both being Julian dates). For unlisted dates, find the date in the table closest to, but earlier than, the date to be converted. Be sure to use the correct column. If converting from Julian to Gregorian, add the number from the "Difference" column. If converting from Gregorian to Julian, subtract. See also Revised Julian calendar References External links Calendars
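For dates well away from the skipped Julian leap days, the table's "Difference" column follows a simple closed form, and the add/subtract rule above can be sketched in a few lines of Python. This is an illustrative approximation rather than a substitute for the table: the function names are assumptions, the formula gives the difference that applies from March of the relevant century years onward (so January and February dates of affected years still need the table itself), and Python's date type is used purely for day arithmetic.

```python
from datetime import date, timedelta

def julian_gregorian_difference(year: int) -> int:
    # Days by which the Gregorian calendar runs ahead of the Julian calendar,
    # e.g. 13 for 1900-2099; only valid away from the skipped Julian leap days.
    return year // 100 - year // 400 - 2

def julian_to_gregorian(julian: date) -> date:
    # "If converting from Julian to Gregorian, add the number
    # from the 'Difference' column."
    return julian + timedelta(days=julian_gregorian_difference(julian.year))

def gregorian_to_julian(gregorian: date) -> date:
    # "If converting from Gregorian to Julian, subtract."
    return gregorian - timedelta(days=julian_gregorian_difference(gregorian.year))

# Julian October 5, 1582 corresponds to Gregorian October 15, 1582.
print(julian_to_gregorian(date(1582, 10, 5)))  # 1582-10-15
```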
Conversion between Julian and Gregorian calendars
Physics
407
8,863,274
https://en.wikipedia.org/wiki/Encampment%20%28Chinese%20constellation%29
The Encampment mansion () is one of the 28 mansions of the Chinese constellations. It is one of the northern mansions of the Black Tortoise. Asterisms References Chinese constellations
Encampment (Chinese constellation)
Astronomy
42