Columns: id (int64, 580 to 79M); url (string, 31 to 175 chars); text (string, 9 to 245k chars); source (string, 1 to 109 chars); categories (string, 160 classes); token_count (int64, 3 to 51.8k)
41,214,485
https://en.wikipedia.org/wiki/Interspecies%20family
The term "interspecies family" refers to a group consisting of at least two members of different species who deeply care for each other. Examples include a human and their dog, a couple and their cats, a dog and a cat, or even a mule and a sheep. The emphasis is on love and that the members of the group care for and treat each other like a family. For instance, "interspecies family" may describe a group composed of a dog and a person who refers to their dog as their child, best friend, or other phrase that connotes a stronger bond than just a "pet", a term which implies a sense of property and ownership. Most often this is used to discuss non-human interspecies families, typically where a mother of one species will foster a youngling from a different species. With the more recent growth of the fields of anthrozoology and animal studies, this is being used more frequently to refer especially to bonds between human and non-human animals. History and use In 1881, Scientific American published a correspondence noting a female dog apparently mothered one puppy, but appeared to adopt a kitten born nearby and supposedly proceeded to suckling both her puppy and the kitten. The kitten was reported to accept the dog as a foster mother. Anthropomorphism Some of the earliest instances of the interspecies family involve the use of anthropomorphism, such as in the children's book Stuart Little in which a mouse is a member of a human family. This is a fairly common idea which led to some of the first uses of "interspecies family" in media targeted towards children. Non-human animal relationships There is also a popular trend in books and internet sites that involves capturing photographs and stories of interspecies non-human animal families. These heartwarming cases promote the idea that animals can look past such differences. However, in these cases they are more often referred to as "interspecies adoptions", "interspecies pairings", and "interspecies friendships". Human and non-human animal relationships Just recently, the term has been used to describe non-fiction situations involving human and non-human animal relationships. In 2011 a dissertation was written by Avigdor Edminster entitled "'This Dog Means Life': Making Interspecies Relations at an Assistance Dog Agency" in which "Interspecies families" is used frequently. There have been other instances since then, but they have been strictly within academic works. In the fall of 2013, The National Museum of Animals & Society created the exhibit entitled My Dog Is My Home: The Experience of Human-Animal Homelessness. The exhibit explores the experience of being homeless with an animal, and specifically the needs that are unmet because of homeless services' failure to recognize the legitimacy of the human-animal bond and their status as an "interspecies family". The exhibit helped popularize the phrase "interspecies family" among its animal protection audience as well as among social service providers. Part of the museum's mission is to promote this idea of inter-species families and make the general public more aware of the strong bonds that can be shared between human and non-human animals. References Behavioural sciences
Interspecies family
Biology
678
13,733,414
https://en.wikipedia.org/wiki/School%20of%20Restoration%20Arts%20at%20Willowbank
Willowbank is an independent educational institution located on the Willowbank National Historic Site and in the village centre of Queenston, Ontario, along the Canada-United States border. It operates a School of Restoration Arts which offers a three-year post-secondary diploma in conservation skills and theory, and a Centre for Cultural Landscape, a forum for cultural landscape theory and practice in Canada and the world. Willowbank was created from the rescue of a 19th-century estate which today forms the centre of its campus, and it is one of a handful of Canadian organisations of which Charles III is Royal Patron. School of Restoration Arts Founded in 2006, the School of Restoration Arts offers a three-year, post-secondary Diploma in Heritage Conservation which combines academic and apprenticeship learning, taught by over 50 faculty associates who are leading practitioners in the field. The School accepts a maximum of 18 students each year. The following are examples of courses offered in three primary areas of study: I. Research and Documentation Conservation theory - a cultural landscape approach to understanding and transforming historic places Historic research methods - oral and documentary research, identifying social value, sources Archaeological research - theoretical and legislative framework Architectural history - high style and vernacular architecture, traditions, technologies, styles Landscape history – urban and rural landscapes, research and documentation, garden history Cultural practice - aboriginal perspectives, tangible and intangible heritage, ecological awareness Documentation - measured drawings, hand-drawing, AutoCAD, architectural photography, interpretive recording Field investigations - condition analysis, identifying and dating, reporting II. Planning and Project Management Historic structures report - compiling research and documentation findings Cultural landscape studies – heritage districts, complex sites, cognitive mapping, artifact and ritual Statements of significance - identifying historical, physical and social value, tangible and intangible components, designation options Conservation plan - preservation, restoration, rehabilitation and adaptive reuse, contemporary design interventions Business models - real estate practice, the restoration economy Legal and zoning issues - heritage legislation, building codes, zoning bylaws Energy systems - traditional and alternative theory and practice, sustainable design, theoretical and empirical models Project and construction management – coordination of specialized materials and skills, interdisciplinary approaches, design-build. III. 
Craft and Design Skills Stone and mortar - basic geology, quarrying, stone dressing and coursing, stone carving, lime mortars, conservation techniques Brick, terra cotta – history, physical and chemical properties, traditional practice, repair, conservation Concrete – mass and reinforced concrete, precast, traditional and contemporary practice, patterns of decay, conservation Plasters - materials, applications, plain and decorative plaster, cast plaster, conservation, replication Wood - species, milling, traditional and alternative tools, carpentry, joinery, doors, windows, repair Metals - forge practice, ironwork, sheet metal work, metal repair Glass - leaded and stained glass windows, glass replacement, repair and restoration Fittings - hardware, traditional and contemporary lighting Design – setting parameters, continuity and creativity, design as a material-based and site-based activity, drawing for design Conservation science - basic chemistry, environmental issues, artifact care Centre for Cultural Landscape The Willowbank Centre for Cultural Landscape is an initiative that builds on its exploration of cultural landscape theory and practice to define a particularly Canadian perspective in this area of current discussion and debate. It facilitates a program of lecture series, workshops and publications to promote a connection between cultural landscape and cultural practice, and to advocate for its ecological view of cultural and natural heritage conservation. The centre also coordinates Willowbank's seat in global deliberations about conservation, whose participants include a cross-section of board, staff and students. In 2013, Willowbank signed an agreement with the UNESCO-affiliated World Heritage Institute in Shanghai, China, with a focus on the UNESCO Recommendation for Historic Urban Landscapes. The centre was created in 2012 through a multi-year grant from the Ontario Trillium Foundation. Campus Willowbank's campus is composed of two main parts, a 19th-century estate, and a former village elementary school with its surrounding lands. The former, described in further detail below, traces its history to 8,000 years of human habitation. The landscape of the estate includes a ravine and a hill upon which sits a 19th-century stone mansion. Newly built and re-built outbuildings have been constructed to house student workshops, including a barn and a forge and stone studio. Adjacent to the estate is the former Laura Secord School in the centre of the village, bought in 2012 by Willowbank after it was abandoned. The transfer is designed to house school activities while also returning the building and lands to community use and creating affordable housing. Its original wing was constructed in 1914, with substantial additions made in the 1950s. The original, two-classroom building housed students upstairs while the lower level was a centre of village life. Field School Willowbank runs a three-week field school every summer in partnership with the Canova Association in the Ossola Valley in Northern Italy. It is a microcosm of the diploma program, with an introduction to cultural landscape theory and practice, an experience of documentation and design, and then a major hands-on component. The work involves creative restoration and adaptive reuse of abandoned medieval stone structures, carried out under the direction of Italian masons and craftsmen. 
It coincides with the International Architect Encounter in the same place, an annual, intimate conference of world leaders in conservation and ecological design, with whom the field school's participants are able to engage. National Historic Site Willowbank's campus includes a designated National Historic Site in the 19th-century estate from which it takes its name, also designated under Part IV of the Ontario Heritage Act, and further protected by a heritage easement granted to The Ontario Heritage Trust. It is named after willow trees that once grew on its grounds and is an example of the rural estates of the wealthy settlers of early 19th-century Upper Canada. Its mansion, which is the centre of campus, was built between 1832 and 1834 for Alexander Hamilton, third son of Robert Hamilton, one of the founders of Upper Canada. Constructed in the Greek Revival style of architecture, then at its height in North America for such grand houses, Willowbank's mansion is an example of these buildings on the continent. Designed by architect John Latshaw and built of local Whirlpool sandstone, the building is characterised by the rare features of eight hand-carved columns running its full two-story height, and by a front doorway of Greek design. The 13-acre estate today forms the centre of the Willowbank campus. Royal Patronage The Prince of Wales became Royal Patron of Willowbank in 2014, and first met its leaders during his tour of Canada at a meeting of urban designers, planners, developers and civic leaders convened by the prince in Winnipeg, Manitoba. In 2015, Lieutenant Governor Elizabeth Dowdeswell, the Queen's representative in Ontario, visited Willowbank. External links Willowbank School official website Willowbank School Instagram Willowbank Centre Courses Willowbank Centre Instagram Footnotes Architectural history Schools in Ontario Organizations based in Canada with royal patronage New Classical architecture in Canada
School of Restoration Arts at Willowbank
Engineering
1,426
44,041
https://en.wikipedia.org/wiki/Solvation
Solvation describes the interaction of a solvent with dissolved molecules. Both ionized and uncharged molecules interact strongly with a solvent, and the strength and nature of this interaction influence many properties of the solute, including solubility, reactivity, and color, as well as influencing the properties of the solvent such as its viscosity and density. If the attractive forces between the solvent and solute particles are greater than the attractive forces holding the solute particles together, the solvent particles pull the solute particles apart and surround them. The surrounded solute particles then move away from the solid solute and out into the solution. Ions are surrounded by a concentric shell of solvent. Solvation is the process of reorganizing solvent and solute molecules into solvation complexes and involves bond formation, hydrogen bonding, and van der Waals forces. Solvation of a solute by water is called hydration. Solubility of solid compounds depends on a competition between lattice energy and solvation, including entropy effects related to changes in the solvent structure. Distinction from solubility By an IUPAC definition, solvation is an interaction of a solute with the solvent, which leads to stabilization of the solute species in the solution. In the solvated state, an ion or molecule in a solution is surrounded or complexed by solvent molecules. Solvated species can often be described by coordination number and complex stability constants. The concept of the solvation interaction can also be applied to an insoluble material, for example, solvation of functional groups on a surface of ion-exchange resin. Solvation is, in concept, distinct from solubility. Solvation or dissolution is a kinetic process and is quantified by its rate. Solubility quantifies the dynamic equilibrium state achieved when the rate of dissolution equals the rate of precipitation. The consideration of the units makes the distinction clearer. The typical unit for dissolution rate is mol/s. The units for solubility express a concentration: mass per volume (mg/mL), molarity (mol/L), etc. Solvents and intermolecular interactions Solvation involves different types of intermolecular interactions: Hydrogen bonding Ion–dipole interactions The van der Waals forces, which consist of dipole–dipole, dipole–induced dipole, and induced dipole–induced dipole interactions. Which of these forces are at play depends on the molecular structure and properties of the solvent and solute. The similarity or complementary character of these properties between solvent and solute determines how well a solute can be solvated by a particular solvent. Solvent polarity is the most important factor in determining how well a solvent solvates a particular solute. Polar solvents have molecular dipoles, meaning that part of the solvent molecule has more electron density than another part of the molecule. The part with more electron density will experience a partial negative charge while the part with less electron density will experience a partial positive charge. Polar solvent molecules can solvate polar solutes and ions because they can orient the appropriate partially charged portion of the molecule towards the solute through electrostatic attraction. This stabilizes the system and creates a solvation shell (or hydration shell in the case of water) around each particle of solute.
The solvent molecules in the immediate vicinity of a solute particle often have a much different ordering than the rest of the solvent, and this area of differently ordered solvent molecules is called the cybotactic region. Water is the most common and well-studied polar solvent, but others exist, such as ethanol, methanol, acetone, acetonitrile, and dimethyl sulfoxide. Polar solvents are often found to have a high dielectric constant, although other solvent scales are also used to classify solvent polarity. Polar solvents can be used to dissolve inorganic or ionic compounds such as salts. The conductivity of a solution depends on the solvation of its ions. Nonpolar solvents cannot solvate ions, and ions will be found as ion pairs. Hydrogen bonding among solvent and solute molecules depends on the ability of each to accept H-bonds, donate H-bonds, or both. Solvents that can donate H-bonds are referred to as protic, while solvents that do not contain a polarized bond to a hydrogen atom and cannot donate a hydrogen bond are called aprotic. H-bond donor ability is classified on a scale (α). Protic solvents can solvate solutes that can accept hydrogen bonds. Similarly, solvents that can accept a hydrogen bond can solvate H-bond-donating solutes. The hydrogen bond acceptor ability of a solvent is classified on a scale (β). Solvents such as water can both donate and accept hydrogen bonds, making them excellent at solvating solutes that can donate or accept (or both) H-bonds. Some chemical compounds experience solvatochromism, which is a change in color due to solvent polarity. This phenomenon illustrates how different solvents interact differently with the same solute. Other solvent effects include conformational or isomeric preferences and changes in the acidity of a solute. Solvation energy and thermodynamic considerations The solvation process will be thermodynamically favored only if the overall Gibbs energy of the solution is decreased, compared to the Gibbs energy of the separated solvent and solid (or gas or liquid). This means that the change in enthalpy minus the change in entropy (multiplied by the absolute temperature) is a negative value, or that the Gibbs energy of the system decreases. A negative Gibbs energy indicates a spontaneous process but does not provide information about the rate of dissolution. Solvation involves multiple steps with different energy consequences. First, a cavity must form in the solvent to make space for a solute. This is both entropically and enthalpically unfavorable, as solvent ordering increases and solvent-solvent interactions decrease. Stronger interactions among solvent molecules leads to a greater enthalpic penalty for cavity formation. Next, a particle of solute must separate from the bulk. This is enthalpically unfavorable since solute-solute interactions decrease, but when the solute particle enters the cavity, the resulting solvent-solute interactions are enthalpically favorable. Finally, as solute mixes into solvent, there is an entropy gain. The enthalpy of solution is the solution enthalpy minus the enthalpy of the separate systems, whereas the entropy of solution is the corresponding difference in entropy. The solvation energy (change in Gibbs free energy) is the change in enthalpy minus the product of temperature (in Kelvin) times the change in entropy. Gases have a negative entropy of solution, due to the decrease in gaseous volume as gas dissolves. 
Since their enthalpy of solution does not decrease too much with temperature, and their entropy of solution is negative and does not vary appreciably with temperature, most gases are less soluble at higher temperatures. Enthalpy of solvation can help explain why solvation occurs with some ionic lattices but not with others. The difference in energy between that which is necessary to release an ion from its lattice and the energy given off when it combines with a solvent molecule is called the enthalpy change of solution. A negative value for the enthalpy change of solution corresponds to an ion that is likely to dissolve, whereas a high positive value means that solvation will not occur. It is possible that an ion will dissolve even if it has a positive enthalpy value. The extra energy required comes from the increase in entropy that results when the ion dissolves. The introduction of entropy makes it harder to determine by calculation alone whether a substance will dissolve or not. A quantitative measure for solvation power of solvents is given by donor numbers. Although early thinking was that a higher ratio of a cation's ion charge to ionic radius, or the charge density, resulted in more solvation, this does not stand up to scrutiny for ions like iron(III) or lanthanides and actinides, which are readily hydrolyzed to form insoluble (hydrous) oxides. As these are solids, it is apparent that they are not solvated. Strong solvent–solute interactions make the process of solvation more favorable. One way to compare how favorable the dissolution of a solute is in different solvents is to consider the free energy of transfer. The free energy of transfer quantifies the free energy difference between dilute solutions of a solute in two different solvents. This value essentially allows for comparison of solvation energies without including solute-solute interactions. In general, thermodynamic analysis of solutions is done by modeling them as reactions. For example, if you add sodium chloride to water, the salt will dissociate into the ions sodium(+aq) and chloride(-aq). The equilibrium constant for this dissociation can be predicted by the change in Gibbs energy of this reaction. The Born equation is used to estimate Gibbs free energy of solvation of a gaseous ion. Recent simulation studies have shown that the variation in solvation energy between the ions and the surrounding water molecules underlies the mechanism of the Hofmeister series. Macromolecules and assemblies Solvation (specifically, hydration) is important for many biological structures and processes. For instance, solvation of ions and/or of charged macromolecules, like DNA and proteins, in aqueous solutions influences the formation of heterogeneous assemblies, which may be responsible for biological function. As another example, protein folding occurs spontaneously, in part because of a favorable change in the interactions between the protein and the surrounding water molecules. Folded proteins are stabilized by 5-10 kcal/mol relative to the unfolded state due to a combination of solvation and the stronger intramolecular interactions in the folded protein structure, including hydrogen bonding. Minimizing the number of hydrophobic side chains exposed to water by burying them in the center of a folded protein is a driving force related to solvation. Solvation also affects host–guest complexation. Many host molecules have a hydrophobic pore that readily encapsulates a hydrophobic guest. 
These interactions can be used in applications such as drug delivery, such that a hydrophobic drug molecule can be delivered in a biological system without needing to covalently modify the drug in order to solubilize it. Binding constants for host–guest complexes depend on the polarity of the solvent. Hydration affects electronic and vibrational properties of biomolecules. Importance of solvation in computer simulations Due to the importance of the effects of solvation on the structure of macromolecules, early computer simulations which attempted to model their behaviors without including the effects of solvent (in vacuo) could yield poor results when compared with experimental data obtained in solution. Small molecules may also adopt more compact conformations when simulated in vacuo; this is due to favorable van der Waals interactions and intramolecular electrostatic interactions which would be dampened in the presence of a solvent. As computer power increased, it became possible to try and incorporate the effects of solvation within a simulation and the simplest way to do this is to surround the molecule being simulated with a "skin" of solvent molecules, akin to simulating the molecule within a drop of solvent if the skin is sufficiently deep. See also Born equation Saturated solution Solubility equilibrium Solvent models Supersaturation Water model References Further reading External links Solutions Chemical processes
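As a rough numerical companion to the thermodynamic relations discussed in the solvation article above, the following sketch evaluates the Gibbs relation ΔG_solv = ΔH_solv − TΔS_solv and the standard Born estimate for the solvation free energy of a gaseous ion. The Python helper names are ours, and the ionic radius and permittivity fed in at the end are illustrative values, not data taken from the article.

```python
import math

# Physical constants (SI units)
N_A = 6.02214076e23         # Avogadro constant, 1/mol
E_CHARGE = 1.602176634e-19  # elementary charge, C
EPS_0 = 8.8541878128e-12    # vacuum permittivity, F/m

def gibbs_of_solvation(delta_h_j_per_mol, delta_s_j_per_mol_k, temperature_k):
    """Delta G_solv = Delta H_solv - T * Delta S_solv."""
    return delta_h_j_per_mol - temperature_k * delta_s_j_per_mol_k

def born_solvation_energy(charge_number, radius_m, relative_permittivity):
    """Born estimate of the Gibbs free energy of solvation of a gaseous ion (J/mol):
    Delta G = -(N_A * z^2 * e^2) / (8 * pi * eps_0 * r) * (1 - 1/eps_r)."""
    prefactor = (N_A * charge_number ** 2 * E_CHARGE ** 2) / (8 * math.pi * EPS_0 * radius_m)
    return -prefactor * (1.0 - 1.0 / relative_permittivity)

# Exothermic but order-reducing dissolution at 298 K (illustrative numbers only)
print(f"{gibbs_of_solvation(-20_000, -10.0, 298.15):.0f} J/mol")  # about -17,000 J/mol, still spontaneous

# Illustrative only: a +1 ion with a 1.0 angstrom radius in water (eps_r ~ 78.4 at 25 C)
dg_born = born_solvation_energy(1, 1.0e-10, 78.4)
print(f"Born solvation energy: {dg_born / 1000:.0f} kJ/mol")  # prints a value near -690 kJ/mol
```

The Born model treats the ion as a charged sphere in a dielectric continuum, so it overstates the energy for small, highly charged ions; it is useful here only to show how charge, radius, and permittivity enter the estimate.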
Solvation
Chemistry
2,426
22,923,559
https://en.wikipedia.org/wiki/1948%20American-Australian%20Scientific%20Expedition%20to%20Arnhem%20Land
The American-Australian Scientific Expedition to Arnhem Land (also known as the Arnhem Land Expedition) remains one of the most significant, most ambitious and least understood scientific expeditions mounted in Australia. Commencing in February 1948, it was one of the largest scientific expeditions to have taken place in Australia and was conducted by a team of Australian and American researchers and support staff. Background A number of publications, including H. H. Finlayson's The Red Centre: Man and Beast in the Heart of Australia (1935) and Walkabout travel and geographical magazine (1934–1974), revised Australians' concept of 'The Centre' from the picture presented in J. W. Gregory's The Dead Heart of Australia (1909). Charles P. Mountford, leader-to-be of the Arnhem Land Expedition, and his wife Bessie travelled for four months from Ernabella to Uluru in 1940 with Lauri Sheard and the skilled cameleer Tommy Dodd, undertaking an extensive study of the art and mythology surrounding Uluru and Kata Tjuta. The results of this endeavour were showcased through photographic exhibitions and a prize-winning film created in 1940, which subsequently became the foundation for Mountford's first publication Brown Men and Red Sand (1948) and for his 1945 lecture tour in the United States, which paved the way for the establishment of the American-Australian Scientific Expedition to Arnhem Land. The American-Australian Scientific Expedition to Arnhem Land, known as the 'last of the big expeditions,' was not primarily about terrestrial exploration but aimed to advance knowledge. It focused on studying the natural environment and Aboriginal inhabitants. Taking place after World War II, it symbolized transformations in Australia and globally. The expedition served diplomatic objectives by showcasing collaboration between the United States and Australia, enhancing their trans-Pacific relationship. The mission's public face hid negotiations that would shape this relationship for the remainder of the 20th century. The expedition garnered domestic support due to Australia's pro-American sentiments after WWII, as the nation adjusted to post-war changes and Britain's reduced global influence. The subsequent signing of the ANZUS Treaty by Robert Menzies continued this collaborative trajectory. The expedition Seventeen individuals, both men and women, journeyed across the remote region known as Arnhem Land in northern Australia for nine months. From varying disciplinary perspectives, and under the guidance of expedition leader Charles Mountford, they investigated the Indigenous populations and the environment of Arnhem Land. In addition to an ethnographer, archaeologist, photographer, and filmmaker, the expedition included a botanist, a mammalogist, an ichthyologist, an ornithologist, and a team of medical and nutritional scientists. Their first base camp was on Groote Eylandt in the Gulf of Carpentaria. Three months later they moved to Yirrkala on the Gove Peninsula and three months following that to Oenpelli (now Gunbalanya) in west Arnhem Land. The journey involved the collaboration of different sponsors and partners (among them the National Geographic Society, the Smithsonian Institution, and various agencies of the Commonwealth of Australia). A Bulletin article in 1956 noted that the scientists collected 13,500 plants, 30,000 fish, 850 birds, 460 animals and thousands of implements, amounting to twenty-five tons, and photographed and filmed in colour and black-and-white and made tracings of cave-paintings from Chasm Island, Groote Eylandt and Oenpelli.
The Australian Broadcasting Commission promoted the Expedition in its ABC Weekly magazine by appealing to readers curiosity about "...a fish that looks exactly like a leaf, a multi-coloured praying mantis, intricate string games the aborigines play, a fungus used to cure wounds..." In the wake of the expedition came volumes of scientific publications. The legacy of the 1948 Arnhem Land Expedition is vast, complex, and, at times, contentious. Human remains collected by Setzler and later held by the Smithsonian Institution have since been repatriated to Gunbalanya. Expedition members ABC reporters Two staff members from ABC Radio also joined the expedition: Colin Simpson Raymond Frank Giles - Sound Recorder Publications Collections National Museum of Australia Australian Museum National Museum of Natural History, Smithsonian Institution Art Gallery of New South Wales South Australian Museum State Herbarium of South Australia Art Gallery of South Australia State Library of South Australia (literary collections) Tasmanian Museum and Art Gallery Art Gallery of Western Australia Queensland Art Gallery National Gallery of Victoria Notes and references Further reading May, Sally K. in press 2009. Collecting Indigenous Cultures: myth, politics and collaboration in the 1948 Arnhem Land Expedition. California: Altamira. May, Sally K. 2008 ‘The Art of Collecting: Charles Pearcy Mountford’. In Nicholas Peterson, Lindy Allen, and Louise Hamby, The Makers and Making of Indigenous Australian Museum Collections. Melbourne: Museum Victoria. May, Sally K. with Donald Gumurdul, Jacob Manakgu, Gabriel Maralngurra and Wilfred Nawirridj. 2005. 'You Write it Down and Bring it Back… That's What We Want" - Revisiting the 1948 Removal of Human Remains from Gunbalanya (Oenpelli), Australia', in Smith, Claire & Wobst, H. Martin (eds). Indigenous Peoples and Archaeology. London: Routledge. May, Sally K. 2005 ‘Collecting the ‘Last Frontier’’, in Hamby, Louise (ed). Twined Together. Melbourne: Museum Victoria. May, Sally K, Jennifer McKinnon and Jason Raupp, 2009. ‘Boats on Bark: an analysis of Groote Eylandt bark paintings featuring Macassan praus from the 1948 Arnhem Land Expedition’, International Journal of Nautical Archaeology. May, Sally K. 2003 'Colonial Collections of Portable Art and Intercultural Encounters in Aboriginal Australia', in Paul Faulstich, Sven Ouzman, and Paul S.C. Taçon (eds), Before Farming: the archaeology and anthropology of hunter-gatherers. California: Altamira. 1, 8, p. 1-17. May, Sally K. 2000. The Last Frontier? Acquiring the American-Australian Scientific Expedition Ethnographic Collection 1948, Unpublished B.A. (Honours) Thesis, Flinders University of South Australia. Neale, Margo. 1993 'Charles Mountford and the 'Bastard Barks' A Gift from the American-Australian Scientific Expedition to Arnhem Land, 1948. In Lynne Seear & Julie Ewington, Brought to Light, Australian Art 1850 - 1965, From the Queensland Art Gallery Collection. Brisbane: Queensland Art Gallery. Brittain, N. (1990). The South Australian Museum collection of Aboriginal bark paintings from Northern Australia. Unpublished Honors BA Thesis, Flinders University of South Australia, Adelaide. Calwell, A. (1978). Be just and fear not. Adelaide: Rigby Limited. Clarke, A. (1998). Engendered fields: The language of the 1948 American-Australian expedition to Arnhem Land. In Redefining Archaeology, Feminist Perspectives. Canberra: North Australia Research Unit. Florek, S. (1993). F. D. 
McCarthy’s string figures from Yirrkala: A museum perspective. Records of the Australian Museum, Supplement 17, pp. 117–24. Johnson, D. H. (1955). The incredible kangaroo. National geographic, 108(4), 487–500. Walker, H. (1949). Cruise to Stone Age Arnhem Land. National Geographic, 96(3), 417–30. Jones, C. (1987). The toys of the American Australian Scientific Expedition to Arnhem Land ethnographic collection. Unpublished Diploma Thesis, University of Sydney, Sydney. Lamshed, M. (1972). Monty: A biography of CP. Mountford. Adelaide: Rigby. McArthur, M., Billington, B. P., and Hodges, K. J. (2000). Nutrition and health (1948) of Aborigines in settlements in Arnhem Land, northern Australia. Asia Pacific journal of clinical nutrition, 9(3), 164–213. McArthur, M., McCarthy, F., and Specht, R. (2000). Nutrition studies (1948) of nomadic Aborigines in Arnhem Land, northern Australia. Asia Pacific Journal of clinical nutrition, 9(3), 215–23. Simpson, C. (1951). Adam in Ochre: Inside Aboriginal Australia. Sydney: Angus and Robertson. External links National Museum of Australia Audio on Demand: Barks, Birds and Billabongs: Exploring the Legacy of the 1948 American-Australian Scientific Expedition to Arnhem Land, International Symposium held at the National Museum of Australia 16–20 November 2009 State Library of South Australia: Mountford-Sheard Collection 1940s in the Northern Territory Arnhem Land Arnhem Land expedition Arnhem Land expedition Anthropological research institutes Biochemistry Ornithology in Australia Ichthyology Botanical expeditions National Geographic Society Archaeology in Oceania Scientific expeditions Photographic collections Australian Aboriginal cultural history Nutritional science Mammalogy Museology
1948 American-Australian Scientific Expedition to Arnhem Land
Chemistry,Biology
1,880
35,714,108
https://en.wikipedia.org/wiki/Cancer%20biomarker
A cancer biomarker refers to a substance or process that is indicative of the presence of cancer in the body. A biomarker may be a molecule secreted by a tumor or a specific response of the body to the presence of cancer. Genetic, epigenetic, proteomic, glycomic, and imaging biomarkers can be used for cancer diagnosis, prognosis, and epidemiology. Ideally, such biomarkers can be assayed in non-invasively collected biofluids like blood or serum. While numerous challenges exist in translating biomarker research into the clinical space, a number of gene- and protein-based biomarkers have already been used at some point in patient care, including AFP (liver cancer), BCR-ABL (chronic myeloid leukemia), BRCA1 / BRCA2 (breast/ovarian cancer), BRAF V600E (melanoma/colorectal cancer), CA-125 (ovarian cancer), CA19.9 (pancreatic cancer), CEA (colorectal cancer), EGFR (non-small-cell lung carcinoma), HER-2 (breast cancer), KIT (gastrointestinal stromal tumor), PSA (prostate-specific antigen; prostate cancer), S100 (melanoma), and many others. Mutant proteins themselves detected by selected reaction monitoring (SRM) have been reported to be the most specific biomarkers for cancers because they can only come from an existing tumor. About 40% of cancers can be cured if detected early through examinations. Definitions of cancer biomarkers Organizations and publications vary in their definition of biomarker. In many areas of medicine, biomarkers are limited to proteins identifiable or measurable in the blood or urine. However, the term is often used to cover any molecular, biochemical, physiological, or anatomical property that can be quantified or measured. The National Cancer Institute (NCI), in particular, defines a biomarker as: "A biological molecule found in blood, other body fluids, or tissues that is a sign of a normal or abnormal process, or of a condition or disease. A biomarker may be used to see how well the body responds to a treatment for a disease or condition. Also called molecular marker and signature molecule." In cancer research and medicine, biomarkers are used in three primary ways: To help diagnose conditions, as in the case of identifying early stage cancers (diagnostic) To forecast how aggressive a condition is, as in the case of determining a patient's ability to fare in the absence of treatment (prognostic) To predict how well a patient will respond to treatment (predictive) Role of biomarkers in cancer research and medicine Uses of biomarkers in cancer medicine Risk assessment Cancer biomarkers, particularly those associated with genetic mutations or epigenetic alterations, often offer a quantitative way to determine whether individuals are predisposed to particular types of cancers. Notable examples of potentially predictive cancer biomarkers include mutations in genes KRAS, p53, EGFR, erbB2 for colorectal, esophageal, liver, and pancreatic cancer; mutations of genes BRCA1 and BRCA2 for breast and ovarian cancer; abnormal methylation of tumor suppressor genes p16, CDKN2B, and p14ARF for brain cancer; hypermethylation of MYOD1, CDH1, and CDH13 for cervical cancer; and hypermethylation of p16, p14, and RB1 for oral cancer. Diagnosis Cancer biomarkers can also be useful in establishing a specific diagnosis. This is particularly the case when there is a need to determine whether tumors are of primary or metastatic origin.
To make this distinction, researchers can screen the chromosomal alterations found on cells located in the primary tumor site against those found in the secondary site. If the alterations match, the secondary tumor can be identified as metastatic; whereas if the alterations differ, the secondary tumor can be identified as a distinct primary tumor. For example, people with tumors have high levels of circulating tumor DNA (ctDNA) due to tumor cells that have gone through apoptosis. This tumor marker can be detected in the blood, saliva, or urine. The possibility of identifying an effective biomarker for early cancer diagnosis has recently been questioned, in light of the high molecular heterogeneity of tumors observed by next-generation sequencing studies. Prognosis and treatment predictions Another use of biomarkers in cancer medicine is for disease prognosis, which takes place after an individual has been diagnosed with cancer. Here biomarkers can be useful in determining the aggressiveness of an identified cancer as well as its likelihood of responding to a given treatment. In part, this is because tumors exhibiting particular biomarkers may be responsive to treatments tied to that biomarker's expression or presence. Examples of such prognostic biomarkers include elevated levels of metallopeptidase inhibitor 1 (TIMP1), a marker associated with more aggressive forms of multiple myeloma; elevated estrogen receptor (ER) and/or progesterone receptor (PR) expression, markers associated with better overall survival in patients with breast cancer; HER2/neu gene amplification, a marker indicating a breast cancer will likely respond to trastuzumab treatment; a mutation in exon 11 of the proto-oncogene c-KIT, a marker indicating a gastrointestinal stromal tumor (GIST) will likely respond to imatinib treatment; and mutations in the tyrosine kinase domain of EGFR1, a marker indicating a patient's non-small-cell lung carcinoma (NSCLC) will likely respond to gefitinib or erlotinib treatment. Pharmacodynamics and pharmacokinetics Cancer biomarkers can also be used to determine the most effective treatment regimen for a particular person's cancer. Because of differences in each person's genetic makeup, some people metabolize or change the chemical structure of drugs differently. In some cases, decreased metabolism of certain drugs can create dangerous conditions in which high levels of the drug accumulate in the body. As such, drug dosing decisions in particular cancer treatments can benefit from screening for such biomarkers. An example is the gene encoding the enzyme thiopurine methyl-transferase (TPMT). Individuals with mutations in the TPMT gene are unable to metabolize large amounts of the leukemia drug mercaptopurine, which potentially causes a fatal drop in white blood cell count for such patients. Patients with TPMT mutations are thus recommended to be given a lower dose of mercaptopurine for safety considerations. Monitoring treatment response Cancer biomarkers have also shown utility in monitoring how well a treatment is working over time. Much research is going into this particular area, since successful biomarkers have the potential of providing significant cost reduction in patient care, as the current image-based tests such as CT and MRI for monitoring tumor status are highly costly. One notable biomarker garnering significant attention is the protein biomarker S100-beta, used in monitoring the response of malignant melanoma.
In such melanomas, melanocytes, the cells that make pigment in the skin, produce the protein S100-beta in high concentrations, dependent on the number of cancer cells. Response to treatment is thus associated with reduced levels of S100-beta in the blood of such individuals. Similarly, additional laboratory research has shown that tumor cells undergoing apoptosis can release cellular components such as cytochrome c, nucleosomes, cleaved cytokeratin-18, and E-cadherin. Studies have found that these macromolecules and others can be found in circulation during cancer therapy, providing a potential source of clinical metrics for monitoring treatment. Recurrence Cancer biomarkers can also offer value in predicting or monitoring cancer recurrence. The Oncotype DX® breast cancer assay is one such test used to predict the likelihood of breast cancer recurrence. This test is intended for women with early-stage (Stage I or II), node-negative, estrogen receptor-positive (ER+) invasive breast cancer who will be treated with hormone therapy. Oncotype DX looks at a panel of 21 genes in cells taken during tumor biopsy. The results of the test are given in the form of a recurrence score that indicates likelihood of recurrence at 10 years. Uses of biomarkers in cancer research Developing drug targets In addition to their use in cancer medicine, biomarkers are often used throughout the cancer drug discovery process. For instance, in the 1960s, researchers discovered that the majority of patients with chronic myelogenous leukemia possessed a particular genetic abnormality on chromosomes 9 and 22 dubbed the Philadelphia chromosome. When parts of these two chromosomes fuse, they create a cancer-causing gene known as BCR-ABL. In such patients, this gene acts as the principal initiating point in all of the physiological manifestations of the leukemia. For many years, BCR-ABL was simply used as a biomarker to stratify a certain subtype of leukemia. However, drug developers were eventually able to develop imatinib, a powerful drug that effectively inhibited this protein and significantly decreased production of cells containing the Philadelphia chromosome. Surrogate endpoints Another promising area of biomarker application is in the area of surrogate endpoints. In this application, biomarkers act as stand-ins for the effects of a drug on cancer progression and survival. Ideally, the use of validated biomarkers would prevent patients from having to undergo tumor biopsies and lengthy clinical trials to determine if a new drug worked. In the current standard of care, the metric for determining a drug's effectiveness is to check if it has decreased cancer progression in humans and ultimately whether it prolongs survival. However, successful biomarker surrogates could save substantial time, effort, and money if failing drugs could be eliminated from the development pipeline before being brought to clinical trials.
Some ideal characteristics of surrogate endpoint biomarkers include: The biomarker should be involved in the process that causes the cancer Changes in the biomarker should correlate with changes in the disease Levels of the biomarker should be high enough that they can be measured easily and reliably Levels or presence of the biomarker should readily distinguish between normal, cancerous, and precancerous tissue Effective treatment of the cancer should change the level of the biomarker Level of the biomarker should not change spontaneously or in response to other factors not related to the successful treatment of the cancer Two areas in particular that are receiving attention as surrogate markers include circulating tumor cells (CTCs) and circulating miRNAs. Both these markers are associated with the number of tumor cells present in the blood, and as such, are hoped to provide a surrogate for tumor progression and metastasis. However, significant barriers to their adoption include the difficulty of enriching, identifying, and measuring CTC and miRNA levels in blood. New technologies and research are likely necessary for their translation into clinical care. Types of cancer biomarkers Molecular cancer biomarkers Other examples of biomarkers: Tumor suppressors lost in cancer Examples: BRCA1, BRCA2 RNA Examples: mRNA, microRNA Proteins found in body fluids or tissue. Examples: Prostate-specific antigen and CA-125 Antibodies to cancer antigens Examples: Merkel cell polyomavirus DNA Examples: Circulating Tumor DNA (ctDNA) Cancer biomarkers without specificity Not all cancer biomarkers have to be specific to types of cancer. Some biomarkers found in the circulatory system can be used to detect abnormal growth of cells in the body. All these types of biomarkers can be identified through diagnostic blood tests, which is one of the main reasons to get tested regularly. Regular testing allows many health issues, such as cancer, to be discovered at an early stage, preventing many deaths. The neutrophil-to-lymphocyte ratio has been shown to be a non-specific determinant for many cancers, as shown in the sketch below. This ratio focuses on the activity of two components of the immune system that are involved in the inflammatory response, which is shown to be higher in the presence of malignant tumors. Additionally, basic fibroblast growth factor (bFGF) is a protein that is involved in the proliferation of cells. Unfortunately, it has been shown that in the presence of tumors it is highly active, which has led to the conclusion that it may help malignant cells reproduce at faster rates. Research has shown that anti-bFGF antibodies can be used to help treat tumors from many origins. Moreover, the insulin-like growth factor receptor (IGF-R) is involved in cell proliferation and growth. It is possible that it is involved in inhibiting apoptosis, the programmed death of cells due to some defect. Due to this, the levels of IGF-R can be increased when cancers such as breast, prostate, lung, and colorectal cancer are present. See also Tumor marker References Oncology Biomarkers
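To make the neutrophil-to-lymphocyte ratio mentioned above concrete: it is simply the quotient of two counts from a differential blood test. The sketch below is a minimal Python illustration; the counts are made-up example numbers, and no clinical cut-off is asserted.

```python
def neutrophil_lymphocyte_ratio(neutrophils_per_ul: float, lymphocytes_per_ul: float) -> float:
    """Neutrophil-to-lymphocyte ratio (NLR) from absolute counts in cells per microliter."""
    if lymphocytes_per_ul <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils_per_ul / lymphocytes_per_ul

# Made-up illustrative counts, not reference values
nlr = neutrophil_lymphocyte_ratio(4900, 1400)
print(f"NLR = {nlr:.1f}")  # 3.5; interpretation thresholds vary by study and are not given here
```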
Cancer biomarker
Biology
2,749
2,916,802
https://en.wikipedia.org/wiki/53%20Cancri
53 Cancri is a variable star in the zodiac constellation Cancer, located around 960 light years from the Sun. It has the variable star designation BO Cancri; 53 Cancri is the Flamsteed designation. This object is a challenge to view with the naked eye, having an apparent visual magnitude around 6. The star is moving further away from the Earth with a heliocentric radial velocity of +14 km/s. 53 Cancri is an aging red giant on the asymptotic giant branch and has a stellar classification of M3 III. It has expanded to 87 times the radius of the Sun, and its bolometric luminosity is over a thousand times higher than the Sun's at an effective temperature of . In 1969, Olin Jeuck Eggen announced that small variations in the brightness of 53 Cancri had been detected. For that reason it was given a variable star designation in 1972. 53 Cancri is a semiregular variable that varies between magnitude 5.9 and 6.4 with a period of 27 days. There is a suspected second period of 270 days. References M-type giants Semiregular variable stars Cancer (constellation) Durchmusterung objects Cancri, 53 075716 3521 043575 Cancri, BO
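The radius and luminosity quoted above are tied together by the Stefan-Boltzmann law, L = 4πR²σT⁴. As a plausibility check only (the effective temperature used is an assumed value typical of an M3 giant, not a figure from the article), the Python sketch below shows that a giant of 87 solar radii at roughly 3,500 K radiates on the order of a thousand solar luminosities.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8         # solar radius, m
L_SUN = 3.828e26        # solar luminosity, W

def bolometric_luminosity(radius_in_rsun: float, t_eff_k: float) -> float:
    """Luminosity in solar units from L = 4 * pi * R^2 * sigma * T^4."""
    radius_m = radius_in_rsun * R_SUN
    return 4 * math.pi * radius_m ** 2 * SIGMA * t_eff_k ** 4 / L_SUN

# Assumed temperature of 3,500 K for illustration only; the article's value is not reproduced here
print(f"{bolometric_luminosity(87, 3500):.0f} L_sun")  # roughly 1,000 L_sun
```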
53 Cancri
Astronomy
282
48,063
https://en.wikipedia.org/wiki/Fibonacci%20coding
In mathematics and computing, Fibonacci coding is a universal code which encodes positive integers into binary code words. It is one example of representations of integers based on Fibonacci numbers. Each code word ends with "11" and contains no other instances of "11" before the end. The Fibonacci code is closely related to the Zeckendorf representation, a positional numeral system that uses Zeckendorf's theorem and has the property that no number has a representation with consecutive 1s. The Fibonacci code word for a particular integer is exactly the integer's Zeckendorf representation with the order of its digits reversed and an additional "1" appended to the end. Definition For a number N, if d(0), d(1), ..., d(k−1), d(k) represent the digits of the code word representing N, then we have: N = d(0)F(2) + d(1)F(3) + ... + d(k−1)F(k+1), with d(k−1) = d(k) = 1, where F(i) is the ith Fibonacci number, and so F(i+2) is the ith distinct Fibonacci number starting with 1, 2, 3, 5, 8, 13, ... The last bit is always an appended bit of 1 and does not carry place value. It can be shown that such a coding is unique, and the only occurrence of "11" in any code word is at the end (that is, d(k−1) and d(k)). The penultimate bit is the most significant bit and the first bit is the least significant bit. Also, leading zeros cannot be omitted as they can be in, for example, decimal numbers. The code words for the first few positive integers are 11, 011, 0011, 1011, 00011, 10011, 01011, 000011, ...; each has a so-called implied probability, the value for each number that has a minimum-size code in Fibonacci coding. To encode an integer N: Find the largest Fibonacci number equal to or less than N; subtract this number from N, keeping track of the remainder. If the number subtracted was the ith Fibonacci number F(i), put a 1 in place i − 2 in the code word (counting the leftmost digit as place 0). Repeat the previous steps, substituting the remainder for N, until a remainder of 0 is reached. Place an additional 1 after the rightmost digit in the code word. To decode a code word, remove the final "1", assign the values 1, 2, 3, 5, 8, 13, ... (the Fibonacci numbers) to the remaining bits in the code word, and sum the values of the "1" bits. Comparison with other universal codes Fibonacci coding has a useful property that sometimes makes it attractive in comparison to other universal codes: it is an example of a self-synchronizing code, making it easier to recover data from a damaged stream. With most other universal codes, if a single bit is altered, then none of the data that comes after it will be correctly read. With Fibonacci coding, on the other hand, a changed bit may cause one token to be read as two, or cause two tokens to be read incorrectly as one, but reading a "0" from the stream will stop the errors from propagating further. Since the only stream that has no "0" in it is a stream of "11" tokens, the total edit distance between a stream damaged by a single bit error and the original stream is at most three. This approach, encoding using a sequence of symbols in which some patterns (like "11") are forbidden, can be freely generalized. Example The number 65 is represented in Fibonacci coding as 0100100011, since 65 = 2 + 8 + 55. The first two Fibonacci numbers (0 and 1) are not used, and an additional 1 is always appended. Generalizations The Fibonacci encodings for the positive integers are binary strings that end with "11" and contain no other instances of "11". This can be generalized to binary strings that end with N consecutive 1s and contain no other instances of N consecutive 1s.
For instance, for N = 3 the positive integers are encoded as 111, 0111, 00111, 10111, 000111, 100111, 010111, 110111, 0000111, 1000111, 0100111, …. In this case, the number of encodings as a function of string length is given by the sequence of tribonacci numbers. For general constraints defining which symbols are allowed after a given symbol, the maximal information rate can be obtained by first finding the optimal transition probabilities using a maximal entropy random walk, then using an entropy coder (with switched encoder and decoder) to encode a message as a sequence of symbols fulfilling the found optimal transition probabilities. See also Golden ratio base NegaFibonacci coding Ostrowski numeration Universal code Varicode, a practical application Zeckendorf's theorem Maximal entropy random walk References Further reading Non-standard positional numeral systems Lossless compression algorithms Fibonacci numbers Data compression
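A minimal Python sketch of the encoding and decoding procedures described above. The function names are ours; the greedy largest-Fibonacci-first subtraction is exactly the Zeckendorf step, and the final appended '1' produces the terminating "11".

```python
def fib_encode(n: int) -> str:
    """Fibonacci code word for a positive integer n, as a string of '0'/'1' characters."""
    if n < 1:
        raise ValueError("only positive integers can be encoded")
    fibs = [1, 2]                             # distinct Fibonacci numbers 1, 2, 3, 5, 8, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs = [f for f in fibs if f <= n]
    bits = ["0"] * len(fibs)
    remainder = n
    for i in range(len(fibs) - 1, -1, -1):    # greedy: largest Fibonacci number first
        if fibs[i] <= remainder:
            bits[i] = "1"
            remainder -= fibs[i]
    return "".join(bits) + "1"                # trailing 1 creates the terminating "11"

def fib_decode(code: str) -> int:
    """Inverse of fib_encode: drop the final appended '1', then sum the marked Fibonacci numbers."""
    bits = code[:-1]
    fibs = [1, 2]
    while len(fibs) < len(bits):
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for f, b in zip(fibs, bits) if b == "1")

assert fib_encode(1) == "11" and fib_encode(2) == "011" and fib_encode(3) == "0011"
assert fib_encode(65) == "0100100011"   # 65 = 2 + 8 + 55, as in the example above
assert fib_decode("0100100011") == 65
```

Note that this sketch handles a single code word; decoding a concatenated stream would additionally scan for each "11" terminator before calling fib_decode on the preceding token.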
Fibonacci coding
Mathematics
1,015
14,465,687
https://en.wikipedia.org/wiki/Oleg%20Viro
Oleg Yanovich Viro () (b. 13 May 1948, Leningrad, USSR) is a Russian mathematician in the fields of topology and algebraic geometry, most notably real algebraic geometry, tropical geometry and knot theory. Contributions Viro developed a "patchworking" technique in algebraic geometry, which allows real algebraic varieties to be constructed by a "cut and paste" method. Using this technique, Viro completed the isotopy classification of non-singular plane projective curves of degree 7. The patchworking technique was one of the fundamental ideas which motivated the development of tropical geometry. In topology, Viro is most known for his joint work with Vladimir Turaev, in which the Turaev-Viro invariants (relatives of the Reshetikhin-Turaev invariants) and related topological quantum field theory notions were introduced. Education and career Viro studied at the Leningrad State University where he received his Ph.D. degree in 1974; his advisor was Vladimir Rokhlin. Viro taught from 1973 until 1991 at Leningrad State University. Since 1986 he has been a member of the Saint Petersburg Department of the Steklov Institute of Mathematics. In 1992-1997, Viro was a F. B. Jones chair professor in Topology at the University of California, Riverside. In 1994-2003 he was a professor at Uppsala University, Sweden. On 8 February 2007, Viro and his colleague Burglind Juhl-Jöricke were forced to resign from the university. There had been a history of conflict at the Mathematics Institute, with allegations of disagreeable behavior by several parties in the conflict. A number of Swedish, European and American mathematicians protested the manner in which the two Professors of Mathematics were forced to resign. These protests include the following: an open letter by Lennart Carleson, former president of the International Mathematical Union, a letter by Ari Laptev, current president of the European Mathematical Society, and a letter from M. Salah Baouendi, Arthur Jaffe, Joel Lebowitz, Elliott H. Lieb and Nicolai Reshetikhin. As of 2009, Viro is a senior researcher at the St. Petersburg Department of the Steklov Institute of Mathematics, and a professor at Stony Brook University. Awards and honors Viro was an invited speaker at the International Congress of Mathematicians in 1983 (Warsaw) and the European Congress of Mathematicians in 2000 (Barcelona). He was awarded the Göran Gustafsson Prize (1997) by the Swedish government. In 2012 he became a fellow of the American Mathematical Society. References External links Oleg Viro's website 1948 births Living people 20th-century Russian mathematicians 21st-century Russian mathematicians Soviet mathematicians Topologists Algebraic geometers University of California, Riverside faculty Fellows of the American Mathematical Society Stony Brook University faculty
Oleg Viro
Mathematics
566
25,665,908
https://en.wikipedia.org/wiki/C20H26O4
The molecular formula C20H26O4 (molar mass: 330.42 g/mol, exact mass: 330.1831084 u) may refer to: Carnosol Momilactone B Molecular formulas
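The molar mass quoted above is the sum of standard atomic masses weighted by the subscripts in the formula. A small Python sketch (with a simplified parser that assumes a flat formula like this one, with no parentheses, hydrates, or isotope labels) reproduces the 330.42 g/mol figure.

```python
import re

# Standard atomic masses in g/mol for the elements appearing in C20H26O4
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula: str) -> float:
    """Sum element masses for a simple Hill-style formula such as 'C20H26O4'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_MASS[element] * (int(count) if count else 1)
    return total

print(f"{molar_mass('C20H26O4'):.2f} g/mol")  # 330.42 g/mol, matching the value above
```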
C20H26O4
Physics,Chemistry
63
240,540
https://en.wikipedia.org/wiki/Albert%20Szent-Gy%C3%B6rgyi
Albert Imre Szent-Györgyi de Nagyrápolt (; September 16, 1893 – October 22, 1986) was a Hungarian biochemist who won the Nobel Prize in Physiology or Medicine in 1937. He is credited with first isolating vitamin C and discovering many of the components and reactions of the citric acid cycle and the molecular basis of muscle contraction. He was also active in the Hungarian Resistance during World War II, and entered Hungarian politics after the war. Early life Szent-Györgyi was born in Budapest, Kingdom of Hungary, on September 16, 1893. His father, Miklós Szent-Györgyi, was a landowner, born in Marosvásárhely, Transylvania (today Târgu Mureş, Romania), a Calvinist, and could trace his ancestry back to 1608 when Sámuel, a Calvinist predicant, was ennobled. At the time of Szent-Györgyi's birth, being of the nobility was considered important and created opportunities that otherwise were not available. (Miklós Szent-Györgyi's parents were Imre Szent-Györgyi and Mária Csiky). His mother, Jozefina, a Roman Catholic, was a daughter of József Lenhossék and Anna Bossányi. Jozefina was a sister of Mihály Lenhossék; both of these men were Professors of Anatomy at the Eötvös Loránd University. His family included three generations of scientists. Music was important in the Lenhossék family. His mother Jozefina prepared to become an opera singer and auditioned for Gustav Mahler, then a conductor at the Budapest Opera. He advised her to marry instead, since her voice was not enough. Albert himself was good at the piano, while his brother Pál became a professional violinist. Education Szent-Györgyi began his studies at the Semmelweis University in 1911, and then began research in his uncle's anatomy lab. His studies were interrupted in 1914 to serve as an army medic in World War I. In 1916, disgusted with the war, Szent-Györgyi shot himself in the arm, claimed to be wounded from enemy fire, and was sent home on medical leave. He was then able to finish his medical education and received his MD in 1917. He married Kornélia Demény, the daughter of the Hungarian Postmaster General, that same year. After the war, Szent-Györgyi began his research career in Pozsony (today Bratislava, Slovakia). He switched universities several times over the next few years, finally ending up at the University of Groningen, where his work focused on the chemistry of cellular respiration. This work landed him a position as a Rockefeller Foundation fellow at the University of Cambridge. He received his PhD from the University of Cambridge in 1929 where he was a student at Fitzwilliam College, Cambridge. His research involved isolating an organic acid, which he then called "hexuronic acid", from adrenal gland tissue. Career and research Szent-Györgyi accepted a position at the University of Szeged in Hungary in 1930. There Szent-Györgyi and his research fellow Joseph Svirbely found that "hexuronic acid" was actually the long-sought antiscorbutic factor, henceforth known as vitamin C. After Walter Norman Haworth had determined its structure, the antiscorbutic was given the formal chemical name of L-ascorbic acid. In some experiments they used paprika as the source for their vitamin C. Also during this time, Szent-Györgyi continued his work on cellular respiration, identifying fumaric acid and other steps in what would become known as the Krebs cycle. In Szeged he also met Zoltán Bay, a physicist who became his friend and research partner in bio-physics. 
In 1937 he received the Nobel Prize in Physiology or Medicine "for his discoveries in connection with the biological combustion process with special reference to vitamin C and the catalysis of fumaric acid". Albert Szent-Györgyi offered all of his Nobel prize money to Finland in 1940. (Hungarian volunteers in the Winter War travelled to fight for the Finns after the Soviet invasion of Finland in 1939.) In 1938 he began work on the biophysics of muscle movement. He found that muscles contain actin, which, when combined with the protein myosin and the energy source ATP, contracts muscle fibers. In 1946, he received the Cameron Prize for Therapeutics of the University of Edinburgh. In 1947 Szent-Györgyi established the Institute for Muscle Research at the Marine Biological Laboratory (MBL) in Woods Hole, Massachusetts, with financial support from Hungarian businessman Stephen Rath. However, he still faced funding difficulties for several years due to his foreign status and former association with the Hungarian Communist government. In 1948, he received a research position with the National Institutes of Health (NIH) in Bethesda, Maryland and began dividing his time between there and Woods Hole. In 1950, grants from the Armour Meat Company and the American Heart Association allowed him to establish the Institute for Muscle Research at Woods Hole. Szent-Györgyi conducted research year-round at the MBL from 1947 to 1986. There, he found that whole muscle tissue retained its contractility almost indefinitely if stored cold in a fifty percent glycerol solution, thus eliminating the need to have fresh muscle on hand. During the 1950s Szent-Györgyi began using electron microscopes to study muscles at the subunit level. He received the Lasker Award in 1954. In 1955, he became a naturalized citizen of the United States. He was elected to the National Academy of Sciences (NAS) in 1956. In 1941, Szent-Györgyi developed a research interest in cancer, with ideas on applying the theories of quantum mechanics to the biochemistry (quantum biology) of cancer. The death of Rath, who had acted as the financial administrator of the Institute for Muscle Research, left Szent-Györgyi in financial difficulty. He refused to write government grant proposals minutely specifying his research methods and expected results. After Szent-Györgyi commented on his financial hardships in a 1971 newspaper interview, attorney Franklin Salisbury helped him establish a private nonprofit organization, the National Foundation for Cancer Research. Late in life, Szent-Györgyi began to pursue free radicals as a potential cause of cancer. He came to see cancer as being ultimately an electronic problem at the molecular level. In 1974, reflecting his interests in quantum physics, he proposed the term "syntropy" to replace "negentropy". Ralph Moss, a protégé during his cancer research years, wrote a biography, Free Radical: Albert Szent-Gyorgyi and the Battle over Vitamin C. Aspects of his work are an important precursor to the understanding of redox signaling. Statement on scientific discovery Albert Szent-Györgyi, who realized that "a discovery must be, by definition, at variance with existing knowledge," divided scientists into two categories: the Apollonians and the Dionysians. He called Dionysians the scientific dissenters who explore "the fringes of knowledge".
He wrote, "In science the Apollonian tends to develop established lines to perfection, while the Dionysian rather relies on intuition and is more likely to open new, unexpected alleys for research...The future of mankind depends on the progress of science, and the progress of science depends on the support it can find. Support mostly takes the form of grants, and the present methods of distributing grants unduly favor the Apollonian." Involvement in politics As the government of Gyula Gömbös and the associated Hungarian National Defence Association gained control of politics in Hungary, Szent-Györgyi helped his Jewish friends escape from the country. During World War II, he joined the Hungarian resistance movement. Although Hungary was allied with the Axis Powers, the Hungarian prime minister Miklós Kállay sent Szent-Györgyi to Istanbul in 1944 under the guise of a scientific lecture to begin secret negotiations with the Allies. The Germans learned of this plot and Adolf Hitler himself issued a warrant for the arrest of Szent-Györgyi. He escaped from house arrest and spent 1944 to 1945 as a fugitive from the Gestapo. After the war, Szent-Györgyi had become well-recognized as a public figure and there was some speculation that he might become President of Hungary, should the Soviets permit it. Szent-Györgyi established a laboratory at the University of Budapest and became head of the biochemistry department there. He was elected a member of Parliament and helped re-establish the Academy of Sciences. Dissatisfied with the Communist rule of Hungary, he emigrated to the United States in 1947. In 1967, Szent-Györgyi signed a letter declaring his intention to refuse to pay taxes as a means of protesting against the U.S. war against Vietnam, and urging other people to take a similar stand. He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt a Constitution for the Federation of Earth. Works online "Teaching and the Expanding Knowledge", in Rampart Journal of Individualist Thought, Vol. 1, No. 1 (March 1965). 24–28. (Reprinted from Science, Vol. 146, No. 3649 [December 4, 1964]. 1278–1279.) Publications On Oxidation, Fermentation, Vitamins, Health, and Disease (1940) Bioenergetics (1957) Introduction to a Submolecular Biology (1960) The Crazy Ape (1970) What next?! (1971) Electronic Biology and Cancer: A New Theory of Cancer (1976) The living state (1972) Bioelectronics: a study in cellular regulations, defense and cancer Lost in the Twentieth Century (Gandu) (1963) Personal life He married Cornelia Demény (1898–1981), daughter of the Hungarian Postmaster-General, in 1917. Their daughter, Cornelia Szent-Györgyi, was born in 1918 and died in 1969. He and Cornelia divorced in 1941. In 1941, he wed Marta Borbiro Miskolczy. She died of cancer in 1963. Szent-Györgyi married June Susan Wichterman, the 25-year-old daughter of Woods Hole biologist Ralph Wichterman, in 1965. They were divorced in 1968. He married his fourth wife, Marcia Houston, in 1975. They adopted a daughter, Lola von Szent-Györgyi. Death and legacy Szent-Györgyi died in Woods Hole, Massachusetts, US, on October 22, 1986. He was honored with a Google Doodle September 16, 2011, 118 years after his birth. In 2004, nine interviews were conducted with family, colleagues, and others to create a Szent-Györgyi oral history collection. 
Notes References Bibliography US National Library of Medicine. The Albert Szent-Györgyi Papers.NIH Profiles in Science Ilona Újszászi (ed.): The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. László Dux: On the Basics of Biochemistry. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 13–23. János Wölfling: Life through the eyes of a chemist. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 24–34. Gábor Tóth: From vitamins to peptides - Research topics in Szent-Györgyi's departments. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 35–57. István Hannus: The Analysis of Vitamin C in Szeged. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 58–76. Mária Homoki-Nagy: Protection of the creations of the mind in the history of Hungarian law. Copyright and patent rights; primacy and ethics in science. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 77–93. Miklós Gábor: Albert Szent-Györgyi's Studies on Flavones. Impact of the Discovery. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 94-122. Tamás Vajda: Effects of the discovery of vitamin C on the paprika industry and the economy of the southern part of the Hungarian Great Plain. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 123–152. Béla Pukánszky: The thoughts of Albert Szent-Györgyi on pedagogy. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 153–169. Csaba Jancsák: Albert Szent-Györgyi and the Student Union of the University of Szeged. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 170–193. József Pál: From the Unity of Life to the Coequality of the Forms of Consciousness. Worries of Albert Szent-Györgyi in Times of War. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 194–210. 
http://publicatio.bibl.u-szeged.hu/6615/1/Sz_Gy-Unity_of_life.pdf Ildikó Tasiné Csúcs: The science-rescuing activity of Albert Szent-Györgyi and its roots in Hungary after 1945. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 211–227. http://publicatio.bibl.u-szeged.hu/5744/1/Science_rescuing.pdf József Pál: About Albert Szent-Györgyi's Poems. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 228–237. https://web.archive.org/web/20160506215625/http://publicatio.bibl.u-szeged.hu/6611/1/A_Sz-Gy_poems.pdf Gábor Szabó: The passage of Szent-Györgyi to biophysics: a journey from the blur of the boundaries of disciplines through the instruments used for research with a stopover at the paprika centrifuge and arriving at the super lasers. In: The intellectual heritage of Albert Szent-Györgyi = Szegedi Egyetemi Tudástár 2.(Series editors.: László Dux, István Hannus, József Pál, Ilona Újszászi) Publishing Department-University of Szeged. 2014. 238–253. External links 1893 births 1986 deaths Alumni of Fitzwilliam College, Cambridge American anti-war activists American tax resisters Cancer researchers Citric acid cycle Honorary Fellows of the Royal Society of Edinburgh Hungarian biochemists Hungarian emigrants to the United States Hungarian Nobel laureates Nobel laureates from Austria-Hungary Hungarian people of World War II Institute for Advanced Study visiting scholars Members of the National Assembly of Hungary (1945–1947) Members of the United States National Academy of Sciences Nobel laureates in Physiology or Medicine Hungarian physiologists Recipients of the Albert Lasker Award for Basic Medical Research Semmelweis University alumni Academic staff of the University of Szeged Vitamin C Vitamin researchers World Constitutional Convention call signatories
Albert Szent-Györgyi
Chemistry
3,992
1,389,528
https://en.wikipedia.org/wiki/Tony%20Lecomber
Anthony "Tony" Mark Lecomber (born 1961) is a British far-right activist and former British National Party (BNP) politician who was deputy leader of the BNP from 1999 to 2006. Background Lecomber has been active in far-right politics since the early 1980s. His role is mainly behind the scenes in planning BNP election campaigns, but his history of convictions for violence have given him prominence in anti-BNP publicity and led to his removal from the party. He joined the National Front in the early 1980s, but allied with John Tyndall who was being blamed for the NF's poor performance at the 1979 general election. When Tyndall split to form the New National Front and later the British National Party, Lecomber followed him. He was editor of Young Nationalist, a racist and antisemitic magazine. Convictions Lecomber was convicted for criminal damage in 1982, offences under the Explosive Substances Act in 1985, and was sentenced to three years' imprisonment in 1991 for an attack on a Jewish teacher. On 31 October 1986, he was injured by a nailbomb that he was carrying to the offices of the Workers Revolutionary Party in Clapham. Police found 10 grenades, seven petrol bombs and two detonators at his home. For this offence, he received a three-year prison sentence at his trial on 28 November that year. In 1991, while he was Propaganda Director of the BNP, Lecomber was sentenced to three years' imprisonment for an attack on a Jewish teacher. Lecomber was released from his three-year sentence in time to play a part in the BNP's by-election win in Millwall ward of Tower Hamlets in September 1993. Later in the 1990s, Lecomber became closer to Nick Griffin and supported Griffin when he successfully challenged John Tyndall's leadership of the BNP in 1999. In 2006, Lecomber was sacked from his position as Group Development Officer. This followed allegations made by former Merseyside BNP organiser that Lecomber had tried to recruit him to assassinate prominent politicians and members of the British establishment. Lecomber admitted that a conversation had taken place but stated that he hadn't meant the comments to be taken literally. References 1961 births Living people British fascists British male criminals British National Party politicians British people convicted of assault British politicians convicted of crimes National Front (UK) politicians Neo-fascist terrorism People convicted on terrorism charges Prisoners and detainees of England and Wales Terrorism in the United Kingdom
Tony Lecomber
Chemistry
511
23,435,343
https://en.wikipedia.org/wiki/C10H16N2O3S
The molecular formula C10H16N2O3S (molar mass: 244.31 g/mol) may refer to: Amidephrine, an alpha-adrenergic agonist Biotin, a water-soluble B-complex vitamin
C10H16N2O3S
Chemistry
73
20,903,416
https://en.wikipedia.org/wiki/David%20Cleevely
David Douglas Cleevely CBE FREng FIET (born September 1953) is a British entrepreneur and international telecoms expert who has built and advised many companies, principally in Cambridge, UK. Telecommunications In 1985 Cleevely founded the telecommunications consultancy Analysys which became Analysys Mason, when it was acquired by Datatec in 2004. Whilst at Analysys he made a significant contribution to the theory and practice of calculating Universal Service Obligation costs and was involved with a report to the European Commission on VoIP creating the framework for VoIP within the EU and the identification of The Broadband Gap – where the cost of supply would exceed the price consumers were willing to pay which prompted UK Government policy intervention in 2001–2005 to force increased broadband infrastructure in the UK. Entrepreneurship Cleevely's entrepreneurial activities have been focused on the Cambridge area, with Business Weekly describing him as, "Intellectual heavyweight and passionate evangelist for the cluster" and was reported in the Financial Times which noted his role in founding Cambridge Network, Cambridge Angels and other contributions. He has worked tirelessly to get government to understand what makes Cambridge academia and business tick. In 1997 Cleevely co-founded Cambridge Network with Hermann Hauser, Alec Broers, Nigel Brown, Fred Hallsworth and Anthony Ross. In 1998 he co-founded biotech company Abcam plc and was chairman until November 2009. In 2001 he co-founded and became chairman of Cambridge Wireless (originally Cambridge 3G) with Edward Astle. He later said of the mobile industry, "This is an industry undergoing a revolution. The competitive edge is moving from handsets to platforms, from voice to data, from services to apps. The move of the big internet players into mobile is just the beginning. The future of the industry hinges on how this will play out." In the same year Cleevely co-founded Cambridge Angels, a group of angel investors who have now invested over £20m into 40 companies in the Cambridge area. In late 2004 he co-founded the 3g pico base station company, 3WayNetworks, which was sold to Airvana in April 2007. Between 2005 and 2008 he was Chairman of the Communications Research Network at University of Cambridge, part of the Cambridge–MIT Institute. In 2007 he co-founded and became the Chairman of the spectrum monitoring company CRFS, which has subsequently carried out the first ever UK-wide spectrum monitoring. In 2008 he also became the Chairman of the scanning ion-conductance microscopy company Ionscope. He funded and became chairman of the Bocca di Lupo restaurant in Soho, London in 2008, and of its subsidiary, Gelupo, in 2011. Bocca di Lupo came top in Time Out London's 50 best restaurants for 2009, was a runner-up in the Observer Food Monthly Awards 2010 and was named by Restaurant Magazine as the 23rd best restaurant in the UK at the National Restaurant Awards 2010. In 2013 he also invested in Cambridge restaurant The Pint Shop. Cleevely was Chairman of LabTech company OpenIOLabs, and became Non-Executive Director when they were acquired by DeepMatter (formerly Cronin Group) in 2017 and stepped down in May 2019. In 2019 he joined the board of Focal Point Positioning as Chair and has taken it through two successive funding rounds. 
In 2020 Focal Point Positioning was awarded both The Duke of Edinburgh's Navigation Award for Outstanding Technical Achievement from the Royal Institute of Navigation, and the Hottest SpaceTech Startup in Europe accolade from the Europas. In April 2023 Privitar, a privacy safeguarding company based on a patent by David Cleevely and John Taysom, was one of the winners of the Challenge to Drive Innovation in Privacy-Enhancing Technologies that Reinforce Democratic Values, awarded by the United States and the United Kingdom governments. Cleevely helped Professor Lee Cronin spin Chemify out from the University of Glasgow in 2021 and became Chairman. In 2023 Chemify announced the completion of £36m ($43m) funding. Nobel Prize winner Sir Fraser Stoddart said "I see Chemify as a major development for the field of chemistry." Public policy and government Cleevely is an authority on telecommunication policy and has advised numerous governments on policy and innovation frameworks. He advised the Prime Minister and UK Government on the ecommerce@its.best.uk report, and was one of the 8 industry experts who compiled the Communications White Paper which became the Communications Act 2003. In 2001 he was appointed by the UK government to the Spectrum Management Advisory Group, which became the Ofcom Spectrum Advisory Board, and the IET Communications Policy Panel, and was also appointed Advisor to the Main Board of DCSA (later the DES ISS) until 2009. He has also appeared before Select committees in both Parliament and the House of Lords. In 2009 David Cleevely became the Founding Director for the new Centre for Science and Policy and subsequently Chair of the Advisory Council, stepping down from the role in 2018. In January 2015 he also joined the Digital Economy Council (where he was a member until 2017), and he was on the advisory board for the Oxford Internet Institute from 2012 to 2018. In 2015, his contribution to the UK Government-backed report Visions of Cambridge 2065 saw him predict dramatic changes in the city over the coming 50 years, such as having more than 1 million residents, two $100 billion companies and a regional underground system. In 2017 he wrote the initial terms of reference for the Cambridge and Peterborough Independent Economic Review, funded by Cambridge Ahead and the Combined Authority and agreed at the meeting of the Combined Authority on 28 June 2017. He was Vice Chair and Commissioner for the Cambridge and Peterborough Independent Economic Review until September 2018. In 2018 he gave the Founding Director's lecture at the University of Cambridge on getting academics and policy makers to work together. In 2018 he became an advisor to the National Engineering Policy Centre at the Royal Academy of Engineering and subsequently became chair of the Policy Fellowship Working Group. He was Chair of the Digital Sector Strategy Commission for the Greater Cambridgeshire Greater Peterborough Combined Authority, which reported in March 2019. In September 2019 he became Chair of the Cambridge Autonomous Metro Technical Advisory Committee (CAM TAC), and in June 2020 co-authored a report for the CAM TAC with Professor John Miles setting out the technical and costing options for the CAM, which James Palmer, the Mayor of the Combined Authority, described as "a game changer .... an exceptional piece of work which gives a clear way forward".
In November 2019 he took over from Ian Shott as Chair of the Royal Academy of Engineering Enterprise Committee, and was succeeded by John Lazar in October 2022. During 2018 he proposed setting up a Policy Fellowship Scheme at the Royal Academy of Engineering and became Chair of the Policy Fellows Working Group in September 2019. The programme has grown rapidly to over 60 policy fellows from Whitehall and the devolved administrations. In March 2020 he was appointed Chair of the Royal Academy of Engineering COVID-19 Triage Group, which issued a report in August 2020 setting out how the RAE made a major contribution to addressing the crisis. He was Chair of the New Era for the Cambridge Economy, which reported on 31 March 2022 on the changes brought about by the Covid pandemic and the six challenges facing Cambridge and other city regions in recognising and taking advantage of those changes. Cleevely is a member of the Advisory Council for Creative Destruction Labs (Oxford), a non-profit organisation helping science and technology-based startups from across the globe. Charitable work In 2012 Cleevely joined forces with Hermann Hauser and Jonathan Milner (described as the "three musketeers of the Cambridge technology cluster") to provide funding to create a Science Centre in Cambridge. He has been Chairman and substantial donor since 2013, and the Cambridge Science Centre reported over 300,000 cumulative visitors in 2019. In 2013 he joined the board of Raspberry Pi (Trading) Ltd. and in 2014 he became Chairman (unremunerated) of the Raspberry Pi Foundation and of Raspberry Pi Trading. He stepped down as Chair of Raspberry Pi Trading in February 2019 and was succeeded by John Lazar. Education After gaining a BSc in Cybernetics and Instrument Physics with Mathematics from the University of Reading, Cleevely gained a PhD in Telecommunications and Economic Development from Cambridge University. Awards and honours David Cleevely was appointed Commander of the Order of the British Empire (CBE) in the 2013 New Year Honours for services to technology and innovation. He is a Fellow of the Royal Academy of Engineering (FREng), and has held an Industrial Fellowship at the University of Cambridge Computer Laboratory. He is also a Fellow of the Institution of Engineering and Technology (FIET), where he gave the IEE Pinkerton Lecture, "Seizing the Moment: The Far Reaching Effects of Broadband on Economy and Society", in November 2002, the 41st IEE Appleton Lecture, 'Is there a future for research in telecommunications?', in January 2006, and the 46th IET Appleton Lecture, 'What is the future for communications? What does it mean for the UK?', in January 2011. In June 2013, Cleevely became a Fellow Commoner of Queens' College, Cambridge, and in October 2015 became an Honorary Fellow of Trinity Hall, Cambridge. In November 2018 Cleevely won the Barclays "Entrepreneurs' Icon of the Year" award. In December 2022 Cleevely was awarded the honorary degree of Doctor of Education by the University of Bath "in recognition of his impact on the technological aspects of our industries, his ongoing role in the mentoring and support of the entrepreneurs and engineers who have followed him into starting their own companies, for his development of our national policy and contributions to our national life."
In July 2023 Cleevely was awarded the honorary degree of Doctor of Technology by Anglia Ruskin University for "significant contributions to the success of Cambridge, having been a co-founder and Chair of Cambridge Network, Cambridge Wireless, Cambridge Angels, and co-founder of Cambridge Ahead and Founding Director of the Centre for Science and Policy at the University of Cambridge." Publications References 1953 births Academics of the University of Cambridge Alumni of the University of Cambridge Alumni of the University of Reading Living people Commanders of the Order of the British Empire Fellows of the Royal Academy of Engineering Fellows of the Institution of Engineering and Technology British businesspeople Fellows of Queens' College, Cambridge
David Cleevely
Engineering
2,125
44,309,749
https://en.wikipedia.org/wiki/Jos%C3%A9%20Elguero%20Bertolini
José Elguero Bertolini (born 1934) is a Spanish chemist best known for his contributions to heterocyclic chemistry. He is Honorary Research Professor at the Medicinal Chemistry Institute of the Spanish National Research Council (CSIC), an institution he chaired from 1983 to 1984. Since 2015, he has been the president of the Spanish Royal Academy of Sciences. Life José Elguero was born on Christmas Day, 1934, in Madrid, Spain, where he graduated in chemistry from the Central University, now the University Complutense of Madrid (B.Sc., 1957). Despite having the possibility of continuing his studies with either Professor Francisco Fariña or Professor Jesús Morcillo in Madrid, he moved to France. After a fruitless attempt to become a perfumer, he was accepted as a PhD student by Professor Robert Jacquier at the University of Montpellier (PhD, 1961). He also received a Doctorate of Science from the University Complutense of Madrid (1977). He was appointed "Attaché de Recherche" and promoted to "Maître de Recherche" at the Centre National de la Recherche Scientifique (CNRS), first in Montpellier and later at the laboratory of Professor Jacques Metzger in Marseille, where he worked until 1979. He was a visitor at Prof. Alan R. Katritzky's laboratory in England. After more than 20 years of research in France, he returned to Spain in 1980 to hold a Research Professor position at the Spanish Council for Scientific Research (CSIC) in Madrid, where he has continued his career. He was appointed Honorary Research Professor in 2005. He has served as President of CSIC (1983–1984), President of the Social Council of the Autonomous University of Madrid (1986–1990), President of the Scientific Advisory Board of Comunidad de Madrid (1990–1995) and President of the Forum Foro Permanente Química y Sociedad (2008–2010). He is probably the most prolific Spanish scientist, with more than 1500 scientific publications. His humanist view of science and the world is also well documented in many articles and interviews. Elguero's contributions to chemistry have been numerous thanks to a multitude of interdisciplinary collaborations. For instance, in the field of heterocyclic chemistry he has studied tautomerism, hydrogen bonding and aromaticity in systems including numerous azoles and phosphaphenalenes. In physical chemistry he has investigated the spectroscopic behaviour of heterocycles and organometallic systems by NMR and the application of computational chemistry to the study of the structures and reactivity of heterocycles. He has also been involved in crystallographic studies for crystal engineering. In synthetic chemistry he has contributed to areas such as phase-transfer catalysis, photochemistry, flash pyrolysis and process optimization. Solid-state and gas-phase chemistry in relation to sonochemistry and microwave chemistry has also been of interest to him. In medicinal chemistry he has made extensive use of mathematical Quantitative Structure-Activity Relationships (QSAR) methods for the design of a variety of biologically active compounds for different therapeutic applications. Books He is the co-author of a fundamental book in heterocyclic chemistry: The Tautomerism of Heterocycles. Advances in Heterocyclic Chemistry-Supplement 1, 1976. References https://web.archive.org/web/20120621125211/http://www.iqm.csic.es/are/jeb/ http://www.are.iqm.csic.es/index.php/discursos-conferencias-entrevistas-de-jose-elguero Elguero, J.; Marzin, C.; Roberts, J.D. (1974). NMR Studies of Heterocyclic Compounds. XI.
Carbon-13 Magnetic Resonance Studies of Azoles. Tautomerism, Shift Reagents, and Solvent Effects. J. Org. Chem. 39, 357–363. (http://pubs.acs.org/doi/pdf/10.1021/jo00917a017) Claramunt, R.M.; Sanz, D.; Alarcón, S.M.; Pérez-Torralba, M.; Elguero, J; Foces-Foces, C.; Pietrzak, M.; Langer, I.; Limbach, H.-H. (2001). “6-Aminofulvene-1-aldimine: A Model Molecule for the Study of Intramolecular Hydrogen Bonds.” Angew. Chem. Int. Ed. 40, 420–423. DOI: 10.1002/1521-3773(20010119)40:2<420::AID-ANIE420>3.0.CO;2-I (http://onlinelibrary.wiley.com/doi/10.1002/1521-3773(20010119)40:2%3C420::AID-ANIE420%3E3.0.CO;2-I/pdf) Espinosa, E.; Alkorta, I; Elguero, J.; Molins, E. (2002) “From weak to strong interactions: A comparative analysis of the topological and energetic properties of the electron density distribution involving X–F•••F–Y systems.” J. Chem. Phys. 117, 5529–5543. DOI:10.1063/1.1501133 (http://scitation.aip.org/docserver/fulltext/aip/journal/jcp/117/12/1.1501133.pdf?expires=1413452724&id=id&accname=2120139&checksum=47971180E0E80013D4F3526DC9269D15) Elguero, J. (2013), “Tautomerism”, Brenner's Encyclopedia of Genetics, Second Edition, 7, 18–22. José Elguero and Claude Marzin; Alan. R. Katritzky; Paolo. Linda (1976) The Tautomerism of Heterocycles, Academic Press Inc., New York, , 655 pages. External links Members of the Theoretical Chemistry Group of Medicinal Chemistry Institute of CSIC in Madrid (http://www.are.iqm.csic.es/index.php/group-members) 1934 births Living people Spanish chemists Complutense University of Madrid alumni Computational chemists
José Elguero Bertolini
Chemistry
1,416
41,776,671
https://en.wikipedia.org/wiki/Programming%20Research%20Limited
Programming Research Limited (PRQA) was a United Kingdom-based developer of code quality management software for embedded software, which included the static program analysis tools QA·C and QA·C++, now known as Helix QAC. It created the High Integrity C++ software coding standard. In May 2018, the company was acquired by Minneapolis, Minnesota-based Perforce, and its products were renamed. Key Tools by PRQA QA·C: Static analysis for C code. QA·C++: Static analysis for C++ code. QA·Verify: A central platform for managing quality across teams and projects. QA·Framework: For enterprise-level static analysis and reporting. These tools were widely adopted in safety-critical industries like automotive, aerospace, and medical devices. References External links Programming Research Limited website Borough of Elmbridge Companies based in Surrey Privately held companies of the United Kingdom Science and technology in Surrey Software companies of the United Kingdom
Programming Research Limited
Technology
199
47,375,779
https://en.wikipedia.org/wiki/Joaquim%20Gomes%20de%20Souza
Joaquim Gomes de Souza "Souzinha" (15 February 1829, in Itapecuru Mirim – 1 June 1864, in London) was a Brazilian mathematician who worked on numerical analysis and differential equations. He was a pioneer on the study of mathematics in Brazil, and was described by José Leite Lopes as "the first great mathematician from Brazil". In 1844, Gomes de Souza enrolled at the Faculdade de Medicina do Rio de Janeiro (now a part of the Federal University of Rio de Janeiro) to study medicine. He had a deep love for the natural sciences, which led him to also be interested in mathematics, and so he started to learn mathematics as a self-taught in parallel with his studies of medicine. In 1848, he obtained his doctorate in mathematics from the Escola Real Militar, with the thesis Dissertação Sobre o Modo de Indagar novos Astros sem o Auxílio das Observações Directas (Dissertation about the means of investigating new celestial objects without the aid of direct observations). He later went to the Sorbonne, in France, where he continued his mathematical studies. He was a personal friend of Cauchy, of whose classes he attended (in one of them, Souza spotted a mathematical mistake by Cauchy, he then asked his license and corrected it on the blackboard). In 1856, he obtained a doctorate in medicine from Paris Faculty of Medicine. In the same year, he presented his mathematical works at the Académie des sciences. Souza held a paid public post in Brazil, and after much time in Europe, he was noticed he should return immediately to Brazil because he had been elected a member of the parliament. Souza had already married Rosa Edith in England and then had to return to Brazil without her. In his book Mélanges de calcul intégral (1882), Souza aimed to obtain a general method to solve PDEs, according to Manfredo do Carmo: "[in his book] He [Souza] employed methods not entirely rigorous and it is not clear exactly how much of his work would remain if submitted to a careful scrutiny; as far as I know, it was never put to such a test." He died at the age of 35, in London. The cause of death was a disease of the lung. C. S. Fernandez and C. M. Souza described his endeavorer in Europe: "He was audacious and fought with insistence for his scientific recognition in Europe. His effort was fruitless, though." Writings Resoluções das Equações Numéricas (1850, in Portuguese) Recuel de Memoires d’Analise Mathematiques (1857, in French) Anthologie universelle (1859, in French) Mélanges de calcul intégral (1882, posthumous, in French) Further reading Irine Coelho de Araujo, Joaquim Gomes de Souza (1829-1864): A construção de uma imagem de Souzinha, São Paulo, 2012 Carlos Ociran Silva Nascimento, Alguns aspectos da obra matematica de Joaquim Gomes de Souza, Campinas, 2008 References 1829 births 1864 deaths Brazilian mathematicians Brazilian politicians Mathematical analysts Partial differential equation theorists Federal University of Rio de Janeiro alumni
Joaquim Gomes de Souza
Mathematics
692
34,948,394
https://en.wikipedia.org/wiki/Beez%27s%20theorem
In mathematics, Beez's theorem, introduced by Richard Beez in 1875, implies that if n > 3 then in general an (n – 1)-dimensional hypersurface immersed in R^n cannot be deformed. References Theorems in differential geometry
Beez's theorem
Mathematics
54
19,898,251
https://en.wikipedia.org/wiki/DR%206
DR 6 is a cluster of stars in the Milky Way galaxy, composed of dust, gas, and about 10 large newborn stars, each roughly ten to twenty times the size of the Sun. It was discovered by astronomers at NASA with the Spitzer Space Telescope, viewing the nebula using infrared light. The areas of the cluster that appear green are mainly composed of gas, while the parts that seem to be red are made of dust. The DR 6 nebula is located about 3,900 light-years away in the constellation Cygnus. The center of the nebula, where the ten stars are located, is roughly 3.5 light-years long, roughly equivalent to the distance between the Sun and Alpha Centauri, the closest star to the Sun. "Galactic Ghoul" The DR 6 cluster is nicknamed the "Galactic Ghoul" because of the nebula's resemblance to a human face; astronomers have described it as "some sort of freakish space face," emphasizing the cavity-like regions that look like eyes and a mouth. These large cavities are the result of "energetic light" and strong stellar wind that come from the ten stars in the center of the nebula (the part also known as the "nose"). Because of the nebula's spooky appearance, it was featured on the NASA website as the Astronomy Picture of the Day on November 1, 2004, just after Halloween. References Open clusters Pre-stellar nebulae Star-forming regions Cygnus (constellation)
DR 6
Astronomy
308
3,002,779
https://en.wikipedia.org/wiki/Arsine%20%28data%20page%29
This page provides supplementary chemical data on arsine. Material Safety Data Sheet SIRI Soxal Structure and properties Thermodynamic properties Spectral data References Chemical data pages Chemical data pages cleanup
Arsine (data page)
Chemistry
39
19,919
https://en.wikipedia.org/wiki/Monosaccharide
Monosaccharides (from Greek monos: single, sacchar: sugar), also called simple sugars, are the simplest forms of sugar and the most basic units (monomers) from which all carbohydrates are built. Chemically, monosaccharides are polyhydroxy aldehydes with the formula H(C=O)(CHOH)nH or polyhydroxy ketones with the formula H(CHOH)n(C=O)(CHOH)mH, with three or more carbon atoms. They are usually colorless, water-soluble, and crystalline organic solids. Contrary to their name (sugars), only some monosaccharides have a sweet taste. Most monosaccharides have the formula (CH2O)x (though not all molecules with this formula are monosaccharides). Examples of monosaccharides include glucose (dextrose), fructose (levulose), and galactose. Monosaccharides are the building blocks of disaccharides (such as sucrose, lactose and maltose) and polysaccharides (such as cellulose and starch). The table sugar used in everyday vernacular is itself a disaccharide, sucrose, comprising one molecule of each of the two monosaccharides D-glucose and D-fructose. Each carbon atom that supports a hydroxyl group is chiral, except those at the end of the chain. This gives rise to a number of isomeric forms, all with the same chemical formula. For instance, galactose and glucose are both aldohexoses, but have different physical structures and chemical properties. The monosaccharide glucose plays a pivotal role in metabolism, where the chemical energy is extracted through glycolysis and the citric acid cycle to provide energy to living organisms. Maltose is the dehydration condensate of two glucose molecules. Structure and nomenclature With few exceptions (e.g., deoxyribose), monosaccharides have the chemical formula (CH2O)x, where conventionally x ≥ 3. Monosaccharides can be classified by the number x of carbon atoms they contain: triose (3), tetrose (4), pentose (5), hexose (6), heptose (7), and so on. Glucose, used as an energy source and for the synthesis of starch, glycogen and cellulose, is a hexose. Ribose and deoxyribose (in RNA and DNA, respectively) are pentose sugars. Examples of heptoses include the ketoses mannoheptulose and sedoheptulose. Monosaccharides with eight or more carbons are rarely observed as they are quite unstable. In aqueous solutions monosaccharides exist as rings if they have more than four carbons. Linear-chain monosaccharides Simple monosaccharides have a linear and unbranched carbon skeleton with one carbonyl (C=O) functional group, and one hydroxyl (OH) group on each of the remaining carbon atoms. Therefore, the molecular structure of a simple monosaccharide can be written as H(CHOH)n(C=O)(CHOH)mH, where n + 1 + m = x, so that its elemental formula is CxH2xOx. By convention, the carbon atoms are numbered from 1 to x along the backbone, starting from the end that is closest to the C=O group. Monosaccharides are the simplest units of carbohydrates and the simplest form of sugar. If the carbonyl is at position 1 (that is, n or m is zero), the molecule begins with a formyl group H(C=O)− and is technically an aldehyde. In that case, the compound is termed an aldose. Otherwise, the molecule has a ketone group, a carbonyl −(C=O)− between two carbons; then it is formally a ketone, and is termed a ketose. Ketoses of biological interest usually have the carbonyl at position 2. The various classifications above can be combined, resulting in names such as "aldohexose" and "ketotriose". A more general nomenclature for open-chain monosaccharides combines a Greek prefix to indicate the number of carbons (tri-, tetr-, pent-, hex-, etc.)
with the suffixes "-ose" for aldoses and "-ulose" for ketoses. In the latter case, if the carbonyl is not at position 2, its position is then indicated by a numeric infix. So, for example, H(C=O)(CHOH)4H is pentose, H(CHOH)(C=O)(CHOH)3H is pentulose, and H(CHOH)2(C=O)(CHOH)2H is pent-3-ulose. Open-chain stereoisomers Two monosaccharides with equivalent molecular graphs (same chain length and same carbonyl position) may still be distinct stereoisomers, whose molecules differ in spatial orientation. This happens only if the molecule contains a stereogenic center, specifically a carbon atom that is chiral (connected to four distinct molecular sub-structures). Those four bonds can have any of two configurations in space distinguished by their handedness. In a simple open-chain monosaccharide, every carbon is chiral except the first and the last atoms of the chain, and (in ketoses) the carbon with the keto group. For example, the triketose H(CHOH)(C=O)(CHOH)H (glycerone, dihydroxyacetone) has no stereogenic center, and therefore exists as a single stereoisomer. The other triose, the aldose H(C=O)(CHOH)2H (glyceraldehyde), has one chiral carbon—the central one, number 2—which is bonded to groups −H, −OH, −C(OH)H2, and −(C=O)H. Therefore, it exists as two stereoisomers whose molecules are mirror images of each other (like a left and a right glove). Monosaccharides with four or more carbons may contain multiple chiral carbons, so they typically have more than two stereoisomers. The number of distinct stereoisomers with the same diagram is bounded by 2c, where c is the total number of chiral carbons. The Fischer projection is a systematic way of drawing the skeletal formula of an acyclic monosaccharide so that the handedness of each chiral carbon is well specified. Each stereoisomer of a simple open-chain monosaccharide can be identified by the positions (right or left) in the Fischer diagram of the chiral hydroxyls (the hydroxyls attached to the chiral carbons). Most stereoisomers are themselves chiral (distinct from their mirror images). In the Fischer projection, two mirror-image isomers differ by having the positions of all chiral hydroxyls reversed right-to-left. Mirror-image isomers are chemically identical in non-chiral environments, but usually have very different biochemical properties and occurrences in nature. While most stereoisomers can be arranged in pairs of mirror-image forms, there are some non-chiral stereoisomers that are identical to their mirror images, in spite of having chiral centers. This happens whenever the molecular graph is symmetrical, as in the 3-ketopentoses H(CHOH)2(CO)(CHOH)2H, and the two halves are mirror images of each other. In that case, mirroring is equivalent to a half-turn rotation. For this reason, there are only three distinct 3-ketopentose stereoisomers, even though the molecule has two chiral carbons. Distinct stereoisomers that are not mirror-images of each other usually have different chemical properties, even in non-chiral environments. Therefore, each mirror pair and each non-chiral stereoisomer may be given a specific monosaccharide name. For example, there are 16 distinct aldohexose stereoisomers, but the name "glucose" means a specific pair of mirror-image aldohexoses. In the Fischer projection, one of the two glucose isomers has the hydroxyl at left on C3, and at right on C4 and C5; while the other isomer has the reversed pattern. 
These specific monosaccharide names have conventional three-letter abbreviations, like "Glc" for glucose and "Thr" for threose. Generally, a monosaccharide with n asymmetrical carbons has 2^n stereoisomers. The number of open chain stereoisomers for an aldose monosaccharide is twice that of a ketose monosaccharide of the same length. Every ketose will have 2^(n−3) stereoisomers where n > 2 is the number of carbons. Every aldose will have 2^(n−2) stereoisomers where n > 2 is the number of carbons. These are also referred to as epimers, which have a different arrangement of −OH and −H groups at the asymmetric or chiral carbon atoms (this does not apply to those carbons having the carbonyl functional group). Configuration of monosaccharides Like many chiral molecules, the two stereoisomers of glyceraldehyde will gradually rotate the polarization direction of linearly polarized light as it passes through them, even in solution. The two stereoisomers are identified with the prefixes D- and L-, according to the sense of rotation: D-glyceraldehyde is dextrorotatory (rotates the polarization axis clockwise), while L-glyceraldehyde is levorotatory (rotates it counterclockwise). The D- and L- prefixes are also used with other monosaccharides, to distinguish two particular stereoisomers that are mirror-images of each other. For this purpose, one considers the chiral carbon that is furthest removed from the C=O group. Its four bonds must connect to −H, −OH, −CH2(OH), and the rest of the molecule. If the molecule can be rotated in space so that the directions of those four groups match those of the analog groups in D-glyceraldehyde's C2, then the isomer receives the D- prefix. Otherwise, it receives the L- prefix. In the Fischer projection, the D- and L- prefixes specify the configuration at the carbon atom that is second from bottom: D- if the hydroxyl is on the right side, and L- if it is on the left side. Note that the D- and L- prefixes do not indicate the direction of rotation of polarized light, which is a combined effect of the arrangement at all chiral centers. However, the two enantiomers will always rotate the light in opposite directions, by the same amount. See also the D/L system. Cyclization of monosaccharides (hemiacetal formation) A monosaccharide often switches from the acyclic (open-chain) form to a cyclic form, through a nucleophilic addition reaction between the carbonyl group and one of the hydroxyl groups of the same molecule. The reaction creates a ring of carbon atoms closed by one bridging oxygen atom. The resulting molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. The reaction is easily reversed, yielding the original open-chain form. In these cyclic forms, the ring usually has five or six atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the aldehyde group on carbon 1 and the hydroxyl on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a seven-atom ring (the same ring as oxepane), rarely encountered, are called septanoses.
For many monosaccharides (including glucose), the cyclic forms predominate, in the solid state and in solutions, and therefore the same name is commonly used for the open- and closed-chain isomers. Thus, for example, the term "glucose" may signify glucofuranose, glucopyranose, the open-chain form, or a mixture of the three. Cyclization creates a new stereogenic center at the carbonyl-bearing carbon. The −OH group that replaces the carbonyl's oxygen may end up in two distinct positions relative to the ring's midplane. Thus each open-chain monosaccharide yields two cyclic isomers (anomers), denoted by the prefixes α- and β-. The molecule can change between these two forms by a process called mutarotation, which consists of a reversal of the ring-forming reaction followed by another ring formation. Haworth projection The stereochemical structure of a cyclic monosaccharide can be represented in a Haworth projection. In this diagram, the α-isomer for the pyranose form of a D-aldohexose has the −OH of the anomeric carbon below the plane of the carbon atoms, while the β-isomer has the −OH of the anomeric carbon above the plane. Pyranoses typically adopt a chair conformation, similar to that of cyclohexane. In this conformation, the α-isomer has the −OH of the anomeric carbon in an axial position, whereas the β-isomer has the −OH of the anomeric carbon in an equatorial position (considering D-aldohexose sugars). Derivatives A large number of biologically important modified monosaccharides exist: Amino sugars such as: galactosamine glucosamine sialic acid N-acetylglucosamine Sulfosugars such as: sulfoquinovose Others such as: ascorbic acid mannitol glucuronic acid See also Monosaccharide nomenclature Reducing sugar Sugar acid Sugar alcohol References Literature McMurry, John. Organic Chemistry. 7th ed. Belmont, CA: Thomson Brooks/Cole, 2008. Print. External links Nomenclature of Carbohydrates Carbohydrate chemistry
Monosaccharide
Chemistry
3,148
71,455,095
https://en.wikipedia.org/wiki/Andrew%20Clennel%20Palmer
Andrew Clennel Palmer (26 May 1938 – 21 December 2019) was a British engineer who worked on offshore geotechnical problems of submarine pipeline design and the study of the properties of ice. He spent much of his career as a teacher and academic researcher, at the University of Liverpool, Cambridge University, the University of Manchester Institute of Science and Technology, and the National University of Singapore, punctuated by work in industry, while also serving as an expert witness and as a member of various industrial and academic committees. Early life and education Born in Colchester, Palmer was the son of Gerald Basil Coote Palmer, headmaster of Mark Hall Comprehensive School in Harlow, and Muriel née Howes. After attending the Royal Liberty School in Gidea Park, he became the first student from his school to go on to study at Cambridge University, reading Mechanical Sciences at Pembroke College and completing his undergraduate degree in 1961. He achieved first-class honours in his first two years (the third being unclassed). Academia Daniel C. Drucker was visiting Cambridge while Palmer was a student and was sufficiently impressed to extend an invitation to return to Brown University and perform research there, which Palmer accepted after graduation, receiving a doctorate in 1965. Drucker said that his work could have been worth three doctorates. His work at Brown included plasticity, glacial creep and ice lensing. After his doctorate, Palmer spent two years as a lecturer at the University of Liverpool, but was dissatisfied with the university's engineering curriculum and returned to Cambridge in 1967, where he became a fellow of Churchill College. His initial research there was on the physical properties of soil and how temperature affected soil plasticity; he was able to analogise from the stress–strain relationships in metals, which were more understood. These topics led to his involvement with BP's trans-Alaska pipeline project, beginning in 1970 with the company seeking an expert in permafrost; Palmer had no specific understanding of oil pipelines, but the company was seeking a new perspective on its engineering problems, and he would follow up his initial work by contributing to the Forties and Ninian pipelines. He solved the problem of predicting the shape of the curve of a pipeline, and thus the mechanical stress it suffers, as it is being laid by an S-lay barge by building a physical model and dimensional analysis, avoiding the need for numerically-laborious calculation of finite element analysis. In industry and at UMIST In 1975, after his engagement with BP, Palmer left Cambridge and joined R. J. Brown & Associates as an industrial engineer. He worked on the first under-ice pipelines in the arctic in Northern Canada, the Polar Gas pipeline and the Panarctic Drake F-76 flowline, serving as the project manager of the latter. Having physically modelled laying the pipeline, the physical model was used to optimise the process and the actual installation went very smoothly. After this success, Palmer stayed with the company and worked in London, Houston and Singapore, rising to the role of head of the London office, as well as travelling around Europe on business. Palmer left R. J. Brown & Associates following other departures and conflict at the company. 
After a period of unemployment—the petroleum industry being in a slump—Palmer joined the University of Manchester Institute of Science and Technology and began a course to train practitioners in submarine pipeline design; the course would be repeated many times over the next forty years. Although he enjoyed his tenure at the university, it was not long: he spent only three years there before leaving during turmoil around budgets, job losses, and a merger with Victoria University. Andrew Palmer & Associates Returning to industry, Palmer established a company, Andrew Palmer & Associates Limited (APAL). As well as consulting on various projects, APAL developed a modular software suite for oil engineers, PLUSONE. The company was successful and earned a reputation for high-quality engineering work, expanding from its original office in London to sites in Aberdeen, Glasgow and Newcastle and also becoming involved in project management, eventually employing over 55 people. The company had a high proportion of young and female employees, and practiced employee stock ownership and equally shared profits between employees—though the latter was not entirely a success. Palmer did not enjoy the role of being a manager, preferring to be involved in the engineering process, and the company was sold in 1993, with Palmer staying on until 1996 as part of the sale agreement. Return to academia Palmer returned to Cambridge in 1996 as a professor of petroleum engineering, with a remit for cross-disciplinary collaboration. He flourished in this role, introducing students to a variety of the problems faced by practitioners, as well as in university administration and benefactor relations, soliciting donations from industry. During a sabbatical, he spent a year as a visiting professor at Harvard University. He retired from Cambridge in 2005 and, in 2006, moved to the National University of Singapore to a chair sponsored by Keppel Corporation, where he continued to teach and supervise graduate students. Research Palmer's initial topic of study was soil mechanics, particularly at low temperature; he would later investigate ice flow and the mechanical properties of ice, which would remain a recurrent, long-term interest of his. He would deploy dimensional analysis, which he described as 'a magical way of finding useful results with almost no effort,' as well as physical models of systems that, while simple, nevertheless captured a relevant aspect of the problem and allowed for experimentation and optimisation cheaply, which was especially important before digital computers were powerful enough to simulate complex systems. Sometimes, the models were not so small: Palmer realised that the 1:20 scale modelling of storm hazards near Western Australia was insufficient, and instead was involved in building a large 1:6 flow cell. His contributions to understanding how pipelines buckle form the basis of how modern pipelines are designed to avoid this hazard, and he introduced a new way of laying pipelines in deep water in which the pipeline is partly filled with seawater (previously, pipelines had been laid empty), so that the walls did not need to be as thick (significantly reducing costs) to stop the pipelines buckling under the pressure. Other work Palmer made himself available as an expert witness, and enjoyed working with lawyers, whom he found quick-witted (though forgetful once a case was over). He testified for the Crown at the Piper Alpha disaster inquest and at various other investigations.
He served on several committees and editorial boards, including as president of the Pipeline Industries Guild from 1998 to 2000. Personal life Palmer met Jane Evans, an artist, on an American holiday while they were both volunteering to construct schools; they married in 1963, and had a daughter, Emily. The two shared many interests and hobbies, including art and travel. As an undergraduate, Palmer was keenly left wing and debated at the Cambridge Union. He was elected president of his college's Junior Combination Room. He spoke many languages: as well as his native English, he learnt Chinese, Dutch, French, German, Italian, Russian and Spanish. Colleagues found him kind, if quirky, and he was well-liked. Awards Fellow of the Royal Society, 1994 Fellow of the Royal Academy of Engineering Fellow of the Institution of Civil Engineers Clarkson University, honorary doctorate, 2007 References Bibliography 1938 births 2019 deaths People from Colchester 20th-century British engineers 21st-century British engineers British engineers Petroleum engineers Geotechnical engineers People educated at the Royal Liberty Grammar School Alumni of Pembroke College, Cambridge Brown University School of Engineering alumni Academics of the University of Liverpool Engineering professors at the University of Cambridge Fellows of Churchill College, Cambridge Academics of the University of Manchester Institute of Science and Technology Academic staff of the National University of Singapore Fellows of the Royal Society Fellows of the Royal Academy of Engineering
Andrew Clennel Palmer
Engineering
1,570
10,299,080
https://en.wikipedia.org/wiki/Reduced%20residue%20system
In mathematics, a subset R of the integers is called a reduced residue system modulo n if: gcd(r, n) = 1 for each r in R, R contains φ(n) elements, no two elements of R are congruent modulo n. Here φ denotes Euler's totient function. A reduced residue system modulo n can be formed from a complete residue system modulo n by removing all integers not relatively prime to n. For example, a complete residue system modulo 12 is {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}. The so-called totatives 1, 5, 7 and 11 are the only integers in this set which are relatively prime to 12, and so the corresponding reduced residue system modulo 12 is {1, 5, 7, 11}. The cardinality of this set can be calculated with the totient function: φ(12) = 4. Some other reduced residue systems modulo 12 are: {13,17,19,23} {−11,−7,−5,−1} {−7,−13,13,31} {35,43,53,61} Facts Every number in a reduced residue system modulo n is a generator for the additive group of integers modulo n. A reduced residue system modulo n is a group under multiplication modulo n. If {r1, r2, ... , rφ(n)} is a reduced residue system modulo n with n > 2, then r1 + r2 + ... + rφ(n) ≡ 0 (mod n). If {r1, r2, ... , rφ(n)} is a reduced residue system modulo n, and a is an integer such that gcd(a, n) = 1, then {ar1, ar2, ... , arφ(n)} is also a reduced residue system modulo n. See also Complete residue system modulo m Multiplicative group of integers modulo n Congruence relation Euler's totient function Greatest common divisor Least residue system modulo m Modular arithmetic Number theory Residue number system Notes References External links Residue systems at PlanetMath Reduced residue system at MathWorld Modular arithmetic Elementary number theory
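The definitions and facts above are easy to check computationally. The following short Python sketch (an illustrative addition, not part of the original article) builds a reduced residue system from a complete residue system and verifies the sum and multiplication properties for n = 12.

```python
from math import gcd

def reduced_residue_system(n):
    """Totatives of n: the reduced residue system drawn from {0, ..., n-1}."""
    return [r for r in range(n) if gcd(r, n) == 1]

n = 12
rrs = reduced_residue_system(n)
print(rrs)                          # [1, 5, 7, 11]; len(rrs) equals phi(12) = 4

# For n > 2, the sum of a reduced residue system is divisible by n.
assert sum(rrs) % n == 0

# Multiplying every element by a with gcd(a, n) = 1 yields another
# reduced residue system: reduced modulo n, it is the same set.
a = 7
assert sorted(a * r % n for r in rrs) == rrs
```

The pairing of each totative r with n − r explains why the sum is divisible by n whenever n > 2.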
Reduced residue system
Mathematics
472
43,864,268
https://en.wikipedia.org/wiki/SPEDAS
SPEDAS (Space Physics Environment Data Analysis System) is an open-source data analysis tool intended for space physics users. It was developed using Interactive Data Language (IDL). Since its creation, the tool has also been ported to Python in the form of a program referred to as pySPEDAS. Overview SPEDAS is free software that can download and manipulate data from scientific space missions. It contains both a GUI (graphical user interface) and a command line mode for advanced users. It offers various tools for performing calculations and transformations of the data and for visualizing the results. Software modules can be developed for SPEDAS, extending its capabilities. It also includes a tool for downloading data from NASA servers using CDAWeb. SPEDAS evolved from software developed for the THEMIS mission, which was called TDAS (THEMIS Data Analysis Software). In turn, TDAS used IDL code developed previously for earlier missions going back to the 1990s. SPEDAS was developed by scientists and programmers of the University of California, Berkeley's Space Sciences Laboratory, University of California, Los Angeles's IGPP and other contributors. Deployment Three different types of SPEDAS deployment are available: Source code. The full IDL code for SPEDAS is available as a zip file download. To use this, users must install and license IDL from Exelis. Save file. IDL save files can run in a free but restricted version of IDL, called IDL Virtual Machine (VM). Users have to download IDL VM from Exelis, install it and register with Exelis before they can use the SPEDAS save file. Executable file. This distribution contains executable files for Windows, Linux and Mac OS. In this case, users do not have to separately install and download anything else. Plugins One of the main goals of SPEDAS is to accommodate the needs of different NASA missions. Towards this goal, its architecture is modular. Users can develop plugins for loading data, for configuration and for specialized calculations or operations on the data. As of version 3.1, SPEDAS includes plugins for loading data from the following missions or data sets: THEMIS MMS GOES WIND ACE IUGONET ERG OMNI Geomagnetic/Solar indices Plugins for specialized calculations are: Generation of GOES overview plots Generation of THEMIS overview plots THEMIS particle distribution slices References External links SPEDAS wiki page SPEDAS blog page NASA software UC Berkeley, Space Sciences Lab Cross-platform free software Space science Space physics NASA
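As a rough illustration of how the Python port mentioned above is typically used, the sketch below loads one day of THEMIS fluxgate magnetometer data and plots it with the companion PyTplot package. The routine and variable names follow pySPEDAS documentation examples but may differ between releases, so treat this as an indicative example rather than a definitive recipe.

```python
# Indicative pySPEDAS example; exact module paths, keyword arguments and
# tplot variable names may vary between pySPEDAS versions.
import pyspedas
from pytplot import tplot

# Download (via the mission's data servers) and load THEMIS probe C
# fluxgate magnetometer (FGM) data for one day as tplot variables.
pyspedas.themis.fgm(probe='c', trange=['2007-03-23', '2007-03-24'])

# Plot the loaded magnetic field data in GSE coordinates.
tplot('thc_fgs_gse')
```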
SPEDAS
Astronomy
517
1,780,465
https://en.wikipedia.org/wiki/Daniel%20Adamson
Daniel Adamson (30 April 1820 – 13 January 1890) was an English engineer who became a successful manufacturer of boilers and was the driving force behind the inception of the Manchester Ship Canal project during the 1880s. Early life Adamson was born in Shildon, County Durham, on 30 April 1820. He was the 13th of 15 children – seven boys and eight girls – born to Daniel Adamson, landlord of the Grey Horse public house in Shildon, and his wife, Ann. Adamson was educated at Edward Walton Quaker School, Old Shildon, until the age of thirteen, when he left to become an apprentice to Timothy Hackworth, engineer to the Stockton and Darlington Railway, with whom he went on to serve as a draughtsman and engineer. By 1850, he had risen to become general manager of the Stockton and Darlington engine works (Soho Works, Shildon), and moved to become manager of Heaton Foundry in Stockport. Business In 1851 he established a small iron works in Newton, Cheshire, expanding it a year later by building a new foundry called the Newton Moor Iron Works on Muslin Street (now Talbot Road), between Hyde and Dukinfield. He specialised in engine and boiler making, initially following designs created by Hackworth, making and exporting the renowned "Manchester Boilers". The newfound wealth from the worldwide export of these boilers, which incorporated ring joints in the form of his patented Anti-Collapsive Flange Seam, allowed Adamson to experiment. He was also one of the pioneers of explosive forming used in the foundry process. In 1872 he designed and built the Daniel Adamson and Co factory, a new premises in Dukinfield next to Dewsnap Farm (off Dewsnap Lane), with its entrance on Johnsonbrook Road. This new works was approximately from the old foundry but the site was large and had enough spare land around it for any planned expansion. Between 1885 and 1888, Daniel Adamson and Co. supplied four narrow gauge steam locomotives to the Oakeley Quarry in Blaenau Ffestiniog, North Wales. He improved the design and manufacturing process (pioneering the use of steel and taking out 19 patents in the process) over the 36 years he was involved with boiler and other foundry manufacturing. When he died in 1890 the business employed some 600 people. Adamson's other business interests included a mill building company in Hyde ('The Newton Moor Spinning Company'), the Yorkshire Steel and Iron Works at Penistone, the Northern Lincolnshire Iron Company at Frodingham near Scunthorpe, and large share-holdings in iron works in Cumberland and South Wales. The ship canal project Adamson was a champion of the Manchester Ship Canal project. He arranged a meeting in Didsbury at his home, The Towers, on 27 June 1882, attended by 68 people including the mayors of Manchester and surrounding towns, leaders of commerce and industry, bankers and financiers. Also present at the meeting was the canal's eventual designer Edward Leader Williams. Adamson was elected chairman of the provisional committee promoting the ship canal, and was at the forefront in pushing the scheme through Parliament in the face of intense opposition from railway companies and port interests in Liverpool. The requisite Act of Parliament enabling the canal was finally passed on 6 August 1885, after which Adamson became the first chairman of the board of directors of the Manchester Ship Canal Company – a post he held until February 1887. As a result of his resignation, the first sod was cut by his successor, Lord Egerton of Tatton, the following November.
Adamson remained a strong supporter of the project but did not live to see its completion in 1894. He died at home in Didsbury on 13 January 1890. Daniel Adamson and Co remained a family business until 1964, when it was sold to Acrow Engineers Ltd. Memorials There are blue plaques at The Towers (today the Shirley Institute), Wilmslow Road in Didsbury, and in Adamson Street, Dukinfield. Also in Dukinfield, St Luke's Church has a stained glass window in his memory. The Adamson Military Band was also named after him. The Daniel Adamson Coach House has been preserved in Shildon. The former Manchester Ship Canal Company steam-powered tug-tender Daniel Adamson (built in 1903 as Ralph Brocklebank but renamed in 1936) has been restored by the Daniel Adamson Preservation Society and entered passenger-carrying service under steam on 22 April 2017. Adamson is buried at Southern Cemetery, Manchester, in grave space "A-Church of England-40". He was buried on 16 January 1890, three days after his death at his home in Didsbury. He was a founder member of the Iron and Steel Institute and served as its president in 1887. He was awarded the institute's Bessemer Gold Medal in 1888 for his work on the properties of iron and steel and the use of steel for steam boilers and other purposes. He was also a Member of the Institution of Civil Engineers, the Cleveland Institution of Engineers, the British Iron Trades Association, the Railway and Canal Traders' Association, the Geological Society of London, the Society of Arts, the Manchester Geographical Society, the Manchester Literary and Philosophical Society, the Manchester Geological Society and others. References Notes Bibliography External links The Daniel Adamson Preservation Society Tameside Blue Plaque information Image of Daniel Adamson at The Transport Archive Daniel Adamson 1820–1890, a unique illustrated time-line 1820 births 1890 deaths English engineers People from Shildon People from Didsbury Bessemer Gold Medal Burials at Southern Cemetery, Manchester
Daniel Adamson
Chemistry
1,129
26,152,150
https://en.wikipedia.org/wiki/Immediacy%20%28philosophy%29
Immediacy is a philosophical concept related to time and temporal perspectives, both visual and cognitive. Considerations of immediacy reflect on how we experience the world and what reality is. It implies a direct experience of an event or object bereft of any intervening medium. An example would be looking at a painting, losing awareness of the medium, and seeing the depiction as real. The medium is an important concept, and somewhat paradoxical, as it is both necessary and yet forgotten. Plato deals with a similar concept in the purity of experience. He tells us that speech is more immediate than writing, because the words emerge more directly from the speaker's mind. Immediacy also possesses characteristics of both of the homophonic heterographs 'immanent' and 'imminent', and of what each entails within ontology. Immediacy also relates to phenomenology, as both concern subjective perceptions of objects and time. Philosophy of time Metaphysical properties Ontology Dialectic
Immediacy (philosophy)
Physics
210
12,196,510
https://en.wikipedia.org/wiki/C3H8S2
{{DISPLAYTITLE:C3H8S2}} The molecular formula C3H8S2 (molar mass: 108.23 g/mol) may refer to: 2,4-Dithiapentane Propanedithiols 1,2-Propanedithiol 1,3-Propanedithiol Molecular formulas
C3H8S2
Physics,Chemistry
76
71,509,607
https://en.wikipedia.org/wiki/Pterocladiophilaceae
The Pterocladiophilaceae is a small family of red algae containing 2 (or 3) genera of thallus parasitic algae. Taxonomy The family Pterocladiophilaceae was originally established by Fan & Papenfuss (1959: 38) to accommodate their newly named genus and species, Pterocladiophila hemisphaerica, a parasite on Pterocladia lucida in New Zealand. This parasitic alga, which differs from all other parasitic red algae in having zonately divided tetrasporangia, was tentatively assigned to the Cryptonemiales. Fredericq & Hommersand (1990a), as an outgrowth of their detailed morphological study of Holmsella pachyderma, a parasitic alga feeding on Gracilaria and Gracilariopsis in Great Britain and Ireland that had been assigned to the Choreocolacaceae family, noted numerous similarities among Holmsella, Gelidiocolax, and Pterocladiophila. These similarities include the pattern of vegetative cell division, catenate (chain-like) spermatangia, a two-celled carpogonial branch, and the apparent absence of an auxiliary cell. Fredericq & Hommersand grouped these three genera in the Pterocladiophilaceae family, arguing that the cruciate division of tetrasporangia in Gelidiocolax and Holmsella was not a serious objection to uniting these genera with Pterocladiophila. Moreover, on the basis of certain features of spermatangial initiation, gonimoblast development, and the pattern of concavo-convex divisions of apical and cortical cells, they assigned the family to the Gracilariales. DNA analysis has shown Pterocladiophila hemisphaerica to be within the order Ceramiales, but the parasite was grouped with support as sister to the order Gracilariales. Genera As accepted by GBIF; Gelidiocolax N.L.Gardner, 1927 (2) Holmsella Sturch, 1926 (2) Figures in brackets are the approximate number of species per genus. Note that the genus Pterocladiophila is not accepted by GBIF. It has a sole species, Pterocladiophila hemisphaerica, which was originally described from New Zealand, and has also been recorded from the Caribbean (Stegenga and Vroman 1986). The species is accepted by AlgaeBase and WoRMS. Description and ecology They feed on members of the Gelidiaceae family or Gracilaria species of algae, forming small, white to pigmented, hemispherical to verrucose pustules (raised structure containing necrotic inflammatory cells). They basally penetrate the tissue of the host and protrude above the host's surface. They are minute and generally more or less spherical in shape, with an internal structure of branched, endophytic, rhizoidal filaments pit-connected to host cells, and the pustules of outer, anticlinal, cortical chains and inner larger-celled, multinucleate, medullary cells with numerous secondary pit-connections. Reproduction is carried out by gametangial thalli, which are dioecious. The carpogonial branches (female parts) are 2-celled, borne on a supporting cell in a cortical filament, with a straight trichogyne (slender, hair-like cell which receives the fertilizing particles). Auxiliary cells are absent; the fertilized carpogonium possibly fuses with adjacent vegetative cells and develops the gonimoblast directly, which consists of horizontal filaments fusing with vegetative cells and clusters of erect filaments bearing chains of carposporangia, interspersed among cortical filaments; the pericarp is absent. Spermatangia are formed in chains from surface cortical cells, cut off by intercalary divisions from initial cells. Tetrasporangia are scattered in the cortex or in pits, and are cruciately or zonately divided.
The spermatangia are produced in chains, cut off transversely at the upper end of the spermatangial parent cell. References Other sources Fredericq, S. & Hommersand, M.H. (1990a). Morphology and systematics of Holmsella pachyderma (Pterocladiophilaceae, Gracilariales). Br. phycol. J. 25, 39–51. Red algae families Edible algae Gracilariales
Pterocladiophilaceae
Biology
980
7,793,293
https://en.wikipedia.org/wiki/Andean%20wolf
The Andean wolf (previously described as Dasycyon hagenbecki, though this is now not an accepted taxon) is a purported South American canine that is falsely labelled a wolf. Various tests on the singular pelt have failed to provide a conclusive identity. History In 1927, Lorenz Hagenbeck bought one of three pelts from a dealer in Buenos Aires who claimed that they had come from a wild dog of the Andes. The pelt ended up in Munich where the German mammalogist Ingo Krumbiegel examined it in 1940. Krumbiegel published two papers describing the animal and giving it the scientific name of Dasycyon hagenbecki. The American zoologist Howard J. Stains supported Krumbiegel's new genus Dasycyon. Other mammalogists believed that the skin was that of a domestic dog. In 1954 Fritz Dieterlen published results comparing samples of hair taken from the Munich pelt with hair from various canids. He found that there were significant similarities between the Munich pelt hair and German Shepherd hair. Skull In 1935 Krumbiegel is said to have studied a skull supposedly similar to that of a maned wolf (Chrysocyon brachyurus) but larger, and reportedly obtained from outside of the range of the maned wolf. This gave him confidence in his description of the Munich pelt as a new genus. The whereabouts of the skull are unknown. Current status In 2000 DNA analysis of the pelt was attempted but the samples were found to be contaminated with human, dog, wolf and pig DNA. See also Culpeo – Lycalopex culpaeus, a species of canid also known as the Andean wolf References Wolves Extinct canines Controversial mammal taxa Purported mammals
Andean wolf
Biology
356
66,261,739
https://en.wikipedia.org/wiki/List%20of%20plant%20genera%20named%20for%20people%20%28Q%E2%80%93Z%29
Since the first printing of Carl Linnaeus's Species Plantarum in 1753, plants have been assigned one epithet or name for their species and one name for their genus, a grouping of related species. Thousands of plants have been named for people, including botanists and their colleagues, plant collectors, horticulturists, explorers, rulers, politicians, clerics, doctors, philosophers and scientists. Even before Linnaeus, botanists such as Joseph Pitton de Tournefort, Charles Plumier and Pier Antonio Micheli were naming plants for people, sometimes in gratitude for the financial support of their patrons. Early works researching the naming of plant genera include an 1810 glossary by and an etymological dictionary in two editions (1853 and 1856) by Georg Christian Wittstein. Modern works include The Gardener's Botanical by Ross Bayton, Index of Eponymic Plant Names and Encyclopedia of Eponymic Plant Names by Lotte Burkhardt, Plants of the World by Maarten J. M. Christenhusz (lead author), Michael F. Fay and Mark W. Chase, The A to Z of Plant Names by Allan J. Coombes, the four-volume CRC World Dictionary of Plant Names by Umberto Quattrocchi, and Stearn's Dictionary of Plant Names for Gardeners by William T. Stearn; these supply the seed-bearing genera listed in the first column below. Excluded from this list are genus names not accepted (as of January 2021) at Plants of the World Online, which includes updates to Plants of the World (2017). Key Ba = listed in Bayton's The Gardener's Botanical Bt = listed in Burkhardt's Encyclopedia of Eponymic Plant Names Bu = listed in Burkhardt's Index of Eponymic Plant Names Ch = listed in Christenhusz's Plants of the World Co = listed in Coombes's The A to Z of Plant Names Qu = listed in Quattrocchi's CRC World Dictionary of Plant Names St = listed in Stearn's Dictionary of Plant Names for Gardeners In addition, Burkhardt's Index is used as a reference for every row in the table. Genera See also List of plant genus names with etymologies: A–C, D–K, L–P, Q–Z List of plant family names with etymologies Notes Citations References See http://creativecommons.org/licenses/by/4.0/ for license. See http://creativecommons.org/licenses/by/4.0/ for license. See http://www.plantsoftheworldonline.org/terms-and-conditions for license. Further reading Systematic Systematic Taxonomy (biology) Glossaries of biology Gardening lists Genera named for people (Q-Z) Named for people (Q-Z) Wikipedia glossaries using tables Lists of eponyms
List of plant genera named for people (Q–Z)
Biology
607
71,364,629
https://en.wikipedia.org/wiki/Leucocoprinus%20medioflavus
Leucocoprinus medioflavus is a species of mushroom producing fungus in the family Agaricaceae. Taxonomy It was first described in 1894 by the French mycologist Jean Louis Émile Boudier who classified it as Lepiota medioflava. Boudier also provided various illustrations of the mushroom in different stages of growth. In 1976 it was classified as Leucocoprinus medioflavus and as the synonym Leucoagaricus medioflavus by the French mycologist Marcel Bon. In 1999 the variant Leucocoprinus medioflavus var. niveus was described by the mycologists Vincenzo Migliozzi & Marcello Rava. This is now considered a synonym. Description Leucocoprinus medioflavus is a small dapperling mushroom with thin white flesh and a pronounced yellow umbo. Boudier described this mushroom in 1894 as follows: Cap: 2–3 cm wide. White, striated and with a powdery white coating or finely woolly (tomentose) to silky texture. Bulbous or cylindrical when immature expanding to flat with a depressed centre and a prominent yellow umbo. Cap edges lift upwards when mature. Gills: White, free, crowded. Stem: 4–7 cm tall (including the cap thickness). White but tapers up from the thicker base which is often yellow. The stem ring is in the middle of the stem (median) and curls upwards. The stem is slightly scaly (furfuraceous) above the ring and woolly (tomentose) below. Spore print: White Spores: Equilateral, ovate, obtuse, often filled with a small droplet. 5-6 x 3 μm. Habitat and distribution L. medioflavus is scarcely recorded and little known. Boudier's 1894 description says the specimens studied were found in France on moist earth in the heat of June, growing in large numbers inside a nursery greenhouse. Etymology The specific epithet medioflavus (originally medioflava) derives from the Latin medio meaning 'in the middle' and flavus meaning yellow, flaxen or blonde. This is a reference to the distinct yellow umbo in the centre of the mushroom. References Leucocoprinus Fungi described in 1894 Fungus species
Leucocoprinus medioflavus
Biology
488
9,691,039
https://en.wikipedia.org/wiki/N-Acetylglucosamine%20receptor
The N-Acetylglucosamine receptor is a receptor which binds N-Acetylglucosamine. Studies The N-Acetylglucosamine (GlcNAc) receptor has recently been found to interact with and bind to vimentins at the cell surface. Research indicates that the GlcNAc receptor can therefore be used to target vimentin-expressing cells for gene delivery via receptor-mediated endocytosis. References External links Lectins
N-Acetylglucosamine receptor
Chemistry,Biology
100
13,664,959
https://en.wikipedia.org/wiki/Firefly%20%28computer%20program%29
Firefly, formerly named PC GAMESS, is an ab initio computational chemistry program for Intel-compatible x86 and x86-64 processors based on GAMESS (US) sources. However, it has been mostly rewritten (60–70% of the code), especially in platform-specific parts (memory allocation, disk input/output, network), mathematical functions (e.g., matrix operations), and quantum chemistry methods (such as the Hartree–Fock method, Møller–Plesset perturbation theory, and density functional theory). Thus, it is significantly faster than the original GAMESS. The main maintainer of the program was Alex Granovsky. Since October 2008, the project has no longer been associated with GAMESS (US), and the rename to Firefly followed. Until October 17, 2009, both names could be used, but thereafter, the package should be referred to as Firefly exclusively. History On December 4, 2009, support for any PC GAMESS versions earlier than the first PC GAMESS Firefly version 7.1.C was abandoned, and any and all licenses to use the code were revoked. Thus, users of the outdated PC GAMESS binaries (version 7.1.B and all earlier releases) were required to discontinue using PC GAMESS and upgrade to Firefly. On July 25, 2012, a state-of-the-art edition of Firefly, version 8.0.0 RC, was launched for public beta testing. A relative comparison has shown that it is far faster and more reliable than the prior edition, Firefly 7.1.G. Many changes were made to enhance its abilities. In the Quantum Chemistry Speed Test, Firefly's DFT code came second (losing only to commercial QChem), beating other free DFT codes by a large margin. Firefly's unique capabilities include XMCQDPT2, a reformulation of Nakano's multi-state multi-configuration quasi-degenerate perturbation theory (MCQDPT) correcting for some of its deficiencies. At the end of 2019, Firefly's main developer A. A. Granovsky unexpectedly died, but the project continues. See also GAMESS (US) GAMESS (UK) Quantum chemistry computer programs References External links PC GAMESS SCF Benchmark Computational chemistry software
Firefly (computer program)
Chemistry
491
33,779
https://en.wikipedia.org/wiki/Worcestershire%20sauce
Worcestershire sauce or Worcester sauce (UK: ) is a fermented liquid condiment invented by pharmacists John Wheeley Lea and William Henry Perrins in the city of Worcester in Worcestershire, England, during the first half of the 19th century. The inventors went on to form the company Lea & Perrins. Worcestershire sauce has been a generic term since 1876, when the English High Court of Justice ruled that Lea & Perrins did not own a trademark for the name "Worcestershire". Worcestershire sauce is used directly as a condiment on steaks, hamburgers, and other finished dishes, and to flavour cocktails such as the Bloody Mary and Caesar. It is also frequently used to augment recipes such as Welsh rarebit, Caesar salad, Oysters Kirkpatrick, and devilled eggs. As both a background flavour and a source of umami (savoury), it is now also added to dishes that historically did not contain it, such as chili con carne, beef stew and baked beans. History Fish-based fermented sauces, such as garum, go back to antiquity. However, no direct link of Worcestershire sauce with such earlier sauces has been demonstrated and they were made very differently. In the seventeenth century, English recipes for sauces (typically to put on fish) already combined anchovies with other ingredients. The Lea & Perrins brand was commercialised in 1837 and was the first type of sauce to bear the Worcestershire name. The origin of the Lea & Perrins recipe is unclear. The packaging originally stated that the sauce came "from the recipe of a nobleman in the county". The company has also claimed that "Lord Sandys, ex-Governor of Bengal" encountered it while in India with the East India Company in the 1830s, and commissioned the local pharmacists (the partnership of John Wheeley Lea and William Perrins of 63 Broad Street, Worcester) to recreate it. However, neither Marcus Lord Sandys nor any Baron Sandys was ever a Governor of Bengal, nor as far as available records indicate had they ever visited India. According to company lore, when the recipe was first mixed, the resulting product was so strong that it was considered inedible and the barrel was abandoned in the basement. Looking to make space in the storage area some 18 months later, the chemists decided to try it and discovered that the long-fermented sauce had mellowed and become palatable. In 1838, the first bottles of Lea & Perrins Worcestershire sauce were released to the general public. Ingredients The ingredients in a bottle of Worcestershire sauce include: Barley malt vinegar Spirit vinegar Molasses Sugar Salt Anchovies Tamarind extract Shallots (later replaced by onions) Garlic Spices Flavourings Several anchovy-free vegetarian and vegan varieties are available for those who avoid or are allergic to fish. The Codex Alimentarius recommends that prepared food containing Worcestershire sauce with anchovies include a label warning of fish content, although this is not required in most jurisdictions. The US Department of Agriculture has required the recall of some products with undeclared Worcestershire sauce. Generally, Orthodox Jews refrain from eating fish and meat in the same dish, so they do not use traditional Worcestershire sauce to season meat. However, certain brands are certified to contain less than 1/60 of the fish product and can be used with meat. Although soy sauce is used in many variations of Worcestershire sauce since the 1880s, it is debated whether Lea & Perrins has ever used any in their preparation. 
According to William Shurtleff's SoyInfo Center, a 1991 letter from factory general manager J. W. Garnett describes the brand switching to hydrolyzed vegetable protein during World War II due to shortages. As of 2021, soy is not declared as an ingredient in the Lea & Perrins sauce. Varieties Lea & Perrins The Lea & Perrins brand was commercialised in 1837 and continues to be the leading global brand of Worcestershire sauce. On 16 October 1897, Lea & Perrins relocated manufacturing of the sauce from their pharmacy in Broad Street to a factory in the city of Worcester on Midland Road, where it is still made. The factory produces ready-mixed bottles for domestic distribution and a concentrate for bottling abroad. In 1930, the Lea & Perrins operation was purchased by HP Foods, which was in turn acquired by the Imperial Tobacco Company in 1967. HP was sold to Danone in 1988 and then to Heinz in 2005. Some sizes of bottles sold by Lea & Perrins in the United States come packaged in dark glass with a beige label and wrapped in paper. Lea & Perrins USA explains this practice as a vestige of shipping practices from the 19th century, when the product was imported from England, as a measure of protection for the bottles. The producer also claims that its Worcestershire sauce is the oldest commercially bottled condiment in the U.S. The ingredients in the US version of Lea & Perrins also differ somewhat, in that the US version (which include distilled white vinegar, molasses, sugar, water, salt, onions, anchovies, garlic, cloves, tamarind extract, natural flavorings, and chili pepper extract) replaces the malt vinegar used by the UK and Canadian versions with spirit vinegar. Brazil and Portugal In Brazil and Portugal, it is known as ('English sauce'). Costa Rica In Costa Rica, a local variation of the sauce is , created in 1920 and a staple condiment at homes and restaurants. El Salvador Worcestershire sauce, known as ('English sauce') or ('Perrins sauce'), is very popular in El Salvador. Many restaurants provide a bottle on each table, and the per capita annual consumption is , the highest in the world as of 1996. Germany A sweeter, less salty version of the sauce called was developed in the beginning of the 20th century in Dresden, Germany, where it is still being produced. It contains smaller amounts of anchovies. It is mostly consumed in the eastern part of the country. Mexico In Mexico, it is known as (English sauce). United Kingdom, Australia Holbrook's Worcestershire was produced in Birmingham, England, from 1875 but only the Australian subsidiary survives. United States Lea & Perrins Worcestershire Sauce is sold in the United States by Kraft Heinz following the Kraft & Heinz merger in 2015. Other Worcestershire sauce brands in the United States include French's, which was introduced in 1941. Venezuela It is commonly named ('English sauce') and is part of many traditional dishes such as (a traditional Christmas dish) and some versions of . Non-fish variations Some "Worcestershire sauces" are inspired by the original sauce but have deviated significantly from the original taste profile, most notably by the exclusion of fish. () Worcestershire sauce has been produced since 1917. It relies on soy sauce instead of anchovies for the umami flavour. The company makes two versions: Formula 1 for Asian taste, and Formula 2 for international taste. The two differ only in that Formula 2 contains slightly less soy sauce and slightly more spices. 
In Japan, Worcestershire sauce is labelled Worcester (rather than Worcestershire), rendered as . Many sauces are more of a vegetarian variety, with the base being water, syrup, vinegar, puree of apple and tomato puree, and the flavour less spicy and sweeter. Japanese Agricultural Standard defines Worcester-type sauces by viscosity, with Worcester sauce proper having a viscosity of less than 0.2 poiseuille, 0.2–2.0 poiseuille sauces categorised as , commonly used in Kantō region and northwards, and sauces over 2.0 poiseuille categorised as ; they are manufactured under brand names such as Otafuku and Bulldog, but these are brown sauces more similar to HP Sauce rather than Worcestershire sauce. Tonkatsu sauce is a thicker Worcester-style sauce made from vegetables and fruits and associated with the dish . Worcestershire sauce has a history of multiple introduction in Chinese-speaking areas. These sauces, each differently named, have diverged both from the original and from each other: Spicy soy sauce (), Shanghai Worcestershire sauce was first produced under this name in 1933 by Mailing Aquarius, then an English-owned company. With Mailing moving to Hong Kong in 1946, the Shanghai branch was nationalised in 1954. Sauce production was transferred to Taikang in 1960. The sauce was reformulated in 1981 under a "nine flavours in one" formula, and again changed in 1990 into two "Taikang Yellow" and "Taikang Blue" varieties. As of 2020, only the yellow variety remains available. The Taikang Yellow sauce contains no fish. It is used in Haipai cuisine, especially on pork chops and Shanghainese borscht. A descendant of an earlier form of the sauce is found in Taiwan as "Mailing spicy soy sauce", originally produced by the HK branch of Mailing. It is found in steakhouses. Gip-sauce (), Hong Kong This variety is of uncertain etymology: it may have come from catsup or the verb give. Save for the Lea & Perrins original sold as a gip-sauce, most varieties of this type have a stronger umami flavour with the addition of soy sauce, fish sauce, and/or MSG; some commercial varieties forgo fish altogether. This sauce is commonly used in dim sum dishes such as steamed meatball and spring rolls. Spicy vinegar (), Taiwan This variety is descended from the Japanese Worcester Sauce via the Kongyen company, originally founded by Japanese businesspeople. It is also known under the name Taiwan Black Vinegar due to confusion post-WW2. See also A.1. Steak Sauce Anglo-Indian cuisine Fish sauce Oyster sauce Soy sauce French's Henderson's Relish – similar sauce without fish List of sauces Sarson's References External links , abetted by Lea & Perrins, reports and debunks the myth, without unveiling Lady Sandys. . Song "Worcestershire Sauce" written for a "Ballad Documentary" put on by the Somers Folk Club (Malvern) in 1984 1837 introductions British condiments Sauce Sauce Fermented foods Fish sauces History of Worcester, England Umami enhancers Japanese condiments Food brands of the United Kingdom Steak sauces Anchovy dishes Sour foods
Worcestershire sauce
Biology
2,133
7,662,554
https://en.wikipedia.org/wiki/-oate
The suffix -oate is the IUPAC nomenclature used in organic chemistry to form names of compounds formed with ester. They are of two types: Formed by replacing the hydrogen atom in the –COOH by some other radical, usually an alkyl or aryl radical forming an ester. For example, methyl benzoate is a molecular compound with the structure C6H5–CO–O–CH3, and its condensed structural formula usually written as C6H5COOCH3. Formed by removing the hydrogen atom in the –COOH, producing an anion, which joins with a cation forming a salt. For example, the sodium benzoate is an ionic compound with the structure C6H5–CO–O− Na+, and its condensed structural formula usually written as C6H5CO2Na. The suffix comes from "-oic acid". The most common examples of compounds named with the "oate" suffix are esters, like ethyl acetate, . References oate English suffixes
-oate
Chemistry
217
1,481,468
https://en.wikipedia.org/wiki/Methazole
Methazole (C9H6Cl2N2O3) is an obsolete herbicide in the family of herbicides known as oxadiazolones. It was used as a post-emergent treatment for controlling weeds. References Herbicides Oxadiazolidines Chloroarenes
Methazole
Biology
64
21,011,166
https://en.wikipedia.org/wiki/WebOS
webOS, also known as LG webOS and previously known as Open webOS, HP webOS and Palm webOS, is a Linux kernel-based multitasking operating system for smart devices, such as smart TVs, that has also been used as a mobile operating system. Initially developed by Palm, Inc. (which was acquired by Hewlett-Packard), HP made the platform open source, at which point it became Open webOS. The operating system was later sold to LG Electronics, and was made primarily a smart TV operating system for LG televisions as a successor to NetCast. In January 2014, Qualcomm announced that it had acquired technology patents from HP, which included all the webOS and Palm patents; LG licenses them to use in their devices. Various versions of webOS have been featured on several devices since launching in 2009, including Pre, Pixi, and Veer smartphones, TouchPad tablet, LG's smart TVs since 2014, LG's smart refrigerators and smart projectors since 2017. History 2009–2010: Launch by Palm Palm launched webOS, then called Palm webOS, in January 2009 as the successor to Palm OS. The first webOS device was the original Palm Pre, released by Sprint in June 2009. The Palm Pixi followed. 2010–2013: Acquisition by HP; the launch of Open webOS In April 2010, HP acquired Palm. The acquisition of Palm was initiated while Mark Hurd was CEO, however he resigned shortly after the acquisition was completed. Later, webOS was described by new HP CEO Leo Apotheker as a key asset and motivation for the purchase. The $1.2 billion acquisition was finalized in June. HP indicated its intention to develop the webOS platform for use in multiple new products, including smartphones, tablets, and printers. In February 2011, HP announced that it would use webOS as the universal platform for all its devices. However, HP also made the decision that the Palm Pre, Palm Pixi, and the "Plus" revisions would not receive over-the-air updates to webOS 2.0, despite a previous commitment to an upgrade "in coming months." HP announced several webOS devices, including the HP Veer and HP Pre 3 smartphones, running webOS 2.2, and the HP TouchPad, a tablet computer released in July 2011 that runs webOS 3.0. In March 2011, HP announced plans for a version of webOS by the end of 2011 to run within Windows, and to be installed on all HP desktop and notebook computers in 2012. Neither ever materialized, although work had begun on an x86 port around this time involving a team in Fort Collins, Colorado; work was scrapped later in the year. In August 2011, HP announced that it was interested in selling its Personal Systems Group, responsible for all of its consumer PC products, including webOS, and that webOS device development and production lines would be halted. It remained unclear whether HP would consider licensing webOS software to other manufacturers. When HP reduced the price of the Touchpad to $99, the existing inventory quickly sold out. The HP Pre 3 was launched in select areas of Europe, and US-based units were available only through unofficial channels (both AT&T and Verizon canceled their orders just prior to delivery after Apotheker's (HP's CEO at the time) announcement. Notably, these US Pre 3 units, having been released through unofficial channels, lacked both warranties and carried no support obligation from HP; as a result parts are nearly impossible to come by. 
HP announced that it would continue to issue updates for the HP Veer and HP TouchPad, but these updates have failed to materialize for the former, and the latter saw a final, unofficial release called "webOS CE" that contained only open-sourced components of webOS meant for what remained of the developer community rather than a conventional, user-centric update to the operating system. The last HP webOS version, 3.0.5, was released on January 12, 2012. In December 2011, after abandoning the TouchPad and the proposed sale of the HP Personal Systems Group, HP announced it would release webOS source code in the near future under an open-source license. In August 2012, code specific to the existing devices was released as webOS Community Edition (CE), with support for the existing HP hardware. Open webOS includes open source libraries designed to target a wider range of hardware. HP renamed its webOS unit as "Gram". In February 2012, HP released Isis, a new web browser for Open webOS. Growth and decline of HP App Catalog The HP App Catalog was an app store for apps for the mobile devices running webOS. On June 6, 2009, webOS launched on the Palm Pre with 18 available apps. The number of apps grew to 30 by June 17, 2009, with 1 million cumulative downloads by June 27, 2009; 30 official and 31 unofficial apps by July 13, 2009; 1,000 official apps by January 1, 2010; 4,000 official apps September 29, 2010; and 10,002 official apps on December 9, 2011. Subsequently, the number of available apps decreased because many apps were withdrawn from the App Catalog by their owners. Examples include the apps for The New York Times and Pandora Radio. After a Catalog splash screen on November 11, 2014, announcing its deprecation, the HP App Catalog servers were permanently shut down on March 15, 2015. The number of functional apps remaining at that time is unknown but was probably much lower due to the imminent abandonment of the project. 2013–present: Acquisition by LG; open-source edition launch On February 25, 2013, HP announced that it was selling webOS to LG Electronics for use on its web-enabled smart TVs, replacing its previous NetCast platform. Under the agreement LG Electronics owns the documentation, source code, developers and all related websites. However, HP would still hold on to patents from Palm as well as cloud-based services such as the App Catalog. In 2014, HP sold its webOS patents to Qualcomm. As well as its use as an OS for smart TVs, LG has expanded its use to various Internet of things devices. As a starting point, LG showcased a LG Wearable Platform OS (webOS) smartwatch in early 2015. At CES 2017, LG announced a smart refrigerator with webOS. On March 19, 2018, LG announced an open-source edition of webOS. This edition would allow developers to download the source code for free as well as take advantage of related tools, guides, and forums on its new open source website to become more familiar with webOS and its inherent benefits as a smart device's platform. LG hopes that this will help its goal of advancing its philosophy of open platform, open partnership and open connectivity. Features The webOS mobile platform introduced some innovative features, such as the cards interface and the gesture navigation, that are now standard in mobile operating systems such as iOS, Windows Phone, and Android. HP/Palm webOS Multitasking interface Navigation uses multi-touch gestures on the touchscreen. The interface uses "cards" to manage multitasking and represent apps. 
The user switches between running apps with a flick from left and right on the screen. Apps are closed by flicking a "card" up—and "off"—the screen. The app "cards" can be rearranged for organization. webOS 2.0 introduced 'stacks', where related cards could be "stacked" together. Synergy Palm referred to integration of information from many sources as "Synergy." Users can sign into multiple email accounts from different providers and integrate all of these sources into a single list. Similar capabilities pull together calendars and also instant messages and SMS text messages from multiple sources. Over-the-air updates The OS can be updated without docking to a PC, instead receiving OS updates over the carrier connection. Notifications The notification area is located on the bottom portion of the screen on phones, and on the top status bar area on tablets. On phones, when a notification comes in, it slides in from the bottom of the screen. Due to the resizable nature of the Mojo and Enyo application frameworks, the app usually resizes itself to allow unhindered use while the notification is displayed. After the notification slides away, it usually remains as an icon. The user can then tap on the icons to expand them. Notifications can then be dismissed (sliding off the screen), acted upon (tapping), or left alone. Sync By default, data sync uses a cloud-based approach rather than using a desktop sync client. The first version of webOS shipped with the ability to sync with Apple's iTunes software by masquerading as an Apple device, but this feature was disabled by subsequent iTunes software updates. Third-party applications On HP webOS, officially vetted third-party apps are accessible to be installed on the device from the HP App Catalog. As HP webOS replaced Palm OS, Palm commissioned MotionApps to code and develop an emulator called Classic, to enable backward compatibility to Palm OS apps. This operates with webOS version 1.0. Palm OS emulation was discontinued in WebOS version 2.0. MotionApps disengaged from Classic in 2010, citing HP Palm as "disruptive." Another source of applications is homebrew software. Homebrew apps are not directly supported by HP. Programs used to distribute homebrew webOS apps include webOS Quick Install (Java-based sideloader for desktop computers) and Preware (a homebrew webOS app catalog, which must be sideloaded). If software problems do occur after installing homebrew programs, "webOS Doctor" (provided by HP) can restore a phone back to factory settings and remove changes made by homebrew apps and patches. Developer Mode Developer mode allows for developer access of the device and is also used for digital forensic investigations. It can be accessed by typing webos20090606 on the device’s keyboard, or on some devices typing upupdowndownleftrightleftrightbastart (a reference to the Konami code) on the cards view. Once in developer mode, data on the system partition can be accessed freely, even if the device was locked. LG webOS Smart TV features LG has redesigned the UI of webOS, maintaining the card UI as a feature called "Simple switching" between open TV apps. The other two features promoted by the company are a simple connection (using an animated Clippy-like character called Beanbird to aid the user through setup), and simple discovery. Platform Underneath the graphical user interface, webOS has much in common with mainstream Linux distributions. Versions 1.0 to 2.1 use a patched Linux 2.6.24 kernel. 
The list of open-source components used by the different releases of webOS, as well as the source code of and patches applied to each component, is available at the Palm Open Source webpage. This page also serves as a reference listing of the versions of webOS that have been publicly released. In 2011, Enyo replaced Mojo, released in June 2009, as the software development kit (SDK). Hardware See also List of smart TV platforms and middleware software Enyo Mobile platform Access Linux Platform LuneOS List of WebOS devices References External links webOS Open Source Edition (LG) webOS Developer Center LG webOS TV Developer Center LG webOS TV Israel webOS Auto Developer Center 2009 software ARM operating systems HP software LG Electronics Mobile Linux Mobile operating systems Palm, Inc. Smart TV Smartphone operating systems Software based on WebKit Tablet operating systems Television operating systems
WebOS
Technology
2,436
9,918,043
https://en.wikipedia.org/wiki/Red%20Queen%20hypothesis
The Red Queen's hypothesis is a hypothesis in evolutionary biology proposed in 1973, that species must constantly adapt, evolve, and proliferate in order to survive while pitted against ever-evolving opposing species. The hypothesis was intended to explain the constant (age-independent) extinction probability as observed in the paleontological record caused by co-evolution between competing species; however, it has also been suggested that the Red Queen hypothesis explains the advantage of sexual reproduction (as opposed to asexual reproduction) at the level of individuals, and the positive correlation between speciation and extinction rates in most higher taxa. Origin In 1973, Leigh Van Valen proposed the hypothesis as an "explanatory tangent" to explain the "law of extinction" known as "Van Valen's law", which states that the probability of extinction does not depend on the lifetime of the species or higher-rank taxon, instead being constant over millions of years for any given taxon. However, the probability of extinction is strongly related to adaptive zones, because different taxa have different probabilities of extinction. In other words, extinction of a species occurs randomly with respect to age, but nonrandomly with respect to ecology. Collectively, these two observations suggest that the effective environment of any homogeneous group of organisms deteriorates at a stochastically constant rate. Van Valen proposed that this is the result of an evolutionary zero-sum game driven by interspecific competition: the evolutionary progress (= increase in fitness) of one species deteriorates the fitness of coexisting species, but because coexisting species evolve as well, no one species gains a long-term increase in fitness, and the overall fitness of the system remains constant. Van Valen named the hypothesis "Red Queen" because under his hypothesis, species have to "run" or evolve in order to stay in the same place, or else go extinct as the Red Queen said to Alice in Lewis Carroll's Through the Looking-Glass in her explanation of the nature of Looking-Glass Land: Examples Positive correlation between speciation and extinction rates (Stanley's rule) Palaeontological data suggest that high speciation rates correlate with high extinction rates in almost all major taxa. This correlation has been attributed to a number of ecological factors, but it may result also from a Red Queen situation, in which each speciation event in a clade deteriorates the fitness of coexisting species in the same clade (provided that there is phylogenetic niche conservatism). Evolution of sex Discussions of the evolution of sex were not part of Van Valen's Red Queen hypothesis, which addressed evolution at scales above the species level. The microevolutionary version of the Red Queen hypothesis was proposed by Bell (1982), also citing Lewis Carroll, but not citing Van Valen. The Red Queen hypothesis is used independently by Hartung and Bell to explain the evolution of sex, by John Jaenike to explain the maintenance of sex and W. D. Hamilton to explain the role of sex in response to parasites. In all cases, sexual reproduction confers species variability and a faster generational response to selection by making offspring genetically unique. Sexual species are able to improve their genotype in changing conditions. Consequently, co-evolutionary interactions, between host and parasite, for example, may select for sexual reproduction in hosts in order to reduce the risk of infection. 
Oscillations in genotype frequencies are observed between parasites and hosts in an antagonistic coevolutionary way without necessitating changes to the phenotype. In multi-host and multi-parasite coevolution, the Red Queen dynamics could affect what host and parasite types will become dominant or rare. Science writer Matt Ridley popularized the term in connection with sexual selection in his 1993 book The Red Queen, in which he discussed the debate in theoretical biology over the adaptive benefit of sexual reproduction to those species in which it appears. The connection of the Red Queen to this debate arises from the fact that the traditionally accepted Vicar of Bray hypothesis only showed adaptive benefit at the level of the species or group, not at the level of the gene (although the protean "Vicar of Bray" adaptation is very useful to some species that belong to the lower levels of the food chain). By contrast, a Red-Queen-type thesis suggesting that organisms are running arms races with their parasites can explain the utility of sexual reproduction at the level of the gene by positing that the role of sex is to preserve genes that are currently disadvantageous, but that will become advantageous against the background of a likely future population of parasites. However, the assumption of the Red Queen hypothesis, that the primary factor in maintaining sexual reproduction is the generation of genetic variation does not appear to be generally applicable. Ruderfer et al. analyzed the ancestry of strains of the yeasts Saccharomyces cerevisiae and Saccharomyces paradoxus under natural conditions and concluded that outcrossing occurs only about once every 50,000 cell divisions. This low frequency of outcrossing implies that there is little opportunity for the production of recombinational variation. In nature, mating is likely most often between closely related yeast cells. Mating occurs when haploid cells of opposite mating type MATa and MATα come into contact, and Ruderfer et al. pointed out that such contacts are frequent between closely related yeast cells for two reasons. The first is that cells of opposite mating type are present together in the same ascus, the sac that contains the cells directly produced by a single meiosis, and these cells can mate with each other. The second reason is that haploid cells of one mating type, upon cell division, often produce cells of the opposite mating type with which they can mate. The relative rarity in nature of meiotic events that result from outcrossing is inconsistent with the idea that production of genetic variation is the main selective force maintaining meiosis in this organism (as would be expected by the Red Queen hypothesis). However, these findings in yeast are consistent with the alternative idea that the main selective force maintaining meiosis is enhanced recombinational repair of DNA damage, since this benefit is realized during each meiosis, whether or not out-crossing occurs. Further evidence of the Red Queen hypothesis was observed in allelic effects under sexual selection. The Red Queen hypothesis leads to the understanding that allelic recombination is advantageous for populations that engage in aggressive biotic interactions, such as predator-prey or parasite-host interactions. In cases of parasite-host relations, sexual reproduction can quicken the production of new multi-locus genotypes allowing the host to escape parasites that have adapted to the prior generations of typical hosts. 
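The oscillating genotype-frequency dynamics described above can be illustrated with a deliberately simplified matching-allele model. The short Python sketch below is a toy simulation added for illustration only; the model structure and parameter values are assumptions chosen for demonstration, not results taken from the studies cited in this article.

```python
# Toy matching-allele model of host-parasite coevolution (illustrative only;
# the parameters are arbitrary and not drawn from the studies cited above).
s_host = 0.3      # fitness cost to a host infected by its matching parasite
s_para = 0.5      # fitness gain to a parasite that matches its host type

h, p = 0.6, 0.5   # frequencies of host type 1 and parasite type 1

for generation in range(60):
    # Each host type suffers in proportion to how common its matching parasite is.
    w_h1 = 1 - s_host * p
    w_h2 = 1 - s_host * (1 - p)
    # Each parasite type benefits in proportion to how common its matching host is.
    w_p1 = 1 + s_para * h
    w_p2 = 1 + s_para * (1 - h)

    # Replicator-style update of the two allele frequencies.
    h = h * w_h1 / (h * w_h1 + (1 - h) * w_h2)
    p = p * w_p1 / (p * w_p1 + (1 - p) * w_p2)

    if generation % 10 == 0:
        print(f"gen {generation:3d}  host type 1: {h:.2f}  parasite type 1: {p:.2f}")
```

Because the parasite population chases whichever host genotype is currently common, neither genotype reaches fixation; the frequencies tend to cycle rather than settle, which is the qualitative "running to stay in place" behaviour the Red Queen hypothesis appeals to.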
Mutational effects can be represented by models to describe how recombination through sexual reproduction can be advantageous. According to the mutational deterministic hypothesis, if the deleterious mutation rate is high, and if those mutations interact to cause a general decline in organismal fitness, then sexual reproduction provides an advantage over asexually reproducing organisms by allowing populations to eliminate the deleterious mutations not only more rapidly, but also most effectively. Recombination is one of the fundamental means that explain why many organisms have evolved to reproduce sexually. Sexual organisms must spend resources to find mates. In the case of sexual dimorphism, usually one of the sexes contributes more to the survival of their offspring (usually the mother). In such cases, the only adaptive benefit of having a second sex is the possibility of sexual selection, by which organisms can improve their genotype. Evidence for this explanation for the evolution of sex is provided by the comparison of the rate of molecular evolution of genes for kinases and immunoglobulins in the immune system with genes coding other proteins. The genes coding for immune system proteins evolve considerably faster. Further evidence for the Red Queen hypothesis was provided by observing long-term dynamics and parasite coevolution in a mixed sexual and asexual population of snails (Potamopyrgus antipodarum). The number of sexuals, the number of asexuals, and the rates of parasitic infection for both were monitored. It was found that clones that were plentiful at the beginning of the study became more susceptible to parasites over time. As parasite infections increased, the once-plentiful clones dwindled dramatically in number. Some clonal types disappeared entirely. Meanwhile, sexual snail populations remained much more stable over time. On the other hand, Hanley et al. studied mite infestations of a parthenogenetic gecko species and its two related sexual ancestral species. Contrary to expectation based on the Red Queen hypothesis, they found that the prevalence, abundance and mean intensity of mites in sexual geckos was significantly higher than in asexuals sharing the same habitat. Critics of the Red Queen hypothesis question whether the constantly changing environment of hosts and parasites is sufficiently common to explain the evolution of sexual reproduction. In particular, Otto and Nuismer presented findings showing that species interactions (e.g. host vs parasite interactions) usually select against sexual reproduction. They concluded that, even though the Red Queen hypothesis favors sex under certain circumstances, it alone does not account for the ubiquity of sex. Otto and Gerstein further stated that "it seems doubtful to us that strong selection per gene is sufficiently commonplace for the Red Queen hypothesis to explain the ubiquity of sex". Parker reviewed numerous genetic studies on plant disease resistance and failed to uncover a single example consistent with the assumptions of the Red Queen hypothesis. In 2011, researchers used the microscopic roundworm Caenorhabditis elegans as a host and the pathogenic bacterium Serratia marcescens to generate a host–parasite coevolutionary system in a controlled environment, allowing them to conduct more than 70 evolution experiments testing the Red Queen hypothesis. They genetically manipulated the mating system of C. 
elegans, causing populations to mate either sexually, by self-fertilization, or a mixture of both within the same population. Then they exposed those populations to the S. marcescens parasite. It was found that the self-fertilizing populations of C. elegans were rapidly driven extinct by the coevolving parasites, while sex allowed populations to keep pace with their parasites, a result consistent with the Red Queen hypothesis. However, a study of the frequency of outcrossing in natural populations showed that self-fertilization is the predominant mode of reproduction in C. elegans, with infrequent outcrossing events occurring at a rate of around 1%. Although meioses that result in selfing are unlikely to contribute significantly to beneficial genetic variability, these meioses may provide the adaptive benefit of recombinational repair of DNA damages that arise, especially under stressful conditions. Currently, there is no consensus among biologists on the main selective forces maintaining sex. The competing models to explain the adaptive function of sex have been reviewed by Birdsell and Wills. Evolution of aging The Red Queen hypothesis has been invoked by some authors to explain evolution of aging. The main idea is that aging is favored by natural selection since it allows faster adaptation to changing conditions, especially in order to keep pace with the evolution of pathogens, predators and prey. Interspecies race A number of predator/prey species couple compete via running speed. "The rabbit runs faster than the fox, because the rabbit is running for his life while the fox is only running for his dinner." Aesop The predator-prey relationship can also be established in the microbial world, producing the same evolutionary phenomenon that occurs in the case of foxes and rabbits. A recently observed example has as protagonists M. xanthus (predator) and E. coli (prey) in which a parallel evolution of both species can be observed through genomic and phenotypic modifications, producing in future generations a better adaptation of one of the species that is counteracted by the evolution of the other, thus generating an arms race that can only be stopped by the extinction of one of the species. The interactions between parasitoid wasps and insect larvae, necessary for the parasitic wasp's life cycle, are also a good illustration of a race. Evolutionary strategy was found by both partners to respond to the pressure generated by the mutual association of lineages. For example, the parasitoid wasp group, Campoletis sonorensis, is able to fight against the immune system of its hosts, Heliothis virescens (Lepidopteran) with the association of a polydnavirus (PDV) (Campoletis sonorensis PDV). During the oviposition process, the parasitoid transmits the virus (CsPDV) to the insect larva. The CsPDV will alter the physiology, growth and development of the infected insect larvae to the benefit of the parasitoid. Competing evolutionary ideas A competing evolutionary idea is the court jester hypothesis, which indicates that an arms race is not the driving force of evolution on a large scale, but rather it is abiotic factors. The Black Queen hypothesis is a theory of reductive evolution that suggests natural selection can drive organisms to reduce their genome size. In other words, a gene that confers a vital biological function can become dispensable for an individual organism if its community members express that gene in a "leaky" fashion. 
Like the Red Queen hypothesis, the Black Queen hypothesis is a theory of co-evolution. Publication Van Valen originally submitted his article to the Journal of Theoretical Biology, where it was accepted for publication. However, because "the manner of processing depended on payment of page charges", Van Valen withdrew his manuscript and founded a new Journal called Evolutionary Theory, in which he published his manuscript as the first paper. Van Valen's acknowledgement to the National Science Foundation ran: "I thank the National Science Foundation for regularly rejecting my (honest) grant applications for work on real organisms, thus forcing me into theoretical work". See also Chaos theory Interspecific competition Macroevolution Punctuated equilibrium Red King hypothesis Survivorship curve References Further reading Francis Heylighen (2000): "The Red Queen Principle", in: F. Heylighen, C. Joslyn and V. Turchin (editors): Principia Cybernetica Web (Principia Cybernetica, Brussels), URL: http://pespmc1.vub.ac.be/REDQUEEN.html. Pearson, Paul N. (2001) Red Queen hypothesis Encyclopedia of Life Sciences http://www.els.net Ridley, M. (1995) The Red Queen: Sex and the Evolution of Human Nature, Penguin Books, Vermeij, G.J. (1987). Evolution and escalation: An ecological history of life. Princeton University Press, Princeton, NJ. Evolution of the biosphere Evolutionary biology concepts
Red Queen hypothesis
Biology
3,082
38,468,351
https://en.wikipedia.org/wiki/Democratization%20of%20technology
Democratization of technology refers to the process by which access to technology rapidly continues to become more accessible to more people, especially from a select group of people to the average public. New technologies and improved user experiences have empowered those outside of the technical industry to access and use technological products and services. At an increasing scale, consumers have greater access to use and purchase technologically sophisticated products, as well as to participate meaningfully in the development of these products. Industry innovation and user demand have been associated with more affordable, user-friendly products. This is an ongoing process, beginning with the development of mass production and increasing dramatically as digitization became commonplace. Thomas Friedman argued that the era of globalization has been characterized by the democratization of technology, democratization of finance, and democratization of information. Technology has been critical in the latter two processes, facilitating the rapid expansion of access to specialized knowledge and tools, as well as changing the way that people view and demand such access. A counter argument is that this is just a process of 'massification' - more people can use banks, technology, have access to information, but it does not mean there is any more democratic influence over its production, or that this massification promotes Democracy. History Scholars and social critics often cite the invention of the printing press as a major invention that changed the course of history. The force of the printing press rested not in its impact on the printing industry or inventors, but on its ability to transmit information to a broader public by way of mass production. This event is so widely recognized because of its social impact – as a democratizing force. The printing press is often seen as the historical counterpart to the Internet. After the development of the Internet in 1969, its use remained limited to communications between scientists and within government, although use of email and boards gained popularity among those with access. It did not become a popular means of communication until the 1990s. In 1993 the US federal government opened the Internet to commerce and the creation of HTML formed the basis for universal accessibility. Major innovations The Internet has played a critical role in modern life as a typical feature of most Western households, and has been key in the democratization of knowledge. It not only constitutes arguably the most critical innovation in this trend thus far; it has also allowed users to gain knowledge of and access to other technologies. Users can learn of new developments more quickly, and purchase high-tech products otherwise only actively marketed to recognized experts. Some have argued that cloud computing is having a major effect by allowing users greater access through mobility and pay-as-you-use capacity. Social media has also empowered and emboldened users to become contributors and critics of technological developments. Generative artificial intelligence tools have the potential to democratize the process of innovation by improving the ability of individuals to specify and visualize ideas. The open-source model allows users to participate directly in development of software, rather than indirect participation, through contributing opinions. By being shaped by the user, development is directly responsive to user demand and can be obtained for free or at a low cost. 
In a comparable trend, arduino and littleBits have made electronics more accessible to users of all backgrounds and ages. The development of 3D printers has the potential to increasingly democratize production. Cultural impact This trend is linked to the spread of knowledge of and ability to perform high-tech tasks, challenging previous conceptions of expertise. Widespread access to technology, including lower costs, was critical to the transition to the new economy. Similarly, democratization of technology was also fuelled by this economic transition, which produced demands for technological innovation and optimism in technology-driven progress. Since the 1980s, a spreading constructivist conception of technology has emphasized that the social and technical domains are critically intertwined. Scholars have argued that technology is non-neutral, defined contextually and locally by a certain relationship with society. Andrew Feenberg, a central thinker in the philosophy of technology, argued that democratizing technology means expanding technological design to include alternative interests and values. When successful in doing so, this can be a tool for increasing inclusiveness. This also suggests an important participatory role for consumers if technology is to be truly democratic. Feenberg asserts that this must be achieved by consumer intervention in a liberated design process. Improved access to specialized knowledge and tools has been associated with an increase in the "do it yourself" (DIY) trend. This has also been associated with consumerization, whereby personal or privately owned devices and software are also used for business purposes. Some have argued that this is linked to reduced dependence on traditional information technology departments. Astra Taylor, the author of the book The People's Platform: Taking Back Power and Culture in the Digital Age, argues, "The promotion of Internet-enabled amateurism is a lazy substitute for real equality of opportunity." Industry impact In some ways, democratization of technology has strengthened this industry. Markets have broadened and diversified. Consumer feedback and input is available at a very low or no cost. However, related industries are experiencing decreased demand for qualified professionals as consumers are able to fill more of their demands themselves. Users of a range of types and status have access to increasingly similar technology. Because of the decreased costs and expertise necessary to use products and software, professionals (e.g. in the audio industry) may experience loss of work. In some cases, technology is accessible but sufficiently complex that most users without specialized training are able to operate it without necessarily understanding how it works. Additionally, the process of consumerization has led to an influx in the number of devices in businesses and accessing private networks that IT departments cannot control or access. While this can lead to lowered operating costs and increased innovation, it is also associated with security concerns that most businesses are unable to address at the pace of the spread of technology. Political impact Some scholars have argued that technological change will bring about a third wave of democracy. The Internet has been recognized for its role in promoting increased citizen advocacy and government transparency. Jesse Chen, a leading thinker in democratic engagement technologies, distinguishes the democratizing effects of technology from democracy itself. 
Chen has argued that, while the Internet may have democratizing effects, the Internet alone cannot deliver democracy at all levels of society unless technologies are purposely designed for the nuances of democracy, specifically the engagement of large groups of people in between elections in and beyond government. The spread of the Internet and other forms of technology has led to increased global connectivity. Many scholars believe that it has been associated in the developing world not only with increased Western influence, but also with the spread of democracy through increased communication, efficiency, and access to information. Scholars have drawn associations between the level of technological connectedness and democracy in many nations. Technology can enhance democracy in the developed world as well. In addition to increased communication and transparency, some electorates have implemented online voting to accommodate an increased number of citizens. References External links Leadbeater and Miller: The Pro-Am Revolution National Democratic Institute The Open Source Initiative Cultural globalization Digital media Digital technology Internet culture Technological change
Democratization of technology
Technology
1,424
1,077,843
https://en.wikipedia.org/wiki/Yutaka%20Taniyama
Yutaka Taniyama was a Japanese mathematician known for the Taniyama–Shimura conjecture. Life Taniyama was born on 22 November 1927 in Kisai, a town in Saitama. He was the sixth of eight children born to a doctor's family. He studied at Urawa High School (present-day Saitama University) after graduating from Fudouoka Middle School. He suspended his studies for two years because of illness, but finally graduated in 1950. During Taniyama's college years, he aspired to be a mathematician after reading Teiji Takagi's work. In 1958, after several years as an assistant at the University of Tokyo, Taniyama was appointed associate professor there; he also obtained his doctorate from the university in May of that year. In October, Taniyama became engaged to Misako Suzuki, and the Institute for Advanced Study in Princeton, New Jersey offered him a position. On 17 November 1958, Taniyama committed suicide by poisoning himself with gas. He left a note explaining how far he had progressed with his teaching duties, and apologizing to his colleagues for the trouble he was causing them. The first paragraph of his suicide note read (quoted in Shimura, 1989): Until yesterday I had no definite intention of killing myself. But more than a few must have noticed that lately I have been tired both physically and mentally. As to the cause of my suicide, I don't quite understand it myself, but it is not the result of a particular incident, nor of a specific matter. Merely may I say, I am in the frame of mind that I lost confidence in my future. There may be someone to whom my suicide will be troubling or a blow to a certain degree. I sincerely hope that this incident will cast no dark shadow over the future of that person. At any rate, I cannot deny that this is a kind of betrayal, but please excuse it as my last act in my own way, as I have been doing my own way all my life. Although his note is mostly enigmatic, it does mention tiredness and a loss of confidence in his future. Taniyama's ideas had been criticized as unsubstantiated and his behavior had occasionally been deemed peculiar. Goro Shimura mentioned that he suffered from depression. About a month later, Suzuki also committed suicide by gas, leaving a note reading: "We promised each other that no matter where we went, we would never be separated. Now that he is gone, I must go too in order to join him." After Taniyama's death, Goro Shimura stated that: He was always kind to his colleagues, especially to his juniors, and he genuinely cared about their welfare. He was the moral support of many of those who came into mathematical contact with him, including of course myself. Probably he was never conscious of this role he was playing. But I feel his noble generosity in this respect even more strongly now than when he was alive. And yet nobody was able to give him any support when he desperately needed it. Reflecting on this, I am overwhelmed by the bitterest grief. Contribution Taniyama was best known for conjecturing, in modern language, automorphic properties of L-functions of elliptic curves over any number field. A partial and refined case of this conjecture for elliptic curves over rationals is called the Taniyama–Shimura conjecture or the modularity theorem, whose statement he subsequently refined in collaboration with Goro Shimura. The names Taniyama, Shimura and Weil have all been attached to this conjecture, but the idea is essentially due to Taniyama. Taniyama's interests were in algebraic number theory.
His work was influenced by André Weil, whom he met at the international symposium on algebraic number theory in 1955, where Taniyama attracted attention with the problems he proposed. The problems Taniyama proposed in 1955 form the basis of the Taniyama–Shimura conjecture, that "every elliptic curve defined over the rational field is a factor of the Jacobian of a modular function field". In 1986, Ken Ribet proved that if the Taniyama–Shimura conjecture held, then so would Fermat's Last Theorem, which inspired Andrew Wiles to work in secrecy on it for a number of years, and to prove enough of it to prove Fermat's Last Theorem. Owing to the pioneering contribution of Wiles and the efforts of a number of mathematicians, the Taniyama–Shimura conjecture was finally proven in 1999. The original Taniyama conjecture for elliptic curves over arbitrary number fields remains open. Goro Shimura stated: Taniyama was not a very careful person as a mathematician. He made a lot of mistakes. But he made mistakes in a good direction and so eventually he got right answers. I tried to imitate him, but I found out that it is very difficult to make good mistakes. See also Taniyama group Taniyama's problems Notes Publications This book is hard to find, but an expanded version was later published as References Singh, Simon (hardcover, 1998). Fermat's Enigma. Bantam Books. (previously published under the title Fermat's Last Theorem). External links 1927 births 1958 suicides People from Saitama Prefecture Japanese mathematicians 20th-century Japanese mathematicians Number theorists Japanese scientists University of Tokyo alumni Suicides in Japan 1958 deaths
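What the conjecture asserts can be made concrete with a standard textbook example that is not drawn from this article: the elliptic curve y² + y = x³ − x² of conductor 11 is modular, in the sense that for every prime p ≠ 11 the quantity a_p = p + 1 − #E(F_p), obtained by counting points over the finite field F_p, agrees with the p-th coefficient of the weight-2, level-11 newform η(q)²η(q¹¹)² = q ∏(1 − qⁿ)²(1 − q¹¹ⁿ)². A rough numerical check, assuming that correspondence:

```python
def ap_from_curve(p):
    """a_p = p + 1 - #E(F_p) for E: y^2 + y = x^3 - x^2 (conductor 11)."""
    count = 1  # the point at infinity
    for x in range(p):
        for y in range(p):
            if (y * y + y - (x ** 3 - x ** 2)) % p == 0:
                count += 1
    return p + 1 - count

def eta_product_coefficients(n_terms=40):
    """q-expansion of eta(q)^2 * eta(q^11)^2 = q * prod (1-q^n)^2 (1-q^(11n))^2."""
    coeffs = [0] * n_terms
    coeffs[0] = 1                      # start from the product, shift by q at the end

    def multiply_by(m):
        # multiply the current series by (1 - q^m)
        nonlocal coeffs
        new = coeffs[:]
        for i in range(n_terms - m):
            new[i + m] -= coeffs[i]
        coeffs = new

    for n in range(1, n_terms):
        multiply_by(n)
        multiply_by(n)                 # (1 - q^n)^2
        if 11 * n < n_terms:
            multiply_by(11 * n)
            multiply_by(11 * n)        # (1 - q^(11n))^2
    return [0] + coeffs[:-1]           # index k now holds the coefficient of q^k

form_coeffs = eta_product_coefficients()
for p in (2, 3, 5, 7, 13):
    print(p, ap_from_curve(p), form_coeffs[p])   # the two columns should agree
```

The brute-force point count is only practical for small primes, but it is enough to see the point-counting data of the curve and the Fourier coefficients of the modular form line up, which is exactly the kind of correspondence the conjecture predicts in general.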
Yutaka Taniyama
Mathematics
1,106
62,033,048
https://en.wikipedia.org/wiki/C29H52
The molecular formula C29H52 (molar mass: 400.72 g/mol) may refer to: Fusidane (29-nor protostane) Poriferastane (24S-ethylcholestane) Stigmastane (24R-ethylcholestane) Molecular formulas
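The quoted molar mass follows from simple arithmetic on conventional atomic weights; the weights below are older standard reference values assumed here for illustration.

```python
# Conventional atomic weights in g/mol (2005-era IUPAC values assumed here).
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794}

def molar_mass(formula_counts):
    """Molar mass of a formula given as a dict mapping element -> atom count."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula_counts.items())

print(round(molar_mass({"C": 29, "H": 52}), 2))  # 400.72, as quoted above
```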
C29H52
Physics,Chemistry
81
1,485,104
https://en.wikipedia.org/wiki/Chemical%20ionization
Chemical ionization (CI) is a soft ionization technique used in mass spectrometry. This was first introduced by Burnaby Munson and Frank H. Field in 1966. This technique is a branch of gaseous ion-molecule chemistry. Reagent gas molecules (often methane or ammonia) are ionized by electron ionization to form reagent ions, which subsequently react with analyte molecules in the gas phase to create analyte ions for analysis by mass spectrometry. Negative chemical ionization (NCI), charge-exchange chemical ionization, atmospheric-pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) are some of the common variants of the technique. CI mass spectrometry finds general application in the identification, structure elucidation and quantitation of organic compounds as well as some utility in biochemical analysis. Samples to be analyzed must be in vapour form, or else (in the case of liquids or solids), must be vapourized before introduction into the source. Principles of operation The chemical ionization process generally imparts less energy to an analyte molecule than does electron impact (EI) ionization, resulting in less fragmentation and usually a simpler spectrum. The amount of fragmentation, and therefore the amount of structural information produced by the process can be controlled to some degree by selection of the reagent ion. In addition to some characteristic fragment ion peaks, a CI spectrum usually has an identifiable protonated molecular ion peak [M+1]+, allowing determination of the molecular mass. CI is thus useful as an alternative technique in cases where EI produces excessive fragmentation of the analyte, causing the molecular-ion peak to be weak or completely absent. Instrumentation The CI source design for a mass spectrometer is very similar to that of the EI source. To facilitate the reactions between the ions and molecules, the chamber is kept relatively gas tight at a pressure of about 1 torr. Electrons are produced externally to the source volume (at a lower pressure of 10−4 torr or below) by heating a metal filament which is made of tungsten, rhenium, or iridium. The electrons are introduced through a small aperture in the source wall at energies 200–1000 eV so that they penetrate to at least the centre of the box. In contrast to EI, the magnet and the electron trap are not needed for CI, since the electrons do not travel to the end of the chamber. Many modern sources are dual or combination EI/CI sources and can be switched from EI mode to CI mode and back in seconds. Mechanism A CI experiment involves the use of gas phase acid-base reactions in the chamber. Some common reagent gases include: methane, ammonia, water and isobutane. Inside the ion source, the reagent gas is present in large excess compared to the analyte. Electrons entering the source will mainly ionize the reagent gas because it is in large excess compared to the analyte. The primary reagent ions then undergo secondary ion/molecule reactions (as below) to produce more stable reagent ions which ultimately collide and react with the lower concentration analyte molecules to form product ions. The collisions between reagent ions and analyte molecules occur at close to thermal energies, so that the energy available to fragment the analyte ions is limited to the exothermicity of the ion-molecule reaction. For a proton transfer reaction, this is just the difference in proton affinity between the neutral reagent molecule and the neutral analyte molecule. 
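That exothermicity can be estimated from tabulated proton affinities, which is one way to reason about how "soft" a given reagent gas will be for a given analyte. The sketch below uses approximate literature values that should be treated as assumptions for illustration, not data from this article; authoritative figures are available in the NIST tables.

```python
# Approximate gas-phase proton affinities in kJ/mol (assumed literature values).
PROTON_AFFINITY = {
    "methane": 544,      # reagent ion CH5+
    "water": 691,        # reagent ion H3O+
    "isobutene": 802,    # reagent ion t-C4H9+, relevant for isobutane CI
    "ammonia": 854,      # reagent ion NH4+
}

def proton_transfer_exothermicity(analyte_pa, reagent):
    """Energy released (kJ/mol) when the reagent ion protonates the analyte.

    Positive values mean proton transfer is exothermic; the smaller the
    value, the softer the ionization and the less fragmentation expected.
    """
    return analyte_pa - PROTON_AFFINITY[reagent]

pyridine_pa = 930  # assumed approximate value for an example analyte
for gas in ("methane", "water", "isobutene", "ammonia"):
    print(gas, proton_transfer_exothermicity(pyridine_pa, gas), "kJ/mol")
```

On these assumed numbers, methane-based CI deposits the most excess energy in a basic analyte such as pyridine, while ammonia-based CI deposits the least, which is consistent with the qualitative ordering of reagent-gas softness discussed below.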
This results in significantly less fragmentation than does 70 eV electron ionization (EI). The following reactions are possible with methane as the reagent gas. Primary ion formation: CH4 + e− → CH4+• + 2e−. Secondary reagent ions: CH4 + CH4+• → CH5+ + CH3•; CH4 + CH3+ → C2H5+ + H2. Product ion formation: M + CH5+ → CH4 + [M + H]+ (protonation); AH + CH3+ → CH4 + A+ (H− abstraction); M + C2H5+ → [M + C2H5]+ (adduct formation); A + CH4+• → CH4 + A+• (charge exchange). If ammonia is the reagent gas: NH3 + e− → NH3+• + 2e−; NH3 + NH3+• → NH4+ + NH2•; M + NH4+ → [M + H]+ + NH3. For isobutane as the reagent gas: C3H7+ + C4H10 → C4H9+ + C3H8; M + C4H9+ → [M + H]+ + C4H8. Self chemical ionization is possible if the reagent ion is an ionized form of the analyte. Advantages and limitations One of the main advantages of CI over EI is the reduced fragmentation as noted above, which, for more fragile molecules, results in a peak in the mass spectrum indicative of the molecular weight of the analyte. This proves to be a particular advantage for biological applications where EI often does not yield useful molecular ions in the spectrum. The spectra given by CI are simpler than EI spectra, and CI can be more sensitive than other ionization methods, at least in part due to the reduced fragmentation, which concentrates the ion signal in fewer and therefore more intense peaks. The extent of fragmentation can be somewhat controlled by proper selection of reagent gases. Moreover, CI is often coupled to chromatographic separation techniques, thereby improving its usefulness in identification of compounds. As with EI, the method is limited to compounds that can be vapourized in the ion source. The lower degree of fragmentation can be a disadvantage in that less structural information is provided. Additionally, the degree of fragmentation, and therefore the mass spectrum, can be sensitive to source conditions such as pressure, temperature, and the presence of impurities (such as water vapour) in the source. Because of this lack of reproducibility, libraries of CI spectra have not been generated for compound identification. Applications CI mass spectrometry is a useful tool in structure elucidation of organic compounds. This is possible with CI because fragmentation of the [M+1]+ ion often proceeds by elimination of a stable neutral molecule, which can be used to guess the functional groups present. Besides that, CI facilitates the ability to detect the molecular ion peak, due to less extensive fragmentation. Chemical ionization can also be used to identify and quantify an analyte present in a sample, by coupling chromatographic separation techniques such as gas chromatography (GC), high performance liquid chromatography (HPLC) and capillary electrophoresis (CE) to CI. This allows selective ionization of an analyte from a mixture of compounds, where accurate and precise results can be obtained. Variants Negative chemical ionization Chemical ionization for gas phase analysis is either positive or negative. Almost all neutral analytes can form positive ions through the reactions described above. In order to see a response by negative chemical ionization (NCI, also NICI), the analyte must be capable of producing a negative ion (stabilize a negative charge), for example by electron capture ionization. Because not all analytes can do this, using NCI provides a certain degree of selectivity that is not available with other, more universal ionization techniques (EI, PCI).
NCI can be used for the analysis of compounds containing acidic groups or electronegative elements (especially halogens). Moreover, negative chemical ionization is more selective and demonstrates a higher sensitivity toward oxidizing agents and alkylating agents. Because of the high electronegativity of halogen atoms, NCI is a common choice for their analysis. This includes many groups of compounds, such as PCBs, pesticides, and fire retardants. Most of these compounds are environmental contaminants, thus much of the NCI analysis that takes place is done under the auspices of environmental analysis. In cases where very low limits of detection are needed, environmental toxic substances such as halogenated species, oxidizing agents and alkylating agents are frequently analyzed using an electron capture detector coupled to a gas chromatograph. Negative ions are formed by resonance capture of a near-thermal energy electron, dissociative capture of a low energy electron, and via ion-molecule interactions such as proton transfer, charge transfer and hydride transfer. Compared to the other methods involving negative ion techniques, NCI is quite advantageous, as the reactivity of anions can be monitored in the absence of a solvent. Electron affinities and energies of low-lying valence states can be determined by this technique as well. Charge-exchange chemical ionization This technique is similar to CI; the difference lies in the production of a radical cation with an odd number of electrons. The reagent gas molecules are bombarded with high energy electrons and the product reagent gas ions abstract electrons from the analyte to form radical cations. The common reagent gases used for this technique are toluene, benzene, NO, Xe, Ar and He. Careful selection of the reagent gas, taking into account the difference between the resonance energy of the reagent gas radical cation and the ionization energy of the analyte, can be used to control fragmentation. The reactions for charge-exchange chemical ionization are as follows: He + e− → He+• + 2e−; He+• + M → M+• + He. Atmospheric-pressure chemical ionization Chemical ionization in an atmospheric pressure electric discharge is called atmospheric pressure chemical ionization (APCI), which usually uses water as the reagent gas. An APCI source is composed of a liquid chromatography outlet nebulizing the eluent, a heated vaporizer tube, a corona discharge needle and a pinhole entrance to a 10⁻³ torr vacuum. The analyte is a gas or liquid spray and ionization is accomplished using an atmospheric pressure corona discharge. This ionization method is often coupled with high performance liquid chromatography, where the mobile phase containing the eluting analyte is sprayed with high flow rates of nitrogen or helium and the aerosol spray is subjected to a corona discharge to create ions. It is applicable to relatively less polar and thermally less stable compounds. The difference between APCI and CI is that APCI functions under atmospheric pressure, where the frequency of collisions is higher. This enables the improvement in sensitivity and ionization efficiency. See also Electrospray ionization Proton-transfer-reaction mass spectrometry References Bibliography External links Using Amines as Chemical Ionization Reagents and Building Custom Manifold Ion source Mass spectrometry Scientific techniques
Chemical ionization
Physics,Chemistry
2,329
2,269,546
https://en.wikipedia.org/wiki/Architectural%20mythology
Architectural mythology means the symbolism in real-world architecture, as well as the architecture described in mythological stories. In addition to language, a myth could be represented by a painting, a sculpture, or a building. It is about the overall story of an architectural work, often revealed through art. Mythology and symbolism have long been a channel through which architects inject a deeper meaning into their work. The power of ancient myths and symbols is harnessed to create a bridge between the past and the future. Using mythology in architecture is a deliberate strategy by which architects try to design something timeless and universally relatable. The value of a built environment, therefore, is a conglomerate of its actual physical existence and the historical memories and myths people attach to it, bring to it, and project on it. Not all stories surrounding an architectural work incorporate a level of myth. These stories can also be well hidden from the casual viewer and are often built into the conceptual design of the architectural statement. Ancient Greek architecture Before 600 BC worship was done in the open, but when the Greeks began to represent their gods by large statues, it was necessary to provide a building for this purpose. This led to the development of temples. The Greek god of architecture was Hephaestus (fire, metalworking, craftsmen, sculpture, metallurgy and volcanoes), and the Greek goddess associated with architecture was Hestia (architecture, the hearth, and domesticity). Temples were intended for worship, places to celebrate the god and receive comfort; but ancient Greek temples were also meant to serve as homes for the gods and goddesses of the community. Their homes were the finest and came with a staff of servants. The ancient Greek temples were often enhanced with mythological decorations from the columns to the roof. The architectural functions of the temple mainly concentrated on the cella with the cult statue. The architectural elaboration served to stress the dignity of the cella. These statues of the god or goddess were usually represented standing up or sitting down in the central space of the temple. The early statues were made of wood; later ones were made of stone or cast bronze. Two of the finest temple statues were the statue of Zeus at Olympia and the statue of Athena in the Parthenon; both combined gold and ivory, and the Zeus was considered one of the Seven Wonders of the ancient world. The Parthenon is a Greek temple in Athens built in dedication to Athena, the Greek goddess of wisdom, war, handicraft, and practical reason. The Parthenon was a symbol of the Athenians' devotion and gratitude to her. At a time when the Athenians wanted to showcase their strength, civilization, and heroism to the world, the Parthenon's sculptural reliefs reinforced these ideals. The South, West, and North sides of the Parthenon frieze show a procession of human figures. The East side contains Greek gods in various positions. The gods on the left side of the frieze tend to have stronger associations with the underworld while the gods on the right preside over spheres of fertility and optimism. This creates a story of life and death across the East Frieze. Ancient Egyptian architecture The great pyramids are an architectural feat constructed as a means to house the remains of ancient Egyptian rulers.
Inscribed on the interior pyramid walls are hieroglyphic texts describing the afterlife and ancient Egyptian mythology. There are as many as 900 individual compositions in each pyramid. The pyramid's smooth, angled sides symbolized the rays of the sun and were designed to help the king's soul ascend to heaven and join the gods, particularly the sun god Ra. Ancient Egyptians believed that when a king died, part of his spirit remained in his body, which was therefore mummified to care for that spirit. The pyramids became known as royal burial grounds. The sphinx was a mythical creature with the body of a lion and the head of a man wearing a pharaoh's headdress. The lion symbolizes strength, power, and the protection given to the pharaohs; it is considered a powerful guardian of the sacred and royal realms. The human head symbolizes wisdom and intelligence. The position of the sphinx facing east towards the rising sun symbolizes the pharaoh's role as the mediator between the gods and the people, and his connection to the sun god Ra. Sphinx statues were commonly found in or near ancient Egyptian temples and tombs. The sphinx was thought to be a guardian for the ancient rulers of Egypt. These sphinxes, like the pyramids, had inscriptions on their bases and bodies. These inscriptions were references to Egyptian gods such as Horus, Nekhbet, Wadjet and many others. - Horus: Egyptian deity and pharaoh who represented the sky, sun, kingship, healing, and protection - Nekhbet: Egyptian goddess who protected the pharaohs, queens, children, pregnant people, and the dead - Wadjet: goddess of serpents, the Nile Delta, the land of the living, and protector of Egyptian kings Ancient Roman architecture Many ancient Roman temples were constructed for religious purposes. The most influential example is the Pantheon. "Pantheon" is a Greek adjective meaning "honor all gods"; in fact, it was first built as a temple to all gods. According to Roman legend, the original Pantheon was constructed on the very site where Romulus, their mythological founder, ascended to heaven. However, most historians attribute the first Pantheon, built in 27 BC, to Agrippa, a close associate of Emperor Augustus. The Pantheon serves as the final resting place for the famed artist Raphael, as well as several Italian kings and poets. While there is very little surviving written information about the building, the historian Cassius Dio remarked: "Perhaps it has this name because, among the statues which embellished it, there were those of many gods, including Mars and Venus; but my own opinion on the origin of the name is that, because of its vaulted roof, it actually resembles the heavens." See also Folly References Books Giedion, S.: The Beginnings of Architecture: The Eternal Present: A Contribution on Constancy and Change, New Jersey: Princeton University Press, 1981 Lethaby, William Richard: Architecture, Mysticism and Myth, Cosimo (first published 1892), English, 288 pages, (Online PDF) Mann, A.: Sacred Architecture, Shaftesbury: Element, 1993 Donald E. Strong, The Classical World, Paul Hamlyn, London (1965) External links Bruno Queysanne: Architecture and Mythology (Southern California Institute of Architecture: Media Archive) Architectural history Mythography
Architectural mythology
Engineering
1,363
39,585,389
https://en.wikipedia.org/wiki/Higher-order%20compact%20finite%20difference%20scheme
High-order compact finite difference schemes are used for solving third-order differential equations created during the study of obstacle boundary value problems. They have been shown to be highly accurate and efficient. They are constructed by modifying the second-order scheme that was developed by Noor and Al-Said in 2002. The convergence rate of the high-order compact scheme is third order; that of the second-order scheme is fourth order. Differential equations are essential tools in mathematical modelling. Most physical systems are described in terms of mathematical models that include convective and diffusive transport of some variables. Finite difference methods are amongst the most popular methods applied to solve such differential equations. A finite difference scheme is compact in the sense that the discretised formula comprises at most a nine-point stencil, which includes a node in the middle about which differences are taken. In addition, a greater order of accuracy (more than two) justifies the terminology 'higher-order compact finite difference scheme' (HOC). This can be achieved in several ways. The higher-order compact scheme considered here is obtained by using the original differential equation to substitute for the leading truncation error terms in the finite difference equation. Overall, the scheme is found to be robust, efficient and accurate for most computational fluid dynamics (CFD) applications discussed here further. The simplest problem for the validation of the numerical algorithms is the lid-driven cavity problem. Computed results in the form of tables, graphs and figures for a fluid with Prandtl number = 0.71 and Rayleigh number (Ra) ranging from 10³ to 10⁷ are available in the literature. The efficacy of the scheme is demonstrated by how clearly it captures the secondary and tertiary vortices at the sides of the cavity at high values of Ra. Another milestone was the development of these schemes for solving two-dimensional steady/unsteady convection-diffusion equations. A comprehensive study of flow past an impulsively started circular cylinder was made. The problem of flow past a circular cylinder has continued to generate tremendous interest amongst researchers working in CFD, mainly because it displays almost all the fluid mechanical phenomena for incompressible, viscous flows in the simplest of geometrical settings. The scheme was able to analyze and visualize the flow patterns more accurately for Reynolds numbers (Re) ranging from 10 to 9500 than existing numerical results. This was followed by its extension to the rotating counterpart of the cylinder for Re ranging from 200 to 1000. A more complex phenomenon, involving a circular cylinder undergoing rotational oscillations while translating in a fluid, has been studied for Re as high as 500. Another benchmark in this history is the extension to multiphase flow phenomena. Natural processes such as gas bubbles in oil, ice melting and wet steam are observed everywhere in nature. Such processes also play an important role in practical applications in biology, medicine and environmental remediation. The scheme has been successfully implemented to solve one- and two-dimensional elliptic and parabolic equations with discontinuous coefficients and singular source terms. These types of problems are important numerically because they usually lead to non-smooth or discontinuous solutions across the interfaces. Expansion of this idea from fixed to moving interfaces, with both regular and irregular geometries, is currently under way.
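The "compact" idea can be illustrated with the classical fourth-order Padé approximation of a second derivative, which couples the unknown derivative values at three neighbouring nodes instead of widening the stencil. The sketch below is a generic demonstration of such a scheme and of its fourth-order convergence, not the specific obstacle-problem scheme of Noor and Al-Said; the test function and helper names are illustrative assumptions.

```python
import numpy as np

def compact_second_derivative(f, h, d2f_left, d2f_right):
    """Fourth-order compact (Pade) approximation of f'' on a uniform grid.

    Solves  f''[i-1] + 10 f''[i] + f''[i+1] = 12/h^2 (f[i-1] - 2 f[i] + f[i+1])
    for the interior nodes, with the two boundary values of f'' supplied.
    """
    n = len(f)
    m = n - 2                      # number of interior unknowns
    A = np.zeros((m, m))
    rhs = np.zeros(m)
    for k in range(m):
        i = k + 1                  # index into the full grid
        A[k, k] = 10.0
        if k > 0:
            A[k, k - 1] = 1.0
        if k < m - 1:
            A[k, k + 1] = 1.0
        rhs[k] = 12.0 / h**2 * (f[i - 1] - 2.0 * f[i] + f[i + 1])
    # move the known boundary values of f'' to the right-hand side
    rhs[0] -= d2f_left
    rhs[-1] -= d2f_right
    return np.linalg.solve(A, rhs)

def max_error(n):
    x = np.linspace(0.0, np.pi, n)
    h = x[1] - x[0]
    f = np.sin(x)
    exact = -np.sin(x)
    approx = compact_second_derivative(f, h, exact[0], exact[-1])
    return np.max(np.abs(approx - exact[1:-1]))

e_coarse, e_fine = max_error(41), max_error(81)
print(e_coarse, e_fine, e_coarse / e_fine)  # error ratio close to 16: fourth order
```

Halving the grid spacing reduces the error by roughly a factor of sixteen, which is the fourth-order behaviour that distinguishes a compact scheme from the second-order three-point formula on the same stencil.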
References Finite differences Numerical differential equations
Higher-order compact finite difference scheme
Mathematics
670
299,408
https://en.wikipedia.org/wiki/Echo%20chamber
An echo chamber is a hollow enclosure used to produce reverberation, usually for recording purposes. A traditional echo chamber is covered in highly acoustically reflective surfaces. By using directional microphones pointed away from the speakers, echo capture is maximized. Some portions of the room can be moved to vary the room's decay time. Nowadays, effects units are more widely used to create such effects, but echo chambers are still used today, such as the famous echo chambers at Capitol Studios. In music, the use of acoustic echo and reverberation effects has taken many forms and dates back many hundreds of years. Sacred music of the Medieval and Renaissance periods relied heavily on the composers' extensive understanding and use of the complex natural reverberation and echoes inside churches and cathedrals. This early acoustical knowledge informed the design of opera houses and concert halls in the 17th, 18th, and 19th centuries. Architects designed these to create internal reflections that would enhance and project sound from the stage in the days before electrical amplification. Sometimes echo effects are the unintentional side effect of the architectural or engineering design, such as for the Hamilton Mausoleum in Scotland, which has one of the longest reverberation times of any building. Electro-acoustic Developments in electronics in the early 20th century—specifically the invention of the amplifier and the microphone—led to the creation of the first artificial echo chambers, built for radio and recording studios. Until the 1950s, echo and reverberation were typically created by a combination of electrical and physical methods. Acoustically speaking, the "classic novel" echo chamber creates echoes in the same way as they are created in churches or caves—they are all simply large, enclosed, empty spaces with floors and walls made of hard materials (such as polished stone or concrete) that reflect sound waves well. The basic purpose of such chambers is to add colour and depth to the original sound, and to simulate the rich natural reverberation that is a feature of large concert halls. The development of artificial echo and reverberation chambers was important for sound recording because of the limitations of early recording systems. Except in the case of live performances, most commercially popular recordings are made in specially constructed studios. These rooms were both heavily insulated to exclude external noises and internally somewhat anechoic—that is, they were designed not to produce any internal echoes or sound reverberation. Because virtually every sound in everyday life is a complex mixture of direct sound from the source and its echoes and reverberations, audiences naturally found the totally 'dry' and reverberation-free sound of early recordings unappealing. Consequently, record producers and engineers quickly came up with an effective method of adding "artificial" echo and reverberation that experts could control with a remarkable degree of accuracy. Producing echo and reverberation in this form of echo chamber is simple. A signal from the studio mixing desk—such as a voice or instrument—is fed to a large high-fidelity loudspeaker located at one end of the chamber. One or more microphones are placed along the length of the room, and these pick up both the sound from the speaker and its reflections off the walls of the chamber. 
The farther away from the loudspeaker, the more echo and reverberation the microphone(s) picks up, and the louder the reverberation becomes in relation to the source. The signal from the microphone line is then fed back to the mixing desk, where the echo/reverberation-enhanced sound can be blended with the original 'dry' input. An example of this physical effect can be heard on the 1978 David Bowie song "Heroes", from the album of the same name. The song, produced by Tony Visconti, was recorded in the large concert hall in the Hansa recording studio in Berlin, and Visconti has since been much praised for the striking sound he achieved on Bowie's vocals. Visconti placed three microphones at intervals along the length of the hall; one very close to Bowie, one halfway down the hall, and the third at the far end of the hall. During the recording, Bowie sang each verse progressively louder than the last, and as he increased volume in each verse, Visconti opened up each of the three microphones in turn, from closest to farthest. Thus, in the first verse, Bowie's voice sounds close, warm, and present; by the end of the song, Visconti has mixed in a large amount of signal from all three microphones, giving Bowie's voice a strikingly reverberant sound. The original echo chamber at EMI's Abbey Road Studios was improved by Clive Robinson, site foreman at the time of construction. His construction and engineering teams perfected the echo booth at Abbey Road Studios in London. It was one of the first studios in the world to be specially built for recording purposes when it was established in 1931; it remains in place and is a prime example of the early 20th-century electro-acoustic echo chamber. Buildings such as churches, church halls, and ballrooms have often been chosen as recording sites for classical and other music because of their rich, natural echo and reverberation characteristics. Famous examples include Sir George Martin's AIR Studios at Lyndhurst Hall in Belsize Park, London, a large, vaulted 19th-century building originally constructed as a church and missionary school. Montreal's Church of St. Eustache is the favored recording venue of the Montreal Symphony Orchestra and many others and is much sought after for classical recordings because of its unique acoustic characteristics. The distinctive reverberation on the early hit records by Bill Haley & His Comets was created by recording the band under the domed ceiling of Decca's studio in New York City, located in a former ballroom called The Pythian Temple. Some recording companies and many small independent labels could not afford large purpose-built echo chambers such as the Abbey Road Chamber, so enterprising producers and engineers often made use of any large reverberant space. Corridors, lift-wells, stairwells, and tiled bathrooms were all used as substitute echo chambers. Many famous soul music and R&B music recordings released by the New York-based Atlantic Records feature echo and reverb effects produced by simply placing a speaker and microphone in the office bathroom—a process also used by Producer/Engineer Bruce Botnick while recording The Doors for their 1970 album L.A. Woman. Electronic echo machines In the 1950s and 1960s, the development of magnetic audio tape technology made it possible to duplicate physical echo and reverberation effects entirely electronically. 
The Watkins Copicat, designed and built by renowned British electronics engineer Charlie Watkins in the late 1950s, is typical of this kind of electronic delay device. Tape echo units use an endless loop of magnetic tape, which is drawn across a series of recording and playback heads. When a signal from a voice or instrument is fed into the machine, it records the signal onto the tape loop as it passes over the record head. As the tape advances, the newly recorded signal is then picked up by a series of playback heads mounted in line with the record head. These play the sound back as the signal passes over each head in turn, creating the classic rippling or cascading echoes that are typical of tape echo units. The number of playback heads determines the number of repeats, and the physical distance between each playback head determines the ratio of delay between each repeat of the sound (usually some fraction of a second). The actual length of the delay between each repeat can be varied by a pitch control that alters the speed of the tape loop across the heads. Typically, the playback heads of tape echo machines are also connected to controls that allow the user to determine the volume of each echo relative to the original signal. Another control (sometimes called "regeneration") allows the signal from the playback heads to be fed back into and variably mixed with the original input signal, creating a distinctive "feedback" effect that adds more and more noise to the loop with each repeat. If fully activated, this control ultimately produces a continuous feedback loop of pure noise. Roland manufactured various models of magnetic tape echo and reverb sound effect machines from 1973 until the introduction of digital sound effect machines. A tape echo that has few repeats and a very short delay between each repeat is often referred to as a "slapback" echo. This distinctive sound is one of the key sonic characteristics of 1950s rock and roll and rockabilly, and can be heard on the classic mid-50s Sun Records recordings by Elvis Presley and others. This effect was a result of the unintentional combination of the recording and monitoring tape heads (physically located a few inches apart), which, on playback, created a gap that inadvertently produced the iconic "slap-back" effect. Digital echo With the advent of digital signal processing and other digital audio technologies, it has become possible to simulate almost every "echo chamber" effect by processing the signal digitally. Because digital devices are able to simulate an almost limitless variety of real reverberant spaces as well as replicate the classic tape-based echo effects, physical echo chambers fell into disuse. However, as noted above, naturally reverberant spaces such as churches continue to be used as recording venues for classical and other forms of acoustic music. See also Anechoic chamber Bathroom singing Delay (audio effect) Reverberation room – an echo chamber for scientific measurement (acoustics) Telephone game References Sound recording technology
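The tape machines and digital units described above share one underlying signal-processing idea: a delay line whose output is blended with the input and fed back into its own buffer. A minimal sketch of that principle follows; the sample rate, delay time and feedback gain are arbitrary illustrative choices, not values taken from this article.

```python
def echo(signal, sample_rate=44100, delay_s=0.25, feedback=0.4, mix=0.5):
    """Single-tap feedback delay line: the digital analogue of a tape echo.

    Each output sample blends the dry input with a delayed copy; the delayed
    copy is also fed back into the delay buffer, producing decaying repeats
    (the 'regeneration' control on tape units).
    """
    delay_samples = int(sample_rate * delay_s)
    buffer = [0.0] * delay_samples   # circular buffer holding delayed samples
    out = []
    idx = 0
    for x in signal:
        delayed = buffer[idx]
        out.append((1.0 - mix) * x + mix * delayed)
        buffer[idx] = x + feedback * delayed   # write input plus regeneration
        idx = (idx + 1) % delay_samples
    return out

# A single click followed by silence makes the repeats easy to see.
test = [1.0] + [0.0] * (44100 // 2)
processed = echo(test)
peaks = [i for i, v in enumerate(processed) if v > 0.01]
print(peaks[:4])   # echoes spaced 11025 samples (about 0.25 s) apart
```

Adding more read positions into the buffer would model the multiple playback heads of a Copicat-style machine, and raising the feedback value toward 1.0 reproduces the runaway regeneration effect described above.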
Echo chamber
Technology
1,951
52,434,433
https://en.wikipedia.org/wiki/N%20band%20%28NATO%29
The NATO N band is the designation given to the radio frequencies from 100 to 200 GHz (equivalent to wavelengths between 3 mm and 1.5 mm) used by US armed forces and SACLANT in ITU Region 2. The NATO N band is also a subset of the EHF band as defined by the ITU. Particularities The NATO N band is not subject to the NATO Joint Civil/Military Frequency Agreement (NJFA). However, military requirements, which may apply to NATO operations in ITU Region 1, are subject to coordination with the appropriate frequency administration concerned. References Radio spectrum
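The quoted wavelength limits follow directly from the relation λ = c / f; a quick check:

```python
C = 299_792_458  # speed of light in m/s

for f_ghz in (100, 200):
    wavelength_mm = C / (f_ghz * 1e9) * 1e3
    print(f_ghz, "GHz ->", round(wavelength_mm, 2), "mm")  # about 3.0 mm and 1.5 mm
```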
N band (NATO)
Physics
122
67,888,569
https://en.wikipedia.org/wiki/Urocystis%20alopecuri
Urocystis alopecuri is a fungal plant pathogen in the family Urocystidiaceae, known as foxtail smut. It is found in Europe on Alopecurus species such as Alopecurus aequalis, Alopecurus arundinaceus, Alopecurus geniculatus, Alopecurus myosuroides and Alopecurus pratensis. References Fungal plant pathogens and diseases Wheat diseases Ustilaginomycotina Fungi described in 1877 Fungus species
Urocystis alopecuri
Biology
111
40,186,890
https://en.wikipedia.org/wiki/Missouri%20Public%20Interest%20Research%20Group
Missouri Public Interest Research Group (MoPIRG) is a non-profit organization that is part of the state PIRG organizations. History The PIRGs emerged in the early 1970s on U.S. college campuses. The PIRG model was proposed in the book Action for a Change by Ralph Nader and Donald Ross. Among other early accomplishments, the PIRGs were responsible for much of the Container Deposit Legislation in the United States, also known as "bottle bills." MoPIRG began in March, 1971, after students at Saint Louis University heard a speech by Ralph Nader. Nader inspired the students to organize citizen action groups modeled after similar groups in Oregon and Minnesota. The Center for Student Action at Saint Louis University and the Missouri Public Action Council at Washington University in St. Louis lobbied to establish a public interest research organization funded by a small assessment added to student activities fees. Student referendums on both campuses supported the fee assessment. The two student groups combined their operations and formed MoPIRG. MoPIRG's earliest campaigns included a successful appeal to the Federal Trade Commission to investigate deceptive advertising and sales practices by some St. Louis used car dealers in 1972. The group was also represented on the St. Louis Advertising Review Board, a self-regulatory board of the Advertising Club and the Better Business Bureau. MoPIRG gained national attention for its criticism of self-regulation by the advertising industry. MoPIRG also championed the right to representation for St. Louis City Jail prisoners. Their proposal led to the establishment of an ombudsman position by the St. Louis Department of Welfare in August, 1973. MOPIRG was active in campaigns to stop legislation that would raise the legal ceiling of small loan interest rates in Missouri, drafted a consumer protection ordinance presented to the St. Louis Board of Aldermen in September, 1973, and published research on a variety of consumer and citizen related issues. It researched and developed legislation that helped improve workers' compensation laws in Missouri and was also successful in stopping an effort to eliminate the public display rating system for area restaurants. From 1975 to 1981, MOPIRG developed a comprehensive revision of the state landlord-tenant law and successfully worked for its passage twice in the Missouri House of Representatives, although the bill was defeated both times in the Senate. In 1977, President Jimmy Carter attempted to reform the patronage system of judicial selection for the Circuit Court of Appeals. MOPIRG responded by forming a coalition of eleven political organizations, including the League of Women Voters and the NAACP, to urge Senator Thomas Eagleton to establish a merit nominating process on the state level.
Senator Eagleton's resistance to this idea led MoPIRG to work successfully for legislation requiring the President to develop merit selection guidelines. Other issues MoPIRG has been involved with include requiring school testing services to make test results available to students, curbing utility rate increases, reforming media practices, and passing a national advisory referendum that would allow a non-binding public vote on government policy questions. Affiliate organizations The Fund for Public Interest Research Environment America References External links U.S. Public Interest Research Group (U.S. PIRG) The Student PIRGs The Public Interest Network Non-profit organizations based in Missouri Public Interest Research Groups Renewable energy commercialization Environmental ethics Consumer rights organizations
Missouri Public Interest Research Group
Environmental_science
772
19,460,821
https://en.wikipedia.org/wiki/Rosser%27s%20trick
In mathematical logic, Rosser's trick is a method for proving a variant of Gödel's incompleteness theorems not relying on the assumption that the theory being considered is ω-consistent (Smorynski 1977, p. 840; Mendelson 1977, p. 160). This method was introduced by J. Barkley Rosser in 1936, as an improvement of Gödel's original proof of the incompleteness theorems that was published in 1931. While Gödel's original proof uses a sentence that says (informally) "This sentence is not provable", Rosser's trick uses a formula that says "If this sentence is provable, there is a shorter proof of its negation". Background Rosser's trick begins with the assumptions of Gödel's incompleteness theorem. A theory T is selected which is effective, consistent, and includes a sufficient fragment of elementary arithmetic. Gödel's proof shows that for any such theory there is a formula Proof_T(x, y) which has the intended meaning that x is a natural number code (a Gödel number) for a formula and y is the Gödel number for a proof, from the axioms of T, of the formula encoded by x. (In the remainder of this article, no distinction is made between the number x and the formula encoded by x, and the number coding a formula φ is denoted #φ.) Furthermore, the formula Prov_T(x) is defined as ∃y Proof_T(x, y). It is intended to define the set of formulas provable from T. The assumptions on T also show that it is able to define a negation function neg(x), with the property that if x is a code for a formula φ then neg(x) is a code for the formula ¬φ. The negation function may take any value whatsoever for inputs that are not codes of formulas. The Gödel sentence of the theory T is a formula G_T, sometimes denoted simply G, such that T proves G_T ↔ ¬Prov_T(#G_T). Gödel's proof shows that if T is consistent then it cannot prove its Gödel sentence; but in order to show that the negation of the Gödel sentence is also not provable, it is necessary to add a stronger assumption that the theory is ω-consistent, not merely consistent. For example, the theory PA + ¬G_PA, in which PA is the Peano axioms, is consistent but not ω-consistent, and it proves both ¬G_PA and Prov_PA(#G_PA). Rosser (1936) constructed a different self-referential sentence that can be used to replace the Gödel sentence in Gödel's proof, removing the need to assume ω-consistency. The Rosser sentence For a fixed arithmetical theory T, let Proof_T(x, y) and neg(x) be the associated proof predicate and negation function. A modified proof predicate Proof^R_T(x, y) is defined as: Proof^R_T(x, y) ≡ Proof_T(x, y) ∧ ¬∃z ≤ y Proof_T(neg(x), z), which means that y codes a proof of x and no z ≤ y codes a proof of the negation of x. This modified proof predicate is used to define a modified provability predicate Prov^R_T(x): Prov^R_T(x) ≡ ∃y Proof^R_T(x, y). Informally, Prov^R_T(x) is the claim that x is provable via some coded proof such that there is no smaller coded proof of the negation of x. Under the assumption that T is consistent, for each formula φ the formula Prov^R_T(#φ) will hold if and only if Prov_T(#φ) holds, because if there is a code for the proof of φ, then (following the consistency of T) there is no code for the proof of ¬φ. However, Prov_T and Prov^R_T have different properties from the point of view of provability in T. An immediate consequence of the definition is that if T includes enough arithmetic, then it can prove that for every formula φ, Prov^R_T(#φ) implies ¬Prov^R_T(neg(#φ)). This is because otherwise, there are two numbers n, m, coding for the proofs of φ and ¬φ, respectively, satisfying both n < m and m < n. (In fact T only needs to prove that such a situation cannot hold for any two numbers, as well as to include some first-order logic.) Using the diagonal lemma, let ρ be a formula such that T proves ρ ↔ ¬Prov^R_T(#ρ). The formula ρ is the Rosser sentence of the theory T. Rosser's theorem Let T be an effective, consistent theory including a sufficient amount of arithmetic, with Rosser sentence ρ. Then the following hold (Mendelson 1977, p.
160): (1) T does not prove ρ_T; (2) T does not prove ¬ρ_T. In order to prove this, one first shows that for a formula φ and a number e, if Proof^R_T(#φ, e) holds, then T proves Proof^R_T(#φ, e). This is shown in a similar manner to what is done in Gödel's proof of the first incompleteness theorem: T proves Proof_T(#φ, e), a relation between two concrete natural numbers; one then goes over all the natural numbers z smaller than e one by one, and for each such z, T proves ¬Proof_T(neg(#φ), z), again a relation between two concrete numbers. The assumption that T includes enough arithmetic (in fact, what is required is basic first-order logic) ensures that T also proves Proof^R_T(#φ, e) in that case. Furthermore, if T is consistent and proves φ, then there is a number e coding for its proof in T, and there is no number coding for the proof of the negation of φ in T. Therefore Proof^R_T(#φ, e) holds, and thus T proves Pvbl^R_T(#φ). The proof of (1) is similar to that in Gödel's proof of the first incompleteness theorem: Assume T proves ρ_T; then it follows, by the previous elaboration, that T proves Pvbl^R_T(#ρ_T). Thus T also proves ¬ρ_T. But we assumed T proves ρ_T, and this is impossible if T is consistent. We are forced to conclude that T does not prove ρ_T. The proof of (2) also uses the particular form of Pvbl^R_T. Assume T proves ¬ρ_T; then it follows, by the previous elaboration, that T proves Pvbl^R_T(neg(#ρ_T)). But by the immediate consequence of the definition of Rosser's provability predicate, mentioned in the previous section, it follows that T proves ¬Pvbl^R_T(#ρ_T). Thus T also proves ρ_T. But we assumed T proves ¬ρ_T, and this is impossible if T is consistent. We are forced to conclude that T does not prove ¬ρ_T. References Mendelson (1977), Introduction to Mathematical Logic. Smorynski (1977), "The incompleteness theorems", in Handbook of Mathematical Logic, Jon Barwise, Ed., North Holland, 1982. External links Avigad (2007), "Computability and Incompleteness", lecture notes. Mathematical logic
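For reference, the construction described above can be written out symbolically. The notation below is one standard choice (Proof_T for the proof predicate, neg for the negation function, # for Gödel numbering, and ρ_T for the Rosser sentence); it is a compact restatement of the definitions and of Rosser's theorem given above, not an additional result.

```latex
% Rosser's modified proof predicate: y proves x and no proof of neg(x) is <= y
\mathrm{Proof}^{R}_{T}(x,y) \;\equiv\; \mathrm{Proof}_{T}(x,y)\,\wedge\,\neg\exists z\,\bigl(z \le y \wedge \mathrm{Proof}_{T}(\mathrm{neg}(x),z)\bigr)

% The corresponding Rosser provability predicate
\mathrm{Pvbl}^{R}_{T}(x) \;\equiv\; \exists y\,\mathrm{Proof}^{R}_{T}(x,y)

% The Rosser sentence, obtained from the diagonal lemma
T \vdash \rho_{T} \leftrightarrow \neg\,\mathrm{Pvbl}^{R}_{T}(\#\rho_{T})

% Rosser's theorem: for T effective, consistent, and containing enough arithmetic
T \nvdash \rho_{T} \qquad\text{and}\qquad T \nvdash \neg\rho_{T}
```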
Rosser's trick
Mathematics
1,154
1,398,166
https://en.wikipedia.org/wiki/Net%20neutrality
Network neutrality, often referred to as net neutrality, is the principle that Internet service providers (ISPs) must treat all Internet communications equally, offering users and online content providers consistent transfer rates regardless of content, website, platform, application, type of equipment, source address, destination address, or method of communication (i.e., without price discrimination). Net neutrality was advocated for in the 1990s by the presidential administration of Bill Clinton in the United States. Clinton signed the Telecommunications Act of 1996, an amendment to the Communications Act of 1934. In 2025, an American court ruled that internet companies should not be regulated like utilities, which weakened net neutrality regulation and put the decision in the hands of the United States Congress and state legislatures. Supporters of net neutrality argue that it prevents ISPs from filtering Internet content without a court order, fosters freedom of speech and democratic participation, promotes competition and innovation, prevents dubious services, and maintains the end-to-end principle, and that users would be intolerant of slow-loading websites. Opponents argue that it reduces investment, deters competition, increases taxes, imposes unnecessary regulations, prevents the Internet from being accessible to lower income individuals, and prevents Internet traffic from being allocated to the users who need it most, that large ISPs already have a performance advantage over smaller providers, and that there is already significant competition among ISPs with few competitive issues. Etymology The term was coined by Columbia University media law professor Tim Wu in 2003 as an extension of the longstanding concept of a common carrier, which was used to describe the role of telephone systems. Regulatory considerations Net neutrality regulations may be referred to as common carrier regulations. Net neutrality does not remove all of the abilities ISPs have to shape their customers' services: opt-in and opt-out services exist on the end user side, and filtering can be done locally, as in the filtering of sensitive material for minors. Research suggests that a combination of policy instruments can help realize the range of valued political and economic objectives central to the network neutrality debate. Combined with public opinion, this has led some governments to regulate broadband Internet services as a public utility, similar to the way electricity, gas, and the water supply are regulated, along with limiting providers and regulating the options those providers can offer. Proponents of net neutrality, which include computer science experts, consumer advocates, human rights organizations, and Internet content providers, assert that net neutrality helps to provide freedom of information exchange, promotes competition and innovation for Internet services, and upholds standardization of Internet data transmission, which was essential for its growth. Opponents of net neutrality, which include ISPs, computer hardware manufacturers, economists, technologists and telecommunications equipment manufacturers, argue that net neutrality requirements would reduce their incentive to build out the Internet and reduce competition in the marketplace, and may raise their operating costs, which they would have to pass along to their users. Definition and related principles Internet neutrality Network neutrality is the principle that all Internet traffic should be treated equally.
According to Columbia Law School professor Tim Wu, a public information network will be most useful when this is the case. Internet traffic consists of various types of digital data sent over the Internet between all kinds of devices (e.g., data center servers, personal computers, mobile devices, video game consoles, etc.), using hundreds of different transfer technologies. The data includes email messages; HTML, JSON, and all related web browser MIME content types; text, word processing, spreadsheet, database and other academic, business or personal documents in any conceivable format; audio and video files; streaming media content; and countless other formal, proprietary, or ad-hoc schematic formats—all transmitted via myriad transfer protocols. Indeed, while the focus is often on the type of digital content being transferred, network neutrality includes the idea that if all such types are to be treated equally, then it follows that any ostensibly arbitrary choice of protocol—that is, the technical details of the actual communications transaction itself—must be as well. For example, the same digital video file could be accessed by viewing it live while the data is being received (HLS), interacting with its playback from a remote server (DASH), by receiving it in an email message (SMTP), or by downloading it from either a website (HTTP), an FTP server, or via BitTorrent, among other means. Although all of these use the Internet for transport, and the content received locally is ultimately identical, the interim data traffic is dramatically different depending on which transfer method is used. To proponents of net neutrality, this suggests that prioritizing any one transfer protocol over another is generally unprincipled, or that doing so penalizes the free choices of some users. In sum, net neutrality is the principle that an ISP be required to provide access to all sites, content, and applications at the same speed, under the same conditions, without blocking or giving preference to any content. Under net neutrality, whether a user connects to Netflix, Wikipedia, YouTube, or a family blog, their ISP must treat them all the same. Without net neutrality, an ISP can influence the quality that each experience offers to end users, which suggests a regime of pay-to-play, where content providers can be charged to improve the exposure of their own products versus those of their competitors. Open Internet Under an open Internet system, the full resources of the Internet and means to operate on it should be easily accessible to all individuals, companies, and organizations. Applicable concepts include: net neutrality, open standards, transparency, lack of Internet censorship, and low barriers to entry. The concept of the open Internet is sometimes expressed as an expectation of decentralized technological power, and is seen by some observers as closely related to open-source software, a type of software program whose maker allows users access to the code that runs the program, so that users can improve the software or fix bugs. Proponents of net neutrality see neutrality as an important component of an open Internet, wherein policies such as equal treatment of data and open web standards allow those using the Internet to easily communicate, and conduct business and activities without interference from a third party. 
In contrast, a closed Internet refers to the opposite situation, wherein established persons, corporations, or governments favor certain uses, restrict access to necessary web standards, artificially degrade some services, or explicitly filter out content. Some countries such as Thailand block certain websites or types of sites, and monitor and/or censor Internet use using Internet police, a specialized type of law enforcement, or secret police. Other countries such as Russia, China, and North Korea also use similar tactics to Thailand to control the variety of internet media within their respective countries. In comparison to the United States or Canada for example, these countries have far more restrictive internet service providers. This approach is reminiscent of a closed platform system, as both ideas are highly similar. These systems all serve to hinder access to a wide variety of internet service, which is a stark contrast to the idea of an open Internet system. Dumb pipe The term dumb pipe was coined in the early 1990s and refers to water pipes used in a city water supply system. In theory, these pipes provide a steady and reliable source of water to every household without discrimination. In other words, it connects the user with the source without any intelligence or decrement. Similarly, a dumb network is a network with little or no control or management of its use patterns. Experts in the high-technology field will often compare the dumb pipe concept with smart pipes and debate which one is best applied to a certain portion of Internet policy. These conversations usually refer to these two concepts as being analogous to the concepts of open and closed Internet respectively. As such, certain models have been made that aim to outline four layers of the Internet with the understanding of the dumb pipe theory: Content Layer: Contains services such as communication as well as entertainment videos and music. Applications Layer: Contains services such as e-mail and web browsers. Logical Layer (also called the Code Layer): Contains various Internet protocols such as TCP/IP and HTTP. Physical Layer: Consists of services that provide all others such as cable or wireless connections. End-to-end principle The end-to-end principle of network design was first laid out in the 1981 paper End-to-end arguments in system design by Jerome H. Saltzer, David P. Reed, and David D. Clark. The principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resources being controlled. According to the end-to-end principle, protocol features are only justified in the lower layers of a system if they are a performance optimization; hence, TCP retransmission for reliability is still justified, but efforts to improve TCP reliability should stop after peak performance has been reached. They argued that, in addition to any processing in the intermediate systems, reliable systems tend to require processing in the end-points to operate correctly. They pointed out that most features in the lowest level of a communications system impose costs for all higher-layer clients, even if those clients do not need the features, and are redundant if the clients have to re-implement the features on an end-to-end basis. This leads to the model of a minimal dumb network with smart terminals, a completely different model from the previous paradigm of the smart network with dumb terminals. 
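The end-to-end argument sketched above can be illustrated with a small, self-contained example: whatever the network does in between, the sending and receiving endpoints verify the data themselves and retransmit on failure. The simulated lossy channel, the corruption rate, and all function names below are illustrative assumptions for the demonstration only, not part of any real protocol stack.

```python
import hashlib
import random

def send_over_network(payload: bytes) -> bytes:
    """Simulated network hop that occasionally corrupts data (assumed 30% rate)."""
    if random.random() < 0.3:
        return payload[:-1] + bytes([payload[-1] ^ 0x01])  # flip one bit
    return payload

def sender_frame(payload: bytes):
    # End-to-end check: the sender attaches a digest computed at the endpoint.
    return payload, hashlib.sha256(payload).hexdigest()

def receiver_accept(payload: bytes, digest: str) -> bool:
    # The receiving endpoint re-verifies; it does not delegate this to the network.
    return hashlib.sha256(payload).hexdigest() == digest

def transfer(payload: bytes, max_attempts: int = 10) -> bytes:
    """Retry until the end-to-end check passes, regardless of network behavior."""
    frame, digest = sender_frame(payload)
    for attempt in range(1, max_attempts + 1):
        received = send_over_network(frame)
        if receiver_accept(received, digest):
            print(f"accepted after {attempt} attempt(s)")
            return received
    raise RuntimeError("transfer failed: end-to-end check never passed")

if __name__ == "__main__":
    random.seed(0)
    transfer(b"data is verified at the endpoints, not only inside the network")
```

In the end-to-end view, the network may still add its own reliability mechanisms as a performance optimization, but the check at the endpoints is what ultimately guarantees correctness.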
Because the end-to-end principle is one of the central design principles of the Internet, and because the practical means for implementing data discrimination violate the end-to-end principle, the principle often enters discussions about net neutrality. The end-to-end principle is closely related and sometimes seen as a direct precursor to the principle of net neutrality. Traffic shaping Traffic shaping is the control of computer network traffic to optimize or guarantee performance, improve latency (i.e., decrease Internet response times), or increase usable bandwidth by delaying packets that meet certain criteria. In practice, traffic shaping is often accomplished by throttling certain types of data, such as streaming video or P2P file sharing. More specifically, traffic shaping is any action on a set of packets (often called a stream or a flow) that imposes additional delay on those packets such that they conform to some predetermined constraint (a contract or traffic profile). Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), or the maximum rate at which the traffic is sent (rate limiting), or more complex criteria such as generic cell rate algorithm. Over-provisioning If the core of a network has more bandwidth than is permitted to enter at the edges, then good quality of service (QoS) can be obtained without policing or throttling. For example, telephone networks employ admission control to limit user demand on the network core by refusing to create a circuit for the requested connection. During a natural disaster, for example, most users will get a circuit busy signal if they try to make a call, as the phone company prioritizes emergency calls. Over-provisioning is a form of statistical multiplexing that makes liberal estimates of peak user demand. Over-provisioning is used in private networks such as WebEx and the Internet 2 Abilene Network, an American university network. David Isenberg believes that continued over-provisioning will always provide more capacity for less expense than QoS and deep packet inspection technologies. Device neutrality Device neutrality is the principle that to ensure freedom of choice and freedom of communication for users of network-connected devices, it is not sufficient that network operators do not interfere with their choices and activities; users must be free to use applications of their choice and hence remove the applications they do not want. Device vendors can establish policies for managing applications, but they, too, must be applied neutrally. An unsuccessful bill to enforce network and device neutrality was introduced in Italy in 2015 by Stefano Quintarelli. The law gained formal support at the European Commission from BEUC, the European Consumer Organisation, the Electronic Frontier Foundation and the Hermes Center for Transparency and Digital Human Rights. A similar law was enacted in South Korea. Similar principles were proposed in China. The French telecoms regulator ARCEP has called for the introduction of device neutrality in Europe. The principle has been incorporated in the EU's Digital Markets Act (Articles 6.3 an 6.4) Invoicing and tariffs ISPs can choose a balance between a base subscription tariff (monthly bundle) and a pay-per-use (pay by MB metering). The ISP sets an upper monthly threshold on data usage, just to be able to provide an equal share among customers, and a fair use guarantee. 
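Traffic shaping of the kind described earlier in this section is commonly implemented with a token-bucket scheduler. The sketch below is a minimal, single-threaded illustration; the class name, rate, burst size, and packet sizes are assumptions chosen for the example rather than parameters of any particular ISP system.

```python
import time

class TokenBucketShaper:
    """Minimal token-bucket traffic shaper (illustrative sketch).

    Tokens accumulate at rate_bytes_per_s up to burst_bytes. A packet is sent
    only when enough tokens are available; otherwise the caller waits, which
    delays (shapes) the traffic so it conforms to the configured rate.
    """

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now

    def send(self, packet_len: int) -> float:
        """Block until the packet conforms to the rate; return the delay imposed."""
        if packet_len > self.burst:
            raise ValueError("packet larger than the burst size can never conform")
        waited = 0.0
        while True:
            self._refill()
            if self.tokens >= packet_len:
                self.tokens -= packet_len
                return waited
            delay = (packet_len - self.tokens) / self.rate
            time.sleep(delay)
            waited += delay

# Example: shape a burst of 1500-byte packets to roughly 1 MB/s.
if __name__ == "__main__":
    shaper = TokenBucketShaper(rate_bytes_per_s=1_000_000, burst_bytes=10_000)
    for i in range(20):
        delay = shaper.send(1500)
        print(f"packet {i}: delayed {delay * 1000:.2f} ms")
```

Bandwidth throttling and the generic cell rate algorithm follow the same conform-or-delay pattern, differing mainly in how the conformance test is defined; monthly data thresholds, by contrast, are enforced at the billing layer rather than per packet.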
Such thresholds are generally not considered to be an intrusion but rather allow for a commercial positioning among ISPs. Alternative networks Some networks like public Wi-Fi can take traffic away from conventional fixed or mobile network providers. This can significantly change the end-to-end behavior (performance, tariffs). Issues Discrimination by protocol Discrimination by protocol is the favoring or blocking of information based on aspects of the communications protocol that the computers are using to communicate. In the US, a complaint was filed with the Federal Communications Commission against the cable provider Comcast alleging it had illegally inhibited users of its high-speed Internet service from using the popular file-sharing software BitTorrent. Comcast admitted no wrongdoing in its proposed settlement, of up to US$16 per affected customer, in December 2009. However, a U.S. appeals court ruled in April 2010 that the FCC exceeded its authority when it sanctioned Comcast in 2008 for deliberately preventing some subscribers from using peer-to-peer file-sharing services to download large files. FCC spokeswoman Jen Howard responded, "The court in no way disagreed with the importance of preserving a free and open Internet, nor did it close the door to other methods for achieving this important end." Despite the ruling in favor of Comcast, a study by Measurement Lab in October 2011 verified that Comcast had virtually stopped its BitTorrent throttling practices. Discrimination by Internet Protocol (IP) address During the 1990s, creating a non-neutral Internet was technically infeasible. In 2003, the Internet security company NetScreen Technologies released network firewalls with so-called deep packet inspection capabilities, a technology originally developed to filter malware. Deep packet inspection helped make real-time discrimination between different kinds of data possible, and is often used for Internet censorship. In a practice called zero-rating, companies will not invoice data use related to certain IP addresses, favoring the use of those services. Examples include Facebook Zero, Wikipedia Zero, and Google Free Zone. These zero-rating practices are especially common in the developing world. Sometimes ISPs will charge some companies, but not others, for the traffic they cause on the ISP's network. French telecom operator Orange, complaining that traffic from YouTube and other Google sites consists of roughly 50% of total traffic on the Orange network, made a deal with Google, in which it charges Google for the traffic incurred on the Orange network. Some also thought that Orange's rival ISP Free throttled YouTube traffic. However, an investigation done by the French telecommunications regulatory body revealed that the network was simply congested during peak hours. Aside from the zero-rating method, ISPs will also use certain strategies to reduce the costs of pricing plans, such as the use of sponsored data. In a scenario where a sponsored data plan is used, a third party steps in and pays for content that it (or the carrier or consumer) does not want counted against the subscriber's data allowance. This is generally used as a way for ISPs to remove out-of-pocket costs from subscribers. One of the criticisms regarding discrimination is that the system set up by ISPs for this purpose is capable of not only discriminating but also scrutinizing the full-packet content of communications.
For instance, deep packet inspection technology installs intelligence within the lower layers in the work to discover and identify the source, type, and destination of packets, revealing information about packets traveling in the physical infrastructure so it can dictate the quality of transport such packets will receive. This is seen as an architecture of surveillance, one that can be shared with intelligence agencies, copyrighted content owners, and civil litigants, exposing the users' secrets in the process. Favoring private networks Proponents of net neutrality argue that without new regulations, Internet service providers would be able to profit from and favor their own private protocols over others. The argument for net neutrality is that ISPs would be able to pick and choose who they offer a greater bandwidth to. If one website or company is able to afford more, they will go with them. This especially stifles private up-and-coming businesses. ISPs are able to encourage the use of specific services by using private networks to discriminate what data is counted against bandwidth caps. For example, Comcast struck a deal with Microsoft that allowed users to stream television through the Xfinity app on their Xbox 360s without it affecting their bandwidth limit. However, using other television streaming apps, such as Netflix, HBO Go, and Hulu, counted towards the limit. Comcast denied that this infringed on net neutrality principles since "it runs its Xfinity for Xbox service on its own, private Internet protocol network." In 2009, when AT&T was bundling iPhone 3G with its 3G network service, the company placed restrictions on which iPhone applications could run on its network. According to proponents of net neutrality, this capitalization on which content producers ISPs can favor would ultimately lead to fragmentation, where some ISPs would have certain content that is not necessarily present in the networks offered by other ISPs. The danger behind fragmentation, as viewed by proponents of net neutrality, is the concept that there could be multiple Internets, where some ISPs offer exclusive internet applications or services or make it more difficult to gain access to internet content that may be more easily viewable through other internet service providers. An example of a fragmented service would be television, where some cable providers offer exclusive media from certain content providers. However, in theory, allowing ISPs to favor certain content and private networks would overall improve internet services since they would be able to recognize packets of information that are more time-sensitive and prioritize that over packets that are not as sensitive to latency. The issue, as explained by Robin S. Lee and Tim Wu, is that there are literally too many ISPs and internet content providers around the world to reach an agreement on how to standardize that prioritization. A proposed solution would be to allow all online content to be accessed and transferred freely, while simultaneously offering a fast lane for a preferred service that does not discriminate on the content provider. Peering discrimination There is disagreement about whether peering is a net neutrality issue. In the first quarter of 2014, streaming website Netflix reached an arrangement with ISP Comcast to improve the quality of its service to Netflix clients. 
This arrangement was made in response to increasingly slow connection speeds through Comcast over the course of 2013, in which average connection speeds dropped to an all-time low, more than 25% below their level a year earlier. After the deal was struck in January 2014, the Netflix speed index recorded a 66% increase in connection speed. Netflix agreed to a similar deal with Verizon in 2014, after Verizon DSL customers' connection speed dropped to less than 1 Mbit/s early in the year. Netflix spoke out against this arrangement with a controversial message delivered to Verizon customers who experienced low connection speeds while using the Netflix client. This sparked a dispute between the two companies that led to Verizon issuing a cease and desist demand on 5 June 2014, after which Netflix stopped displaying the message. Favoring fast-loading websites Pro-net neutrality arguments have also noted that regulations are necessary due to research showing users' low tolerance for slow-loading content providers. A 2009 research study conducted by Forrester Research found that online shoppers expected the web pages they visited to load content nearly instantly; when a page failed to load at the expected speed, many of them simply clicked away. A study found that even a one-second delay could lead to "11% fewer page views, a 16% decrease in customer satisfaction, and 7% loss in conversions." This delay can pose a severe problem for small innovators who have created new technology. If a website is slow by default, the general public will lose interest and favor a website that runs faster. This helps large corporate companies maintain power because they have the means to fund faster Internet speeds. Smaller competitors, on the other hand, have fewer financial resources, making it harder for them to succeed online. Legal aspects Legal enforcement of net neutrality principles takes a variety of forms, from provisions that outlaw anti-competitive blocking and throttling of Internet services, all the way to legal enforcement that prevents companies from subsidizing Internet use on particular sites. Contrary to popular rhetoric and statements by various individuals involved in the ongoing academic debate, research suggests that a single policy instrument (such as a no-blocking policy or a quality of service tiering policy) cannot achieve the range of valued political and economic objectives central to the debate. As Bauer and Obar suggest, "safeguarding multiple goals requires a combination of instruments that will likely involve government and nongovernment measures. Furthermore, promoting [rights and] goals such as the freedom of speech, political participation, investment, and innovation calls for complementary policies." By country Net neutrality is administered on a national or regional basis, though much of the world's focus has been on the conflict over net neutrality in the United States. Net neutrality has been a topic in the US since the early 1990s, as the country was one of the world leaders in providing online services, but it faces the same problems as the rest of the world. In 2019, the Save the Internet Act, intended to "guarantee broadband internet users equal access to online content", was passed by the US House of Representatives but not by the US Senate. Finding an appropriate solution by creating more regulations for ISPs has been a major work in progress.
Net neutrality rules were repealed in the US in 2017 during the Trump administration, and subsequent legal challenges left the repeal in place, until the FCC voted to reinstate the rules in 2024. On 2 January 2025, however, a US appeals court ruled that the Federal Communications Commission did not have the legal authority to reinstate the landmark net neutrality rules. Governments of countries that comment on net neutrality usually support the concept. United States Net neutrality in the United States has been a point of conflict between network users and service providers since the 1990s. Much of the conflict over net neutrality arises from how Internet services are classified by the Federal Communications Commission (FCC) under the authority of the Communications Act of 1934. The FCC would have significant ability to regulate ISPs should Internet services be treated as a Title II "common carrier service", whereas ISPs would be mostly unrestricted by the FCC if Internet services fell under Title I as "information services". In 2009, the United States Congress passed the American Recovery and Reinvestment Act of 2009, which granted a stimulus of $2.88 billion for extending broadband services into certain areas of the United States. It was intended to make the internet more accessible for under-served areas, and aspects of net neutrality and open access were written into the grant. However, the bill never set any significant precedents for net neutrality or influenced future legislation relating to net neutrality. Until 2017, the FCC had generally been favorable towards net neutrality, treating ISPs as Title II common carriers. With the onset of the presidency of Donald Trump in 2017, and the appointment of Ajit Pai, an opponent of net neutrality, as chairman of the FCC, the commission reversed many previous net neutrality rulings and reclassified Internet services as Title I information services. The FCC's decisions have been the subject of several ongoing legal challenges, both by states supporting net neutrality and by ISPs challenging it. The United States Congress has attempted to pass legislation supporting net neutrality but has failed to gain sufficient support. In 2018, a bill cleared the U.S. Senate, with Republicans Lisa Murkowski, John Kennedy, and Susan Collins joining all 49 Democrats, but the House majority denied the bill a hearing. Individual states have been trying to pass legislation to make net neutrality a requirement within their state, overriding the FCC's decision. California has successfully passed its own net neutrality act, which the United States Department of Justice challenged in court. On 8 February 2021, the U.S. Justice Department withdrew its challenge to California's net neutrality law. Federal Communications Commission Acting Chairwoman Jessica Rosenworcel voiced support for an open internet and restoring net neutrality. Vermont, Colorado, and Washington, among other states, have also enacted net neutrality laws. On 19 October 2023, the FCC voted 3–2 to approve a Notice of Proposed Rulemaking (NPRM) that seeks comments on a plan to restore net neutrality rules and regulation of Internet service providers. On 25 April 2024, the FCC voted 3–2 to reinstate net neutrality in the United States by reclassifying the Internet under Title II.
However, legal challenges immediately filed by ISPs resulted in an appeals court issuing an order that stayed the net neutrality rules until the court made a final ruling, along with the opinion that the ISPs would likely prevail over the FCC on the merits. On 2 January 2025, the net neutrality rules, which barred broadband providers from varying internet speeds depending on the website being accessed, were struck down by the Sixth Circuit. A three-judge panel of the US Court of Appeals for the Sixth Circuit ruled that federal law requires broadband to be classified as an "information service" and not the more heavily regulated "telecommunications service" that the Federal Communications Commission said it was when it adopted the rules in April 2024, and that the FCC therefore lacked the authority to impose its rules on the broadband providers. The ruling is one of the highest-profile examples yet of an appeals court relying on the newfound authority federal judges enjoy in the wake of Loper Bright Enterprises v. Raimondo, which overturned a doctrine that deferred to agency interpretations of ambiguous laws. The judges also rejected a similar FCC classification affecting mobile broadband providers. The case is MCP No. 185 Open Internet Rule (FCC 24-52), 6th Cir., No. 24-7000, 1/2/25. Canada Net neutrality in Canada is a debated issue, though not to the degree of partisanship seen in other nations such as the United States, in part because of its federal regulatory structure and pre-existing supportive laws that were enacted decades before the debate arose. In Canada, ISPs generally provide Internet service in a neutral manner. Notable exceptions have included Bell Canada's throttling of certain protocols and Telus's censorship of a specific website supporting striking union members. In the Bell Canada case, the net neutrality debate became a more popular topic when it was revealed that the company was throttling traffic and limiting people's ability to view Canada's Next Great Prime Minister, which eventually led the Canadian Association of Internet Providers (CAIP) to demand that the Canadian Radio-television and Telecommunications Commission (CRTC) take action to prevent the throttling of third-party traffic. On 22 October 2009, the CRTC issued a ruling about internet traffic management, which favored adopting guidelines that were suggested by interest groups such as OpenMedia.ca and the Open Internet Coalition. However, the guidelines set in place require citizens to file formal complaints proving that their internet traffic is being throttled, and as a result, some ISPs still continue to throttle the internet traffic of their users. India In 2018, the Indian Government unanimously approved new regulations supporting net neutrality. The regulations are considered to be the "world's strongest" net neutrality rules, guaranteeing free and open Internet for nearly half a billion people, and are expected to help the culture of startups and innovation. The only exceptions to the rules are new and emerging services like autonomous driving and tele-medicine, which may require prioritized internet lanes and faster than normal speeds. China Net neutrality in China is not enforced, and ISPs in China play important roles in regulating the content that is available domestically on the internet.
There are several ISPs filtering and blocking content at the national level, preventing domestic internet users from accessing certain sites or services or foreign internet users from gaining access to domestic web content. This filtering technology is referred to as the Great Firewall, or GFW. In an article published by the Cambridge University Press, they observed the political environment with net neutrality in China. Chinese ISPs have become a way for the country to control and restrict information rather than providing neutral internet content for those who use the internet. Philippines Net neutrality in the Philippines is not enforced. Mobile Internet providers like Globe Telecom and Smart Communications commonly offer data package promos tied to specific applications, games or websites like Facebook, Instagram, and TikTok. In the mid-2010s, Philippine telcos came under fire from the Department of Justice for throttling the bandwidth of subscribers of unlimited data plans if the subscribers exceeded arbitrary data caps imposed by the telcos under a supposed "fair use policy" on their "unlimited" plans. Certain adult sites like Pornhub, Redtube, and XTube have also been blocked by some Philippine ISPs at the request of the Philippine National Police to the National Telecommunications Commission, even without the necessary court orders required by the Supreme Court of the Philippines. Support Proponents of net neutrality regulations include consumer advocates, human rights organizations such as Article 19, online companies and some technology companies. Net neutrality tends to be supported by those on the political left, while opposed by those on the political right. Many major Internet application companies are advocates of neutrality, such as eBay, Amazon, Netflix, Reddit, Microsoft, Twitter, Etsy, IAC Inc., Yahoo!, Vonage, and Cogent Communications. In September 2014, an online protest known as Internet Slowdown Day took place to advocate for the equal treatment of internet traffic. Notable participants included Netflix and Reddit. Consumer Reports, the Open Society Foundations along with several civil rights groups, such as the ACLU, the Electronic Frontier Foundation, Free Press, SaveTheInternet, and Fight for the Future support net neutrality. Individuals who support net neutrality include World Wide Web inventor Tim Berners-Lee, Vinton Cerf, Lawrence Lessig, Robert W. McChesney, Steve Wozniak, Susan P. Crawford, Marvin Ammori, Ben Scott, David Reed, and former U.S. President Barack Obama. On 10 November 2014, Obama recommended that the FCC reclassify broadband Internet service as a telecommunications service to preserve net neutrality. On 31 January 2015, AP News reported that the FCC will present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 and section 706 of the Telecommunications act of 1996 to the Internet in a vote expected on 26 February 2015. Control of data Supporters of net neutrality in the United States want to designate cable companies as common carriers, which would require them to allow ISPs free access to cable lines, the same model used for dial-up Internet. They want to ensure that cable companies cannot screen, interrupt or filter Internet content without a court order. Common carrier status would give the FCC the power to enforce net neutrality rules. 
SaveTheInternet.com accuses cable and telecommunications companies of wanting the role of gatekeepers, being able to control which websites load quickly, load slowly, or do not load at all. According to SaveTheInternet.com, these companies want to charge content providers who require guaranteed speedy data delivery, in order to create advantages for their own search engines, Internet phone services, and streaming video services, while slowing or blocking access to those of competitors. Vinton Cerf, a co-inventor of the Internet Protocol and current vice president of Google, argues that the Internet was designed without any authorities controlling access to new content or new services. He concludes that the principles responsible for making the Internet such a success would be fundamentally undermined were broadband carriers given the ability to affect what people see and do online. Cerf has also written about the importance of looking at problems like net neutrality through a combination of the Internet's layered system and the multistakeholder model that governs it. He shows how challenges can arise that can implicate net neutrality in certain infrastructure-based cases, such as when ISPs enter into exclusive arrangements with large building owners, leaving the residents unable to exercise any choice in broadband provider. Digital rights and freedoms Proponents of net neutrality argue that a neutral net will foster free speech and lead to further democratic participation on the Internet. Former Senator Al Franken from Minnesota fears that without new regulations, the major Internet Service Providers will use their position of power to stifle people's rights. He calls net neutrality the "First Amendment issue of our time." The past two decades have seen an ongoing battle to ensure that all people and websites have equal access to an unrestricted platform, regardless of their ability to pay; proponents of net neutrality wish to prevent a need to pay for speech and the further centralization of media power. Lawrence Lessig and Robert W. McChesney argue that net neutrality ensures that the Internet remains a free and open technology, fostering democratic communication. Lessig and McChesney go on to argue that the monopolization of the Internet would stifle the diversity of independent news sources and the generation of innovative and novel web content. User intolerance for slow-loading sites Proponents of net neutrality invoke the human psychological process of adaptation: when people get used to something better, they do not want to go back to something worse. In the context of the Internet, the proponents argue that a user who gets used to the "fast lane" on the Internet would find the slow lane intolerable in comparison, greatly disadvantaging any provider who is unable to pay for the fast lane. Video providers Netflix and Vimeo, in their comments to the FCC in favor of net neutrality, cite the research of S.S. Krishnan and Ramesh Sitaraman that provides the first quantitative evidence of adaptation to speed among online video users. Their research studied the patience level of millions of Internet video users who waited for a slow-loading video to start playing. Users who had faster Internet connectivity, such as fiber-to-the-home, demonstrated less patience and abandoned their videos sooner than similar users with slower Internet connectivity.
The results demonstrate how users can get used to faster Internet connectivity, leading to higher expectations of Internet speed, and lower tolerance for any delay that occurs. Author Nicholas Carr and other social commentators have written about the habituation phenomenon by stating that a faster flow of information on the Internet can make people less patient. Competition and innovation Net neutrality advocates argue that allowing cable companies the right to demand a toll to guarantee quality or premium delivery would create an exploitative business model based on the ISPs position as gatekeepers. Advocates warn that by charging websites for access, network owners may be able to block competitor Web sites and services, as well as refuse access to those unable to pay. According to Tim Wu, cable companies plan to reserve bandwidth for their own television services, and charge companies a toll for priority service. Proponents of net neutrality argue that allowing for preferential treatment of Internet traffic, or tiered service, would put newer online companies at a disadvantage and slow innovation in online services. Tim Wu argues that, without network neutrality, the Internet will undergo a transformation from a market ruled by innovation to one ruled by deal-making. SaveTheInternet.com argues that net neutrality puts everyone on equal terms, which helps drive innovation. They claim it is a preservation of the way the Internet has always operated, where the quality of websites and services determined whether they succeeded or failed, rather than deals with ISPs. Lawrence Lessig and Robert W. McChesney argue that eliminating net neutrality would lead to the Internet resembling the world of cable TV, so that access to and distribution of content would be managed by a handful of massive, near monopolistic companies, though there are multiple service providers in each region. These companies would then control what is seen as well as how much it costs to see it. Speedy and secure Internet use for such industries as healthcare, finance, retailing, and gambling could be subject to large fees charged by these companies. They further explain that a majority of the great innovators in the history of the Internet started with little capital in their garages, inspired by great ideas. This was possible because the protections of net neutrality ensured limited control by owners of the networks, maximal competition in this space, and permitted innovators from outside access to the network. Internet content was guaranteed a free and highly competitive space by the existence of net neutrality. For example, back in 2005, YouTube was a small startup company. Due to the absence of Internet fast lanes, YouTube had the ability to grow larger than Google Video. Tom Wheeler and Senators Ronald Lee Wyden (D-Ore.) and Al Franken (D-Minn.) said, "Internet service providers treated YouTube's videos the same as they did Google's, and Google couldn't pay the ISPs [Internet service providers] to gain an unfair advantage, like a fast lane into consumers' homes," they wrote. "Well, it turned out that people liked YouTube a lot more than Google Video, so YouTube thrived." The lack of competition among internet providers has been cited as a major reason to support net neutrality. The loss of net neutrality in 2017 in the U.S. increased the calls for public broadband. 
Preserving Internet standards Net neutrality advocates have sponsored legislation claiming that authorizing incumbent network providers to override transport and application layer separation on the Internet would signal the decline of fundamental Internet standards and international consensus authority. Further, the legislation asserts that bit-shaping the transport of application data will undermine the transport layer's designed flexibility. End-to-end principle Some advocates say network neutrality is needed to maintain the end-to-end principle. According to Lawrence Lessig and Robert W. McChesney, all content must be treated the same and must move at the same speed for net neutrality to be true. They say that it is this simple but brilliant end-to-end aspect that has allowed the Internet to act as a powerful force for economic and social good. Under this principle, a neutral network is a dumb network, merely passing packets regardless of the applications they support. This point of view was expressed by David S. Isenberg in his paper, The Rise of the Stupid Network. He states that the vision of an intelligent network is being replaced by a new network philosophy and architecture in which the network is designed for always-on use, not intermittence and scarcity. Rather than intelligence being designed into the network itself, the intelligence would be pushed out to the end-user devices; and the network would be designed simply to deliver bits without fancy network routing or smart number translation. The data would be in control, telling the network where it should be sent. End-user devices would then be allowed to behave flexibly, as bits would essentially be free and there would be no assumption that the data is of a single data rate or data type. Contrary to this idea, the research paper titled End-to-end arguments in system design by Saltzer, Reed, and Clark argues that network intelligence does not relieve end systems of the requirement to check inbound data for errors and to rate-limit the sender, nor for wholesale removal of intelligence from the network core. Criticism Opponents of net neutrality regulations include ISPs, broadband and telecommunications companies, computer hardware manufacturers, economists, and notable technologists. Many of the major hardware and telecommunications companies specifically oppose the reclassification of broadband as a common carrier under Title II. Corporate opponents of this measure include Comcast, AT&T, Verizon, IBM, Intel, Cisco, Nokia, Qualcomm, Broadcom, Juniper, D-Link, Wintel, Alcatel-Lucent, Corning, Panasonic, Ericsson, Oracle, Akamai, and others. The US Telecom and Broadband Association, which represents a diverse array of small and large broadband providers, is also an opponent. A 2006 campaign against net neutrality was funded by AT&T and members included BellSouth, Alcatel, Cingular, and Citizens Against Government Waste. Nobel Memorial Prize-winning economist Gary Becker's paper titled, "Net Neutrality and Consumer Welfare", published by the Journal of Competition Law & Economics, argues that claims by net neutrality proponents "do not provide a compelling rationale for regulation" because there is "significant and growing competition" among broadband access providers. Google chairman Eric Schmidt states that, while Google views that similar data types should not be discriminated against, it is okay to discriminate across different data types—a position that both Google and Verizon generally agree on, according to Schmidt. 
According to the Journal, when President Barack Obama announced his support for strong net neutrality rules late in 2014, Schmidt told a top White House official the president was making a mistake. Google once strongly advocated net-neutrality–like rules prior to 2010, but their support for the rules has since diminished; the company however still remains "committed" to net neutrality. Individuals who opposed net neutrality rules include Bob Kahn, Marc Andreessen, Scott McNealy, Peter Thiel and Max Levchin, David Farber, David Clark, Louis Pouzin, MIT Media Lab co-founder Nicholas Negroponte, Rajeev Suri, Jeff Pulver, Mark Cuban, Robert Pepper and former FCC chairman Ajit Pai. Nobel Prize laureate economists who opposed net neutrality rules include Princeton economist Angus Deaton, Chicago economist Richard Thaler, MIT economist Bengt Holmström, and the late Chicago economist Gary Becker. Others include MIT economists David Autor, Amy Finkelstein, and Richard Schmalensee; Stanford economists Raj Chetty, Darrell Duffie, Caroline Hoxby, and Kenneth Judd; Harvard economist Alberto Alesina; Berkeley economists Alan Auerbach and Emmanuel Saez; and Yale economists William Nordhaus, Joseph Altonji and Pinelopi Goldberg. Some civil rights groups, such as the National Urban League, Jesse Jackson's Rainbow/PUSH, and League of United Latin American Citizens, also opposed Title II net neutrality regulations, citing concerns over stifling investment in underserved areas. The Wikimedia Foundation, which runs Wikipedia, told The Washington Post in 2014 that it had a "complicated relationship" with net neutrality. The organization partnered with telecommunications companies to provide free access to Wikipedia for people in developing countries, under a program called Wikipedia Zero, without requiring mobile data to access information. The concept is known as zero rating. Said Wikimedia Foundation officer Gayle Karen Young, "Partnering with telecom companies in the near term, it blurs the net neutrality line in those areas. It fulfills our overall mission, though, which is providing free knowledge." Farber has written and spoken strongly in favor of continued research and development on core Internet protocols. He joined academic colleagues Michael Katz, Christopher Yoo, and Gerald Faulhaber in an op-ed for The Washington Post critical of network neutrality, stating that while the Internet is in need of remodeling, congressional action aimed at protecting the best parts of the current Internet could interfere with efforts to build a replacement. Reduction in investment According to a letter to FCC commissioners and key congressional leaders sent by 60 major ISP technology suppliers including IBM, Intel, Qualcomm, and Cisco, Title II regulation of the Internet "means that instead of billions of broadband investment driving other sectors of the economy forward, any reduction in this spending will stifle growth across the entire economy. This is not idle speculation or fear mongering...Title II is going to lead to a slowdown, if not a hold, in broadband build out, because if you don't know that you can recover on your investment, you won't make it." According to the Wall Street Journal, in one of Google's few lobbying sessions with FCC officials, the company urged the agency to craft rules that encourage investment in broadband Internet networks—a position that mirrors the argument made by opponents of strong net neutrality rules, such as AT&T and Comcast. 
Opponents of net neutrality argue that prioritization of bandwidth is necessary for future innovation on the Internet. Telecommunications providers such as telephone and cable companies, and some technology companies that supply networking gear, argue telecom providers should have the ability to provide preferential treatment in the form of tiered services, for example by giving online companies willing to pay the ability to transfer their data packets faster than other Internet traffic. The added income from such services could be used to pay for the building of increased broadband access to more consumers. Opponents say that net neutrality would make it more difficult for ISPs and other network operators to recoup their investments in broadband networks. John Thorne, senior vice president and deputy general counsel of Verizon, a broadband and telecommunications company, has argued that they will have no incentive to make large investments to develop advanced fibre-optic networks if they are prohibited from charging higher preferred access fees to companies that wish to take advantage of the expanded capabilities of such networks. Thorne and other ISPs have accused Google and Skype of freeloading or free riding for using a network of lines and cables the phone company spent billions of dollars to build. Marc Andreessen states that "a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks. If you're a large telco right now, you spend on the order of $20 billion a year on capex [capital expenditure]. You need to know how you're going to get a return on that investment. If you have these pure net neutrality rules where you can never charge a company like Netflix anything, you're not ever going to get a return on continued network investment – which means you'll stop investing in the network. And I would not want to be sitting here 10 or 20 years from now with the same broadband speeds we're getting today." Proponents of net neutrality regulations say network operators have continued to under-invest in infrastructure. However, according to Copenhagen Economics, U.S. investment in telecom infrastructure is 50 percent higher than in the European Union. As a share of GDP, the United States' broadband investment rate per GDP trails only the UK and South Korea slightly, but exceeds Japan, Canada, Italy, Germany, and France sizably. On broadband speed, Akamai reported that the US trails only South Korea and Japan among its major trading partners, and trails only Japan in the G-7 in both average peak connection speed and percentage of the population connection at 10 Mbit/s or higher, but are substantially ahead of most of its other major trading partners. The White House reported in June 2013 that U.S. connection speeds are "the fastest compared to other countries with either a similar population or land mass." Akamai's report on "The State of the Internet" in the 2nd quarter of 2014 says "a total of 39 states saw 4K readiness rate more than double over the past year." In other words, as ZDNet reports, those states saw a major increase in the availability of the 15 Mbit/s speed needed for 4K video. According to the Progressive Policy Institute and ITU data, the United States has the most affordable entry-level prices for fixed broadband in the OECD. In Indonesia, there is a very high number of Internet connections that are subject to exclusive deals between the ISP and the building owner. 
Representatives of Google, Inc claim that changing this dynamic could unlock much more consumer choices and higher speeds. Former FCC Commissioner Ajit Pai and Federal Election Commission's Lee Goldman also wrote in a Politico piece in February 2015, "Compare Europe, which has long had utility-style regulations, with the United States, which has embraced a light-touch regulatory model. Broadband speeds in the United States, both wired and wireless, are significantly faster than those in Europe. Broadband investment in the United States is several multiples that of Europe. And broadband's reach is much wider in the United States, despite its much lower population density." VOIP pioneer Jeff Pulver states that the uncertainty of the FCC imposing Title II, which experts said would create regulatory restrictions on using the Internet to transmit a voice call, was the "single greatest impediment to innovation" for a decade. According to Pulver, investors in the companies he helped found, like Vonage, held back investment because they feared the FCC could use Title II to prevent VOIP startups from bypassing telephone networks. Significant and growing competition, investment A 2010 paper on net neutrality by Nobel Prize economist Gary Becker and his colleagues stated that "there is significant and growing competition among broadband access providers and that few significant competitive problems have been observed to date, suggesting that there is no compelling competitive rationale for such regulation." Becker and fellow economists Dennis Carlton and Hal Sidler found that "Between mid-2002 and mid-2008, the number of high-speed broadband access lines in the United States grew from 16 million to nearly 133 million, and the number of residential broadband lines grew from 14 million to nearly 80 million. Internet traffic roughly tripled between 2007 and 2009. At the same time, prices for broadband Internet access services have fallen sharply." The PPI reports that the profit margins of U.S. broadband providers are generally one-sixth to one-eighth of companies that use broadband (such as Apple or Google), contradicting the idea of monopolistic price-gouging by providers. When FCC chairman Tom Wheeler redefined broadband from 4 Mbit/s to 25 Mbit/s (3.125 MB/s) or greater in January 2015, FCC commissioners Ajit Pai and Mike O'Reilly believed the redefinition was to set up the agency's intent to settle the net neutrality fight with new regulations. The commissioners argued that the stricter speed guidelines painted the broadband industry as less competitive, justifying the FCC's moves with Title II net neutrality regulations. A report by the Progressive Policy Institute in June 2014 argues that nearly every American can choose from at least 2–4 broadband Internet service providers, despite claims that there are only a "small number" of broadband providers. Citing research from the FCC, the Institute wrote that 90 percent of American households have access to at least one wired and one wireless broadband provider at speeds of at least 4 Mbit/s (500 kbyte/s) downstream and 1 Mbit/s (125 kbyte/s) upstream and that nearly 88 percent of Americans can choose from at least two wired providers of broadband disregarding speed (typically choosing between a cable and telco offering). Further, three of the four national wireless companies report that they offer 4G LTE to 250–300 million Americans, with the fourth (T-Mobile) sitting at 209 million and counting. 
Similarly, the FCC reported in June 2008 that 99.8% of ZIP codes in the United States had two or more providers of high-speed Internet lines available, and 94.6% of ZIP codes had four or more providers, as reported by University of Chicago economists Gary Becker, Dennis Carlton, and Hal Sider in a 2010 paper. Deterring competition FCC commissioner Ajit Pai states that the FCC completely brushes away the concerns of smaller competitors who are going to be subject to various taxes, such as state property taxes and general receipts taxes. As a result, according to Pai, that does nothing to create more competition within the market. According to Pai, the FCC's ruling to impose Title II regulations is opposed by the country's smallest private competitors and many municipal broadband providers. In his dissent, Pai noted that 142 wireless ISPs (WISPs) said that FCC's new "regulatory intrusion into our businesses ... would likely force us to raise prices, delay deployment expansion, or both." He also noted that 24 of the country's smallest ISPs, each with fewer than 1,000 residential broadband customers, wrote to the FCC stating that Title II "will badly strain our limited resources" because they "have no in-house attorneys and no budget line items for outside counsel." Further, another 43 municipal broadband providers told the FCC that Title II "will trigger consequences beyond the Commission's control and risk serious harm to our ability to fund and deploy broadband without bringing any concrete benefit for consumers or edge providers that the market is not already proving today without the aid of any additional regulation." According to a Wired magazine article by TechFreedom's Berin Szoka, Matthew Starr, and Jon Henke, local governments and public utilities impose the most significant barriers to entry for more cable broadband competition: "While popular arguments focus on supposed 'monopolists' such as big cable companies, it's government that's really to blame." The authors state that local governments and their public utilities charge ISPs far more than they actually cost and have the final say on whether an ISP can build a network. The public officials determine what requirements an ISP must meet to get approval for access to publicly owned rights of way (which lets them place their wires), thus reducing the number of potential competitors who can profitably deploy Internet services—such as AT&T's U-Verse, Google Fiber, and Verizon FiOS. Kickbacks may include municipal requirements for ISPs such as building out service where it is not demanded, donating equipment, and delivering free broadband to government buildings. According to a research article from MIS Quarterly, the authors stated their findings subvert some of the expectations of how ISPs and CPs act regarding net neutrality laws. The paper shows that even if an ISP is under restrictions, it still has the opportunity and the incentive to act as a gatekeeper over CPs by enforcing priority delivery of content. Counterweight to server-side non-neutrality Those in favor of forms of non-neutral tiered Internet access argue that the Internet is already not a level playing field, and that large companies achieve a performance advantage over smaller competitors by providing more and better-quality servers and buying high-bandwidth services. 
If scrapping net neutrality regulations were to precipitate a price drop for lower levels of access, or for access to only certain protocols, for instance, this would make Internet usage more adaptable to the needs of those individuals and corporations who specifically seek differentiated tiers of service. Network expert Richard Bennett has written, "A richly funded Web site, which delivers data faster than its competitors to the front porches of the Internet service providers, wants it delivered the rest of the way on an equal basis. This system, which Google calls broadband neutrality, actually preserves a more fundamental inequality." Potentially increased taxes FCC commissioner Ajit Pai, who opposed the 2015 Title II reclassification of ISPs, says that the ruling allows new fees and taxes on broadband by subjecting it to telephone-style taxes under the Universal Service Fund. Net neutrality proponent Free Press writes, "the average potential increase in taxes and fees per household would be far less" than the estimate given by net neutrality opponents, and that if there were to be additional taxes, the tax figure may be around US$4 billion. Under favorable circumstances, "the increase would be exactly zero." Meanwhile, the Progressive Policy Institute claims that Title II could trigger taxes and fees up to $11 billion a year. Financial website Nerd Wallet did its own assessment and settled on a possible US$6.25 billion tax impact, estimating that the average American household may see its tax bill increase US$67 annually. FCC spokesperson Kim Hart said that the ruling "does not raise taxes or fees. Period." Unnecessary regulations According to PayPal founder and Facebook investor Peter Thiel in 2011, "Net neutrality has not been necessary to date. I don't see any reason why it's suddenly become important, when the Internet has functioned quite well for the past 15 years without it. ... Government attempts to regulate technology have been extraordinarily counterproductive in the past." Max Levchin, the other co-founder of PayPal, echoed similar statements, telling CNBC, "The Internet is not broken, and it got here without government regulation and probably in part because of lack of government regulation." FCC Commissioner Ajit Pai, who was one of the two commissioners who opposed the net neutrality proposal, criticized the FCC's ruling on Internet neutrality, stating that the perceived threats from ISPs to deceive consumers, degrade content, or disfavor the content that they dislike are non-existent: "The evidence of these continuing threats? There is none; it's all anecdote, hypothesis, and hysteria. A small ISP in North Carolina allegedly blocked VoIP calls a decade ago. Comcast capped BitTorrent traffic to ease upload congestion eight years ago. Apple introduced FaceTime over Wi-Fi first, cellular networks later. Examples this picayune and stale aren't enough to tell a coherent story about net neutrality. The bogeyman never had it so easy." Commenting on Pai's preferred alternative, one 2017 analysis observed that "FCC chairman Pai wants to switch ISP rules from proactive restrictions to after-the-fact litigation, which means a lot more leeway for ISPs that don't particularly want to be treated as impartial utilities connecting people to the internet" (Atherton, 2017). 
FCC Commissioner Mike O'Reilly, the other opposing commissioner, also claims that the ruling is a solution to a hypothetical problem: "Even after enduring three weeks of spin, it is hard for me to believe that the Commission is establishing an entire Title II/net neutrality regime to protect against hypothetical harms. There is not a shred of evidence that any aspect of this structure is necessary. The D.C. Circuit called the prior, scaled-down version a 'prophylactic' approach. I call it guilt by imagination." In a Chicago Tribune article, FCC Commissioner Pai and Joshua Wright of the Federal Trade Commission argue that "the Internet isn't broken, and we don't need the president's plan to 'fix' it. Quite the opposite. The Internet is an unparalleled success story. It is a free, open and thriving platform." Inability to make the Internet accessible to the poor Opponents argue that net neutrality regulations prevent service providers from providing more affordable Internet access to those who can't afford it. Under net neutrality rules, ISPs would be unable to use a practice known as zero-rating to provide Internet access for free or at a reduced cost to the poor. For example, low-income users who can't afford bandwidth-hogging Internet services such as video streams could otherwise be exempted from paying through subsidies or advertising. However, under the rules, ISPs would not be able to discriminate traffic, thus forcing low-income users to pay for high-bandwidth usage like other users. The Wikimedia Foundation, which runs Wikipedia, created Wikipedia Zero to provide Wikipedia free-of-charge on mobile phones to low-income users, especially those in developing countries. However, the practice violates net neutrality rules as traffic would have to be treated equally regardless of the users' ability to pay. In 2014, Chile banned the practice of Internet service providers giving users free access to websites like Wikipedia and Facebook, saying the practice violates net neutrality rules. In 2016, India banned the Free Basics application run by Internet.org, which provides users in less developed countries with free access to a variety of websites like Wikipedia, BBC, Dictionary.com, health sites, Facebook, ESPN, and weather reports—ruling that the initiative violated net neutrality. Inability to allocate Internet traffic efficiently Net neutrality rules would prevent traffic from being allocated to the uses that need it most, according to David Farber. Because net neutrality regulations prevent discrimination of traffic, networks would have to treat critical traffic equally with non-critical traffic. According to Farber, "When traffic surges beyond the ability of the network to carry it, something is going to be delayed. When choosing what gets delayed, it makes sense to allow a network to favor traffic from, say, a patient's heart monitor over traffic delivering a music download. It also makes sense to allow network operators to restrict traffic that is downright harmful, such as viruses, worms and spam." Related issues Data discrimination Tim Wu, though a proponent of network neutrality, claims that the current Internet is not neutral as its implementation of best effort generally favors file transfer and other non-time-sensitive traffic over real-time communications. Generally, a network which blocks some nodes or services for the customers of the network would normally be expected to be less useful to the customers than one that did not. 
Therefore, for a network to remain significantly non-neutral requires either that the customers not be concerned about the particular non-neutralities or the customers not have any meaningful choice of providers, otherwise they would presumably switch to another provider with fewer restrictions. While the network neutrality debate continues, network providers often enter into peering arrangements among themselves. These agreements often stipulate how certain information flows should be treated. In addition, network providers often implement various policies such as blocking of port 25 to prevent insecure systems from serving as spam relays, or other ports commonly used by decentralized music search applications implementing peer-to-peer networking models. They also present terms of service that often include rules about the use of certain applications as part of their contracts with users. Most consumer Internet providers implement policies like these. The MIT Mantid Port Blocking Measurement Project is a measurement effort to characterize Internet port blocking and potentially discriminatory practices. However, the effect of peering arrangements among network providers are only local to the peers that enter into the arrangements and cannot affect traffic flow outside their scope. Jon Peha from Carnegie Mellon University believes it is important to create policies that protect users from harmful traffic discrimination while allowing beneficial discrimination. Peha discusses the technologies that enable traffic discrimination, examples of different types of discrimination, and the potential impacts of regulation. Google chairman Eric Schmidt aligns Google's views on data discrimination with Verizon's: "I want to be clear what we mean by Net neutrality: What we mean is if you have one data type like video, you don't discriminate against one person's video in favor of another. But it's okay to discriminate across different types. So you could prioritize voice over video. And there is general agreement with Verizon and Google on that issue." Echoing similar comments by Schmidt, Google's Chief Internet Evangelist and "father of the Internet", Vint Cerf, says that "it's entirely possible that some applications needs far more latency, like games. Other applications need broadband streaming capability in order to deliver real-time video. Others don't really care as long as they can get the bits there, like e-mail or file transfers and things like that. But it should not be the case that the supplier of the access to the network mediates this on a competitive basis, but you may still have different kinds of service depending on what the requirements are for the different applications." Content caching Content caching is the process by which frequently accessed contents are temporarily stored in strategic network positions (e.g., in servers close to the end-users) to achieve several performance objectives. For example, caching is commonly used by ISPs to reduce network congestion and results in a superior quality of experience (QoE) perceived by the final users. Since the storage available in cache servers is limited, caching involves a process of selecting the contents worth storing. Several cache algorithms have been designed to perform this process which, in general, leads to storing the most popular contents. The cached contents are retrieved at a higher QoE (e.g., lower latency), and caching can be therefore considered a form of traffic differentiation. 
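As a rough illustration of the content-selection step just described, the sketch below shows a popularity-based ("least frequently used") cache of the kind an edge server might run. It is a simplification: the capacity, item names, and request sequence are made up for illustration, and real cache servers use far more sophisticated admission and eviction policies.

```python
from collections import Counter

class PopularityCache:
    """Keep only the most frequently requested items, up to a fixed capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.requests = Counter()   # how often each content item has been asked for
        self.store = set()          # items currently held in the cache

    def request(self, item):
        self.requests[item] += 1
        if item in self.store:
            return "hit (served locally, lower latency)"
        # Admit the item only by evicting a strictly less popular one when full.
        if len(self.store) >= self.capacity:
            least_popular = min(self.store, key=lambda x: self.requests[x])
            if self.requests[least_popular] < self.requests[item]:
                self.store.remove(least_popular)
                self.store.add(item)
        else:
            self.store.add(item)
        return "miss (fetched from origin)"

cache = PopularityCache(capacity=2)
for video in ["a", "b", "a", "c", "a", "b"]:
    print(video, cache.request(video))
```

Because only the most popular items end up being served from the cache, users requesting them see a better QoE than users requesting unpopular items, which is why this selection step can be read as a form of traffic differentiation.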
However, caching is not generally viewed as a form of discriminatory traffic differentiation. For example, the technical writer Adam Marcus states that "accessing content from edge servers may be a bit faster for users, but nobody is being discriminated against and most content on the Internet is not latency-sensitive". In line with this statement, caching is not regulated by legal frameworks that are favourable to Net Neutrality, such as the Open Internet Order issued by the FCC in 2015. Even more so, the legitimacy of caching has never been put in doubt by opponents of Net Neutrality. On the contrary, the complexity of caching operations (e.g., extensive information processing) has been successively regarded by the FCC as one of the technical reasons why ISPs should not be considered common carriers, which legitimates the abrogation of Net Neutrality rules. Under a Net Neutrality regime, prioritization of a class of traffic with respect to another one is allowed only if several requirements are met (e.g., objectively different QoS requirements). However, when it comes to caching, a selection of contents of the same class has to be performed (e.g., set of videos worth storing in cache servers). In the spirit of general deregulation with regard to caching, there is no rule that specifies how this process can be carried out in a non-discriminatory way. Nevertheless, the scientific literature considers the issue of caching as a potentially discriminatory process and provides possible guidelines to address it. For example, a non-discriminatory caching might be performed considering the popularity of contents, or with the aim of guaranteeing the same QoE to all the users, or, alternatively, to achieve some common welfare objectives. As far as content delivery networks (CDNs) are concerned, the relationship between caching and Net Neutrality is even more complex. In fact, CDNs are employed to allow scalable and highly-efficient content delivery rather than to grant access to the Internet. Consequently, differently from ISPs, CDNs are entitled to charge content providers for caching their content. Therefore, although this may be regarded as a form of paid traffic prioritization, CDNs are not subject to Net Neutrality regulations and are rarely included in the debate. Despite this, it is argued by some that the Internet ecosystem has changed to such an extent that all the players involved in the content delivery can distort competition and should be therefore also included in the discussion around Net Neutrality. Among those, the analyst Dan Rayburn suggested that "the Open Internet Order enacted by the FCC in 2015 was myopically focussed on ISPs". Quality of service Internet routers forward packets according to the different peering and transport agreements that exist between network operators. Many internets using Internet protocols now employ quality of service (QoS), and Network Service Providers frequently enter into Service Level Agreements with each other embracing some sort of QoS. There is no single, uniform method of interconnecting networks using IP, and not all networks that use IP are part of the Internet. IPTV networks are isolated from the Internet and are therefore not covered by network neutrality agreements. The IP datagram includes a 3-bit wide Precedence field and a larger DiffServ Code Point (DSCP) that are used to request a level of service, consistent with the notion that protocols in a layered architecture offer service through Service Access Points. 
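As a small illustration of the Precedence/DSCP marking just described (and not of any particular provider's practice), an application can request a per-hop service level by setting the DSCP bits of its outgoing packets; whether any network honors the marking is exactly the contractual question raised below. The address and port here are placeholders, the DSCP value is just one commonly cited class, and the IP_TOS socket option is only available on platforms that expose it:

```python
import socket

DSCP_EF = 46                 # "Expedited Forwarding", a code point often associated with voice
tos_byte = DSCP_EF << 2      # DSCP occupies the upper six bits of the former ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# Datagrams sent from this socket now carry the requested code point in their IP headers.
sock.sendto(b"voice sample", ("192.0.2.10", 40000))
```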
This field is sometimes ignored, especially if it requests a level of service outside the originating network's contract with the receiving network. It is commonly used in private networks, especially those including Wi-Fi networks where priority is enforced. While there are several ways of communicating service levels across Internet connections, such as SIP, RSVP, IEEE 802.11e, and MPLS, the most common scheme combines SIP and DSCP. Router manufacturers now sell routers that have logic enabling them to route traffic for various Classes of Service at wire-speed. Quality of service is sometimes measured with tools that test a user's connection quality, such as the Network Diagnostic Tool (NDT) and services like speedtest.net. National Regulatory Authorities (NRAs) can use such QoS measurements as a way of detecting net neutrality violations. However, there are very few examples of such measurements being used in any significant way by NRAs, or in network policy for that matter. Often, these tools go unused not because they fail at recording the results they are meant to record, but because the measurements are inflexible and difficult to exploit for any significant purpose. According to Ioannis Koukoutsidis, the problems with the current tools used to measure QoS stem from a lack of a standard detection methodology, a need to be able to detect various methods in which an ISP might violate Net Neutrality, and the inability to test an average measurement for a specific population of users. With the emergence of multimedia, VoIP, IPTV, and other applications that benefit from low latency, various attempts to address the inability of some private networks to limit latency have arisen, including the proposition of offering tiered service levels that would shape Internet transmissions at the network layer based on application type. These efforts are ongoing and are starting to yield results as wholesale Internet transport providers begin to amend service agreements to include service levels. Advocates of net neutrality have proposed several methods to implement a net-neutral Internet that includes a notion of quality-of-service: An approach offered by Tim Berners-Lee allows discrimination between different tiers while enforcing strict neutrality of data sent at each tier: "If I pay to connect to the Net with a given quality of service, and you pay to connect to the net with the same or higher quality of service, then you and I can communicate across the net, with that quality and quantity of service." "[We] each pay to connect to the Net, but no one can pay for exclusive access to me." United States lawmakers have introduced bills that would now allow quality of service discrimination for certain services as long as no special fee is charged for higher-quality service. Wireless networks There are also some discrepancies in how wireless networks affect the implementation of net neutrality policy, some of which are noted in the studies of Christopher Yoo. In one research article, he claimed that "...bad handoffs, local congestion, and the physics of wave propagation make wireless broadband networks significantly less reliable than fixed broadband networks." Pricing models Broadband Internet access has most often been sold to users based on Excess Information Rate or maximum available bandwidth. 
If ISPs can provide varying levels of service to websites at various prices, this may be a way to manage the costs of unused capacity by selling surplus bandwidth (or "leverage price discrimination to recoup costs of 'consumer surplus'"). However, purchasers of connectivity on the basis of Committed Information Rate or guaranteed bandwidth capacity must be able to expect that the capacity they purchase will meet their communications requirements. Various studies have sought to provide network providers with the necessary formulas for adequately pricing such a tiered service for their customer base. But while network neutrality is primarily focused on protocol-based provisioning, most of the pricing models are based on bandwidth restrictions. Many economists have analyzed Net Neutrality to compare various hypothetical pricing models. For instance, economics professors Michael L. Katz and Benjamin E. Hermalin at the University of California, Berkeley co-published a paper titled "The Economics of Product-Line Restrictions with an Application to the Network Neutrality Debate" in 2007. In this paper, they compared the single-service economic equilibrium to the multi-service economic equilibria under Net Neutrality. Reactions to removing net neutrality in the US On 12 July 2017, an event called the Day of Action was held to advocate net neutrality in the United States in response to Ajit Pai's plans to remove government policies that upheld net neutrality. Several websites participated in this event, including Amazon, Netflix, Google, and several other equally well-known websites. The gathering was called "the largest online protest in history." Websites chose many different ways to convey their message. The founder of the web, Tim Berners-Lee, published a video defending the FCC's rules. Reddit made a pop-up message that loaded slowly to illustrate the effect of removing net neutrality. Other websites also put up some less obvious notifications, such as Amazon, which put up a hard-to-notice link, or Google, which put up a policy blog post as opposed to a more obvious message. A poll conducted by Mozilla showed strong support for net neutrality across US political parties. Of the approximately 1,000 responses received by the poll, 76% of Americans overall, 81% of Democrats, and 73% of Republicans supported net neutrality. The poll also showed that 78% of Americans did not think that the Trump administration could be trusted to protect access to the Internet. Net neutrality supporters also submitted numerous comments on the FCC website opposing plans to remove net neutrality, especially after a segment by John Oliver regarding this topic was aired on his show Last Week Tonight. He urged his viewers to comment on the FCC's website, and the flood of comments that were received crashed the FCC's website, with the resulting media coverage of the incident inadvertently helping the issue reach greater audiences. However, in response, Ajit Pai selected one particular comment that specifically supported removal of net neutrality policies. At the end of August, the FCC released more than 13,000 pages of net neutrality complaints filed by consumers, one day before the deadline for the public to comment on Ajit Pai's proposal to remove net neutrality. It has been implied that the FCC ignored evidence against its proposal in order to remove the protections faster. It has also been noted that nowhere was it mentioned how the FCC made any attempt to resolve the complaints made. 
Regardless, Ajit Pai's proposal drew more than 22 million comments, though a large share of them were spam. However, there were 1.5 million personalized comments, 98.5% of them protesting Ajit Pai's plan. Fifty senators endorsed a legislative measure to override the Federal Communications Commission's decision to deregulate the broadband industry. The Congressional Review Act paperwork was filed on 9 May 2018, which allowed the Senate to vote on the permanence of the new net neutrality rules proposed by the Federal Communications Commission. The vote passed and a resolution was approved to try to remove the FCC's new rules on net neutrality; however, officials doubted there was enough time to completely repeal the rules before the Open Internet Order officially expired on 11 June 2018. A September 2018 report from Northeastern University and the University of Massachusetts, Amherst found that U.S. telecom companies were indeed slowing Internet traffic to and from popular video services and other widely used apps. In March 2019, congressional supporters of net neutrality introduced the Save the Internet Act in both the House and Senate, which if passed would reverse the FCC's 2017 repeal of net neutrality protections. Rural digital divide The digital divide refers to the gap in access to the Internet and digital technologies between different groups, often framed as urban versus rural areas. In the U.S., city government technology leaders warned in 2017 that the FCC's repeal of net neutrality would widen the digital divide and negatively affect small businesses and job opportunities for middle-class and low-income citizens. The FCC reports on its website that only 65 percent of Americans in rural areas have access to high-speed Internet, compared with 97 percent in urban areas. Public Knowledge has stated that this will have a larger impact on those living in rural areas without internet access. In developing countries like India, where reliable electricity and internet connections are often lacking, only 9 percent of those living in rural areas have internet access, compared to 64 percent of those in urban areas. See also Concentration of media ownership Economic rent Industrial information economy Killswitch (film) Media regulation Search neutrality Switzerland (software) References External links The WIRED Guide to Net Neutrality by WIRED. 5 May 2020. Net Neutrality, a 2014 comic by The Oatmeal Internet access
Net neutrality
Technology,Engineering
15,696
48,831,673
https://en.wikipedia.org/wiki/Alectinib
Alectinib (INN), sold under the brand name Alecensa, is an anticancer medication that is used to treat non-small-cell lung cancer (NSCLC). It blocks the activity of anaplastic lymphoma kinase (ALK). It is taken by mouth. It was developed by Chugai Pharmaceutical Co. Japan, which is part of the Hoffmann-La Roche group. The most common side effects include constipation, muscle pain and edema (swelling) including of the ankles and feet, the face, the eyelids and the area around the eyes. Alectinib was approved for medical use in Japan in 2014, the United States in 2015, Canada in 2016, Australia in 2017, the European Union in 2017, and the United Kingdom in 2021. Medical uses In the European Union, alectinib is indicated for the first-line treatment of adults with anaplastic lymphoma kinase (ALK)-positive advanced non-small cell lung cancer (NSCLC); and for the treatment of adults with ALK‑positive advanced NSCLC previously treated with crizotinib. In the United States, it is indicated for the treatment of people with anaplastic lymphoma kinase (ALK)-positive metastatic non-small cell lung cancer (NSCLC) as detected by an FDA-approved test. In April 2024, the US Food and Drug Administration (FDA) expanded the indication of alectinib to include adjuvant treatment following tumor resection in people with anaplastic lymphoma kinase (ALK)-positive non-small cell lung cancer (NSCLC), as detected by an FDA-approved test. Contraindications There are no reported contraindications. Side effects Apart from unspecific gastrointestinal effects such as constipation (in 34% of patients) and nausea (22%), common adverse effects in studies included oedema (swelling; 34%), myalgia (muscle pain; 31%), anaemia (low red blood cell count), sight disorders, light sensitivity and rashes (all below 20%). Serious side effects occurred in 19% of patients; fatal ones in 2.8%. Interactions Alectinib has a low potential for interactions. While it is metabolised by the liver enzyme CYP3A4, and blockers of this enzyme accordingly increase its concentrations in the body, they also decrease concentrations of the active metabolite M4, resulting in only a small overall effect. Conversely, CYP3A4 inducers decrease alectinib concentrations and increase M4 concentrations. Interactions via other CYP enzymes and transporter proteins cannot be excluded but are unlikely to be of clinical significance. Pharmacology Mechanism of action The substance potently and selectively blocks two receptor tyrosine kinase enzymes: anaplastic lymphoma kinase (ALK) and the RET proto-oncogene. The active metabolite M4 has similar activity against ALK. Inhibition of ALK subsequently blocks cell signalling pathways, including STAT3 and the PI3K/AKT/mTOR pathway, and induces death (apoptosis) of tumour cells. Pharmacokinetics When taken with a meal, the absolute bioavailability of the drug is 37%, and highest blood plasma concentrations are reached after four to six hours. Steady state conditions are reached within seven days. Plasma protein binding of alectinib and M4 is over 99%. The enzyme mainly responsible for alectinib metabolism is CYP3A4; other CYP enzymes and aldehyde dehydrogenases only play a small role. Alectinib and M4 account for 76% of the circulating substance, while the rest are minor metabolites. Plasma half-life of alectinib is 32.5 hours, and that of M4 is 30.7 hours. 98% are excreted via the faeces, of which 84% are unchanged alectinib and 6% are M4. Less than 1% are found in the urine. Chemistry Alectinib has a pKa of 7.05. 
It is used in the form of the hydrochloride, which is a white to yellow-white lumpy powder. History The approvals were based mainly on two trials: In a Japanese Phase I–II trial, after approximately 2 years, 19.6% of patients had achieved a complete response, and the 2-year progression-free survival rate was 76%. In February 2016 the J-ALEX phase III study comparing alectinib with crizotinib was terminated early because an interim analysis showed that progression-free survival was longer with alectinib. In November 2017, the FDA approved alectinib for the first-line treatment of people with ALK-positive metastatic non-small cell lung cancer. This was based on the phase 3 ALEX trial comparing it with crizotinib. Efficacy was demonstrated in a global, randomized, open-label trial (ALINA, NCT03456076) in participants with ALK-positive NSCLC who had complete tumor resection. Eligible participants were required to have resectable stage IB (tumors ≥ 4 cm) to IIIA NSCLC (by AJCC 7th edition) with ALK rearrangements identified by a locally performed FDA-approved ALK test or by a centrally performed VENTANA ALK (D5F3) CDx assay. A total of 257 participants were randomized (1:1) to receive alectinib 600 mg orally twice daily or platinum-based chemotherapy following tumor resection. The application was granted priority review and orphan drug designations. In April 2024, the FDA approved alectinib as an adjuvant treatment for people with ALK-positive early-stage lung cancer. This was based on the Phase III ALINA study [NCT03456076]. In April 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion for the use of alectinib for adjuvant treatment of resected non-small cell lung cancer (NSCLC). In June 2024, the EU approved alectinib as an adjuvant treatment for people in the EU with ALK-positive early-stage lung cancer. This was based on the Phase III ALINA study [NCT03456076]. In October 2024, the UK's NICE recommended alectinib as an adjuvant treatment for adults with stage 1B to 3A ALK-positive non-small-cell lung cancer. Society and culture Legal status Alectinib was approved in Japan in July 2014, for the treatment of ALK fusion-gene positive, unresectable, advanced or recurrent non-small-cell lung cancer (NSCLC). Alectinib was granted an accelerated approval by the US Food and Drug Administration (FDA) in December 2015, to treat people with advanced ALK-positive NSCLC whose disease worsened after, or who could not tolerate, treatment with crizotinib (Xalkori). It received conditional approval by the European Medicines Agency in February 2017, for the same indication. The approval was upgraded from conditional to full approval in December 2017. References External links Drugs developed by Hoffmann-La Roche Drugs developed by Genentech Carbazoles Ketones 4-Morpholinyl compounds Nitriles Piperidines Receptor tyrosine kinase inhibitors Orphan drugs
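As a back-of-the-envelope check on the figures in the Pharmacokinetics section above, and assuming idealized first-order elimination (a simplification; real accumulation depends on the dosing schedule and on the active metabolite M4), a plasma half-life of about 32.5 hours is consistent with the statement that steady state is reached within roughly seven days, since about five half-lives have elapsed by then:

```python
half_life_h = 32.5  # alectinib plasma half-life quoted in the Pharmacokinetics section

for days in (1, 3, 5, 7):
    elapsed_half_lives = days * 24 / half_life_h
    fraction_of_steady_state = 1 - 0.5 ** elapsed_half_lives
    print(f"day {days}: ~{fraction_of_steady_state:.0%} of steady state")
# Day 7 works out to roughly 97%, i.e. essentially at steady state.
```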
Alectinib
Chemistry
1,573
37,403,328
https://en.wikipedia.org/wiki/Ramaria%20rasilispora
Ramaria rasilispora, commonly known as the yellow coral, is a coral mushroom in the family Gomphaceae. Described as new to science in 1974, it is found in western North America south to Mexico, and in the eastern Himalaya. Taxonomy The species was first described scientifically in 1974 by American mycologists Currie Marr and Daniel Stuntz. The specific epithet rasilispora is derived from the roots rasil- (shaved, scraped, or worn smooth) and spora (spore). It is commonly known as the "yellow coral". Description The fruit bodies are large and broad, measuring or more tall and wide. They originate from a single thick, conical stem measuring long by wide; this base is branched up to seven times, and the branches are themselves polychotomously (multiply) or dichotomously (divided into two) branched. The branches are smooth and cream to pale yellow in color, except in young specimens that lack coloration. Primary branches are thick, from in diameter, while upper branches are usually thick. The context is fleshy to fibrous, but when dry has a consistency similar to bendable chalk. In young fruit bodies, the stipe and lower branches are whitish to light yellowish. Upper branches are light orange to apricot-yellow, maturing to a pale grayish-orange. Branch tips are initially the same color as the branch, but darken to brown in maturity or when dry. Fruit bodies have no distinctive taste or odor. Spores are cylindrical, with a surface texture ranging from smooth to finely warted, and measure 8–11 by 3–4 μm. The basidia (spore-bearing cells) are club-shaped, two- to four-spored (most have four spores), and measure 47–60 by 8–10 μm. The variety Ramaria rasilispora var. scatesina differs from the main type in the color of its fruit bodies, which, in both young and mature specimens, have branches that range from yellowish-white to light yellow. The fruit bodies are edible, and "quite popular" according to David Arora, who reports its use raw in salads, or candied like grapefruit rinds. Some people report a negative reaction to eating the mushroom. The fungi are sold in traditional markets in the Mexican municipalities of Ozumba and Chalco. Similar species Similar species include Ramaria flavigelatinosa and R. magnipes, the latter of which is close in appearance to var. rasilispora. Habitat and distribution The fruit bodies of Ramaria rasilispora grow on the ground in coniferous forests. Fruiting occurs in spring and summer. Common in western North America, its range extends south to Mexico and north to Alaska. Variety rasilispora is found in the Pacific Northwest. Variety scatesina, originally collected in coniferous forests of Idaho, has since been reported growing in a deciduous forest in the eastern Himalaya. References Gomphaceae Edible fungi Fungi described in 1974 Fungi of Asia Fungi of North America Fungus species
Ramaria rasilispora
Biology
640
5,165,846
https://en.wikipedia.org/wiki/Insectary%20plant
Insectary plants are those that attract insects. As such, beneficial insectary plants are intentionally introduced into an ecosystem to increase pollen and nectar resources required by the natural enemies of the harmful or unwanted insects pests. Beyond an effective natural control of pests, the beneficial insects also assist in pollination. The "friendly insects" include ladybeetles, bees, ground beetles, hoverflies, and parasitic wasps. Other animals that are frequently considered beneficial include lizards, spiders, toads, and hummingbirds. Beneficial insects are as much as ten times more abundant in the insectary plantings area. Mortality of scale insects (caused by natural enemies) can be double with insectary plantings. In addition, a diversity of insectary plants can increase the population of beneficial insects such that these levels can be sustained even when the insectary plants are removed or die off. For maximum benefit in the garden, insectary plants can be grown alongside desired garden plants that do not have this benefit. The insects attracted to the insectary plants will also help the other nearby garden plants. Many members of the family Apiaceae (formerly known as Umbelliferae) are excellent insectary plants. Fennel, angelica, coriander (cilantro), dill, and wild carrot all provide in great number the tiny flowers required by parasitic wasps. Various clovers, yarrow, and rue also attract parasitic and predatory insects. Low-growing plants, such as thyme, rosemary, or mint, provide shelter for ground beetles and other beneficial insects. Composite flowers (daisy and chamomile) and mints (spearmint, peppermint, or catnip) will attract predatory wasps, hoverflies, and robber flies. The wasps will catch caterpillars and grubs to feed their young, while the predatory and parasitic flies attack many kinds of insects, including leaf hoppers and caterpillars. Other insectary plants include: mustard plants such as Brassica juncea, Phacelia tanacetifolia, buckwheat (Fagopyrum esculentum), marigold (Tagetes patula), elderberry, Korean licorice mint (Agastache rugosa), blackberry, Convolvulus, Crataegus, Anthriscus sylvestris, Chrysanthemum segetum, Scrophularia, Rosa canina, Hedera helix, Centaurea cyanus, Eschscholzia californica, Prunus spinosa, Lobularia maritima. See also Biological pest control Companion planting Beneficial weed Insect hotel References External links Enhancing Biological Control with Beneficial Insectary Plants - July 1998 - description Benefits of Insectary Plants Natural Pest Control See: tables http://eartheasy.com/grow_garden_insectary.htm See: tables http://www.agroecology.org/cases/insectaryplants.htm case study Beneficial Insects for Gardens Those Amazing Hover Flies: Order Diptera, family Syrphidae Video: Efficient Intercropping with Sweet Alyssum for Aphid Control in Lettuce Insect ecology Plant ecology
Insectary plant
Biology
658
78,937,491
https://en.wikipedia.org/wiki/Tremella%20compacta
Tremella compacta is a species of fungus in the order Tremellales. It produces large, ochraceous yellow, compactly lobed, cartilaginous-gelatinous basidiocarps (fruit bodies) on dead branches of broadleaved trees. It was originally described from Brazil and is distributed in northern South America, Central America, and the Caribbean. Taxonomy The species was first published in 1895 by German mycologist Alfred Möller based on a collection from Blumenau. As a probable parasite of Stereum fruit bodies, Tremella compacta belongs in the genus Naematelia, but the species has not as yet undergone DNA sequencing to confirm this. Description Fruit bodies are tough-gelatinous, compactly lobed to cerebriform (brain-like), 35 to 60 mm across, the lobes hollow, ochraceous to apricot or pale orange-brown when fresh, drying hard and rigid. Microscopically, the hyphae have clamp connections. The basidia are tremelloid (ellipsoid, with oblique to vertical septa) and normally stalked, 2 to 4-celled, 10 to 16 by 7.5 to 14 μm. The basidiospores are ellipsoid, smooth, 7 to 9.5 by 5 to 6.5 μm. Similar species Naematelia aurantia occurs on Stereum hirsutum on broadleaved trees but typically has more leaf-like lobes and is bright yellow to yellow-orange. Habitat and distribution Tremella compacta occurs on broadleaved trees and appears to be a parasite on fruit bodies of Stereum species. The type collection was from Brazil, but it has also been reported from Belize, the Dominican Republic, Trinidad, Puerto Rico, Colombia, and Venezuela. References compacta Fungi described in 1895 Fungi of South America Fungi of Central America Fungus species
Tremella compacta
Biology
399
2,518,328
https://en.wikipedia.org/wiki/Herbrand%27s%20theorem
Herbrand's theorem is a fundamental result of mathematical logic obtained by Jacques Herbrand (1930). It essentially allows a certain kind of reduction of first-order logic to propositional logic. Herbrand's theorem is the logical foundation for most automatic theorem provers. Although Herbrand originally proved his theorem for arbitrary formulas of first-order logic, the simpler version shown here, restricted to formulas in prenex form containing only existential quantifiers, became more popular. Statement Let $(\exists y_1, \ldots, \exists y_n)\, F(y_1, \ldots, y_n)$ be a formula of first-order logic with $F(y_1, \ldots, y_n)$ quantifier-free, though it may contain additional free variables. This version of Herbrand's theorem states that the above formula is valid if and only if there exists a finite sequence of terms $t_{ij}$, possibly in an expansion of the language, with $1 \le i \le k$ and $1 \le j \le n$, such that $F(t_{11}, \ldots, t_{1n}) \lor \ldots \lor F(t_{k1}, \ldots, t_{kn})$ is valid. If it is valid, it is called a Herbrand disjunction for $(\exists y_1, \ldots, \exists y_n)\, F(y_1, \ldots, y_n)$. Informally: a formula in prenex form containing only existential quantifiers is provable (valid) in first-order logic if and only if a disjunction composed of substitution instances of the quantifier-free subformula of the original formula is a tautology (propositionally derivable). The restriction to formulas in prenex form containing only existential quantifiers does not limit the generality of the theorem, because formulas can be converted to prenex form and their universal quantifiers can be removed by Herbrandization. Conversion to prenex form can be avoided, if structural Herbrandization is performed. Herbrandization can be avoided by imposing additional restrictions on the variable dependencies allowed in the Herbrand disjunction. Proof sketch A proof of the non-trivial direction of the theorem can be constructed according to the following steps: If the formula $(\exists y_1, \ldots, \exists y_n)\, F(y_1, \ldots, y_n)$ is valid, then by completeness of cut-free sequent calculus, which follows from Gentzen's cut-elimination theorem, there is a cut-free proof of $\vdash (\exists y_1, \ldots, \exists y_n)\, F(y_1, \ldots, y_n)$. Starting from leaves and working downwards, remove the inferences that introduce existential quantifiers. Remove contraction inferences on previously existentially quantified formulas, since the formulas (now with terms substituted for previously quantified variables) might not be identical anymore after the removal of the quantifier inferences. The removal of contractions accumulates all the relevant substitution instances of $F(y_1, \ldots, y_n)$ in the right side of the sequent, thus resulting in a proof of $\vdash F(t_{11}, \ldots, t_{1n}), \ldots, F(t_{k1}, \ldots, t_{kn})$, from which the Herbrand disjunction $F(t_{11}, \ldots, t_{1n}) \lor \ldots \lor F(t_{k1}, \ldots, t_{kn})$ can be obtained. However, sequent calculus and cut-elimination were not known at the time of Herbrand's proof, and Herbrand had to prove his theorem in a more complicated way. Generalizations of Herbrand's theorem Herbrand's theorem has been extended to higher-order logic by using expansion-tree proofs. The deep representation of expansion-tree proofs corresponds to a Herbrand disjunction, when restricted to first-order logic. Herbrand disjunctions and expansion-tree proofs have been extended with a notion of cut. Due to the complexity of cut-elimination, Herbrand disjunctions with cuts can be non-elementarily smaller than a standard Herbrand disjunction. Herbrand disjunctions have been generalized to Herbrand sequents, allowing Herbrand's theorem to be stated for sequents: "a Skolemized sequent is derivable if and only if it has a Herbrand sequent". See also Herbrand structure Herbrand interpretation Herbrand universe Compactness theorem Notes References Proof theory Theorems in the foundations of mathematics Metatheorems
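A standard worked example for the Statement section above (the formula and the constant symbol are chosen purely for illustration): the formula $\exists y\, (P(y) \rightarrow P(f(y)))$ is valid, yet no single substitution instance of $P(y) \rightarrow P(f(y))$ is a propositional tautology. A Herbrand disjunction exists with two terms, $c$ and $f(c)$, where $c$ is a constant taken from an expansion of the language:
$$(P(c) \rightarrow P(f(c))) \lor (P(f(c)) \rightarrow P(f(f(c))))$$
is propositionally valid, because if the first disjunct is false then $P(c)$ is true and $P(f(c))$ is false, and a false $P(f(c))$ makes the second disjunct true.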
Herbrand's theorem
Mathematics
748
46,995,389
https://en.wikipedia.org/wiki/Epsilonpapillomavirus
Epsilonpapillomavirus is a genus of viruses in the family Papillomaviridae. Cattle serve as the natural hosts of these bovine papillomaviruses. There are two species in this genus. Diseases associated with this genus include fibropapillomas and true epithelial papillomas of the skin. Taxonomy The following two species are assigned to the genus: Epsilonpapillomavirus 1 Epsilonpapillomavirus 2 Structure Viruses in Epsilonpapillomavirus are non-enveloped, with icosahedral geometries, and T=7 symmetry. The diameter is around 60 nm. Genomes are circular, around 8 kb in length. The genome codes for 6 proteins. Life cycle Viral replication is nuclear. Entry into the host cell is achieved by attachment of the viral proteins to host receptors, which mediates endocytosis. Replication follows the dsDNA bidirectional replication model. DNA-templated transcription, with some alternative splicing mechanism, is the method of transcription. The virus exits the host cell by nuclear envelope breakdown. Cattle serve as the natural host. Transmission routes are contact. References External links ICTV Report Papillomaviridae Viralzone: Epsilonpapillomavirus Papillomavirus Virus genera
Epsilonpapillomavirus
Biology
266
6,413,932
https://en.wikipedia.org/wiki/Qualitative%20Research%20Reports%20in%20Communication
Qualitative Research Reports in Communication is a peer-reviewed annual academic journal sponsored by the Eastern Communication Association. The journal publishes brief qualitative and critical research essays of 2,500 words or less on a wide range of topics extending and enhancing the understanding of human communication. Research essays relating to human communication covering studies of intercultural, media, political, organizations, rhetorical, interpersonal and legal communication are typical submissions. References Publisher's Website Cultural journals Human communication Qualitative research journals Academic journals established in 1999 English-language journals Communication journals
Qualitative Research Reports in Communication
Biology
112
51,871,142
https://en.wikipedia.org/wiki/Nest%20Wifi
Nest Wifi, its predecessor the Google Wifi, and the Nest Wifi's successor, the Nest Wifi Pro, are a line of mesh-capable wireless routers and add-on points developed by Google as part of the Google Nest family of products. The first generation was announced on October 4, 2016, and released in the United States on December 5, 2016. The second generation, distinct in being released as two separate offerings, a "router" and "point", were announced at the Pixel 4 hardware event on October 15, 2019, and was released in the United States on November 4, 2019. The third generation was announced on October 4, 2022, two days prior to the Pixel 7 Fall 2022 event. This generation returned to a single model, doing away with the "router/point" variants, and was released in the United States on October 27, 2022. The Nest Wifi aims to provide enhanced Wi-Fi coverage through the setup of multiple Nest Wifi devices in a home. Nest Wifi automatically switches between access points depending on signal strength. History First generation Android Police reported in September 2016 that Google was preparing to introduce a mesh-capable wireless router with enhanced range, along with its October 4 date of announcement and US$129 price point. Google Wifi was officially announced on October 4, 2016, with expected availability in the United States in December. The device became available in the United States on December 5, 2016, in the United Kingdom on April 6, 2017, in Canada on April 28, 2017, in France and Germany on June 26, 2017, in Australia on July 20, 2017, in Hong Kong and Singapore on August 30, 2017, and in Philippines on June 26, 2018. The first generation Google Wifi features 802.11ac connectivity with 2.4 GHz and 5 GHz channels, 2x2 antennas, and support for beamforming. It has two gigabit Ethernet ports, and contains a quad-core processor with 512 MB RAM and 4 GB flash memory. Wi-Fi access can be controlled through a companion mobile app. In 2020, Google relaunched the first-generation Google Wifi, with minor hardware changes and at a lower price. Second generation The second generation of the product was officially announced at the Pixel 4 hardware event on October 15, 2019, renamed as Google Nest Wifi as part of the company's shift towards its rebranding of all its smart home products to the Google Nest name. It adds a smart speaker equipped add-on point. Internally, a few changes were made, such as a quad-core 64-bit ARM CPU 1.4 GHz and a machine learning hardware engine for both the router and point, as well as IEEE 802.15.4 Thread support. The router has 1 GB RAM and 4 GB flash memory and supports AC2200 4x4 MU-MIMO whereas the point has 768 MB RAM and 512 MB flash memory and supports AC1200 2x2 MU-MIMO. Feature comparison Reception Technology websites Engadget and CNET praised the device's ease of setup, design and speed, but criticized its lack of customizable options, such as no settings for MAC filtering, content filtering, or Dynamic DNS. The Verge also praised its design and ease of use. See also Google OnHub References External links Google Wifi routers have been going offline at random Wifi Wireless networking hardware
Nest Wifi
Technology
705
4,014,228
https://en.wikipedia.org/wiki/Filamentation
Filamentation is the anomalous growth of certain bacteria, such as Escherichia coli, in which cells continue to elongate but do not divide (no septa formation). The cells that result from elongation without division have multiple chromosomal copies. In the absence of antibiotics or other stressors, filamentation occurs at a low frequency in bacterial populations (4–8% short filaments and 0–5% long filaments in 1- to 8-hour cultures). The increased cell length can protect bacteria from protozoan predation and neutrophil phagocytosis by making ingestion of cells more difficult. Filamentation is also thought to protect bacteria from antibiotics, and is associated with other aspects of bacterial virulence such as biofilm formation. The number and length of filaments within a bacterial population increases when the bacteria are exposed to different physical, chemical and biological agents (e.g. UV light, DNA synthesis-inhibiting antibiotics, bacteriophages). This is termed conditional filamentation. Some of the key genes involved in filamentation in E. coli include sulA, minCD and damX. Filament formation Antibiotic-induced filamentation Some peptidoglycan synthesis inhibitors (e.g. cefuroxime, ceftazidime) induce filamentation by inhibiting the penicillin binding proteins (PBPs) responsible for crosslinking peptidoglycan at the septal wall (e.g. PBP3 in E. coli and P. aeruginosa). Because the PBPs responsible for lateral wall synthesis are relatively unaffected by cefuroxime and ceftazidime, cell elongation proceeds without any cell division and filamentation is observed. DNA synthesis-inhibiting and DNA damaging antibiotics (e.g. metronidazole, mitomycin C, the fluoroquinolones, novobiocin) induce filamentation via the SOS response. The SOS response inhibits septum formation until the DNA can be repaired, this delay stopping the transmission of damaged DNA to progeny. Bacteria inhibit septation by synthesizing protein SulA, an FtsZ inhibitor that halts Z-ring formation, thereby stopping recruitment and activation of PBP3. If bacteria are deprived of the nucleobase thymine by treatment with folic acid synthesis inhibitors (e.g. trimethoprim), this also disrupts DNA synthesis and induces SOS-mediated filamentation. Direct obstruction of Z-ring formation by SulA and other FtsZ inhibitors (e.g. berberine) induces filamentation too. Some protein synthesis inhibitors (e.g. kanamycin), RNA synthesis inhibitors (e.g. bicyclomycin) and membrane disruptors (e.g. daptomycin, polymyxin B) cause filamentation too, but these filaments are much shorter than the filaments induced by the above antibiotics. Stress-induced filamentation Filamentation is often a consequence of environmental stress. It has been observed in response to temperature shocks, low water availability, high osmolarity, extreme pH, and UV exposure. UV light damages bacterial DNA and induces filamentation via the SOS response. Starvation can also cause bacterial filamentation. For example, if bacteria are deprived of the nucleobase thymine, this disrupts DNA synthesis and induces SOS-mediated filamentation. Nutrient-induced filamentation Several macronutrients and biomolecules can cause bacterial cells to filament, including the amino acids glutamine, proline and arginine, and some branched-chain amino acids. Certain bacterial species, such as Paraburkholderia elongata, will also filament as a result of a tendency to accumulate phosphate in the form of polyphosphate, which can chelate metal cofactors needed by division proteins. 
In addition, filamentation is induced by nutrient-rich conditions in the intracellular pathogen Bordetella atropi. This occurs via the highly conserved UDP-glucose pathway. UDP-glucose biosynthesis and sensing suppresses bacterial cell division, with the ensuing filamentation allowing B. atropi to spread to neighboring cells. Intrinsic dysbiosis-induced filamentation Filamentation can also be induced by other pathways affecting thymidylate synthesis. For instance, partial loss of dihydrofolate reductase (DHFR) activity causes reversible filamentation. DHFR has a critical role in regulating the amount of tetrahydrofolate, which is essential for purine and thymidylate synthesis. DHFR activity can be inhibited by mutations or by high concentrations of the antibiotic trimethoprim (see antibiotic-induced filamentation above). Overcrowding of the periplasm or envelope can also induce filamentation in Gram-negative bacteria by disrupting normal divisome function. Filamentation and biotic interactions Several examples of filamentation that result from biotic interactions between bacteria and other organisms or infectious agents have been reported. Filamentous cells are resistant to ingestion by bacterivores, and environmental conditions generated during predation can trigger filamentation. Filamentation can also be induced by signalling factors produced by other bacteria. In addition, Agrobacterium spp. filament in proximity to plant roots, and E. coli filaments when exposed to plant extracts. Lastly, bacteriophage infection can result in filamentation via the expression of proteins that inhibit divisome assembly. See also Bacterial morphological plasticity Filamentous bacteriophage Filamentous cyanobacteria Segmented filamentous bacteria References Cellular processes Microbiology
Filamentation
Chemistry,Biology
1,233
200,992
https://en.wikipedia.org/wiki/Asynchronous%20serial%20communication
Asynchronous serial communication is a form of serial communication in which the communicating endpoints' interfaces are not continuously synchronized by a common clock signal. Instead of a common synchronization signal, the data stream contains synchronization information in form of start and stop signals, before and after each unit of transmission, respectively. The start signal prepares the receiver for arrival of data and the stop signal resets its state to enable triggering of a new sequence. A common kind of start-stop transmission is ASCII over RS-232, for example for use in teletypewriter operation. Origin Mechanical teleprinters using 5-bit codes (see Baudot code) typically used a stop period of 1.5 bit times. Very early electromechanical teletypewriters (pre-1930) could require 2 stop bits to allow mechanical impression without buffering. Hardware which does not support fractional stop bits can communicate with a device that uses 1.5 bit times if it is configured to send 2 stop bits when transmitting and requiring 1 stop bit when receiving. The format is derived directly from the design of the teletypewriter, which was designed this way because the electromechanical technology of its day was not precise enough for synchronous operation: thus the systems needed to be re-synchronized at the start of each character. Having been re-synchronized, the technology of the day was good enough to preserve bit-sync for the remainder of the character. The stop bits gave the system time to recover before the next start bit. Early teleprinter systems used five data bits, typically with some variant of the Baudot code. Very early experimental printing telegraph devices used only a start bit and required manual adjustment of the receiver mechanism speed to reliably decode characters. Automatic synchronization was required to keep the transmitting and receiving units "in step". This was finally achieved by Howard Krum, who patented the start-stop method of synchronization (, granted September 19, 1916, then , granted December 3, 1918). Shortly afterward a practical teleprinter was patented (, granted July 3, 1917). Operation Before signaling will work, the sender and receiver must agree on the signaling parameters: Full or half-duplex operation The number of bits per character -- currently almost always 8-bit characters, but historically some transmitters have used a five-bit character code, six-bit character code, or a 7-bit ASCII. Endianness: the order in which the bits are sent The speed or bits per second of the line (equal to the Baud rate when each symbol represents one bit). Some systems use automatic speed detection, also called automatic baud rate detection. Whether to use or not use parity Odd or even parity, if used The number of stop bits sent must be chosen (the number sent must be at least what the receiver needs) Mark and space symbols (current directions in early telegraphy, later voltage polarities in EIA RS-232 and so on, frequency-shift polarities in frequency-shift keying and so on) Asynchronous start-stop signaling was widely used for dial-up modem access to time-sharing computers and BBS systems. These systems used either seven or eight data bits, transmitted least-significant bit first, in accordance with the ASCII standard. Between computers, the most common configuration used was "8N1": eight-bit characters, with one start bit, one stop bit, and no parity bit. 
Thus 10 Baud times are used to send a single character, and so dividing the signaling bit-rate by ten results in the overall transmission speed in characters per second. Asynchronous start-stop is the lower data-link layer used to connect computers to modems for many dial-up Internet access applications, using a second (encapsulating) data link framing protocol such as PPP to create packets made up out of asynchronous serial characters. The most common physical layer interface used is RS-232D. The performance loss relative to synchronous access is negligible, as most modern modems will use a private synchronous protocol to send the data between themselves, and the asynchronous links at each end are operated faster than this data link, with flow control being used to throttle the data rate to prevent overrun. See also Comparison of synchronous and asynchronous signalling Degree of start-stop distortion Synchronous serial communication Universal asynchronous receiver/transmitter (UART) References Further reading Nelson, R. A. and Lovitt, K. M. History of Teletypewriter Development (October 1963), Teletype Corporation, retrieved April 14, 2005 Hobbs, Allan G. (1999) Five-unit codes, accessed 20 December 2007 Edward E. Kleinschmidt. Printing Telegraphy ... A New Era Begins, 1967, released Nov. 9, 2016 by Project Gutenberg. External links Synchronization Data transmission Digital electronics Physical layer protocols Broadcast engineering
Asynchronous serial communication
Engineering
1,058
38,517,015
https://en.wikipedia.org/wiki/Demisexuality
Demisexuality is a sexual orientation in which an individual does not experience primary sexual attraction – the type of attraction that is based on immediately observable characteristics such as appearance or smell and is experienced immediately after a first encounter. A demisexual person can only experience secondary sexual attraction – the type of attraction that occurs after the development of an emotional bond. The amount of time that a demisexual individual needs to know another person before developing sexual attraction towards them varies from person to person. Demisexuality is generally categorized on the asexuality spectrum. History The term was coined in the Asexual Visibility and Education Network Forums in February 2006. Based on the theory that allosexuals experience both primary and secondary sexual attraction and asexuals experience neither, the term demisexual was proposed for people who experience the latter without the former. However, David Jay had suggested a similar word, semisexual, in 2003. Demisexuality, as a component of the asexuality spectrum, is included in queer activist communities such as GLAAD and The Trevor Project. Demisexuality also has finer divisions within itself. The word gained entry to the Oxford English Dictionary in March 2022, with its earliest recorded usage, as a noun, dating to 2006. Since 2019, the app Tinder has included demisexual as an option for self-describing sexual orientation on profiles. Definition Demisexuality has been described as a sexual orientation in which a person feels sexually attracted to someone only after developing a close or strong emotional bond with them. Some demisexuals also feel romantic attraction, while others do not. The duration of time and the degree of interpersonal knowledge and bonding required for a demisexual person to develop sexual attraction may be highly variable between individuals. There is a lack of clear definitions for what qualifies as a close or strong bond in this context, which can cause confusion. Unlike other words used to describe sexual orientations, the term "demisexuality" does not indicate which gender or genders a person finds attractive. Primary vis-à-vis secondary sexual attraction model Primary sexual attraction: sexual attraction towards people based on instantly available information (such as their appearance or smell), characterized as being experienced at first sight. Secondary sexual attraction: sexual attraction towards people based on information that is not instantly available (such as personality, life experiences, talents, etc.); how much a person needs to know about the other, and for how long, before secondary sexual attraction develops varies from person to person. After secondary sexual attraction has developed, demisexuals are not aroused by personality traits alone; they may or may not also experience arousal or desire based on the physical traits of the persons towards whom they have already developed secondary sexual attraction. Common misconceptions and sexual activities A misconception is that demisexual individuals cannot engage in casual sex. Demisexuality refers to how an individual experiences sexual attraction; it does not describe a choice or an action, but a feeling. While it is common for demisexuals not to desire sex without feeling sexually attracted to the other person, this is not required to be considered demisexual. 
Many demisexuals may choose to engage in casual sex even without experiencing sexual attraction towards their sexual partner. Demisexuals may experience aesthetic attraction and can have an aesthetic preference. An aesthetic attraction is an attraction to another person's appearance that is not connected to any sexual or romantic desire; it is so called because of its similarity to other aesthetic desires. Demisexuals can be attracted to fictional characters, and can also be attracted to a character played by an actor without experiencing attraction towards the actor when out of character. Attitudes towards sex Some demisexual, gray-asexual and asexual individuals (all included under the "ace umbrella") use the terms positive, favorable, neutral or indifferent, averse, or repulsed to describe how they feel about sex. Nonetheless, these terms can be used by anyone, regardless of if they are asexual spectrum or not. Sex-repulsed: feeling repulsed or uncomfortable towards the thought of engaging in sex. Sex-indifferent: no particular positive or negative feelings towards sex. Sex-indifferent individuals might partake in sex or avoid it. Sex-favourable: sex-favourable individuals enjoy sex and may seek it out. Sex-ambivalent: experiencing mixed or complicated feelings regarding the act or concept of sexual interaction, usually fluctuating between sex-neutral, sex-favorable or sex-positive and sex-repulsed, sex-negative or sex-averse. These terms are generally used to refer to someone's opinion about engaging in sexual activities themself. However, they might also be used to describe how they feel reading, watching, hearing about, or imagining these activities. The term -repulsed in particular is often used to refer to one's feelings about engaging in sexual activities or being around them. One's feelings can vary depending on the situation or other factors such as identity, societal context, common social understanding or intent of actions or comfort level with another individual. For example, someone who is aegosexual may enjoy thinking about sexual activities involving others but may feel repulsed upon the thought of personally participating in such activities. In fiction Demisexuality is a common theme (or trope) in romantic novels that has been termed "compulsory demisexuality". In this genre, the paradigm or trope of sex being only truly pleasurable and fulfilling when the partners are in love is a trait most commonly associated with female characters. The added requirements for a connection to occur may engender or reinforce feelings that the connection is unique or special. See also Demigender Pansexuality Sexual fluidity Unlabeled sexuality References Human sexuality Demisexuality Asexuality 2006 neologisms
Demisexuality
Biology
1,204
22,124,835
https://en.wikipedia.org/wiki/Multiphoton%20lithography
Multiphoton lithography (also known as direct laser lithography or direct laser writing) is similar to standard photolithography techniques; structuring is accomplished by illuminating negative-tone or positive-tone photoresists with light of a well-defined wavelength. The main difference is the avoidance of photomasks. Instead, two-photon absorption is utilized to induce a change in the solubility of the resist for appropriate developers. Hence, multiphoton lithography is a technique for creating small features in a photosensitive material without the use of excimer lasers or photomasks. This method relies on a multi-photon absorption process in a material that is transparent at the wavelength of the laser used for creating the pattern. By scanning and properly modulating the laser, a chemical change (usually polymerization) occurs at the focal spot of the laser and can be controlled to create an arbitrary three-dimensional pattern. This method has been used for rapid prototyping of structures with fine features. Two-photon absorption (TPA) is a third-order nonlinear process, governed by the third-order optical susceptibility, and a second-order process with respect to light intensity. For this reason it is a non-linear process several orders of magnitude weaker than linear absorption, so very high light intensities are required to increase the number of such rare events. For example, tightly focused laser beams provide the needed intensities. Here, pulsed laser sources with pulse widths of around 100 fs are preferred, as they deliver high-intensity pulses while depositing a relatively low average energy. To enable 3D structuring, the light source must be adequately adapted to the liquid photoresin in the sense that single-photon absorption is highly suppressed. TPA is thus essential for creating complex geometries with high resolution and shape accuracy. For best results, the photoresins should be transparent at the excitation wavelength λ, which typically lies between 500 and 1000 nm, while simultaneously absorbing in the range of λ/2. As a result, the sample can be scanned relative to the focused laser beam while the resist's solubility is changed only within a confined volume. The geometry of that volume mainly depends on the iso-intensity surfaces of the focus. Concretely, those regions of the laser beam which exceed a given exposure threshold of the photosensitive medium define the basic building block, the so-called voxel. Voxels are thus the smallest single volumes of cured photopolymer; they represent the basic building blocks of 3D-printed objects. Other parameters which influence the actual shape of the voxel are the laser mode and the refractive-index mismatch between the resist and the immersion system, which leads to spherical aberration. It was found that polarization effects in laser 3D nanolithography can be employed to fine-tune the feature sizes (and corresponding aspect ratio) in the structuring of photoresists. This proves polarization to be a tunable parameter alongside laser power (intensity), scanning speed (exposure duration), accumulated dose, etc. In addition, plant-derived renewable pure bioresins without additional photosensitization can be employed for optical rapid prototyping. Materials for multiphoton polymerization The materials employed in multiphoton lithography are those normally used in conventional photolithography techniques. They can be found in liquid-viscous, gel or solid state, depending on the fabrication need. 
Liquid resins imply more complex sample-fixing processes during the fabrication step, while the preparation of the resins themselves may be easier and faster. In contrast, solid resists can be handled more easily, but they require complex and time-consuming preparation processes. The resin always includes a prepolymer (the monomer) and, depending on the final application, a photoinitiator. In addition, the formulation may contain polymerization inhibitors (useful to stabilize the resin and to reduce the size of the obtained voxel), solvents (which may simplify casting procedures), thickeners (so-called "fillers") and other additives (such as pigments) which aim to functionalize the photopolymer. Acrylates The acrylates are the most widely used resin components. They can be found in many traditional photolithography processes which involve a radical reaction. They are widespread and commercially available in a wide range of products, having different properties and compositions. The main advantages of this kind of liquid resin are its excellent mechanical properties and high reactivity. Acrylates exhibit slightly more shrinkage compared to epoxies, but their rapid iteration capability allows for close alignment with the design. Moreover, acrylates offer enhanced usability as they eliminate the need for spin coating or baking steps during processing. Finally, the polymerization steps are faster than for other kinds of photopolymers. Methacrylates are widely used due to their biocompatibility. The majority of materials for two-photon polymerization are supplied by companies that also provide printers. Nevertheless, there are third-party resins available, like ORMOCER, alongside numerous self-made resins. Epoxy resins These are the most widely employed resins in the MEMS and microfluidic fields. They exploit cationic polymerization. One of the best-known epoxy resins is SU-8, which allows the deposition of films up to 500 μm thick and the polymerization of structures with a high aspect ratio. Many other epoxy resins exist, such as SCR-701, largely employed for micro moving objects, and SCR-500. Inorganic glass/ceramics Inorganic glass and ceramics have better thermal and chemical stabilities than photopolymers do, and they also offer improved durability due to their high resistance to corrosion, degradation, and wear. Therefore, there has been continuous interest in recent years in the development of resins and techniques that allow multiphoton lithography to be used for 3D printing of glasses and ceramics. It has been demonstrated that, using hybrid inorganic-organic resins and high-temperature thermal treatments, one can achieve 3D printing of glass-ceramics with sub-micrometer resolution. Recently, multiphoton lithography of an entirely inorganic resin for 3D printing of glasses without thermal treatments has also been shown, enabling 3D printing of glass micro-optics on the tips of optical fibers without causing damage to the optical fiber. Applications There are several application fields for microstructured devices made by multiphoton polymerization, such as regenerative medicine, biomedical engineering, micromechanics, microfluidics, atomic force microscopy, optics and telecommunication science. Regenerative medicine and biomedical engineering With the arrival of biocompatible photopolymers (such as SZ2080 and ORMOCERs), many scaffolds have been realized by multiphoton lithography to date. 
They vary in key parameters such as geometry, porosity and dimensions, in order to control and condition, in a mechanical and chemical fashion, fundamental cues in in vitro cell cultures: migration, adhesion, proliferation and differentiation. The capability to fabricate structures with a feature size smaller than that of the cells has dramatically improved the mechanobiology field, giving the possibility to build mechanical cues directly into the cells' microenvironment. Their final applications range from stemness maintenance in adult mesenchymal stem cells, as in the NICHOID scaffold which mimics a physiological niche in vitro, to the generation of migration-engineered scaffolds. Micromechanics and microfluidics Multiphoton polymerization is suitable for realizing micro-sized active (e.g. pumps) or passive (e.g. filters) devices that can be combined with a lab-on-a-chip. These devices can be widely used coupled to microchannels, with the advantage that they can be polymerized inside pre-sealed channels. As for filters, they can be used to separate the plasma from the red blood cells, to separate cell populations (in relation to the single-cell dimension) or simply to filter solutions from impurities and debris. A porous 3D filter, which can only be fabricated by 2PP technology, offers two key advantages compared to filters based on 2D pillars. First, the 3D filter has increased mechanical resistance to shear stresses, enabling a higher void ratio and hence more efficient operation. Second, the 3D porous filter can efficiently filter disk-shaped elements without reducing the pore size to the minimum dimension of the cell. As for integrated micropumps, they can be polymerized as two independent two-lobed rotors, confined in the channel by their own shafts to avoid unwanted rotations. Such systems are simply actuated using a focused continuous-wave (CW) laser. Atomic force microscopy To date, atomic force microscopy microtips are realized with standard photolithographic techniques on hard materials, such as gold, silicon, and its derivatives. Nonetheless, the mechanical properties of such materials require time-consuming and expensive production processes to create or bend the tips. Multiphoton lithography can be used to prototype and modify such tips, thus avoiding the complex fabrication protocol. Optics With the ability to create 3D as well as planar structures, multiphoton polymerization can build optical components such as optical waveguides, resonators, photonic crystals, and lenses. References External links Nano sculptures, the first nano-scale human form. Sculpture made by artist Jonty Hurwitz using multiphoton lithography, November 2014. Nonlinear optics Lithography (microfabrication) Computer printing Printing processes
Multiphoton lithography
Materials_science
1,969
18,832,302
https://en.wikipedia.org/wiki/Hall%E2%80%93Littlewood%20polynomials
In mathematics, the Hall–Littlewood polynomials are symmetric functions depending on a parameter t and a partition λ. They are Schur functions when t is 0 and monomial symmetric functions when t is 1 and are special cases of Macdonald polynomials. They were first defined indirectly by Philip Hall using the Hall algebra, and later defined directly by Dudley E. Littlewood (1961). Definition The Hall–Littlewood polynomial P is defined by $P_\lambda(x_1,\ldots,x_n;t) = \left( \prod_{i \geq 0} \prod_{j=1}^{m(i)} \frac{1-t}{1-t^j} \right) \sum_{w \in S_n} w\left( x_1^{\lambda_1} \cdots x_n^{\lambda_n} \prod_{i<j} \frac{x_i - t x_j}{x_i - x_j} \right),$ where λ is a partition of at most n with elements λi, and m(i) elements equal to i, and Sn is the symmetric group of order n!. As an example, $P_{42}(x_1,x_2;t) = x_1^4 x_2^2 + x_1^2 x_2^4 + (1-t)\,x_1^3 x_2^3.$ Specializations We have that $P_\lambda(x;1) = m_\lambda(x)$, $P_\lambda(x;0) = s_\lambda(x)$, and $P_\lambda(x;-1) = P_\lambda(x)$, where the latter is the Schur P polynomials. Properties Expanding the Schur polynomials in terms of the Hall–Littlewood polynomials, one has $s_\lambda(x) = \sum_\mu K_{\lambda\mu}(t) P_\mu(x;t),$ where $K_{\lambda\mu}(t)$ are the Kostka–Foulkes polynomials. Note that at $t = 1$, these reduce to the ordinary Kostka coefficients $K_{\lambda\mu}$. A combinatorial description for the Kostka–Foulkes polynomials was given by Lascoux and Schützenberger, $K_{\lambda\mu}(t) = \sum_{T} t^{\operatorname{charge}(T)},$ where "charge" is a certain combinatorial statistic on semistandard Young tableaux, and the sum is taken over the set of all semi-standard Young tableaux T with shape λ and type μ. See also Hall polynomial References External links Orthogonal polynomials Algebraic combinatorics Symmetric functions
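The worked example above can be checked directly from the definition. The following sketch (using the sympy library; variable names are illustrative) symmetrizes the single term for λ = (4, 2) with n = 2, for which every multiplicity m(i) is at most 1 and the normalizing prefactor equals 1, and recovers the stated polynomial.

```python
from sympy import symbols, cancel, expand

x1, x2, t = symbols('x1 x2 t')

# One summand of the definition for lambda = (4, 2) and n = 2.
term = x1**4 * x2**2 * (x1 - t*x2) / (x1 - x2)

# The only non-trivial element of S_2 swaps x1 and x2.
swapped = term.subs({x1: x2, x2: x1}, simultaneous=True)

P = cancel(term + swapped)  # the (x1 - x2) denominators cancel
print(expand(P))
# x1**4*x2**2 + x1**2*x2**4 + (1 - t)*x1**3*x2**3, as claimed above
```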
Hall–Littlewood polynomials
Physics,Mathematics
274
74,580,752
https://en.wikipedia.org/wiki/Mystery%20Flesh%20Pit%20National%20Park
The Mystery Flesh Pit National Park is an ongoing science fiction/horror project by artist Trevor Roberts that blends multimedia illustrations, writings and immersive world building. The story revolves around the fictional Mystery Flesh Pit, a colossal, ancient superorganism discovered beneath the town of Gumption, Texas, during an oil excavation. The pit was then transformed into a tourism destination and harvested for raw materials, until a catastrophic disaster in 2007 forced its closure. Format and multimedia elements Roberts has since posted fictional letters, diagrams, posters, and advertisements that emulate the style of National Park Service publications. The artworks are in a realistic style, which plays into the grotesque nature of a living superorganism being utilised as a national park. The park operated for about thirty years, before being shut down due to the events of the Fourth of July in 2007. During the evening celebrations, unseasonably rainy weather and an electrical fault caused the Permian Basin Superorganism to choke and 'swallow' the structures inside the park; subsequent attempts to subdue the Superorganism caused it to vomit. The incident was said to have taken the lives of over 750 people. Reception and impact A tabletop RPG is currently being developed in a partnership with Ganza Gaming. A book is also currently being created by Roberts. Once the book is published, which is set to contain expanded lore and art, Roberts says he will be done with the project. A video game based on Mystery Flesh Pit was in the process of creation, however the project was scrapped for numerous reasons, including fan feedback and creative differences. Features of the Pit The Permian Basin Superorganism is home to a wide variety of "Geo-biological" structures that were popular hiking destinations during the park's tenure. Examples included the Bronchial forests, or the lungs, the Gastric seas, or the digestive areas, the throat of the organism, and more exotic organs such as the 'Ballast pods', which contained a potent aphrodisiac and were operated by the park as hot springs. A troglobitic ecosystem of organisms lives inside the Permian Basin Superorganism, having been completely cut off from the rest of the world. The majority of the parks fauna include arthropods, echinoderms, mollusks, cnidarians, worms, and several vertebrate species. See also Roadside attraction — one of the influences on the aesthetic of the project References External links 2019 hoaxes fictional locations in the United States fictional monsters Internet hoaxes speculative evolution
Mystery Flesh Pit National Park
Biology
520
1,933,320
https://en.wikipedia.org/wiki/Urea-formaldehyde
Urea-formaldehyde (UF), also known as urea-methanal, so named for its common synthesis pathway and overall structure, is a nontransparent thermosetting resin or polymer. It is produced from urea and formaldehyde. These resins are used in adhesives, plywood, particle board, medium-density fibreboard (MDF), and molded objects. In agriculture, urea-formaldehyde compounds are one of the most commonly used types of slow-release fertilizer. UF and related amino resins are a class of thermosetting resins, of which urea-formaldehyde resins make up about 80% of worldwide production. Examples of amino resin use include automobile tires (to improve the bonding of rubber), paper (to improve tear strength), and the molding of electrical devices, jar caps, etc. History UF was first synthesized in 1884 by Dr Hölzer, who was working with Bernhard Tollens; neither of them realized that the urea and formaldehyde were polymerizing. In the following years a large number of authors worked on the structure of these resins. In 1896, Carl Goldschmidt investigated the reaction further. He also obtained an amorphous, almost insoluble precipitate, but he did not realize that polymerization was occurring; he thought that two molecules of urea were combining with three molecules of formaldehyde. In 1897 Carl Goldschmidt patented the use of UF resins as a disinfectant. General commercialisation followed this, and in the following decades more and more applications were described in the literature. In 1919, Hanns John (1891–1942) of Prague, Czechoslovakia, obtained the first patent for UF resin in Austria. Urea-formaldehyde was the subject matter of the judgment of the European Court of Justice (now CJEU) of 5 February 1963, Case 26–62 Van Gend & Loos v Netherlands Inland Revenue Administration. Properties Urea-formaldehyde resin's attributes include high tensile strength, flexural modulus, high heat-distortion temperature, low water absorption, mould shrinkage, high surface hardness, elongation at break, and volume resistance. It has a refractive index of 1.55. Chemical structure The chemical structure of UF polymer consists of [(O)CNHCH2NH]n repeat units. In contrast, melamine-formaldehyde resins feature NCH2OCH2N repeat units. Depending on the polymerization conditions, some branching can occur. Early stages in the reaction of formaldehyde and urea produce bis(hydroxymethyl)urea. Production About 20 million metric tons of UF are produced annually. Over 70% of this production is used by the forest-products industry for bonding particleboard, MDF and hardwood plywood, and as laminating adhesive. General uses Urea-formaldehyde is pervasive. It is widely utilized due to its low cost, quick reaction time, high bonding strength, moisture resistance, lack of color, and resistance to abrasion and microbes. Examples include decorative laminates, textiles, paper, foundry sand molds, wrinkle-resistant fabrics, cotton blends, rayon, corduroy, etc. In the wood industry, it is utilized as a thermosetting adhesive (wood glue) to bond wood into plywood and particleboard. UF was commonly used for producing electrical appliance casings (e.g. desk lamps). Foams have been used as artificial snow in movies. Urea-formaldehyde is also widely used in agriculture as a slow-release fertilizer, which releases small amounts of the active ingredient over time. Agricultural use Urea-formaldehyde compounds are widely used as slow-release sources of nitrogen in agriculture. 
The rate of decomposition into carbon dioxide and ammonia depends on the length of the urea-formaldehyde chains and relies on the action of microbes found naturally in most soils. The activity of these microbes, and hence the rate of ammonia release, is temperature-dependent and is greatest at an optimum soil temperature. Foam insulation Urea-formaldehyde foam insulation (UFFI) commercialisation dates to the 1930s as a synthetic insulation with a thermal conductivity of 0.0343 to 0.0373 W/m⋅K, equating to U-values for 50 mm thickness of between 0.686 W/m2K and 0.746 W/m2K, or R-values between 1.46 m2K/W and 1.34 m2K/W (about 8.3 and 7.6 °F⋅ft2⋅h/BTU for 1.97-inch thickness). UFFI is a foam with a consistency similar to shaving cream that is easily injected or pumped into voids. It is normally made on site using a pump set and hose with a mixing gun to mix the foaming agent, resin, and compressed air. The fully expanded foam is pumped into areas in need of insulation. It becomes firm within minutes, but cures within a week. UFFI is generally found in homes built or retrofitted from the 1930s to the 1970s, often in basements, wall cavities, crawl spaces and attics. Visually, it looks like oozing liquid that has been hardened. Over time, it tends to vary in shades of butterscotch, but new UFFI is a light yellow colour. Early forms of UFFI tended to shrink significantly. Modern UF insulation with updated catalysts and foaming technology has reduced shrinkage to minimal levels (between 2 and 4%). The foam dries with a dull matte colour with no shine. When cured, it often has a dry and crumbly texture. Formaldehyde emissions Agricultural emissions Emissions from UF-based fertilizer application have been found to temporarily increase localized atmospheric formaldehyde concentrations and to contribute to tropospheric ozone. Application of UF fertilizers in greenhouses has been found to cause significantly higher air formaldehyde concentrations within the building. Conditions impacting emission levels Environmental conditions, such as temperature and humidity, can impact the levels of formaldehyde released from urea-formaldehyde products. Exposure to higher humidity and higher temperatures can both significantly increase the amount of formaldehyde emitted from UF products, such as wood-based panel boards. Reducing emissions Due to concerns about free formaldehyde emissions and environmental pollution from urea-formaldehyde products, there have been effective efforts to lower the formaldehyde content in UF resins. A lower molar ratio of formaldehyde decreases the emission of free formaldehyde from UF products; formaldehyde emissions from UF-based particleboard decrease significantly as the F/U molar ratio is lowered from 2.0 to 1.0. The German standard for UF resins requires the F/U molar ratio to be below 1.2. The U.S. NPA standard is an F/U molar ratio below 1.3. Health concerns Health effects occur when UF-based materials and products release formaldehyde into the air. Generally, no health effects from formaldehyde are seen when air concentrations are below 1.0 ppm. The onset of respiratory irritation and other health effects, and even increased cancer risk, begins when air concentrations exceed 3.0–5.0 ppm. Health concerns led to the banning of UFFI in the U.S. states of Massachusetts and Connecticut in 1981. In 1982, the U.S. Consumer Product Safety Commission banned UFFI nationwide, but this ban was reversed in 1983. UFFI was banned in Canada in 1980, a ban which remains in effect. 
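The UFFI thermal figures quoted above follow from the elementary relations U = k/d and R = d/k for a single homogeneous layer (surface films ignored). A small sketch of the arithmetic, with illustrative function names:

```python
def u_value(conductivity_w_per_m_k, thickness_m):
    """U-value of a homogeneous layer: U = k / d, in W/m^2.K (no surface resistances)."""
    return conductivity_w_per_m_k / thickness_m

def r_value_si(conductivity_w_per_m_k, thickness_m):
    """Thermal resistance of the layer: R = d / k, in m^2.K/W."""
    return thickness_m / conductivity_w_per_m_k

SI_TO_IMPERIAL_R = 5.678  # 1 m^2.K/W is roughly 5.678 ft^2.F.h/BTU

d = 0.050  # 50 mm, i.e. about 1.97 inches
for k in (0.0343, 0.0373):  # reported conductivity range for UFFI, W/m.K
    print(k, round(u_value(k, d), 3),
          round(r_value_si(k, d), 2),
          round(r_value_si(k, d) * SI_TO_IMPERIAL_R, 1))
# 0.0343 -> U ~ 0.686 W/m2K, R ~ 1.46 m2K/W (~8.3 imperial)
# 0.0373 -> U ~ 0.746 W/m2K, R ~ 1.34 m2K/W (~7.6 imperial)
```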
See also Phenol formaldehyde resin References External links Urea formaldehyde (Plastics Historical Society) History of urea-formaldehyde: Chapter 1 of: Carl Meyer, Urea-Formaldehyde Resins (Reading, Massachusetts: Addison-Wesley, 1979) Urea-Formaldehyde Foam Insulation (Canada Mortgage and Housing Corporation) Indoor Air Quality: Formaldehyde (US Environmental Protection Agency) Formaldehyde.... its safe use in foundries (UK Health and Safety Executive) United States Environmental Protection Agency: Formaldehyde (Environmental and Occupational Health Assessment Program|Connecticut Department of Public Health) Consumer Product Safety Commission (Forest Products Laboratory: USDA Forest Service) [Dunky, M., "Urea-formaldehyde (UF) adhesive resins for wood," International Journal of Adhesion and Adhesives, 1998. (18:2).] (Encyclopædia Britannica) (PropEx.com) (U.S. Dept. of Labor, Occupational Safety and Health Administration (OSHA)) Polyamides Synthetic resins Plastics Thermosetting plastics
Urea-formaldehyde
Physics,Chemistry
1,872
20,036,243
https://en.wikipedia.org/wiki/Bert%20Bolle%20Barometer
The Bert Bolle Barometer is a large water barometer. At over 12.5 metres tall, it is recognized as the largest barometer in the world by The International Guinness Book of Records. The instrument was created in 1985 in the Netherlands; in 2007 it was reinstalled in the new Visitor Centre of Denmark, Western Australia and was removed from there in 2011. History The Netherlands The Dutch writer and barometer specialist Bert Bolle (born 1947) designed and built the water barometer in 1985 as the focal point of the Barometer Museum, which he ran with his wife Ethne in the 18th-century country house ‘Rustenhoven’ at Maartensdijk in central Netherlands. In 1978 Bolle wrote a book titled Barometers, which was translated into German and English. In 1983 he wrote a scientific sequel to his first book and developed some modifications of the mercury barometer system. In 1985 Bolle and his wife set up a barometer museum in their country house. Their aim was to create a collection based on loans of barometers from private collectors and museums in the Netherlands. To obtain these loans, a massive publicity campaign was undertaken. Bolle wanted something to make the museum's launch spectacular, an appliance that would be impressive and definitive which would serve as the centre point of the Barometer Museum. He decided to design and make a water barometer, paying homage to the 17th-century scientists, such as Evangelista Torricelli and Gasparo Berti, who produced some of the first and most crucial vacuum experiments, and created the first water barometers alongside their houses between 1640 and 1660. Bolle's old three-story country house had ample height, the highest point of which was the roof of the main hall: a leaded glass cupola. The apex of the hall was over 12 metres from the hall floor; a perfect environment for such an enormous instrument. Bolle decided to make a construction of four borosilicate glass (e.g. Pyrex) pipes of 90 mm diameter. He fitted the pipes to a nine-metre-long solid oak plank, which was one metre wide at the base. For the top three metres of the barometer, a 25 mm thick polymethylmethacrylate (e.g. Perspex) sheet was used. The reservoir chamber was also made of borosilicate glass; with a diameter of 600 mm the capacity of this reservoir was enough to hold 150 litres of water, which was necessary to make the barometer work properly. The first successful test runs took place in November and December 1985. Bolle designed the top end of the water barometer to be connected to a rotary vane pump, which was governed by timer relays. At ten-minute intervals, the pump evacuated the air from the glass pipe, causing the 12-metre-tall instrument to fill with 55 litres of water within one minute. Visitors were invited to climb the stairs and follow the water to the top, where it started to boil spontaneously (see below). The huge register plate had two scales: centimeters of water and millibar. Water vapour pressure depresses the reading of water barometers, and the magnitude of this error increases with temperature. Thus, a rule of thumb was provided to make a correction for temperature. After a reading period of five minutes, air was admitted to the top area of the pipe. Within a couple of minutes all the water would return to the cistern downstairs, after which the ten-minute pump cycle would start again. Visitors were able to watch a real living instrument the whole day. 
The instrument proved to be a massive drawcard and appeared several times in the media during the subsequent twelve-year period during which Bolle's Barometer Museum operated. In 1998 the museum was closed down. Australia Bolle and his wife migrated to Australia in 1999, but the maker didn’t want to part with his creation, so the barometer was brought with them to Australia, where Bolle donated it to the community of Denmark, a small town in Western Australia. The town didn’t have a building high enough to house the enormous instrument, but in 2004 plans were adopted for a new multi-function Visitors Centre, the centre part of which would be The Barometer Tower, built especially for the instrument. The Shire of Denmark made the water barometer a local monument, named The Bert Bolle Barometer. Furthermore, the shire announced that the tower would be dedicated to the water barometer and the history of weather instruments in general and access would be free of charge. In 2007 the Denmark Visitor Centre was finished. It was officially opened on 10 August by the Minister for Tourism in Western Australia Mrs Sheila McHale. In the Barometer Tower the Bert Bolle Barometer stood on a stainless steel pedestal. The vacuum pump cycle became shorter than it was in the Netherlands; reduced to six minutes from the previous ten. The timer relays had been replaced by a PLC, which now governs a refined and modernized vacuum system with 11 solenoid valves. Operating eight hours per day, seven days a week, the water barometer was constantly ‘on the move’. Visitors could walk up the stairs and take a reading in the Reading Room atop the Tower. At the moment when the water reached its highest possible point in the glass pipe, visitors could witness an interesting physical phenomenon for about a minute. The air pressure above the water had lowered dramatically. Therefore, the evaporation of the water happened so vigorously that the water started to boil spontaneously, although its temperature rarely exceeded 20 °C. This ‘cold boiling’ is contributed to by air bubbles that were formed in the water column. As soon as the pump was disconnected, the evacuating of the pipe stopped and the water level became calm again, enabling people to take a reading. During the time the water level was calm, there was still some turbulence at the surface due to air bubbles rising to the top of the apparatus. Although water vapour pressure depressed the barometers pressure reading, visitors were told how to correct for this error and thus calculate the real air pressure, and could then compare it with the accurate Vaisala digital standard barometer in the Tower. After a reading period of two minutes, air was gradually admitted to the vacuum in the top area of the pipe. Within another two minutes all the water had returned to the reservoir downstairs, after which the six-minute pump cycle of the Bert Bolle Barometer repeated. Climbing the stairs in the tower, a selection of antique barometers from Europe was displayed, along with five murals depicting the oldest barometer experiments dating from the 17th century. In the Reading Room, tribute was paid to the pioneers of the barometer, the Italian scientists Galileo Galilei and Evangelista Torricelli. On the ground floor, Bolle had created several interesting physical experiments such as the Atmosphere Simulator, in which artificial highs and lows were created. There was also a bell jar showing interesting vacuum experiments with sound and air. 
Record Australia is known as a country of bizarre records for the sake of tourism, like the Big Banana or the Giant Ram. The size of the water barometer in Denmark was, however, a result of necessity, rather than a tourist gimmick. In order to function properly, a water barometer has to be quite large, with greater height producing greater accuracy. The Bert Bolle Barometer is thus a very accurate and genuine working instrument, as well as an impressive monument. Recognition In April 2008 the Bert Bolle Barometer was listed among the Top Hundred Australian ‘must see’ topics. Australian Traveller magazine revealed a list of 100 Things You Can Only Do In Australia. During its first year of its existence, the Denmark Visitor Centre recorded its 100,000th visitor. 25th anniversary In December 2010 the 25th anniversary of the barometer was celebrated. Bolle had written a booklet titled "Weird and Wonderful Weather Predictors", of which 1,000 copies were printed and given to visitors in December as a present. The loss of the barometer Shortly after the celebration of the 25th anniversary of the Bert Bolle Barometer the Denmark Visitor Centre lost its world attraction. Differences of opinion with the Board of Denmark Tourism Incorporated and the management of the Denmark Visitor Centre about promoting and signposting the water barometer and the Barometer Tower at the Denmark Visitor Centre lay at the bottom of an ongoing conflict. Eventually Bolle and his wife asked Denmark Council for the barometer to be given back to them, which was unanimously approved on 21 December 2010. The Barometer Tower was dismantled mid February 2011. New location Negotiations with a possible future owner of the barometer are in an advanced phase, albeit the location will not be in Denmark anymore. The plan is to house the water barometer in a 6x6 m brick tower with a small meteorological museum attached. Copy The Otto von Guericke Museum in Magdeburg in Germany erected a copy of the water barometer in 1995, after Bolle had been asked for his expertise. The barometer was situated in the centre of a spiral staircase. No attempt was made to outdo Bolle’s record. A narrower pipe was used, made of polycarbonate and the instrument was named the ‘Bert Bolle Wasserbarometer’ after the Dutch record holder. Labour-intensive Maintenance is an ongoing concern for the barometer, with one of the major issues being water vapour, which constantly enters the pump and could easily emulsify with the pump oil. Special provisions are made to prevent this. Since water vapour is extracted continuously, the water level in the reservoir needs to be topped up every day. In addition, the abundance of sunshine in the barometer's environment, combined with the presence of algae in the rainwater that is employed, leads to a risk of algae growing within the water pipe. To address this, the owners have employed chlorine to kill the algae, but the evaporation of the chlorine had to be specifically catered for. Wear and tear also places considerable strain on the vacuum system's vulnerable pump, its 11 solenoid valves and its relays. Finally, the barometer needs to be exclusively filled with pure rainwater, as tap water contains many minerals which may be detrimental to the barometer. Because the water barometer has always been treated with great care, the instrument still looks almost new despite its age. A well-considered choice of durable materials like oak and borosilicate glass certainly play an important role in its continued longevity. 
Other attempts to copy the instrument have floundered due to the use of inferior material, a lack of constant supervision, pollution of the pipe system and other factors reducing durability. External links The 18th-century country house ‘Rustenhoven’, formerly the Barometer Museum (English) (Dutch) References Bolle, B. (1982) Barometers. Watford: Argus Books. Bolle, B. (1983) Barometers in Beeld. Lochem: Tijdstroom. Bolle, B. (2008) Il Barometro di Bert Bolle, estratto da Torricelliana, Bollettino della Società Torricelliana di Scienze e Lettere, Faenza, No. 59. Bolle, B. (2010) Weird and Wonderful Weather Predictors, private limited edition. Flammarion, C. (1888) L’Atmosphère - Météorologie Populaire. Paris: Librairie Hachette et Cie. Middleton, W.E. Knowles. (1964) The History of the Barometer. Baltimore: The Johns Hopkins Press. Pressure gauges Meteorological instrumentation and equipment Glass applications
Bert Bolle Barometer
Technology,Engineering
2,397
54,441,937
https://en.wikipedia.org/wiki/Langgan
Langgan () is the ancient Chinese name of a gemstone which remains an enigma in the history of mineralogy; it has been identified, variously, as blue-green malachite, blue coral, white coral, whitish chalcedony, red spinel, and red jade. It is also the name of a mythological langgan tree of immortality found in the western paradise of Kunlun Mountain, and the name of the classic waidan alchemical elixir of immortality langgan huadan 琅玕華丹 "Elixir Efflorescence of Langgan". Word The Chinese characters 琅 and 玕 used to write the gemstone name lánggān are classified as radical-phonetic characters that combine the semantically significant "jade radical" 玉 or 王 (commonly used to write names of jades or gemstones) and phonetic elements hinting at pronunciation. Láng 琅 combines the "jade radical" with liáng 良 "good; fine" (interpreted to denote "fine jade") and gān 玕 combines it with the phonetic gān 干 "stem; trunk". The Chinese word yù 玉 is usually translated as "jade" but in some contexts translates as "fine ornamental stone; gemstone; precious stone", and can refer to a variety of rocks that carve and polish well, including jadeite, nephrite, agalmatolite, bowenite, and serpentine. Modern written Chinese láng 琅 and gān 玕 have variant Chinese characters. Láng 琅 is occasionally transcribed as láng 瑯 (with láng 郞 "gentleman") or lán 瓓 (lán 闌 "railing"); and gān 玕 is rarely written as gān 玵 (with a gān 甘 "sweet" phonetic). Guwen "ancient script" variants were láng 𤨜 or 𤦴 and gān 𤥚. Berthold Laufer proposed that langgan was an onomatopoetic word "descriptive of the sound yielded by the sonorous stone when struck". Lang occurs in several imitative words meaning "tinkling of jade pendants/ornaments": lángláng 琅琅 "tinkling/jingling sound", língláng 玲琅 "tinkling/jangling of jade", línláng 琳琅 "beautiful jade; sound of jade", and lángdāng 琅璫 "tinkling sound". Laufer further suggests this etymology would explain the transference of the name langgan from a stone to a coral; Du Wan's 杜綰 Yunlin shipu 雲林石譜 "Stone Catalogue of the Cloudy Forest" (below) expressly states that the coral langgan "when struck develops resonant properties". Classical descriptions The name langgan has undergone remarkable semantic change. The first references to langgan are found in Chinese classics from the Warring States period (475-221 BCE) and Han dynasty (206 BCE-220 CE), which describe it as a valuable gemstone and mineral drug, as well as the mythological fruit of the langgan tree of immortality on Kunlun Mountain. Texts from the turbulent Six Dynasties period (220-589) and Sui dynasty (581-618) used langgan gemstone as a literary metaphor, and an ingredient in alchemical elixirs of immortality, many of which were poisonous. During the Tang dynasty (618-907), langgan was reinterpreted as a type of coral. Several early texts (including the Shujing, Guanzi, and Erya below) recorded langgan in context with the obscure gemstone(s) qiúlín 璆琳. In Classical Chinese syntax, 璆琳 can be parsed as two qiu and lin types of jade or as one qiulin type. A recent dictionary of Classical Chinese says qiú 璆 "fine jade, jade lithophone" is cognate with qiú 球 "precious gem, fine jade; jade chime or lithophone" (which later came to mean "ball; sphere"), and lín 琳 "blue-gem; sapphire". In what may be the earliest record, the c. 
5th-3rd centuries BCE Yu Gong "Tribute of Yu the Great" chapter of the Shujing "Classic of Documents" says the tributary products from Yong Province (located in the Wei River plain, one of the ancient Nine Provinces) included qiulin and langgan jade-like gemstones: "Its articles of tribute were the k'ew and lin gem-stones, and the lang-kan precious stones". Legge quotes Kong Anguo's commentary that langgan is "a stone, but like a pearl", and suggests it was possibly lazulite or lapis lazuli, which Laufer calls "purely conjectural". The c. 4th-3rd centuries BCE Guanzi encyclopedic text, named for and attributed to the 7th century BCE philosopher Guan Zhong, who served as Prime Minister to Duke Huan of Qi (r. 685-643 BCE), uses bi 璧 "a flat jade disc with a hole in the center", qiulin 璆琳 "lapis lazuli", and langgan 琅玕 as examples of how establishing diverse local commodities as fiat currencies will encourage foreign economic cooperation. When Duke Huan asks Guanzi about how to politically control the "Four Yi" (meaning "all foreigners" on China's borders), he replies: Since the Yuzhi [i.e., Yuezhi/Kushans in Central Asia] have not paid court, I request our use of white jade discs [白璧] as money. Since those in the Kunlun desert (modern-day Xinjiang and Tibet) have not paid court, I request our use of lapis lazuli and langgan gems as money. … Since a white jade held tight unseen against one's chest or under one's armpit will be used as a thousand pieces of gold, we can obtain the Yuezhi eight thousand li away and make them pay court. Since a lapis lazuli and langgan gem (fashioned in) a hair clasp and earring will be used as a thousand gold pieces, we can obtain [i.e., defeat] [the inhabitants] of the Kunlun deserts eight thousand li away and make them pay court. Therefore if resources are not commandeered, economies will not connect, those distant from each other will have nothing to use for their common interest and the four yi will not be obtained and come to court. Xun Kuang's 3rd century BCE Confucian classic Xunzi has a context criticizing elaborate burials that uses dan'gan 丹矸 (with dān 丹 "cinnabar" and gān 矸 "waste rock", with the "stone radical" and same gān 干 phonetic) and langgan 琅玕. In these ancient times, the body was covered with pearls and jades, the inner coffin was filled with beautifully ornamented embroideries, and he outer coffin was filled with yellow gold and decorated with cinnabar [丹矸] with added layers of laminar verdite. [In the outer tomb chamber were] rhinoceros and elephant ivory fashioned into trees, with precious rubies [琅玕], magnetite lodestones, and flowering aconite for their fruit." (18.7) John Knoblock translates langgan as "rubies", noting perhaps the genuine ruby or balas spinel, were connected with the cult of immortality, and cites the Shanhaijing saying they grow on Mount Kunlun's Fuchang trees, and the Zhen'gao saying that adepts swallow "ruby blossoms" to feign death and become transcendents. Early Chinese dictionaries define langgan. The c. 4th-3rd century BCE Erya geography section (9 Shidi 釋地) lists valuable products from the various regions of ancient China: "The beautiful things of the northwest are the qiulin [璆琳] and langgan gemstones from the wastelands [虛] of Kunlun Mountain". The 121 CE Shuowen jiezi (Jade Radical section 玉部) has two consecutive definitions for lang 琅 and gan 玕. 
Lang is [used in] langgan, which "resembles a pearl [似珠者]", Gan is [used in] langgan, paraphrasing the Yu Gong, "Yong Province [using the ancient yōng 雝 character for yōng 雍] [produces] qiulin and langgan [gems] [球琳琅玕]". Three sections about western Chinese mountains in the c. 4th-2nd centuries BCE Shanhaijing "Classic of Mountains and Seas" record early geographic legends associating langgan with Xi Wang Mu "Queen Mother of the West" who lives on Jade Mountain in the mythological axis mundi Kunlun Mountain paradise. Two mention langgan gems and one mentions langganshu 琅玕樹 trees. The Shanhaijing translator Anne Birrell exemplifies the difficulties of translating the word langgan in three ways: "pearl-like gems", "red jade", and "precious gem [tree]". First, the "Classic of the Mountains: West" section says Huaijiang 槐江 (lit. "pagoda-tree river") Mountain, located 400 li northeast of Kunlun Mountain, has abundant langgan and other valuable minerals. "On the summit of Mount Carobriver are quantities of green male-yellow 多青雄黃, precious pearl-like gems [藏琅玕], and yellow gold and jade. Granular cinnabar is abundant on its south face and there are quantities of speckled yellow gold and silver on its north face." (2) "Male-yellow" overliterally translates xiónghuáng 雄黃 "realgar; red orpiment"—Compare Richard Strassberg's translation, "On the mountain’s heights is much green realgar, the finest quality of Langgan-Stone, yellow gold, and jade. On its southern slope are many grains of cinnabar, while on its northern slope are much glittering yellow gold and silver.". Guo Pu's 4th century CE Shanhaijing commentary says langgan shi 石 "stone/gem" (cf. zi 子 "seeds" in the third section) resembles a pearl, and cáng 藏 "store; conceal, hide" means yǐn 隱 "conceal; hide". However, Hao Yixing's 郝懿行 1822 commentary says cáng 藏 was originally written zāng 臧 "good", that is, Huaijiang Mountain has the "best" quality langgan. Second, the "Classic of the Great Wilderness: West" section records that on [Xi] Wang Mu 王母 "Queen Mother [of the West]" Mountain: "Here are the sweet-bloom tree, sweet quince, white weeping willow, the look-flesh creature, the triply-grey horse, precious jade [琁瑰], dark green jade gemstone [瑤碧], the white tree, red jade [琅玕], white cinnabar, green cinnabar, and quantities of silver and iron." (16) Third, the "Classic of Regions Within the Seas: West" section refers to a mythical tricephalic creature dwelling in a fuchangshu 服常樹 (lit. "serve constant tree") who guards a langganshu tree south of Kunlun: "The wears-ever fruit tree—on its crown there is a three-headed person who is in charge of the precious gem tree [琅玕樹]." (11) Interpreters disagree whether the langgan tree grows alongside the fuchang tree or grows on it. Guo Pu's commentary admits unfamiliarity with the fuchang 服常 tree; Wu Renchen's 17th-century commentary notes the similarity with the shachang 沙棠 "sand-plum tree" that the Huainanzi lists with langgan, but doubts they are the same. Guo's commentary says langgan zi 子 "seeds". or "fruits" resemble pearls (cf. the Shuowen definition) and quotes the Erya that it is found on Kunlun Mountain. The c. 120 BCE Huainanzi "Terrestrial Forms" chapter (4 墬形) describes langgan trees and langgan jade both found on Mt. Kunlun. The first context describes how Yu the Great controlled the Great Flood and "excavated the wastelands of Kunlun [昆侖之球] to make level ground". "Atop the heights of Kunlun are treelike cereal plants [木禾] thirty-five feet tall. 
Growing to the west of these are pearl trees [珠樹], jade trees [玉樹], carnelian trees [琁樹], and no-death trees [不死樹]. To the east are found sand-plum trees [沙棠] and malachite trees [琅玕]. To the south are crimson trees [絳樹]. To the north are bi jade trees [碧樹] and yao jade trees [瑤樹]." (4.3), translating with Schafer's "malachite" instead of "coral"). The second context paraphrases the Erya definition (above) of langgan: "The beautiful things of the northwest are the qiu, lin, and langgan jades [球琳琅玕] of the Kunlun Mountains [昆侖]" (4.7), noting that qiu, lin, and langgan are "types of jade, mostly not identifiable with certainty". Medicine Several early classics of traditional Chinese medicine mention langgan. The c. 1st century BCE Huangdi Neijings Suwen 素問 "Basic Questions" section uses langgan beads to describe a healthy pulse. "When man is serene and healthy the pulse of the heart flows and connects, just as pearls are joined together or like a string of red jade [如循琅玕]—then one can speak of a healthy heart". The c. 2nd century CE Nan Jing explains this langgan bead simile: "[If the qi in] the vessels comes tied together like rings, or as if they were following [in their movement a chain of] lang gan stones [如循琅玕], that implies a normal state." Commentaries elaborate that langgan stones "resemble pearls" and their movement is like a "string of jade- or pearl-like beads". The c. 3rd century CE Shennong Bencaojing lists qīng lánggān 青琅玕 "blue-green langgan" or shízhū 石珠 (lit. "rock pearl") as a mineral drug used to treat ailments such as itchy skin, carbuncle, and ALS. This is one of the rare early references to langgan that treats it as a real substance, while many others make it a feature of the divine world. Alchemy The langgan huadan 琅玕華丹 "Elixir Efflorescence of Langgan" name of the waidan "external alchemy" elixir of immortality is the best-known usage of the word langgan. Some other translations are "Elixir of Langgan Efflorescence", "Lang-Kan (Gem) Radiant Elixir", and "Elixir Flower of Langgan". The earliest method of compounding the elixir is found in the Taiwei lingshu ziwen langgan huadan shenzhen shangjing 太微靈書紫文琅玕華丹神真上經 "Supreme Scripture on the Elixir of Langgan Efflorescence, from the Purple Texts Inscribed by the Spirits of Grand Tenuity". This text was originally part of the Daoist Shangqing School scriptural corpus supposedly revealed to Yang Xi (330-c. 386 CE) between 364 and 370. The Purple Texts alchemical recipe for preparing Elixir of Langgan Efflorescence involves nine steps in four stages carried out over thirteen years. The first stage produces the Langgan Efflorescence proper, which when ingested is said to make "one's complexion similar to gold and jade and enables one to summon divine beings". The next three stages further refine and transform the Langgan Elixir, repeatedly plant it in the earth, and eventually generate a tree whose fruits confer immortality when eaten, just like those of the legendary langgan tree on Mount Kunlun. Upon completing any of the nine successive steps in producing the elixir, the alchemist (or adept in the neidan interpretation) can choose to either ingest the products and obtain immortality by ascending into the realm of Shangqing heavens or may continue on to the next step with the promise of ever-increasing rewards. The first stage has one complex waidan step of compounding the primary Langgan Efflorescence. 
After performing ritual zhāi 齋 "purification practices" for 40 days, the adept spends 60 days to acquire and prepared the elixir's fourteen ingredients, place them in a crucible, add mercury on top of them, lute the crucible with several layers of mud, and after sacrificing wine to the divinities, heating the crucible for 100 days. The elixir's fourteen reagents, given in exalted code names such as "White-Silk Flying Dragon" for quartz, are: cinnabar, realgar, milky quartz, azurite, amethyst, graphite, saltpeter, sulfur, asbestos, mica, iron pyrite, lead carbonate, Turkestan salt (desert lake precipitates containing gypsum, anhydrite, and halite), and orpiment. Based upon these ingredients, Schafer says the end product was probably bluish flint glass with a high lead content. The alchemist can either leave the crucible closed and proceed to the next stage or break it open and consume the langan elixir that is said to yield marvelous results. The efflorescence should have thirty-seven hues. It is a volatile liquid both brilliant and mottled, a purple aurora darkly flashing. This is called the Elixir of Langgan Efflorescence. If, just at dawn on the first day of the eleventh, fourth, or eighth month, you bow repeatedly and ingest one ounce of this elixir with the water from an east-flowing stream, seven-colored pneumas will rise from your head and your face will have the jadelike glow of metallic efflorescence. If you hold your breath, immediately a chariot from the eight shrouded extents of the universe will arrive. When you spit on the ground, your saliva will transform into a flying dragon. When you whistle to your left, divine Transcendents will pay court to you; when you point to the right, the vapors of Three Elementals will join with the wind. Then, in thousands of conveyances, with myriad outriders, you will fly up to Upper Clarity. The second stage comprises two iterative 100-day waidan alchemical steps transforming the elixir. Firing the unopened stage one crucible of Langgan Efflorescence for another 100 days will produce the Lunar Efflorescence of the Yellow Solution [黄水月華], which when consumed will make you "change forms ten thousand times, your eyes will become luminous moons, and you will float above in the Grand Void to fly off to the Palace of Purple Tenuity". The next step of firing the closed crucible for an additional one 100 days will produce three giant pearls called the Jade Essence of the Swirling Solution [徊水玉精]. Ingesting one alchemical pearl supposedly causes you to immediately give off liquid and fire, form gems with your breath, and your body "will become a sun, and the Thearchs of Heaven will descend to greet you. You will rise as a glowing orb to Upper Clarity." The third stage involves four 3-year steps utilizing the elixirs produced in the first two stages to create fantastic seeds that are replanted and grow into increasingly perfected "spirit trees" with fruits of immortality. This stage falls between conventional waidan alchemy and the horticultural art of growing marvelous zhi 芝 "plants of longevity; fungi" such as the lingzhi mushroom. Initially, the adept mixes the Elixir of Langgan Efflorescence with Jade Essence of the Swirling Solution, transforming the jīng 精 "essence; sperm; seed" in the latter name into an actual seed that is planted in an irrigated field. After three years it grows into the Tree of Ringed Adamant [環剛樹子] or Hidden Polypore of the Grand Bourne [太極隱芝], which has a ring-shaped fruit like a red jujube. 
Next, the adept plants one of the ringed fruits and waters it with the Yellow Solution, and after three years a plant called the Phoenix-Brain Polypore [fengnao zhi 鳳腦芝] will grow like a calabash, with pits like five-colored peaches. Then, a phoenix-brain fruit is planted and watered with Yellow Solution, which after three years will grow into a red tree, like a pine, five or six feet in height, with a jade-white fruit like a pear [赤樹白子]. Lastly, the adept plants the seed of the red tree, waters it with Swirling Solution, and waits another three years for the growth of a vermilion tree like a plum, six or seven feet in height, with a halcyon-blue fruit like the jujube [絳樹青實]. Upon eating this fruit, the adept will ascend to the heaven of Purple Tenuity. The fourth stage involves two comparatively quicker waidan steps. The adept repeatedly boils equal parts of the Yellow Solution and the Swirling Solution, and transforms them into the Blue Florets of Aqueous Yang [水陽青映]. If you drink this at dawn, your body will issue a blue and gemmy light, your mouth will spew forth purple vapors, and you will rise above to Upper Clarity [Shangqing]. But before departing earth, the adept's last step is to mix the remaining Elixir of Langgan Efflorescence with liquefied lead and mercury to produce 50-pound ingots of alchemical silver and purple gold, make incantations to the water spirits, and throw both oblatory ingots into a stream. Despite the Purple Texts' carefully detailed waidan recipe for preparing langgan elixirs, scholars have doubted that the authors actually meant for it to be produced and consumed. Some interpret the impractical 13-year elixir recipe as symbolic instructions for what later came to be known as neidan meditative visualization, seeing it more as a "product of religious imagination", drawing on the respected metaphors of alchemical language, than as a laboratory manual drawing on the metaphors of meditation. Others believe this "extravagantly impractical recipe" is an attempt to assimilate into conventional waidan alchemy the ancient legends about langgan gems that grow on trees in the paradise of Kunlun. The Shangqing Daoist patriarch Tao Hongjing compiled and edited both the c. 370 Taiwei lingshu ziwen langgan huadan shenzhen shangjing and the c. 499 Zhen'gao 真誥 "Declarations of the Perfected", which also mentions langgan elixirs in some of the same terminology. One context records that the early Daoist masters Yan Menzi 衍門子, Gao Qiuzi 高丘子, and Master Hongyai 洪涯先生 swallowed langgan hua 琅玕華 "langgan blossoms" to feign death and become xian transcendents and enter the "dark region" beyond the world. Needham and Lu proposed that this langgan hua probably refers to a red or green poisonous mushroom, and Knoblock surmised that these "ruby blossoms" were a species of hallucinogenic mushroom connected with the elixir of immortality. Another Zhen'gao context describes how in the Shangqing latter days before the apocalypse (predicted to be in 507) people will practice alchemy to create immortality drugs, including the Langgan Elixir that "will flow and flower in thick billows" and Cloud Langgan. If the adept takes one spatula full of elixir, "their spiritual feathers will spread forth like pinions. Then will they (be able to) peruse the pattern figured on the Vault of Space, and glow forth in the Chamber of Primal Commencement". Several ingredients in the Elixir of Langgan Efflorescence are toxic heavy metals including mercury, lead, and arsenic, and alchemical elixir poisoning was common knowledge in China. 
Academics have puzzled over why Daoist adepts would knowingly consume a compound of mineral poisons, and Michel Strickmann, a scholar of Daoist and Buddhist studies, proposes that langgan elixir was believed to be an agent of self-liberation that guaranteed immortality to the faithful through a kind of ritual suicide. Since early Daoist literature thoroughly, "even rapturously", described the deadly toxic qualities of many elixirs, Strickmann concluded that scholars need to reexamine the Western stereotype of "accidental elixir poisoning" that supposedly applied to "misguided alchemists and their unwitting imperial patrons". Literature Chinese authors extended the classical descriptions of langgan meaning "a highly valued gem from western China; a mythical tree of immortality on Kunlun Mountain" into a literary and poetic metaphor for the exotic beauties of an idealized natural world. Several early writers described langgan jewelry, both real and fictional. The 2nd-century scholar and scientist Zhang Heng described a party for the Han nobility at which guests were delighted with the presentation of bowls overflowing with zhēnxiū 珍羞 "delicacies; exotic foods" including langgan fruits of paradise. The 3rd-century poet Cao Zhi described hanging "halcyon blue" (cuì 翠) langgan from the waist of his "beautiful person", and the 5th-century poet Jiang Yan adorned a goddess with gems of langgan. Some other authors reinforced the use of its name to refer to divine fruits on heavenly trees. Ruan Ji, one of the Seven Sages of the Bamboo Grove, wrote a 3rd-century poem titled "Dining at Sunrise on Langgan Fruit". The 8th-century poet Li Bai wrote about a famished but proud fenghuang that would not deign to peck at bird food, but like a Daoist adept, would scorn all but a diet of langgan. This represents a literary transition from glittering fruit of distant Kunlun, to aristocratic fare in golden bowls, eventually to an elixir of immortality. A further extension of the langgan metaphor was to describe natural images of beautiful crystals and lush vegetation. For example, Ban Zhao's poem on "The Arrival of Winter" says, "The long [Yellow River] forms (crystalline) langgan [written langan 瓓玕] / Layered ice is like banked-up jade". Two of Du Fu's poems figuratively used the word langgan in reference to the vegetation around the forest home of a Daoist recluse, and to the splendid grass that provided seating for guests at a royal picnic near a mysterious grotto. Bamboo was the most typical representative of blue-green langgan in the plant world; compare láng 筤 ("bamboo radical" and the liáng phonetic in láng 琅) "young bamboo; blue". Liu Yuxi wrote that the famous spotted bamboo of South China was "langgan colored". Geographic sources Chinese texts list many diverse locations from where langgan occurred. Several classical works associate mythical langgan trees with Kunlun Mountain (far west or northwest China), and two give sources of actual langgan gemstones: the Shujing says it was tribute from Yong Province (present-day Gansu and Shaanxi) and the Guanzi says it came from the Kunlun desert (Xinjiang and Tibet). Official Chinese histories record langgan coming from different sources. The 3rd-century Weilüe, 5th-century Hou Hanshu, 6th-century Wei shu, and 7th-century Liang shu list langgan among the products of Daqin, which depending on context meant the Near East or the Eastern Roman Empire, especially Syria. 
The Liang shu also says it was found in Kucha (modern Aksu Prefecture, Xinjiang), the 7th-century Jinshu says in Shaanxi, and the 10th-century Tangshu says in India. The Jiangnan Bielu history of the Southern Tang (937–976) says langgan was mined at Pingze 平澤 in Shu (Sichuan Province). The Daoist scholar and alchemist Tao Hongjing (456-536) notes langgan gemstone was traditionally associated with Sichuan. The Tang pharmacologist Su Jing 蘇敬 (d. 674) reports that it came from the distant Man tribes of the Yunnan–Guizhou Plateau and Hotan/Khotan. Accurately identifying geographic sources may be complicated by langgan referring to more than one mineral, as discussed next. Identifications The precise referent of the Chinese name langgan 琅玕 is uncertain in the present day. Scholars have described it as an "enigmatic archaism of politely pleasant or poetic usage", and "one of the most elusive terms in Chinese mineralogy". Identifications of langgan comprise at least three categories: Blue-green langgan was first recorded circa 4th century BCE, Coral langgan from the 8th century, and Red langgan is from an uncertain date. Edward H. Schafer, an eminent scholar of Tang dynasty literature and history, discussed langgan in several books and articles. His proposed identifications gradually changed from Mediterranean red coral, to coral or a glass-like gem, to chrysoprase or demantoid, to coral or red spinel, and ultimately to malachite. Blue-green langgan Langgan was a qīng 青 "green; blue; greenish black" (see Blue–green distinction in language) gemstone of lustrous appearance mentioned in numerous classical texts. They listed it among historical imperial tribute products presented from the far western regions of China, and as the mineral-fruit of the legendary langgan trees of immortality on Mount Kunlun. Schafer's 1978 monograph on langgan sought to identify the treasured blue-green gemstone, if it ever had a unique identity, and concluded the most plausible identification is malachite, a bright green mineral that was anciently used as a copper ore and an ornamental stone. Two early Chinese mineralogical authorities identified langgan as malachite, commonly called kǒngquèshí 孔雀石 (lit. "peacock stone") or shílǜ 石綠 (lit. "stone green"). Comparing blue-green stones that were known in early East Asia, Schafer disqualified several conceivable identities; demantoid garnet and green tourmaline are rarely of gem quality, while neither apple-green chrysoprase nor light greenish-blue turquoise typically have dark hues. This leaves malachite, This handsome green carbonate of copper has important credentials. It is often found in copper mines, and is therefore regularly at the disposal of copper- and bronze-producing peoples. It has, in certain varieties, a lovely silky luster, caused by its fibrous structure. It is soft and easily cut. It takes a good polish. It was commonly made into beads both in the western and eastern worlds. Above all, even uncut malachite often has a nodular or botryoidal structure, like little clumps of bright green beads, one of the classical forms attributed to lang-kan. Sometimes, too, it is stalactitic, like little stone trees. Furthermore, archeology confirms that malachite was an important gemstone of pre-Han China. Inlays of malachite and turquoise decorated many early Chinese bronze weapons and ritual vessels. Tang sources continued to record blue-green langgan. 
Su Jing's 652 Xinxiu bencao 新修本草 said it was a glassy substance similar to liúli 琉璃 "colored glaze; glass; glossy gem" that was imported from the Man tribes in the Southwest and from Khotan. In 762, Emperor Daizong of Tang proclaimed a new era name of Baoying 寶應 "Treasure Response" in honor of the discovery of thirteen auspicious treasures in Jiangsu, one of which was glassy langgan beads. Coral langgan Tang dynasty herbalists and pharmacists changed the denotation of langgan from the traditional blue-green gemstone to a kind of coral. Chen Cangqi's c. 720 Bencao shiyi 本草拾遺 "Collected Addenda to the Pharmacopoeia" described it as a pale red coral, growing like a branched tree on the bottom of the sea, fished by means of nets, which after coming out of the water gradually darkens and turns blue. Langgan already had an established connection with coral. Chinese mythology matches two antipodean paradises of Mount Kunlun in the far west and Mount Penglai located on an island in the far eastern Bohai Sea. Both mountains had mythic plants and trees of immortality that attracted Daoist xian transcendents; Kunlun's red langgan trees with blue-green fruits were paralleled by Penglai's shanhu shu 珊瑚樹 "red coral trees". As to which variety of blue or green branching coral was identified as this "mineralized subaqueous shrub" langgan, Schafer suggests considering the blue coral Heliopora coerulea, since it must have been a coral attractive enough to be comparable with the extravagant myths of Kunlun. It is the only living species in the family Helioporidae, the only octocoral known to produce a massive skeleton, and is found throughout the Pacific and Indian Oceans, although the IUCN currently considers it a vulnerable species. Du Wan's c. 1124 Yunlin shipu mineralogy book has a section (100) on langgan shi 琅玕石 that mentions shanhu "coral". A coral-like stone found in shallow water along the coast of Ningbo, Zhejiang. Some specimens are two or three feet high. They must be pulled up by ropes let down from rafts. Though white when first taken from the water, they turn a dull purple after a while. They are patterned everywhere with circles, like ginger branches, and are rather brittle. Though the natives hold … Li Shizhen's 1578 Bencao Gangmu classic pharmacopeia objects to applying the term langgan to these marine invertebrates, which should properly be called shanhu, while langgan should only be applied to the stone occurring in the mountains. Li's commentary suggests that the terminological confusion arose from the Shuowen jiezi definition of shanhu 珊瑚: 色赤生於海或生於山 "coral is red colored and grows in the ocean or in the mountains". This puzzling description of mountain corals was more likely a textual misunderstanding than a reference to coral fossils. Red langgan The most recent, and least historically documented, identification of langgan is a red gemstone. The Chinese geologist Chang Hung-Chao (Zhang Hongzhao) propagated this explanation when his book about geological terms in Chinese literature identified langgan as malachite, and noted an alternative construal of reddish spinel or balas ruby from the famous mines at Badakhshan. Some authors have cited Chang's balas ruby identification of langgan; others have used, or even confused, it with ruby in translations (e.g., "precious rubies"). However, Schafer demonstrates that Chang's "supposed" textual evidence for red langgan is tenuous and suggests that Guo Pu's Shanhai jing commentary created this mineralogical confusion. 
Guo glosses the langgan tree as red, but it is unclear whether this refers to the tree itself or its gem-like fruit. Compare Birrell's and Bokenkamp's Shanhai jing translations of "red jade" and "green kernels from scarlet gem trees". Chang misquotes dan'gan 丹矸 "cinnabar rock" from the Xunzi as dan'gan 丹玕 "cinnabar gan", and cites one textual occurrence of the term. The Shangqing Daoist Dadong zhenjing 大洞真經 Authentic Scripture of the Great Cavern records a heavenly palace named Dan'gan dian 丹玕殿 Basilica of the Cinnabar Gan. Admitting the possibility of interpreting gan 玕 as a monosyllabic truncation for langgan 琅玕, comparable with reading hongpo 红珀 for honghupo 红琥珀 "red amber", Schafer concludes there is insufficient dan'gan evidence for an explicit red variety of langgan. The lyrical term langgan occurs 87 times in the huge Complete Tang Poems collection of Tang poetry, with only two hong langgan 紅琅玕 "red langgan" usages by the Buddhist monk-poets Guanxiu (831-912) and Ji Qi 齊己 (863-937). Both poems use langgan to describe "red coral"; the latter (贈念法華經) uses shanhu in the same line: 珊瑚捶打紅琅玕 "coral beating on red langgan" in cold waters. Dictionary translations Chinese-English dictionaries illustrate the multifaceted difficulties of identifying langgan. Most of these bilingual Chinese dictionaries cross-reference lang and gan to langgan, but a few translate lang and gan independently. In terms of Chinese word morphology, láng 琅 is a free morpheme that can appear alone (for instance, a surname) or in other compound words (such as fàláng 琺琅 "enamel" and Lángyá shān 琅琊山 "Mount Langya (Anhui)") while gān 玕 is a bound morpheme that only occurs in the compound lánggān and does not have independent meaning. The origin of Giles' lang translation "a kind of white carnelian" is unknown, unless it derives from Williams' "a whitish stone". It was copied in Mathews' and various other Chinese dictionaries up to the online standard Unihan Database "a variety of white carnelian; pure". "White carnelian" is a marketing name for "white or whitish chalcedony of faint carnelian color". Carnelian is usually reddish-brown while common chalcedony colors are white, grey, brown, and blue. References Footnotes External links Taiwei lingshu ziwen langgan huadan shenzhen shangjing 太微靈書紫文琅玕華丹神真上經, 1445 Ming Dynasty edition Zhengtong daozang 正統道藏 Alchemical substances Chinese alchemy Chinese mythology Gemstones Mythological objects
Langgan
Physics,Chemistry
8,102
63,554,517
https://en.wikipedia.org/wiki/Generic%20Network%20Virtualization%20Encapsulation
Generic Network Virtualization Encapsulation (Geneve) is a network encapsulation protocol created by the IETF to unify the efforts of earlier initiatives such as VXLAN and NVGRE and to curb the proliferation of encapsulation protocols. Open vSwitch is an example of a software-based virtual network switch that supports Geneve overlay networks. It is also supported by AWS Gateway Load Balancers. References Telecommunications engineering Network architecture Telecommunications infrastructure
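The base Geneve header layout is specified in RFC 8926 (UDP destination port 6081, with protocol type 0x6558 when the payload is an Ethernet frame). The following Python sketch packs that fixed 8-byte header; the function name, the example VNI, and the omission of variable-length options are illustrative choices made here, not taken from any particular implementation.

```python
import struct

GENEVE_UDP_PORT = 6081       # well-known destination port for Geneve (RFC 8926)
ETH_PROTOCOL_TYPE = 0x6558   # "Trans Ether Bridging", used for encapsulated Ethernet frames

def pack_geneve_base_header(vni: int, opt_len_words: int = 0,
                            oam: bool = False, critical: bool = False,
                            protocol: int = ETH_PROTOCOL_TYPE) -> bytes:
    """Build the fixed 8-byte Geneve header (tunnel options not included).

    Layout per RFC 8926:
      byte 0: Ver (2 bits, currently 0) | Opt Len (6 bits, in 4-byte words)
      byte 1: O flag | C flag | 6 reserved bits
      bytes 2-3: Protocol Type of the payload
      bytes 4-6: 24-bit Virtual Network Identifier (VNI)
      byte 7: reserved
    """
    if not 0 <= vni < (1 << 24):
        raise ValueError("VNI must fit in 24 bits")
    ver_optlen = (0 << 6) | (opt_len_words & 0x3F)
    flags = (0x80 if oam else 0) | (0x40 if critical else 0)
    vni_field = vni << 8                 # VNI occupies the top 24 bits of the final 32-bit word
    return struct.pack("!BBHI", ver_optlen, flags, protocol, vni_field)

# Example: header for a hypothetical VNI 5001 carrying an Ethernet payload
header = pack_geneve_base_header(vni=5001)
assert len(header) == 8
```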
Generic Network Virtualization Encapsulation
Technology,Engineering
104
58,162,871
https://en.wikipedia.org/wiki/Liquid%20bleach
Liquid bleach, often called just bleach, is a common chemical household product that consists of a dilute solution of sodium hypochlorite (NaClO) and other secondary ingredients. It is a chlorine-releasing bleaching agent widely used to whiten clothes and remove stains, as a disinfectant to kill germs, and for several other uses. While the term has had this meaning for a long time, it may now be applied more generically to any liquid bleaching agent for laundry, irrespective of composition, such as peroxide-based bleaches. History Potassium hypochlorite (KClO) was synthesized by French scientist Berthollet in 1789, by reacting chlorine gas (Cl2) with a solution of potassium hydroxide (potash, KOH). He also discovered its cloth-bleaching properties, and set out to commercialize it under the name of Eau de Javel ("water of Javel") after the borough of Paris where it was manufactured. It was the first product intended specifically for that application, and it shortened the process of bleaching newly made cloth from months to hours. Scottish chemist and industrialist Charles Tennant proposed in 1798 a solution of calcium hypochlorite as an alternative for Javel water, and patented bleaching powder (solid calcium hypochlorite, Ca(ClO)2) in 1799. Around 1820, Antoine Labarraque substituted the much cheaper precursor sodium hydroxide (soda lye, NaOH) for potash, thus producing Eau de Labarraque, basically the same "liquid bleach" (NaClO) still in use today. He also discovered its disinfectant properties, and was instrumental in spreading it worldwide for that purpose. His work greatly improved medical practice, public health, the sanitary conditions in hospitals, slaughterhouses, and all industries dealing with animal products—decades before Pasteur and others established the germ theory of disease. In particular, it led to the nearly universal practice of chlorination of tap water to prevent the spread of diseases like typhoid fever and cholera. Composition The active agent in liquid bleach is sodium hypochlorite, which gives the product a light greenish yellow tinge and its characteristic chlorine smell. Formulations for household use usually contain 8% or less of sodium hypochlorite by weight, although more concentrated solutions of up to 50% are available for industrial use. Concentrated solutions present serious safety risks. Solid anhydrous sodium hypochlorite is unstable and decomposes explosively. A non-explosive hydrated solid is available for laboratory use, but must be kept refrigerated to avoid decomposition. Liquid bleach usually also contains some sodium hydroxide (caustic soda or soda lye, NaOH), intended to keep the solution alkaline. Sodium chloride (table salt, NaCl) is often present too, and plays no role in the product's action. Sodium chloride and hydroxide are normal residues from the main production processes. References Cleaning products
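For reference, the preparations described above correspond to the standard textbook disproportionation of chlorine in cold, dilute alkali; these equations are supplied here for clarity and are not quoted from the sources cited in the article.

```latex
% Chlorine disproportionation in cold, dilute alkali
\begin{align*}
\mathrm{Cl_2 + 2\,KOH} &\rightarrow \mathrm{KClO + KCl + H_2O} \quad \text{(Berthollet's Eau de Javel)}\\
\mathrm{Cl_2 + 2\,NaOH} &\rightarrow \mathrm{NaClO + NaCl + H_2O} \quad \text{(Labarraque's liquid bleach)}
\end{align*}
```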
Liquid bleach
Chemistry
635
2,892,513
https://en.wikipedia.org/wiki/Nonimaging%20optics
Nonimaging optics (also called anidolic optics) is a branch of optics that is concerned with the optimal transfer of light radiation between a source and a target. Unlike traditional imaging optics, the techniques involved do not attempt to form an image of the source; instead an optimized optical system for optimal radiative transfer from a source to a target is desired. Applications The two design problems that nonimaging optics solves better than imaging optics are: solar energy concentration: maximizing the amount of energy applied to a receiver, typically a solar cell or a thermal receiver illumination: controlling the distribution of light, typically so it is "evenly" spread over some areas and completely blocked from other areas Typical variables to be optimized at the target include the total radiant flux, the angular distribution of optical radiation, and the spatial distribution of optical radiation. These variables on the target side of the optical system often must be optimized while simultaneously considering the collection efficiency of the optical system at the source. Solar energy concentration For a given concentration, nonimaging optics provide the widest possible acceptance angles and, therefore, are the most appropriate for use in solar concentration as, for example, in concentrated photovoltaics. When compared to "traditional" imaging optics (such as parabolic reflectors or fresnel lenses), the main advantages of nonimaging optics for concentrating solar energy are: wider acceptance angles resulting in higher tolerances (and therefore higher efficiencies) for: less precise tracking imperfectly manufactured optics imperfectly assembled components movements of the system due to wind finite stiffness of the supporting structure deformation due to aging capture of circumsolar radiation other imperfections in the system higher solar concentrations smaller solar cells (in concentrated photovoltaics) higher temperatures (in concentrated solar thermal) lower thermal losses (in concentrated solar thermal) widen the applications of concentrated solar power, for example to solar lasers possibility of a uniform illumination of the receiver improve reliability and efficiency of the solar cells (in concentrated photovoltaics) improve heat transfer (in concentrated solar thermal) design flexibility: different kinds of optics with different geometries can be tailored for different applications Also, for low concentrations, the very wide acceptance angles of nonimaging optics can avoid solar tracking altogether or limit it to a few positions a year. The main disadvantage of nonimaging optics when compared to parabolic reflectors or Fresnel lenses is that, for high concentrations, they typically have one more optical surface, slightly decreasing efficiency. That, however, is only noticeable when the optics are aiming perfectly towards the Sun, which is typically not the case because of imperfections in practical systems. Illumination optics Examples of nonimaging optical devices include optical light guides, nonimaging reflectors, nonimaging lenses or a combination of these devices. Common applications of nonimaging optics include many areas of illumination engineering (lighting). Examples of modern implementations of nonimaging optical designs include automotive headlamps, LCD backlights, illuminated instrument panel displays, fiber optic illumination devices, LED lights, projection display systems and luminaires. 
When compared to "traditional" design techniques, nonimaging optics has the following advantages for illumination: better handling of extended sources more compact optics color mixing capabilities combination of light sources and light distribution to different places well suited to be used with increasingly popular LED light sources tolerance to variations in the relative position of light source and optic Examples of nonimaging illumination optics using solar energy are anidolic lighting or solar pipes. Other applications Modern portable and wearable optical devices, and systems of small sizes and low weights may require nanotechnology. This issue may be addressed by nonimaging metaoptics, which uses metalenses and metamirrors to deal with the optimal transfer of light energy. Collecting radiation emitted by high-energy particle collisions using the fewest photomultiplier tubes. Collecting luminescent radiation in photon upconversion devices with the compound parabolic concentrator being to-date the most promising geometrical optics collector. Some of the design methods for nonimaging optics are also finding application in imaging devices, for example some with ultra-high numerical aperture. Theory Early academic research in nonimaging optical mathematics seeking closed form solutions was first published in textbook form in a 1978 book. A modern textbook illustrating the depth and breadth of research and engineering in this area was published in 2004. A thorough introduction to this field was published in 2008. Special applications of nonimaging optics such as Fresnel lenses for solar concentration or solar concentration in general have also been published, although this last reference by O'Gallagher describes mostly the work developed some decades ago. Other publications include book chapters. Imaging optics can concentrate sunlight to, at most, the same flux found at the surface of the Sun. Nonimaging optics have been demonstrated to concentrate sunlight to 84,000 times the ambient intensity of sunlight, exceeding the flux found at the surface of the Sun, and approaching the theoretical (2nd law of thermodynamics) limit of heating objects to the temperature of the Sun's surface. The simplest way to design nonimaging optics is called "the method of strings", based on the edge ray principle. Other more advanced methods were developed starting in the early 1990s that can better handle extended light sources than the edge-ray method. These were developed primarily to solve the design problems related to solid state automobile headlamps and complex illumination systems. One of these advanced design methods is the simultaneous multiple surface design method (SMS). The 2D SMS design method () is described in detail in the aforementioned textbooks. The 3D SMS design method () was developed in 2003 by a team of optical scientists at Light Prescriptions Innovators. Edge ray principle In simple terms, the edge ray principle states that if the light rays coming from the edges of the source are redirected towards the edges of the receiver, this will ensure that all light rays coming from the inner points in the source will end up on the receiver. There is no condition on image formation, the only goal is to transfer the light from the source to the target. Figure Edge ray principle on the right illustrates this principle. A lens collects light from a source S1S2 and redirects it towards a receiver R1R2. 
The lens has two optical surfaces and, therefore, it is possible to design it (using the SMS design method) so that the light rays coming from the edge S1 of the source are redirected towards edge R1 of the receiver, as indicated by the blue rays. By symmetry, the rays coming from edge S2 of the source are redirected towards edge R2 of the receiver, as indicated by the red rays. The rays coming from an inner point S in the source are redirected towards the target, but they are not concentrated onto a point and, therefore, no image is formed. Actually, if we consider a point P on the top surface of the lens, a ray coming from S1 through P will be redirected towards R1. Also a ray coming from S2 through P will be redirected towards R2. A ray coming through P from an inner point S in the source will be redirected towards an inner point of the receiver. This lens then guarantees that all light from the source crossing it will be redirected towards the receiver. However, no image of the source is formed on the target. Imposing the condition of image formation on the receiver would imply using more optical surfaces, making the optic more complicated, but would not improve light transfer between source and target (since all light is already transferred). For that reason nonimaging optics are simpler and more efficient than imaging optics in transferring radiation from a source to a target. Design methods Nonimaging optics devices are obtained using different methods. The most important are: the flow-line or Winston-Welford design method, the SMS or Miñano-Benitez design method, and the Miñano design method using Poisson brackets. The first (flow-line) is probably the most used, although the second (SMS) has proven very versatile, resulting in a wide variety of optics. The third has remained in the realm of theoretical optics and has not found real-world application to date. Often optimization is also used. Typically optics have refractive and reflective surfaces and light travels through media of different refractive indices as it crosses the optic. In those cases a quantity called optical path length (OPL) may be defined as OPL = Σi ni di, where index i indicates the different ray sections between successive deflections (refractions or reflections), ni is the refractive index and di the distance traveled in each section i of the ray path. The OPL is constant between wavefronts. This can be seen for refraction in the figure "constant OPL" to the right. It shows a separation c(τ) between two media of refractive indices n1 and n2, where c(τ) is described by a parametric equation with parameter τ. Also shown are a set of rays perpendicular to wavefront w1 and traveling in the medium of refractive index n1. These rays refract at c(τ) into the medium of refractive index n2 in directions perpendicular to wavefront w2. Ray rA crosses c at point c(τA) and, therefore, ray rA is identified by parameter τA on c. Likewise, ray rB is identified by parameter τB on c. The optical path length of each ray is the distance traveled from w1 to c multiplied by n1, plus the distance traveled from c to w2 multiplied by n2. The difference in optical path length between rays rA and rB can therefore be written as an integral over the parameter τ along c between c(τA) and c(τB), whose integrand is the dot product of the tangent vector dc/dτ with n1 t1 − n2 t2, where t1 and t2 are the unit vectors of the incident and refracted rays at each point of c. By the law of refraction, n1 t1 − n2 t2 is parallel to the normal of c at each point, so this dot product vanishes and the difference in optical path length is zero. Since these may be arbitrary rays crossing c, it may be concluded that the optical path length between w1 and w2 is the same for all rays perpendicular to incoming wavefront w1 and outgoing wavefront w2. 
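A minimal numerical sketch of the two ingredients of this derivation follows; the helper names and the example values are chosen here for illustration and are not taken from the references above. The first function evaluates OPL = Σi ni di for a ray broken into straight sections, and the second applies the vector form of the law of refraction, which the sanity check verifies against n1 sin θ1 = n2 sin θ2.

```python
import numpy as np

def optical_path_length(points, indices):
    """OPL = sum_i n_i * d_i for a ray broken into straight sections.

    points  : successive deflection points of the ray (including its endpoints)
    indices : refractive index of the medium traversed in each section
    """
    points = np.asarray(points, dtype=float)
    lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
    return float(np.dot(indices, lengths))

def refract(d, normal, n1, n2):
    """Vector form of the law of refraction (Snell's law).

    d      : unit direction of the incident ray
    normal : unit surface normal, oriented against the incident ray (d . normal < 0)
    Returns the unit direction of the refracted ray, or None for total internal reflection.
    """
    d, normal = np.asarray(d, float), np.asarray(normal, float)
    mu = n1 / n2
    cos_i = -np.dot(d, normal)
    sin2_t = mu**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    return mu * d + (mu * cos_i - np.sqrt(1.0 - sin2_t)) * normal

# Sanity check: the refracted ray obeys n1*sin(theta1) = n2*sin(theta2)
n1, n2 = 1.0, 1.5
d_in = np.array([np.sin(np.radians(30)), -np.cos(np.radians(30))])   # 30 degrees to the normal
t = refract(d_in, np.array([0.0, 1.0]), n1, n2)
assert abs(n1 * np.sin(np.radians(30)) - n2 * abs(t[0])) < 1e-12
```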
Similar conclusions may be drawn for the case of reflection, only in this case . This relationship between rays and wavefronts is valid in general. Flow-line design method The flow-line (or Winston-Welford) design method typically leads to optics which guide the light confining it between two reflective surfaces. The best known of these devices is the CPC (Compound Parabolic Concentrator). These types of optics may be obtained, for example, by applying the edge ray of nonimaging optics to the design of mirrored optics, as shown in figure "CEC" on the right. It is composed of two elliptical mirrors e1 with foci S1 and R1 and its symmetrical e2 with foci S2 and R2. Mirror e1 redirects the rays coming from the edge S1 of the source towards the edge R1 of the receiver and, by symmetry, mirror e2 redirects the rays coming from the edge S2 of the source towards the edge R2 of the receiver. This device does not form an image of the source S1S2 on the receiver R1R2 as indicated by the green rays coming from a point S in the source that end up on the receiver but are not focused onto an image point. Mirror e2 starts at the edge R1 of the receiver since leaving a gap between mirror and receiver would allow light to escape between the two. Also, mirror e2 ends at ray r connecting S1 and R2 since cutting it short would prevent it from capturing as much light as possible, but extending it above r would shade light coming from S1 and its neighboring points of the source. The resulting device is called a CEC (Compound Elliptical Concentrator). A particular case of this design happens when the source S1S2 becomes infinitely large and moves to an infinite distance. Then the rays coming from S1 become parallel rays and the same for those coming from S2 and the elliptical mirrors e1 and e2 converge to parabolic mirrors p1 and p2. The resulting device is called a CPC (Compound Parabolic Concentrator), and shown in the "CPC" figure on the left. CPCs are the most common seen nonimaging optics. They are often used to demonstrate the difference between Imaging optics and nonimaging optics. When seen from the CPC, the incoming radiation (emitted from the infinite source at an infinite distance) subtends an angle ±θ (total angle 2θ). This is called the acceptance angle of the CPC. The reason for this name can be appreciated in the figure "rays showing the acceptance angle" on the right. An incoming ray r1 at an angle θ to the vertical (coming from the edge of the infinite source) is redirected by the CPC towards the edge R1 of the receiver. Another ray r2 at an angle α<θ to the vertical (coming from an inner point of the infinite source) is redirected towards an inner point of the receiver. However, a ray r3 at an angle β>θ to the vertical (coming from a point outside the infinite source) bounces around inside the CPC until it is rejected by it. Therefore, only the light inside the acceptance angle ±θ is captured by the optic; light outside it is rejected. The ellipses of a CEC can be obtained by the (pins and) string method, as shown in the figure "string method" on the left. A string of constant length is attached to edge point S1 of the source and edge point R1 of the receiver. The string is kept stretched while moving a pencil up and down, drawing the elliptical mirror e1. We can now consider a wavefront w1 as a circle centered at S1. This wavefront is perpendicular to all rays coming out of S1 and the distance from S1 to w1 is constant for all its points. The same is valid for wavefront w2 centered at R1. 
The distance from w1 to w2 is then constant for all light rays reflected at e1 and these light rays are perpendicular to both, incoming wavefront w1 and outgoing wavefront w2. Optical path length (OPL) is constant between wavefronts. When applied to nonimaging optics, this result extends the string method to optics with both refractive and reflective surfaces. Figure "DTIRC" (Dielectric Total Internal Reflection Concentrator) on the left shows one such example. The shape of the top surface s is prescribed, for example, as a circle. Then the lateral wall m1 is calculated by the condition of constant optical path length S=d1+n d2+n d3 where d1 is the distance between incoming wavefront w1 and point P on the top surface s, d2 is the distance between P and Q and d3 the distance between Q and outgoing wavefront w2, which is circular and centered at R1. Lateral wall m2 is symmetrical to m1. The acceptance angle of the device is 2θ. These optics are called flow-line optics and the reason for that is illustrated in figure "CPC flow-lines" on the right. It shows a CPC with an acceptance angle 2θ, highlighting one of its inner points P. The light crossing this point is confined to a cone of angular aperture 2α. A line f is also shown whose tangent at point P bisects this cone of light and, therefore, points in the direction of the "light flow" at P. Several other such lines are also shown in the figure. They all bisect the edge rays at each point inside the CPC and, for that reason, their tangent at each point points in the direction of the flow of light. These are called flow-lines and the CPC itself is just a combination of flow line p1 starting at R2 and p2 starting at R1. Variations to the flow-line design method There are some variations to the flow-line design method. A variation are the multichannel or stepped flow-line optics in which light is split into several "channels" and then recombined again into a single output. Aplanatic (a particular case of SMS) versions of these designs have also been developed. The main application of this method is in the design of ultra-compact optics. Another variation is the confinement of light by caustics. Instead of light being confined by two reflective surfaces, it is confined by a reflective surface and a caustic of the edge rays. This provides the possibility to add lossless non-optical surfaces to the optics. Simultaneous multiple surface (SMS) design method This section describes The design procedure The SMS (or Miñano-Benitez) design method is very versatile and many different types of optics have been designed using it. The 2D version allows the design of two (although more are also possible) aspheric surfaces simultaneously. The 3D version allows the design of optics with freeform surfaces (also called anamorphic) surfaces which may not have any kind of symmetry. SMS optics are also calculated by applying a constant optical path length between wavefronts. Figure "SMS chain" on the right illustrates how these optics are calculated. In general, the rays perpendicular to incoming wavefront w1 will be coupled to outgoing wavefront w4 and the rays perpendicular to incoming wavefront w2 will be coupled to outgoing wavefront w3 and these wavefronts may be any shape. However, for the sake of simplicity, this figure shows a particular case or circular wavefronts. This example shows a lens of a given refractive index n designed for a source S1S2 and a receiver R1R2. 
The rays emitted from edge S1 of the source are focused onto edge R1 of the receiver and those emitted from edge S2 of the source are focused onto edge R2 of the receiver. We first choose a point T0 and its normal on the top surface of the lens. We can now take a ray r1 coming from S2 and refract it at T0. Choosing now the optical path length S22 between S2 and R2 we have one condition that allows us to calculate point B1 on the bottom surface of the lens. The normal at B1 can also be calculated from the directions of the incoming and outgoing rays at this point and the refractive index of the lens. Now we can repeat the process taking a ray r2 coming from R1 and refracting it at B1. Choosing now the optical path length S11 between R1 and S1 we have one condition that allows us to calculate point T1 on the top surface of the lens. The normal at T1 can also be calculated from the directions of the incoming and outgoing rays at this point and the refractive index of the lens. Now, refracting at T1 a ray r3 coming from S2 we can calculate a new point B3 and corresponding normal on the bottom surface using the same optical path length S22 between S2 and R2. Refracting at B3 a ray r4 coming from R1 we can calculate a new point T3 and corresponding normal on the top surface using the same optical path length S11 between R1 and S1. The process continues by calculating another point B5 on the bottom surface using another edge ray r5, and so on. The sequence of points T0 B1 T1 B3 T3 B5 is called an SMS chain. Another SMS chain can be constructed towards the right starting at point T0. A ray from S1 refracted at T0 defines a point and normal B2 on the bottom surface, by using constant optical path length S11 between S1 and R1. Now a ray from R2 refracted at B2 defines a new point and normal T2 on the top surface, by using constant optical path length S22 between S2 and R2. The process continues as more points are added to the SMS chain. In this example shown in the figure, the optic has a left-right symmetry and, therefore, points B2 T2 B4 T4 B6 can also be obtained by symmetry about the vertical axis of the lens. Now we have a sequence of spaced points on the plane. Figure "SMS skinning" on the left illustrates the process used to fill the gaps between points, completely defining both optical surfaces. We pick two points, say B1 and B2, with their corresponding normals and interpolate a curve c between them. Now we pick a point B12 and its normal on c. A ray r1 coming from R1 and refracted at B12 defines a new point T01 and its normal between T0 and T1 on the top surface, by applying the same constant optical path length S11 between S1 and R1. Now a ray r2 coming from S2 and refracted at T01 defines a new point and normal on the bottom surface, by applying the same constant optical path length S22 between S2 and R2. The process continues with rays r3 and r4 building a new SMS chain filling the gaps between points. Picking other points and corresponding normals on curve c gives us more points in between the other SMS points calculated originally. In general, the two SMS optical surfaces do not need to be refractive. Refractive surfaces are noted R (from Refraction) while reflective surfaces are noted X (from the Spanish word refleXión). Total Internal Reflection (TIR) is noted I. Therefore, a lens with two refractive surfaces is an RR optic, while another configuration with a reflective and a refractive surface is an XR optic. 
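Before moving on to configurations with more surfaces and to the 3D version, the chain construction just described can be made concrete with a small numerical sketch. The toy geometry, the function names, and the use of a simple bisection solver below are choices made here for illustration; this is not the published SMS algorithm or any particular implementation, only one link of a 2D chain (from T0 to B1) derived from a prescribed optical path length.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

def refract(d, normal, n1, n2):
    """Vector Snell's law; the unit normal points against the incident unit ray d."""
    mu, cos_i = n1 / n2, -np.dot(d, normal)
    sin2_t = mu**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    return mu * d + (mu * cos_i - np.sqrt(1.0 - sin2_t)) * normal

def normal_from_rays(d_in, d_out, n1, n2):
    """Surface normal compatible with refracting unit ray d_in (index n1) into d_out (index n2)."""
    return unit(n1 * np.asarray(d_in) - n2 * np.asarray(d_out))

def sms_step(S2, R2, T0, N0, n, S22, s_max=100.0):
    """One SMS 2D link: refract the edge ray S2->T0 into the lens and find B1 on the
    refracted ray so that |S2 T0| + n*|T0 B1| + |B1 R2| equals the prescribed OPL S22."""
    S2, R2, T0 = (np.asarray(v, float) for v in (S2, R2, T0))
    d_in = unit(T0 - S2)
    d_lens = refract(d_in, unit(N0), 1.0, n)           # ray direction inside the lens
    f = lambda s: (np.linalg.norm(T0 - S2) + n * s
                   + np.linalg.norm(T0 + s * d_lens - R2) - S22)
    lo, hi = 1e-9, s_max
    assert f(lo) < 0 < f(hi), "OPL bracket does not contain a solution"
    for _ in range(200):                                # f is monotone in s for n > 1: bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    B1 = T0 + 0.5 * (lo + hi) * d_lens
    d_out = unit(R2 - B1)                               # exit ray must aim at the receiver edge
    N1 = normal_from_rays(d_lens, d_out, n, 1.0)
    return B1, N1

# Toy geometry: source edge S2, receiver edge R2, a starting point and normal on the top surface
S2, R2 = np.array([1.0, 10.0]), np.array([-0.5, -10.0])
T0, N0 = np.array([0.0, 1.0]), np.array([0.0, 1.0])
B1, N1 = sms_step(S2, R2, T0, N0, n=1.5, S22=21.5)
```

Repeating the same step with the roles of the source and receiver edges exchanged, and alternating between the two surfaces, would generate the successive points of the chain described above.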
Configurations with more optical surfaces are also possible and, for example, if light is first refracted (R), then reflected (X) then reflected again by TIR (I), the optic is called an RXI. The SMS 3D is similar to the SMS 2D, only now all calculations are done in 3D space. Figure "SMS 3D chain" on the right illustrates the algorithm of an SMS 3D calculation. The first step is to choose the incoming wavefronts w1 and w2 and outgoing wavefronts w3 and w4 and the optical path length S14 between w1 and w4 and the optical path length S23 between w2 and w3. In this example the optic is a lens (an RR optic) with two refractive surfaces, so its refractive index must also be specified. One difference between the SMS 2D and the SMS 3D is on how to choose initial point T0, which is now on a chosen 3D curve a. The normal chosen for point T0 must be perpendicular to curve a. The process now evolves similarly to the SMS 2D. A ray r1 coming from w1 is refracted at T0 and, with the optical path length S14, a new point B2 and its normal is obtained on the bottom surface. Now ray r2 coming from w3 is refracted at B2 and, with the optical path length S 23, a new point T2 and its normal is obtained on the top surface. With ray r3 a new point B2 and its normal are obtained, with ray r4 a new point T4 and its normal are obtained, and so on. This process is performed in 3D space and the result is a 3D SMS chain. As with the SMS 2D, a set of points and normals to the left of T0 can also be obtained using the same method. Now, choosing another point T0 on curve a the process can be repeated and more points obtained on the top and bottom surfaces of the lens. The power of the SMS method lies in the fact that the incoming and outgoing wavefronts can themselves be free-form, giving the method great flexibility. Also, by designing optics with reflective surfaces or combinations of reflective and refractive surfaces, different configurations are possible. Miñano design method using Poisson brackets This design method was developed by Miñano and is based on Hamiltonian optics, the Hamiltonian formulation of geometrical optics which shares much of the mathematical formulation with Hamiltonian mechanics. It allows the design of optics with variable refractive index, and therefore solves some nonimaging problems that are not solvable using other methods. However, manufacturing of variable refractive index optics is still not possible and this method, although potentially powerful, did not yet find a practical application. Conservation of etendue Conservation of etendue is a central concept in nonimaging optics. In concentration optics, it relates the acceptance angle with the maximum concentration possible. Conservation of etendue may be seen as constant a volume moving in phase space. Köhler integration In some applications it is important to achieve a given irradiance (or illuminance) pattern on a target, while allowing for movements or inhomogeneities of the source. Figure "Köhler integrator" on the right illustrates this for the particular case of solar concentration. Here the light source is the sun moving in the sky. On the left this figure shows a lens L1 L2 capturing sunlight incident at an angle α to the optical axis and concentrating it onto a receiver L3 L4. As seen, this light is concentrated onto a hotspot on the receiver. This may be a problem in some applications. 
One way around this is to add a new lens extending from L3 to L4 that captures the light from L1 L2 and redirects it onto a receiver R1 R2, as shown in the middle of the figure. The situation in the middle of the figure shows a nonimaging lens L1 L2 designed in such a way that sunlight (here considered as a set of parallel rays) incident at an angle θ to the optical axis will be concentrated to point L3. On the other hand, nonimaging lens L3 L4 is designed in such a way that light rays coming from L1 are focused on R2 and light rays coming from L2 are focused on R1. Therefore, ray r1 incident on the first lens at an angle θ will be redirected towards L3. When it hits the second lens, it is coming from point L1 and it is redirected by the second lens to R2. On the other hand, ray r2 also incident on the first lens at an angle θ will also be redirected towards L3. However, when it hits the second lens, it is coming from point L2 and it is redirected by the second lens to R1. Intermediate rays incident on the first lens at an angle θ will be redirected to points between R1 and R2, fully illuminating the receiver. Something similar happens in the situation shown in the same figure, on the right. Ray r3 incident on the first lens at an angle α<θ will be redirected towards a point between L3 and L4. When it hits the second lens, it is coming from point L1 and it is redirected by the second lens to R2. Also, ray r4 incident on the first lens at an angle α<θ will be redirected towards a point between L3 and L4. When it hits the second lens, it is coming from point L2 and it is redirected by the second lens to R1. Intermediate rays incident on the first lens at an angle α<θ will be redirected to points between R1 and R2, also fully illuminating the receiver. This combination of optical elements is called Köhler illumination. Although the example given here was for solar energy concentration, the same principles apply for illumination in general. In practice, Köhler optics are typically not designed as a combination of nonimaging optics, but they are simplified versions with a lower number of active optical surfaces. This decreases the effectiveness of the method, but allows for simpler optics. Also, Köhler optics are often divided into several sectors, each one of them channeling light separately and then combining all the light on the target. An example of one of these optics used for solar concentration is the Fresnel-R Köhler. Compound parabolic concentrator In the drawing opposite there are two parabolic mirrors CC' (red) and DD' (blue). Both parabolas are cut at B and A respectively. A is the focal point of parabola CC' and B is the focal point of the parabola DD'. The area DC is the entrance aperture and the flat absorber is AB. This CPC has an acceptance angle of θ. The parabolic concentrator has an entrance aperture of DC and a focal point F. The parabolic concentrator only accepts rays of light that are perpendicular to the entrance aperture DC. The tracking of this type of concentrator must be more exact and requires expensive equipment. The compound parabolic concentrator accepts a greater amount of light and needs less accurate tracking. For a 3-dimensional "nonimaging compound parabolic concentrator", the maximum concentration possible in air or in vacuum (equal to the ratio of input and output aperture areas) is Cmax = 1/sin²(θmax), where θmax is the half-angle of the acceptance angle (of the larger aperture). History The development started in the mid-1960s at three different locations by V. K. 
Baranov (USSR) with the study of the focons (focusing cones) Martin Ploke (Germany), and Roland Winston (United States), and led to the independent origin of the first nonimaging concentrators, later applied to solar energy concentration. Among these three earliest works, the one most developed was the American one, resulting in what nonimaging optics is today. A good introduction was published by - Winston, Roland. “Nonimaging Optics.” Scientific American, vol. 264, no. 3, 1991, pp. 76–81. JSTOR, There are different commercial companies and universities working on nonimaging optics. Currently the largest research group in this subject is the Advanced Optics group at the CeDInt, part of the Technical University of Madrid (UPM). See also Etendue Acceptance angle Concentrated photovoltaics Concentrated solar power Solid-state lighting Lighting Anidolic lighting Hamiltonian optics Winston cone References External links Oliver Dross et al., Review of SMS design methods and real-world applications, SPIE Proceedings Vol. 5529, pp. 35–47, 2004 Compound Parabolic Concentrator for Passive Radiative Cooling Photovoltaic applications of Compound Parabolic Concentrator (CPC) Optics
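As a numerical companion to the compound parabolic concentrator section above, the sketch below evaluates the standard textbook CPC relations (entrance aperture a = a′/sin θ, full height (a + a′)/tan θ, and the 2D and 3D concentration limits). The function and variable names are chosen here for illustration and are not taken from the references cited in the article.

```python
import numpy as np

def cpc_geometry(exit_half_width: float, acceptance_half_angle_deg: float):
    """Standard relations for an ideal, full-height compound parabolic concentrator.

    exit_half_width           : a', half-width of the exit aperture (absorber)
    acceptance_half_angle_deg : theta, half-angle of the acceptance cone
    """
    theta = np.radians(acceptance_half_angle_deg)
    entry_half_width = exit_half_width / np.sin(theta)          # a = a'/sin(theta)
    height = (entry_half_width + exit_half_width) / np.tan(theta)
    return {
        "entry_half_width": entry_half_width,
        "height": height,
        "concentration_2d": 1.0 / np.sin(theta),                 # linear (trough) CPC
        "concentration_3d": 1.0 / np.sin(theta) ** 2,            # rotationally symmetric CPC
    }

# Example: a CPC with a 10 degree acceptance half-angle concentrates roughly 33x in 3D;
# halving the acceptance angle roughly quadruples the 3D concentration limit.
g = cpc_geometry(exit_half_width=1.0, acceptance_half_angle_deg=10.0)
```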
Nonimaging optics
Physics,Chemistry
6,261
15,070,541
https://en.wikipedia.org/wiki/MYCBP
C-Myc-binding protein is a protein that in humans is encoded by the MYCBP gene. Function The MYCBP gene encodes a protein that binds to the N-terminal region of MYC (MIM 190080) and stimulates the activation of E box-dependent transcription by MYC.[supplied by OMIM] Interactions MYCBP has been shown to interact with AKAP1, C3orf15 and Myc. References Further reading
MYCBP
Chemistry
97
12,159,986
https://en.wikipedia.org/wiki/Phallus%20impudicus
Phallus impudicus, known colloquially as the common stinkhorn, is a widespread fungus in the Phallaceae (stinkhorn) family. It is recognizable for its foul odor and its phallic shape when mature, the latter feature giving rise to several names in 17th-century England. It is a common mushroom in Europe and North America, where it occurs in habitats rich in wood debris such as forests and mulched gardens. It appears from summer to late autumn. The fruiting structure is tall and white with a slimy, dark olive colored conical head. Known as the gleba, this material contains the spores, and is transported by insects which are attracted by the odor—described as resembling carrion. Despite its foul smell, it is not usually poisonous and immature mushrooms are consumed in parts of France, Germany and the Czech Republic. Taxonomy The Italian naturalist Ulisse Aldrovandi described the fungus in 1560 with name fungus priapeus, and he depicted it in his series of water-coloured plates called teatro della natura ('nature's theater' 1560–1590). Another botanist, John Gerard called it the "pricke mushroom" or "fungus virilis penis effigie" in his General Historie of Plants of 1597, and John Parkinson referred to it as "Hollanders workingtoole" or "phallus hollandicus" in his Theatrum botanicum of 1640. Linnaeus described it in his 1753 Species Plantarum, and it still bears its original binomial name. Its specific epithet, impudicus, is derived from the Latin for "shameless" or "immodest". Description Sometimes called the witch's egg, the immature stinkhorn is whitish or pinkish, egg-shaped, and typically by .On the outside is a thick whitish volva, also known as the peridium, covering the olive-colored gelatinous gleba. It is the latter that contains the spores and later stinks and attracts the flies; within this layer is a green layer which will become the 'head' of the expanded fruit body; and inside this is a white structure called the receptaculum (the stalk when expanded), that is hard, but has an airy structure like a sponge. The eggs become fully grown stinkhorns very rapidly, over a day or two. The mature stinkhorn is tall and in diameter, topped with a conical cap high that is covered with the greenish-brown slimy gleba. In older fungi the slime is eventually removed, exposing a bare yellowish pitted and ridged (reticulate) surface. This has a passing resemblance to the common morel (Morchella esculenta), for which it is sometimes mistaken. The rate of growth of Phallus impudicus has been measured at per hour. The growing fruit body is able to exert up to 1.33 kPa of pressure — a force sufficient to push up through asphalt. The spores have an elliptical to oblong shape, with dimensions of 3–5 to 1.5–2.5 μm. Similar species In North America, Phallus impudicus can be distinguished from the very similar P. hadriani by the latter's purplish-tinted volva. Spore dispersal The dispersal of spores is different from most "typical" mushrooms that spread their spores through the air. Stinkhorns instead produce a sticky spore mass on their tip which has a sharp, sickly-sweet odor of carrion to attract flies and other insects. Odorous chemicals in the gleba include methanethiol, hydrogen sulfide, linalool, trans-ocimene, phenylacetaldehyde, dimethyl sulfide, and dimethyl trisulfide. The latter compound has been found to be emitted from fungating cancerous wounds. 
The mature fruiting bodies can be smelled from a considerable distance in the woods, and at close quarters most people find the cloying stink extremely repulsive. The flies land in the gleba and in doing so collect the spore mass on their legs and carry it to other locations. An Austrian study demonstrated that blow-flies (species Calliphora vicina, Lucilia caesar, Lucilia ampullacea and Dryomyza anilis) also feed on the slime, and soon after leaving the fruit body, they deposit liquid feces that contain a dense suspension of spores. The study also showed that beetles (Oeceoptoma thoracica and Meligethes viridescens) are attracted to the fungus, but seem to have less of a role in spore dispersal as they tend to feed on the hyphal tissue of the fruiting body. There is also a possible ecological association between the P. impudicus and badger (Meles meles) setts. Fruiting bodies are commonly clustered in a zone from the entrances; the setts typically harbor a regularly-available supply of badger cadavers – the mortality rate of cubs is high, and death is more likely to occur within the sett. The fruiting of large numbers of stinkhorns attracts a high population of blow-flies (Calliphora and Lucilla breed on carrion); this ensures the rapid elimination of badger carcasses, removing a potential source of disease to the badger colony. The laxative effect of the gleba reduces the distance from the fruiting body to where the spores are deposited, ensuring the continued production of high densities of stinkhorns. Distribution and habitat The common stinkhorn can be found throughout much of Europe and North America, and it has also been collected in Asia (including China, Taiwan, and India), Costa Rica, Iceland, Tanzania, and southeast Australia. In North America, it is most common west of the Mississippi River; Ravenel's stinkhorn (Phallus ravenelii) is more common to the east. The fungus is associated with rotting wood, and as such it is most commonly encountered in deciduous woods where it fruits from summer to late autumn, though it may also be found in conifer woods or even grassy areas such as parks and gardens. It may also form mycorrhizal associations with certain trees. Uses Edibility At the egg stage, pieces of the inner layer (the receptaculum) can be cut out with a knife and eaten raw. They are crisp and crunchy with an attractive radishy taste. The fungus is enjoyed and eaten in France and parts of Germany, where it may be sold fresh or pickled and used in sausages. Similar species are consumed in China. Medicinal properties Venous thrombosis, the formation of a blood clot in a vein, is a common cause of death in breast cancer patients; patients with recurrent disease are typically maintained on anticoagulants for their lifetimes. A research study has suggested that extracts from P. impudicus can reduce the risk of this condition by reducing the incidence of platelet aggregation, and may have potential as a supportive preventive nutrition. It was used in medieval times as a cure for gout and as a love potion. Folk uses In Northern Montenegro, peasants rub Phallus impudicus on the necks of bulls before bull fighting contests in an attempt to make them stronger. They are also fed to young bulls as they are thought to be a potent aphrodisiac. In 1777, the reverend John Lightfoot wrote that the people of Thuringia called the unopened stinkhorns "ghost's or daemon's eggs" and dried and powdered them before mixing them in spirits as an aphrodisiac. 
In culture Writing about life in Victorian Cambridge, Gwen Raverat (granddaughter of Charles Darwin) describes the 'sport' of stinkhorn hunting: In our native woods there grows a kind of toadstool, called in the vernacular The Stinkhorn, though in Latin it bears a grosser name. The name is justified, for the fungus can be hunted by the scent alone; and this was Aunt Etty's great invention. Armed with a basket and a pointed stick, and wearing special hunting cloak and gloves, she would sniff her way round the wood, pausing here and there, her nostrils twitching, when she caught a whiff of her prey; then at last, with a deadly pounce, she would fall upon her victim, and poke his putrid carcass into her basket. At the end of the day's sport, the catch was brought back and burnt in the deepest secrecy on the drawing-room fire, with the door locked; because of the morals of the maids. In Thomas Mann's novel The Magic Mountain (Der Zauberberg), the psychologist Dr. Krokowski gives a lecture on the Phallus impudicus: And Dr. Krokowski had spoken about one fungus, famous since classical antiquity for its form and the powers ascribed to it – a morel, its Latin name ending in the adjective impudicus, its form reminiscent of love, and its odor, of death. For the stench given off by the impudicus was strikingly like that of a decaying corpse, the odor coming from greenish, viscous slime that carried its spores and dripped from the bell-shaped cap. And even today, among the uneducated, this morel was thought to be an aphrodisiac.In Danilo Kiš's novel Garden, Ashes the protagonist's father Eduard Schaum provokes the suspicions of the local residents and authorities through his mad wandering and sermonizing in the forests:The story went round, and was preached from the pulpit, that his iron-tipped cane possessed magical powers, that trees withered like grass whenever he walked in the Count's forest, that his spit produced poisonous mushrooms --Ithyphalus impudicus--that grew under the guise of edible, cultivated varieties. References External links Edible fungi Phallales Fungi of North America Fungi of Europe Fungi of Asia Fungi of Australia Fungi of Central America Fungi described in 1753 Taxa named by Carl Linnaeus Fungi of Iceland Fungus species
Phallus impudicus
Biology
2,083
23,691,886
https://en.wikipedia.org/wiki/Altered%20Esthetics
Altered Esthetics is a non-profit, community-based art gallery and arts advocacy organization in the Northeast Minneapolis Arts District. According to its mission statement, its goal is to support and expand the vibrant Minneapolis arts community by hosting exhibitions, creating and sponsoring various art programs, and participating in community art events. History and development Altered Esthetics was originally conceived as an exhibition venue where the focus would be on "art for art's sake" as opposed to art for profit. Founded by Jamie Schumacher, it opened in April 2004 in Minneapolis with the inaugural exhibition "The Art of War", featuring the work of 15 local artists. The organization has grown significantly since 2004. Its staff of about 100 are all volunteers. Its board of directors has 18 active members. Due to its contributed growth since 2004, it was moved in late 2006 to the Q'arma Building in Northeast Minneapolis' arts district. In May 2007, it received 501(c)3 non-profit status. In addition to its physical gallery, it maintains an online gallery featuring additional artists. Exhibitions and internship programs Over the past six years, Altered Esthetics has hosted over 50 group exhibitions focusing on fine art, music, poetry, performance art and film. It has presented the work of over 1,000 national and international artists, including such notable artists as Manuel Ocampo and J.M. Culver. Its exhibitions have addressed such diverse themes as banned books, comic art, gender, and activism in the arts. In 2007, Altered Esthetics began a curatorial internship program to offer artists, students and community members hands-on experience in the arts. In 2008, a Gallery Director internship program was created whose goal is to provide participants with experience in grant writing, fundraising, and other aspects of running non-profit arts organizations. Community presence In 2009, Altered Esthetics hosted 14 exhibits, drawing over 2,000 people to the Minneapolis arts district. It is also a participant in the arts district's Art-A-Whirl, the country's largest open-studio tour, attended by over 20,000 people. References External links https://web.archive.org/web/20110613150345/http://www.mndaily.com/2010/02/03/altered-look-banned-books http://www.tcdailyplanet.net/article/2008/05/05/tasty-lutefisk-sushi-altered-esthetics.html https://web.archive.org/web/20110613150442/http://www.mndaily.com/2008/04/03/sibling-rivalry https://web.archive.org/web/20110613150507/http://www.mndaily.com/2008/02/14/few-bites-feminist-art http://www.tcdailyplanet.net/article/2008/02/03/art-note-bitter-fruits-and-anxiety-dreams-northeast-minneapolis.html https://web.archive.org/web/20110717015141/http://www.wakemag.org/sound-vision/bust-out-the-huffy/ https://web.archive.org/web/20110717015305/http://www.wakemag.org/sound-vision/two-takes-on-activist-art/ http://www.tcdailyplanet.net/news/2006/05/17/bike-art-ne-minneapolis-altered-esthetics-through-june-28 Arts organizations based in Minneapolis Art and design organizations Arts organizations established in 2004 2004 establishments in Minnesota
Altered Esthetics
Engineering
799
61,288,146
https://en.wikipedia.org/wiki/Hokovirus
Hokovirus (HokV) is a genus of giant double-stranded DNA viruses (NCLDV). The genus was detected during the analysis of metagenome samples from bottom sediments of reservoirs at the wastewater treatment plant in Klosterneuburg, Austria. The new genera Klosneuvirus (KNV), Catovirus and Indivirus (all found in the same sewage waters) were described together with Hokovirus, making up a putative virus subfamily Klosneuvirinae (Klosneuviruses) with KNV as the type genus. Hokovirus has a large genome of 1.33 million base pairs (881 gene families). This is the third largest genome among known Klosneuviruses, after KNV (1.57 million base pairs, 1272 gene families) and Catovirus. The GC content is 21.4%. Classification of the metagenome by 18S rRNA analysis indicates that the hosts are related to the simple Cercozoa. The phylogenetic tree topology of Mimiviridae is still under discussion. Some authors (CNS 2018) place the Klosneuviruses together with Cafeteria roenbergensis virus (CroV) and Bodo saltans virus (BsV) in a tentative subfamily called Aquavirinae. Another proposal is to place them together with the Mimiviruses in a subfamily Megavirinae. See also Nucleocytoplasmic large DNA viruses Girus Mimiviridae References Further reading Mitch Leslie: Giant viruses found in Austrian sewage fuel debate over potential fourth domain of life. In: Science. 5 April 2017, doi:10.1126/science.aal1005. Virus genera Mimiviridae Unaccepted virus taxa
Hokovirus
Biology
361
76,188,403
https://en.wikipedia.org/wiki/Carbonate%20nitrate
Carbonate nitrates are mixed anion compounds containing both carbonate and nitrate ions. Hydrotalcite can contain carbonate and nitrate ions between its layers; its magnesium can be substituted by nickel, cobalt or copper. Oxycarbonitrates containing an alkaline earth metal together with cuprate, nitrate and carbonate anions in layers form a family of superconducting materials. References Carbonates Nitrates Mixed anion compounds
Carbonate nitrate
Physics,Chemistry
85
48,412,548
https://en.wikipedia.org/wiki/Network%20for%20Astronomy%20School%20Education
Network for Astronomy School Education (NASE) is an International Astronomical Union (IAU) Working Group that trains primary and secondary school teachers in astronomy. In 2007, professor George K. Miley, IAU vice-president, invited Rosa M. Ros to begin exploring the idea of setting up an astronomy program to give primary and secondary school teachers better preparation in this area of knowledge. The NASE Group was born when Rosa Maria Ros and Alexandre Costa were sent by UNESCO and the IAU to give two courses in Peru and Ecuador in July 2009; shortly afterwards, NASE was officially created in August 2009 during the IAU General Assembly in Rio de Janeiro. Since then, more than 80 courses have been presented worldwide. The topics of the basic NASE course are: positional astronomy, the Solar System, exoplanets, spectrography, photometry, spectroscopy, determination of absolute magnitude, stellar nucleosynthesis, stellar evolution, and cosmology. NASE classes were designed for developing countries where teachers have few financial resources. NASE Working Group members first visit these countries to prepare a local task group that will disseminate astronomy knowledge and inexpensive didactic materials. The main goal is to set up in each country a local group of NASE members who carry on teaching the essential NASE course every year and create new inexpensive didactic experiments, demonstrations and astronomical instruments. This has made it possible to build a very large repository of educational materials for astronomy, with PowerPoint presentations, animations, articles and lectures, photos, games, simulation websites, interactive programs (e.g. Stellarium) and videos. NASE Courses NASE has now given more than seventy courses, mainly in South America, Africa and Asia. Partnership courses NASE has also cooperated with other associations to promote teacher training in astronomy, namely with UNESCO and the European Association for Astronomy Education (EAAE). See also List of astronomical societies References External links NASE Website 14 Steps to the Universe (book) Geometry of Light and Light Shadows Cosmic lights (book) International Astronomical Union (Official website) Astronomy organizations International educational organizations
Network for Astronomy School Education
Astronomy
428
69,419,151
https://en.wikipedia.org/wiki/Gold%28II%29%20sulfate
Gold(II) sulfate is the chemical compound with the formula AuSO4, or more correctly Au2(SO4)2. This compound was previously thought to be a mixed-valent compound, Au(I)Au(III)(SO4)2, but it was later shown to contain the diatomic [Au2]4+ cation, which made it the first simple inorganic gold(II) compound. The bond distance between the gold atoms in the diatomic cation is 249 pm. Production and properties Gold(II) sulfate is produced by the reaction of sulfuric acid with gold(III) hydroxide. Gold(II) sulfate is unstable in air and oxidizes to hydrogen disulfoaurate(III). References Gold compounds Sulfates
Gold(II) sulfate
Chemistry
144
27,203,428
https://en.wikipedia.org/wiki/March%20of%20Dimes%20Prize%20in%20Developmental%20Biology
The March of Dimes Prize in Developmental Biology is awarded once a year by the March of Dimes. The Prize honors outstanding scientists who profoundly advance the science that underlies our understanding of pregnancy, parturition, and prenatal development. Created as a tribute to Dr. Jonas Salk shortly before his death in 1995, the Prize has been awarded annually since 1996. It is now named in recognition of Dr. Richard B. Johnston, Jr., MD, who was March of Dimes Medical Director when the Prize was initiated; Dr. Johnston is a member of the National Academy of Medicine. The Prize carries a cash award "to scientific leaders who have pioneered research to advance our understanding of prenatal development and pregnancy". Laureates Source: March of Dimes 2024 Marisa Bartolomei 2022-23 Patricia Hunt 2021 Alan W. Flake 2020 Susan Fisher 2019 Myriam Hemberger 2018 Allan C. Spradling 2017 Charles David Allis 2016 Victor R. Ambros and Gary B. Ruvkun 2015 Rudolf Jaenisch 2014 Huda Y. Zoghbi 2013 Eric N. Olson 2012 Elaine Fuchs and Howard Green 2011 Patricia Ann Jacobs and David C. Page 2010 Shinya Yamanaka 2009 Kevin P. Campbell and Louis M. Kunkel 2008 Philip A. Beachy and Clifford Tabin 2007 Anne McLaren and Janet Rossant 2006 Alexander Varshavsky 2005 Mario R. Capecchi and Oliver Smithies 2004 Mary F. Lyon 2003 Pierre Chambon and Ronald M. Evans 2002 Seymour Benzer and Sydney Brenner 2001 and Thomas M. Jessell 2000 H. Robert Horvitz 1999 Martin J. Evans and Richard L. Gardner 1998 Davor Solter 1997 Walter J. Gehring and David S. Hogness 1996 Beatrice Mintz and Ralph L. Brinster See also List of biology awards List of medicine awards References Medicine awards Lists of award winners Biology awards
March of Dimes Prize in Developmental Biology
Technology
394
22,181,211
https://en.wikipedia.org/wiki/Feature%20model
In software development, a feature model is a compact representation of all the products of the Software Product Line (SPL) in terms of "features". Feature models are visually represented by means of feature diagrams. Feature models are widely used during the whole product line development process and are commonly used as input to produce other assets such as documents, architecture definition, or pieces of code. A SPL is a family of related programs. When the units of program construction are features—increments in program functionality or development—every program in an SPL is identified by a unique and legal combination of features, and vice versa. Feature models were first introduced in the Feature-Oriented Domain Analysis (FODA) method by Kang in 1990. Since then, feature modeling has been widely adopted by the software product line community and a number of extensions have been proposed. Background A "feature" is defined as a "prominent or distinctive user-visible aspect, quality, or characteristic of a software system or system". The focus of SPL development is on the systematic and efficient creation of similar programs. FODA is an analysis devoted to identification of features in a domain to be covered by a particular SPL. Model A feature model is a model that defines features and their dependencies, typically in the form of a feature diagram + left-over (a.k.a. cross-tree) constraints. But also it could be as a table of possible combinations. Diagram A feature diagram is a visual notation of a feature model, which is basically an and-or tree. Other extensions exist: cardinalities, feature cloning, feature attributes, discussed below. Configuration A feature configuration is a set of features that describes a member of an SPL: the member contains a feature if and only if the feature is in its configuration. A feature configuration is permitted by a feature model if and only if it does not violate constraints imposed by the model... Feature Tree A Feature Tree (sometimes also known as a Feature Model or Feature Diagram) is a hierarchical diagram that visually depicts the features of a solution in groups of increasing levels of detail. Feature Trees are great ways to summarize the features that will be included in a solution and how they are related in a simple visual manner. Feature modeling notations Current feature modeling notations may be divided into three main groups, namely: Basic feature models Cardinality-based feature models Extended feature models Basic feature models Relationships between a parent feature and its child features (or sub-features) are categorized as: Mandatory – child feature must be selected. Optional – child feature can be selected or not selected. Or – at least one of the sub-features must be selected. Alternative (xor) – exactly one of the sub-features must be selected. In addition to the parental relationships between features, cross-tree constraints are allowed. The most common are: A requires B – The selection of A in a product implies the selection of B. A excludes B – A and B cannot be part of the same product. As an example, the figure to the right illustrates how feature models can be used to specify and build configurable on-line shopping systems. The software of each application is determined by the features that it provides. The root feature (i.e. E-Shop) identifies the SPL. Every shopping system implements a catalogue, payment modules, security policies and optionally a search tool. 
E-shops must implement a high or standard security policy (choose one), and can provide different payment modules: bank transfer, credit card or both of them. Additionally, a cross-tree constraint forces shopping systems including the credit card payment module to implement a high security policy. Cardinality-based feature models Some authors propose extending basic feature models with UML-like multiplicities of the form [n,m], with n being the lower bound and m the upper bound. These are used to limit the number of sub-features that can be part of a product whenever the parent is selected. If the upper bound is * the feature can be cloned as many times as we want (as long as the other constraints are respected). This notation is useful for products extensible with an arbitrary number of components. Extended feature models Others suggest adding extra-functional information to the features using "attributes". These are mainly composed of a name, a domain, and a value. Semantics The semantics of a feature model is the set of feature configurations that the feature model permits. The most common approach is to use mathematical logic to capture the semantics of a feature diagram. Each feature corresponds to a boolean variable and the semantics is captured as a propositional formula. The satisfying valuations of this formula correspond to the feature configurations permitted by the feature diagram. For instance, if c is a mandatory sub-feature of p, the formula will contain the constraint c ↔ p. Assuming the diagram is a rooted tree with root r, the basic primitives translate as follows: the root r is always true; a mandatory sub-feature c of p yields c ↔ p; an optional sub-feature c of p yields c → p; an or-group c1, …, cn under p yields p ↔ (c1 ∨ … ∨ cn); an alternative (xor) group additionally requires that at most one of the ci is selected; "A requires B" yields A → B; and "A excludes B" yields ¬(A ∧ B). The semantics of a whole diagram is the conjunction of the translations of the elements contained in the diagram. Therefore, if all elements are written in conjunctive normal form (CNF), the terms can easily be combined with logical AND and the whole logical expression will remain in CNF. Configuring products A product of the SPL is declaratively specified by selecting or deselecting features according to the user's preferences. Such decisions must respect the constraints imposed by the feature model. A "configurator" is a tool that assists the user during a configuration process, for instance by automatically selecting or deselecting features that must or must not, respectively, be selected for the configuration to be completed successfully. Current approaches use unit propagation and CSP solvers. Properties and analyses An analysis of a feature model targets certain properties of the model which are important for marketing strategies or technical decisions. A number of analyses are identified in the literature. Typical analyses determine whether a feature model is void (represents no products), whether it contains dead features (features that cannot be part of any product), or the number of products of the software product line represented by the model. Other analyses focus on comparing several feature models (e.g. to check whether a model is a specialization, refactoring or generalization of another). See also Domain analysis Domain engineering Feature-oriented Programming - a paradigm for software product line synthesis Product Family Engineering Software Product Lines References External links Feature Model Repository Wiki Software Product Line Engineering with Feature Models Software requirements
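As a minimal illustration of the semantics and analyses described above, the following Python sketch encodes the e-shop feature model as a propositional check and brute-forces the product count, dead features, and voidness. The feature names and helper functions are chosen here for illustration only; real configurators and analysis tools use SAT or CSP solvers rather than enumeration.

```python
from itertools import product

# Feature names for the e-shop example (illustrative identifiers, not from any tool).
FEATURES = ["eshop", "catalogue", "payment", "security", "search",
            "bank_transfer", "credit_card", "high", "standard"]

def permitted(cfg):
    """True if a configuration (dict: feature -> bool) satisfies the
    propositional encoding of the e-shop feature model."""
    f = cfg.__getitem__
    rules = [
        f("eshop"),                                                # root is always selected
        f("catalogue") == f("eshop"),                              # mandatory sub-features
        f("payment") == f("eshop"),
        f("security") == f("eshop"),
        (not f("search")) or f("eshop"),                           # optional sub-feature
        f("payment") == (f("bank_transfer") or f("credit_card")),  # or-group
        f("security") == (f("high") or f("standard")),             # alternative (xor) group ...
        not (f("high") and f("standard")),                         # ... with at most one member
        (not f("credit_card")) or f("high"),                       # cross-tree: credit card requires high security
    ]
    return all(rules)

def all_products():
    # Brute-force enumeration is fine for a toy model of nine features.
    for values in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, values))
        if permitted(cfg):
            yield cfg

prods = list(all_products())
print("number of products:", len(prods))                  # product-count analysis
print("void model:", len(prods) == 0)                     # void-model analysis
print("dead features:", [ft for ft in FEATURES            # dead-feature analysis
                         if not any(p[ft] for p in prods)])
```

Run as written, this sketch should enumerate 512 candidate configurations and report 8 permitted products, no dead features, and a non-void model.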
Feature model
Engineering
1,330
18,715,285
https://en.wikipedia.org/wiki/DOCK%20%28protein%29
DOCK (Dedicator of cytokinesis) is a family of related proteins involved in intracellular signalling networks. DOCK family members contain a RhoGEF domain to function as guanine nucleotide exchange factors to promote GDP release and GTP binding to specific Small GTPases of the Rho family (e.g., Rac and Cdc42), leading to their activation since Rho proteins are inactive when bound to GDP but active when bound to GTP. Subfamilies DOCK family proteins are categorised into four subfamilies based on their sequence homology: DOCK-A subfamily Dock180 (also known as Dock1) Dock2 Dock5 DOCK-B subfamily Dock3 (also known as MOCA and PBP) Dock4 DOCK-C subfamily (also known as Zir subfamily) Dock6 (also known as Zir1) Dock7 (also known as Zir2) Dock8 (also known as Zir3) DOCK-D subfamily (also known as Zizimin subfamily) Dock9 (also known as Zizimin1) Dock10 (also known as Zizimin3) Dock11 (also known as Zizimin2) References GTP-binding protein regulators
DOCK (protein)
Chemistry
261
1,993,856
https://en.wikipedia.org/wiki/Ketu%20%28mythology%29
Ketu (Sanskrit: केतु, IAST: ) () is the descending (i.e. 'south') lunar node in Vedic, or Hindu astrology. Personified as a deity, Rahu and Ketu are considered to be the two halves of the immortal asura (demon) Svarbhanu, who was beheaded by the god Vishnu. As per Vedic astrology, Rahu and Ketu have an orbital cycle of 18 years and are always 180 degrees from each other orbitally (as well as in the birth charts). This coincides with the precessional orbit of moon or the ~18-year rotational cycle of the lunar ascending and descending nodes on the earth's ecliptic plane. Ketu rules the Scorpio zodiac sign together with Mangala (traditional ruling planet; Mars in Western astrology). Astronomically, Rahu and Ketu denote the points of intersection of the paths of Surya which is the Sun and Chandra which is the Moon as they move on the celestial sphere, and do not correspond to a physical planet. Therefore, Rahu and Ketu are respectively called the north and the south lunar nodes. Eclipses occur when the Sun and the Moon are at one of these points, giving rise to the mythical understanding that the two are being swallowed by the snake. Hence, Ketu is believed to be responsible for causing the lunar eclipse. Astrology In Hindu astrology, Ketu represents karmic collections both good and bad, as well as spirituality and supernatural influences. Ketu signifies the spiritual process of the refinement of materialisation to the spirit and is considered both malefic and benefic: this process causes sorrow and loss, and yet at the same time turns the individual to God. In other words, it causes material loss in order to force a more spiritual outlook in the person. Ketu is a karaka or indicator of intelligence, wisdom, non-attachment, fantasy, penetrating insight, derangement, and psychic abilities. Ketu is believed to bring prosperity to the devotee's family, and removes the effects of snakebite and illness arising out of poisons. He grants good health, wealth and cattle to his devotees. Ketu is the lord of three nakshatras or lunar mansions: Ashvini, Magha and Mula. Ketu is considered responsible for moksha, sannyasa, self-realization, gnana, a wavering nature, restlessness, the endocrine system and slender physique. The people who come under the influence of Ketu can achieve great heights, most of them spiritual. Rahu, being a karmic planet, shows the necessity and urge to work on a specific area of life where there had been ignorance in the past life. To balance the apparent dissatisfaction one has to go that extra mile to provide a satisfactory settlement in the present lifetime. Rahu can remove all negative qualities of every planet, while Ketu can emphasis every positive quality of the planet. Ruler of Ketu: According to the most popular astrology text Brihat Parashara Hora Shastra (BPHS), if looking for solutions regarding Ketu consider working with mantra of Ganesha & Matsya Exaltation and Debilitation: This has been a debatable point in astrology, as per BPHS Ketu is exalted in the sign of Scorpio and debilitated in Taurus, however, many astrologers have disputed this and most modern astrologers now seem to agree that Ketu is exalted in Sagittarius and debilitated in Gemini. This stands to logic as Ketu is a torso and a prominent part of Sagittarius is a big horse torso attached to a male upper body. Negative Significations: While Ketu is considered malefic and has been mostly associated with negative things. 
Most people consider it a difficult planet, as it manifests as obstacles on the material plane; however, Lord Ganesha mantras are believed to help remedy those issues because Lord Ganesha is the presiding deity of Ketu. Ketu often brings a sense of complete detachment, losses, mindlessness, wandering, and confusion in one's life. Positive Significations: There is a much deeper side to Ketu, and it has been called the most spiritual of all planets. Ketu has been considered the planet of enlightenment and liberation. As the one who has "lost his head" (worldly senses = Tattva (Ayyavazhi)), Ketu is a personification of renunciation (a torso without a head that needs nothing) and the ascetic who wants to go beyond mundane life and achieve final liberation. Friendly Planets: Ketu is a friend of Mercury (Budha), Venus (Shukra), and Saturn (Shani); Jupiter (Brihaspati) is neutral in friendship. The Sun (Surya), Moon (Chandra), and Mars (Mangala) are Ketu's enemies. See also Sköll References External links Asura Danavas Navagraha Sun myths Moon myths Eclipses
Ketu (mythology)
Astronomy
1,040
529,613
https://en.wikipedia.org/wiki/Binding%20site
In biochemistry and molecular biology, a binding site is a region on a macromolecule such as a protein that binds to another molecule with specificity. The binding partner of the macromolecule is often referred to as a ligand. Ligands may include other proteins (resulting in a protein–protein interaction), enzyme substrates, second messengers, hormones, or allosteric modulators. The binding event is often, but not always, accompanied by a conformational change that alters the protein's function. Binding to protein binding sites is most often reversible (transient and non-covalent), but can also be covalent reversible or irreversible. Function Binding of a ligand to a binding site on protein often triggers a change in conformation in the protein and results in altered cellular function. Hence binding site on protein are critical parts of signal transduction pathways. Types of ligands include neurotransmitters, toxins, neuropeptides, and steroid hormones. Binding sites incur functional changes in a number of contexts, including enzyme catalysis, molecular pathway signaling, homeostatic regulation, and physiological function. Electric charge, steric shape and geometry of the site selectively allow for highly specific ligands to bind, activating a particular cascade of cellular interactions the protein is responsible for. Catalysis Enzymes incur catalysis by binding more strongly to transition states than substrates and products. At the catalytic binding site, several different interactions may act upon the substrate. These range from electric catalysis, acid and base catalysis, covalent catalysis, and metal ion catalysis. These interactions decrease the activation energy of a chemical reaction by providing favorable interactions to stabilize the high energy molecule. Enzyme binding allows for closer proximity and exclusion of substances irrelevant to the reaction. Side reactions are also discouraged by this specific binding. Types of enzymes that can perform these actions include oxidoreductases, transferases, hydrolases, lyases, isomerases, and ligases. For instance, the transferase hexokinase catalyzes the phosphorylation of glucose to make glucose-6-phosphate. Active site residues of hexokinase allow for stabilization of the glucose molecule in the active site and spur the onset of an alternative pathway of favorable interactions, decreasing the activation energy. Inhibition Protein inhibition by inhibitor binding may induce obstruction in pathway regulation, homeostatic regulation and physiological function. Competitive inhibitors compete with substrate to bind to free enzymes at active sites and thus impede the production of the enzyme-substrate complex upon binding. For example, carbon monoxide poisoning is caused by the competitive binding of carbon monoxide as opposed to oxygen in hemoglobin. Uncompetitive inhibitors, alternatively, bind concurrently with substrate at active sites. Upon binding to an enzyme substrate (ES) complex, an enzyme substrate inhibitor (ESI) complex is formed. Similar to competitive inhibitors, the rate at product formation is decreased also. Lastly, mixed inhibitors are able to bind to both the free enzyme and the enzyme-substrate complex. However, in contrast to competitive and uncompetitive inhibitors, mixed inhibitors bind to the allosteric site. Allosteric binding induces conformational changes that may increase the protein's affinity for substrate. This phenomenon is called positive modulation. 
Conversely, allosteric binding that decreases the protein's affinity for substrate is negative modulation. Types Active site At the active site, a substrate binds to an enzyme to induce a chemical reaction. Substrates, transition states, and products can bind to the active site, as well as any competitive inhibitors. For example, in the context of protein function, the binding of calcium to troponin in muscle cells can induce a conformational change in troponin. This allows for tropomyosin to expose the actin-myosin binding site to which the myosin head binds to form a cross-bridge and induce a muscle contraction. In the context of the blood, an example of competitive binding is carbon monoxide which competes with oxygen for the active site on heme. Carbon monoxide's high affinity may outcompete oxygen in the presence of low oxygen concentration. In these circumstances, the binding of carbon monoxide induces a conformation change that discourages heme from binding to oxygen, resulting in carbon monoxide poisoning. Allosteric site At the regulatory site, the binding of a ligand may elicit amplified or inhibited protein function. The binding of a ligand to an allosteric site of a multimeric enzyme often induces positive cooperativity, that is the binding of one substrate induces a favorable conformation change and increases the enzyme's likelihood to bind to a second substrate. Regulatory site ligands can involve homotropic and heterotropic ligands, in which single or multiple types of molecule affects enzyme activity respectively. Enzymes that are highly regulated are often essential in metabolic pathways. For example, phosphofructokinase (PFK), which phosphorylates fructose in glycolysis, is largely regulated by ATP. Its regulation in glycolysis is imperative because it is the committing and rate limiting step of the pathway. PFK also controls the amount of glucose designated to form ATP through the catabolic pathway. Therefore, at sufficient levels of ATP, PFK is allosterically inhibited by ATP. This regulation efficiently conserves glucose reserves, which may be needed for other pathways. Citrate, an intermediate of the citric acid cycle, also works as an allosteric regulator of PFK. Single- and multi-chain binding sites Binding sites can be characterized also by their structural features. Single-chain sites (of “monodesmic” ligands, μόνος: single, δεσμός: binding) are formed by a single protein chain, while multi-chain sites (of "polydesmic” ligands, πολοί: many) are frequent in protein complexes, and are formed by ligands that bind more than one protein chain, typically in or near protein interfaces. Recent research shows that binding site structure has profound consequences for the biology of protein complexes (evolution of function, allostery). Cryptic binding sites Cryptic binding sites are the binding sites that are transiently formed in an apo form or that are induced by ligand binding. Considering the cryptic binding sites increases the size of the potentially “druggable” human proteome from ~40% to ~78% of disease-associated proteins. The binding sites have been investigated by: support vector machine applied to "CryptoSite" data set, Extension of "CryptoSite" data set, long timescale molecular dynamics simulation with Markov state model and with biophysical experiments, and cryptic-site index that is based on relative accessible surface area. Binding curves Binding curves describe the binding behavior of ligand to a protein. 
Curves can be characterized by their shape, sigmoidal or hyperbolic, which reflect whether or not the protein exhibits cooperative or noncooperative binding behavior respectively. Typically, the x-axis describes the concentration of ligand and the y-axis describes the fractional saturation of ligands bound to all available binding sites. The Michaelis Menten equation is usually used when determining the shape of the curve. The Michaelis Menten equation is derived based on steady-state conditions and accounts for the enzyme reactions taking place in a solution. However, when the reaction takes place while the enzyme is bound to a substrate, the kinetics play out differently. Modeling with binding curves are useful when evaluating the binding affinities of oxygen to hemoglobin and myoglobin in the blood. Hemoglobin, which has four heme groups, exhibits cooperative binding. This means that the binding of oxygen to a heme group on hemoglobin induces a favorable conformation change that allows for increased binding favorability of oxygen for the next heme groups. In these circumstances, the binding curve of hemoglobin will be sigmoidal due to its increased binding favorability for oxygen. Since myoglobin has only one heme group, it exhibits noncooperative binding which is hyperbolic on a binding curve. Applications Biochemical differences between different organisms and humans are useful for drug development. For instance, penicillin kills bacteria by inhibiting the bacterial enzyme DD-transpeptidase, destroying the development of the bacterial cell wall and inducing cell death. Thus, the study of binding sites is relevant to many fields of research, including cancer mechanisms, drug formulation, and physiological regulation. The formulation of an inhibitor to mute a protein's function is a common form of pharmaceutical therapy. In the scope of cancer, ligands that are edited to have a similar appearance to the natural ligand are used to inhibit tumor growth. For example, Methotrexate, a chemotherapeutic, acts as a competitive inhibitor at the dihydrofolate reductase active site. This interaction inhibits the synthesis of tetrahydrofolate, shutting off production of DNA, RNA and proteins. Inhibition of this function represses neoplastic growth and improves severe psoriasis and adult rheumatoid arthritis. In cardiovascular illnesses, drugs such as beta blockers are used to treat patients with hypertension. Beta blockers (β-Blockers) are antihypertensive agents that block the binding of the hormones adrenaline and noradrenaline to β1 and β2 receptors in the heart and blood vessels. These receptors normally mediate the sympathetic "fight or flight" response, causing constriction of the blood vessels. Competitive inhibitors are also largely found commercially. Botulinum toxin, known commercially as Botox, is a neurotoxin that causes flaccid paralysis in the muscle due to binding to acetylcholine dependent nerves. This interaction inhibits muscle contractions, giving the appearance of smooth muscle. A number of computational tools have been developed for the prediction of the location of binding sites on proteins. These can be broadly classified into sequence based or structure based. Sequence based methods rely on the assumption that the sequences of functionally conserved portions of proteins such as binding site are conserved. Structure based methods require the 3D structure of the protein. These methods in turn can be subdivided into template and pocket based methods. 
Template based methods search for 3D similarities between the target protein and proteins with known binding sites. The pocket based methods search for concave surfaces or buried pockets in the target protein that possess features such as hydrophobicity and hydrogen bonding capacity that would allow them to bind ligands with high affinity. Even though the term pocket is used here, similar methods can be used to predict binding sites used in protein-protein interactions that are usually more planar, not in pockets. References External links Finding the binding site of a protein with an online tool Drawing the active site of an enzyme Chemical bonding Structural biology Protein structure
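As a worked form of the binding-curve discussion above, the hyperbolic (noncooperative, e.g. myoglobin) and sigmoidal (cooperative, e.g. hemoglobin) saturation curves can be written as follows; the Hill coefficient n and the symbols used here are standard textbook notation rather than values taken from this article.

```latex
% Noncooperative (hyperbolic) binding, e.g. myoglobin: fractional
% saturation Y as a function of ligand concentration [L] and the
% dissociation constant K_d.
\[
Y \;=\; \frac{[\mathrm{L}]}{K_d + [\mathrm{L}]}
\]
% Cooperative (sigmoidal) binding, e.g. hemoglobin, described by the
% Hill equation with Hill coefficient n > 1.
\[
Y \;=\; \frac{[\mathrm{L}]^{\,n}}{K_d^{\,n} + [\mathrm{L}]^{\,n}}
\]
% For an enzyme-catalysed reaction in solution, the analogous
% Michaelis–Menten form gives the initial rate v_0 from the substrate
% concentration [S].
\[
v_0 \;=\; \frac{V_{\max}\,[\mathrm{S}]}{K_M + [\mathrm{S}]}
\]
```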
Binding site
Physics,Chemistry,Materials_science,Biology
2,266
77,165,711
https://en.wikipedia.org/wiki/Lindb%C3%A4cks
Lindbäcks is a four-generation family company in Piteå, Sweden, founded in 1924. It produces and sells prefabricated multi-story house modules. References Construction and civil engineering companies Construction and civil engineering companies of Sweden Manufacturing companies established in 1924 Privately held companies of Sweden Swedish companies established in 1924
Lindbäcks
Engineering
65
47,967
https://en.wikipedia.org/wiki/Authentication
Authentication (from authentikos, "real, genuine", from αὐθέντης authentes, "author") is the act of proving an assertion, such as the identity of a computer system user. In contrast with identification, the act of indicating a person or thing's identity, authentication is the process of verifying that identity. It might involve validating personal identity documents, verifying the authenticity of a website with a digital certificate, determining the age of an artifact by carbon dating, or ensuring that a product or document is not counterfeit. Methods Authentication is relevant to multiple fields. In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain person or in a certain place or period of history. In computer science, verifying a user's identity is often required to allow access to confidential data or systems. Authentication can be considered to be of three types: The first type of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member, or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sports memorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while they may not have evidence that every step in the supply chain was authenticated. Centralized authority-based trust relationships back most secure internet communication through known public certificate authorities; decentralized peer-based trust, also known as a web of trust, is used for personal services such as email or files and trust is established by known individuals signing each other's cryptographic key for instance. The second type of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist, on the other hand, might use carbon dating to verify the age of an artifact, do a chemical and spectroscopic analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation. Attribute comparison may be vulnerable to forgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery. In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art-forger, forged the work of his father and provided a certificate for its provenance as well. 
Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught. Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify. The third type of authentication relies on documentation or other external affirmations. In criminal courts, the rules of evidence often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery and perjury and are also vulnerable to being separated from the artifact and lost. In computer science, a user can be given access to secure systems based on user credentials that imply authenticity. A network administrator can give a user a password, or provide the user with a key card or other access devices to allow system access. In this case, authenticity is implied but not guaranteed. Consumer goods such as pharmaceuticals, perfume, and clothing can use all forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation. As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of a trademark on the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect from counterfeiters, including adding holograms, security rings, security threads and color shifting ink. Authentication factors The ways in which someone may be authenticated fall into three categories, based on what is known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity before being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority. Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified. The three factors (classes) and some of the elements of each factor are: Knowledge: Something the user knows (e.g., a password, partial password, passphrase, personal identification number (PIN), challenge–response (the user must answer a question or pattern), security question). Ownership: Something the user has (e.g., wrist band, ID card, security token, implanted device, cell phone with a built-in hardware token, software token, or cell phone holding a software token). 
Inherence: Something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric identifiers). Single-factor authentication As the weakest level of authentication, only a single component from one of the three categories of factors is used to authenticate an individual's identity. The use of only one factor does not offer much protection from misuse or malicious intrusion. This type of authentication is not recommended for financial or personally relevant transactions that warrant a higher level of security. Multi-factor authentication Multi-factor authentication involves two or more authentication factors (something you know, something you have, or something you are). Two-factor authentication is a special case of multi-factor authentication involving exactly two factors. For example, using a bank card (something the user has) along with a PIN (something the user knows) provides two-factor authentication. Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but this is still a two-factor authentication. Authentication types Strong authentication The United States government's National Information Assurance Glossary defines strong authentication as a layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information. The European Central Bank (ECB) has defined strong authentication as "a procedure based on two or more of the three authentication factors". The factors that are used must be mutually independent and at least one factor must be "non-reusable and non-replicable", except in the case of an inherence factor and must also be incapable of being stolen off the Internet. In the European, as well as in the US-American understanding, strong authentication is very similar to multi-factor authentication or 2FA, but exceeding those with more rigorous requirements. The FIDO Alliance has been striving to establish technical specifications for strong authentication. Continuous authentication Conventional computer systems authenticate users only at the initial log-in session, which can be the cause of a critical security flaw. To resolve this problem, systems need continuous user authentication methods that continuously monitor and authenticate users based on some biometric trait(s). A study used behavioural biometrics based on writing styles as a continuous authentication method. Recent research has shown the possibility of using smartphones sensors and accessories to extract some behavioral attributes such as touch dynamics, keystroke dynamics and gait recognition. These attributes are known as behavioral biometrics and could be used to verify or identify users implicitly and continuously on smartphones. The authentication systems that have been built based on these behavioral biometric traits are known as active or continuous authentication systems. 
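A minimal sketch of the two-factor example above (a knowledge factor plus an ownership factor), assuming a server that stores a PBKDF2 password hash and shares a TOTP key with the user's device; the function names, parameters, and secrets are illustrative and not taken from any particular product.

```python
import hashlib, hmac, secrets, struct, time

def hash_password(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # Knowledge factor: derive a verifier from the password with PBKDF2-HMAC-SHA256.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    # Ownership factor: time-based one-time password (HOTP truncation, RFC 4226/6238 style).
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def two_factor_login(password: str, otp: str, stored_salt: bytes,
                     stored_hash: bytes, totp_key: bytes) -> bool:
    # Both factors must verify; constant-time comparisons avoid timing leaks.
    knows = hmac.compare_digest(hash_password(password, stored_salt), stored_hash)
    has = hmac.compare_digest(otp, totp(totp_key))
    return knows and has

# Enrollment (illustrative): the server stores the salt, the password hash,
# and the TOTP key that is also provisioned to the user's phone or token.
salt, totp_key = secrets.token_bytes(16), secrets.token_bytes(20)
stored = hash_password("correct horse battery staple", salt)
print(two_factor_login("correct horse battery staple", totp(totp_key), salt, stored, totp_key))  # True
```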
Digital authentication The term digital authentication, also known as electronic authentication or e-authentication, refers to a group of processes where the confidence for user identities is established and presented via electronic methods to an information system. The digital authentication process creates technical challenges because of the need to authenticate individuals or entities remotely over a network. The American National Institute of Standards and Technology (NIST) has created a generic model for digital authentication that describes the processes that are used to accomplish secure authentication: Enrollment – an individual applies to a credential service provider (CSP) to initiate the enrollment process. After successfully proving the applicant's identity, the CSP allows the applicant to become a subscriber. Authentication – After becoming a subscriber, the user receives an authenticator e.g., a token and credentials, such as a user name. He or she is then permitted to perform online transactions within an authenticated session with a relying party, where they must provide proof that he or she possesses one or more authenticators. Life-cycle maintenance – the CSP is charged with the task of maintaining the user's credential over the course of its lifetime, while the subscriber is responsible for maintaining his or her authenticator(s). The authentication of information can pose special problems with electronic communication, such as vulnerability to man-in-the-middle attacks, whereby a third party taps into the communication stream, and poses as each of the two other communicating parties, in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity. Product authentication Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods, such as electronics, music, apparel, and counterfeit medications, have been sold as being legitimate. Efforts to control the supply chain and educate consumers help ensure that authentic products are sold and used. Even security printing on packages, labels, and nameplates, however, is subject to counterfeiting. In their anti-counterfeiting technology guide, the EUIPO Observatory on Infringements of Intellectual Property Rights categorizes the main anti-counterfeiting technologies on the market currently into five main categories: electronic, marking, chemical and physical, mechanical, and technologies for digital media. Products or their packaging can include a variable QR Code. A QR Code alone is easy to verify but offers a weak level of authentication as it offers no protection against counterfeits unless scan data is analyzed at the system level to detect anomalies. To increase the security level, the QR Code can be combined with a digital watermark or copy detection pattern that are robust to copy attempts and can be authenticated with a smartphone. A secure key storage device can be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally, the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature as an authentication chip can be mechanically attached and read through a connector to the host e.g. an authenticated ink tank for use with a printer. 
For products and services that these secure coprocessors can be applied to, they can offer a solution that can be much more difficult to counterfeit than most other options while at the same time being more easily verified. Packaging Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products. Some package constructions are more difficult to copy and some have pilfer indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs, RFID tags, or electronic article surveillance tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include: Taggant fingerprinting – uniquely coded microscopic materials that are verified from a database Encrypted micro-particles – unpredictably placed markings (numbers, layers and colors) not visible to the human eye Holograms – graphics printed on seals, patches, foils or labels and used at the point of sale for visual verification Micro-printing – second-line authentication often used on currencies Serialized barcodes UV printing – marks only visible under UV light Track and trace systems – use codes to link products to the database tracking system Water indicators – become visible when contacted with water DNA tracking – genes embedded onto labels that can be traced Color-shifting ink or film – visible marks that switch colors or texture when tilted Tamper evident seals and tapes – destructible or graphically verifiable at point of sale 2d barcodes – data codes that can be tracked RFID chips NFC chips Information content Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging – anything from a box to e-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like: A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint. A shared secret, such as a passphrase, in the content of the message. An electronic signature; public-key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key. The opposite problem is the detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism. 
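A brief sketch of the shared-secret approach to authenticating message content mentioned above: the sender attaches an HMAC tag computed over the message and the receiver recomputes and compares it. The key and message here are illustrative; a public-key electronic signature would play the same role with a private/public key pair in place of the shared key.

```python
import hashlib, hmac

def tag_message(key: bytes, message: bytes) -> bytes:
    # Sender: authenticate the informational content with a shared secret.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key: bytes, message: bytes, tag: bytes) -> bool:
    # Receiver: recompute the tag and compare in constant time.
    return hmac.compare_digest(hmac.new(key, message, hashlib.sha256).digest(), tag)

shared_key = b"example shared secret"                 # illustrative only
msg = b"The parcel ships on Friday."
tag = tag_message(shared_key, msg)
print(verify_message(shared_key, msg, tag))           # True: content is unchanged
print(verify_message(shared_key, b"tampered", tag))   # False: content was altered
```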
Literacy and literature authentication In literacy, authentication is a readers’ process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is – Does one believe it? Related to that, an authentication project is therefore a reading and writing activity in which students document the relevant research process (). It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent that the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the period. History and state-of-the-art Historically, fingerprints have been used as the most authoritative method of authentication, but court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability. Outside of the legal system as well, fingerprints are easily spoofable, with British Telecom's top computer security official noting that "few" fingerprint readers have not already been tricked by one spoof or another. Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside of a USB device. In a computer data context, cryptographic methods have been developed which are not spoofable if the originator's key has not been compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is irrelevant. However, it is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in the future. If that were to occur, it may call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered. Authorization The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". While authorization often happens immediately after authentication (e.g., when logging into a computer system), this does not mean authorization presupposes authentication: an anonymous agent could be authorized to a limited action set. Access control One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, granting privileges established for that identity. See also Authentication protocol Electronic signature References External links "New NIST Publications Describe Standards for Identity Credentials and Authentication Systems" Access control Applications of cryptography Computer access control Notary Packaging
Authentication
Engineering
3,891
24,739,650
https://en.wikipedia.org/wiki/Elaunin
Elaunin (Greek verb ἐλαύνω, "I steer") is a component of elastic fibers formed from the deposition of elastin between oxytalan fibers. It is found in the periodontal ligament and the connective tissue of the dermis, particularly in association with sweat glands. Overview Identification Unlike oxytalan fibres, elaunin fibres stain with orcein, aldehyde fuchsin and resorcin fuchsin without prior oxidation. Research Findings Elaunin fibers have been found within the secretory coil of human eccrine sweat glands, occurring in bundles of microfibrils with a different consistency from that of elastic fibers; the elaunin fibers found in the secretory coil appear thinner than elastic fibers. Elaunin can also be identified among the fibers of the gingival ligament, where it is one of the main types of elastic fiber present. In the papillary dermis, elaunin is lost when in reduction. See also Elastic fibre References External links https://www.sciencedirect.com/topics/medicine-and-dentistry/elaunin Structural proteins
Elaunin
Chemistry
253
9,896,453
https://en.wikipedia.org/wiki/Eukaryotic%20DNA%20replication
Eukaryotic DNA replication is a conserved mechanism that restricts DNA replication to once per cell cycle. Eukaryotic DNA replication of chromosomal DNA is central to the duplication of a cell and is necessary for the maintenance of the eukaryotic genome. DNA replication is the action of DNA polymerases synthesizing a DNA strand complementary to the original template strand. To synthesize DNA, the double-stranded DNA is unwound by DNA helicases ahead of polymerases, forming a replication fork containing two single-stranded templates. Replication processes permit copying a single DNA double helix into two DNA helices, which are divided into the daughter cells at mitosis. The major enzymatic functions carried out at the replication fork are well conserved from prokaryotes to eukaryotes, but the replication machinery in eukaryotic DNA replication is a much larger complex, coordinating many proteins at the site of replication, forming the replisome. The replisome is responsible for copying the entirety of genomic DNA in each proliferative cell. This process allows for the high-fidelity passage of hereditary/genetic information from parental cell to daughter cell and is thus essential to all organisms. Much of the cell cycle is built around ensuring that DNA replication occurs without errors. In G1 phase of the cell cycle, many of the DNA replication regulatory processes are initiated. In eukaryotes, the vast majority of DNA synthesis occurs during S phase of the cell cycle, and the entire genome must be unwound and duplicated to form two daughter copies. During G2, any damaged DNA or replication errors are corrected. Finally, one copy of the genome is segregated into each daughter cell at mitosis, or M phase. These daughter copies each contain one strand from the parental duplex DNA and one nascent antiparallel strand. This mechanism is conserved from prokaryotes to eukaryotes and is known as semiconservative DNA replication. In semiconservative replication, the site of DNA replication is a fork-like DNA structure, the replication fork, where the DNA helix is open, or unwound, exposing unpaired DNA nucleotides for recognition and base pairing for the incorporation of free nucleotides into double-stranded DNA. Initiation Initiation of eukaryotic DNA replication is the first stage of DNA synthesis where the DNA double helix is unwound and an initial priming event by DNA polymerase α occurs on the leading strand. The priming event on the lagging strand establishes a replication fork. Priming of the DNA helix consists of the synthesis of an RNA primer to allow DNA synthesis by DNA polymerase α. Priming occurs once at the origin on the leading strand and at the start of each Okazaki fragment on the lagging strand. Origin of replication Replication starts at origins of replication. DNA sequences containing these sites were initially isolated in the late 1970s on the basis of their ability to support replication of plasmids, hence the designation of autonomously replicating sequences (ARS). Origins vary widely in their efficiency, with some being used in almost every cell cycle while others may be used in only one in one thousand S phases. The total number of yeast ARSs is at least 1600, but may be more than 5000 if less active sites are counted; that is, there may be an ARS every 2000 to 8000 base pairs. Pre-replicative complex Multiple replicative proteins assemble on and dissociate from these replicative origins to initiate DNA replication,
The formation of the pre-replication complex (pre-RC) is a key intermediate in the replication initiation process. Association of the origin recognition complex (ORC) with a replication origin recruits the cell division cycle 6 protein (Cdc6) to form a platform for the loading of the minichromosome maintenance (Mcm2–7) complex proteins, facilitated by the chromatin licensing and DNA replication factor 1 protein (Cdt1). The ORC, Cdc6, and Cdt1 together are required for the stable association of the Mcm2-7 complex with replicative origins during the G1 phase of the cell cycle. Eukaryotic origins of replication control the formation of several protein complexes that lead to the assembly of two bidirectional DNA replication forks. These events are initiated by the formation of the pre-replication complex (pre-RC) at the origins of replication. This process takes place in the G1 stage of the cell cycle. The pre-RC formation involves the ordered assembly of many replication factors including the origin recognition complex (ORC), Cdc6 protein, Cdt1 protein, and minichromosome maintenance proteins (Mcm2-7). Once the pre-RC is formed, activation of the complex is triggered by two kinases, cyclin-dependent kinase (CDK) and Dbf4-dependent kinase (DDK), that help transition the pre-RC to the initiation complex before the initiation of DNA replication. This transition involves the ordered assembly of additional replication factors to unwind the DNA and accumulate the multiple eukaryotic DNA polymerases around the unwound DNA. Central to the question of how bidirectional replication forks are established at replication origins is the mechanism by which ORC recruits two head-to-head Mcm2-7 complexes to every replication origin to form the pre-replication complex. Origin recognition complex The first step in the assembly of the pre-replication complex (pre-RC) is the binding of the origin recognition complex (ORC) to the replication origin. In late mitosis, the Cdc6 protein joins the bound ORC, followed by the binding of the Cdt1-Mcm2-7 complex. ORC, Cdc6, and Cdt1 are all required to load the six-protein minichromosome maintenance (Mcm2–7) complex onto the DNA. The ORC is a six-subunit, Orc1p-6, protein complex that selects the replicative origin sites on DNA for initiation of replication, and ORC binding to chromatin is regulated through the cell cycle. Generally, the function and size of the ORC subunits are conserved throughout many eukaryotic genomes, with the difference being their diverged DNA binding sites. The most widely studied origin recognition complex is that of Saccharomyces cerevisiae, or yeast, which is known to bind to the autonomously replicating sequence (ARS). The S. cerevisiae ORC interacts specifically with both the A and B1 elements of yeast origins of replication, spanning a region of 30 base pairs. The binding to these sequences requires ATP. The atomic structure of the S. cerevisiae ORC bound to ARS DNA has been determined. Orc1, Orc2, Orc3, Orc4, and Orc5 encircle the A element by means of two types of interactions, base non-specific and base-specific, that bend the DNA at the A element. All five subunits contact the sugar phosphate backbone at multiple points of the A element to form a tight grip without base specificity. Orc1 and Orc2 contact the minor groove of the A element while a winged helix domain of Orc4 contacts the methyl groups of the invariant Ts in the major groove of the A element via an insertion helix (IH). 
The absence of this IH in metazoans explains the lack of sequence specificity in human ORC. Removing the IH from the ScORC causes it to lose its specificity for the A element, and to bind promiscuously and preferentially (83%) to promoter regions. The ARS DNA is also bent at the B1 element through interactions with Orc2, Orc5 and Orc6. The bending of origin DNA by ORC appears to be evolutionarily conserved suggesting that it may be required for the Mcm2-7 complex loading mechanism. When the ORC binds to DNA at replication origins, it serves as a scaffold for the assembly of other key initiation factors of the pre-replicative complex. This pre-replicative complex assembly during the G1 stage of the cell cycle is required prior to the activation of DNA replication during the S phase. The removal of at least part of the complex (Orc1) from the chromosome at metaphase is part of the regulation of mammalian ORC to ensure that the pre-replicative complex formation prior to the completion of metaphase is eliminated. Cdc6 protein Binding of the cell division cycle 6 (Cdc6) protein to the origin recognition complex (ORC) is an essential step in the assembly of the pre-replication complex (pre-RC) at the origins of replication. Cdc6 binds to the ORC on DNA in an ATP-dependent manner, which induces a change in the pattern of origin binding that requires Orc1 ATPase. Cdc6 requires ORC in order to associate with chromatin and is in turn required for the Cdt1-Mcm2-7 heptamer to bind to the chromatin. The ORC-Cdc6 complex forms a ring-shaped structure and is analogous to other ATP-dependent protein machines. The levels and activity of Cdc6 regulate the frequency with which the origins of replication are utilized during the cell cycle. Cdt1 protein The chromatin licensing and DNA replication factor 1 (Cdt1) protein is required for the licensing of chromatin for DNA replication. In S. cerevisiae, Cdt1 facilitates the loading of the Mcm2-7 complex one at a time onto the chromosome by stabilising the left-handed open-ring structure of the Mcm2-7 single hexamer. Cdt1 has been shown to associate with the C terminus of Cdc6 to cooperatively promote the association of Mcm proteins to the chromatin. The cryo-EM structure of the OCCM (ORC-Cdc6-Cdt1-MCM) complex shows that the Cdt1-CTD interacts with the Mcm6-WHD. In metazoans, Cdt1 activity during the cell cycle is tightly regulated by its association with the protein geminin, which both inhibits Cdt1 activity during S phase in order to prevent re-replication of DNA and prevents it from ubiquitination and subsequent proteolysis. Minichromosome maintenance protein complex The minichromosome maintenance (Mcm) proteins were named after a genetic screen for DNA replication initiation mutants in S. cerevisiae that affect plasmid stability in an ARS-specific manner. Mcm2, Mcm3, Mcm4, Mcm5, Mcm6 and Mcm7 form a hexameric complex that has an open-ring structure with a gap between Mcm2 and Mcm5. The assembly of the Mcm proteins onto chromatin requires the coordinated function of the origin recognition complex (ORC), Cdc6, and Cdt1. Once the Mcm proteins have been loaded onto the chromatin, ORC and Cdc6 can be removed from the chromatin without preventing subsequent DNA replication. This observation suggests that the primary role of the pre-replication complex is to correctly load the Mcm proteins. 
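The ordered, mutually dependent loading just described, in which ORC binds the origin, Cdc6 joins the bound ORC, and Cdt1 then delivers the Mcm2-7 complex, can be pictured as a simple dependency check. The short Python sketch below is a toy illustration of that licensing order only; the factor names come from the text above, but the rule itself is a schematic assumption, not a model of the underlying biochemistry.

LOADING_ORDER = ["ORC", "Cdc6", "Cdt1", "Mcm2-7"]

def can_load(factor, bound):
    # A factor can join the origin only once every upstream factor is already bound.
    upstream = LOADING_ORDER[:LOADING_ORDER.index(factor)]
    return all(f in bound for f in upstream)

bound = set()
print(can_load("Mcm2-7", bound))      # False: the helicase cannot load onto a bare origin
for factor in LOADING_ORDER:
    if can_load(factor, bound):
        bound.add(factor)
print(bound == set(LOADING_ORDER))    # True: the origin is now licensed (pre-RC formed)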
The Mcm proteins on chromatin form a head-to-head double hexamer with the two rings slightly tilted, twisted and off-centred to create a kink in the central channel where the bound DNA is captured at the interface of the two rings. Each hexameric Mcm2-7 ring first serves as the scaffold for the assembly of the replisome and then as the core of the catalytic CMG (Cdc45-MCM-GINS) helicase, which is a main component of the replisome. Each Mcm protein is highly related to all others, but unique sequences distinguishing each of the subunit types are conserved across eukaryotes. All eukaryotes have exactly six Mcm protein analogs that each fall into one of the existing classes (Mcm2-7), indicating that each Mcm protein has a unique and important function. Minichromosome maintenance proteins are required for DNA helicase activity. Inactivation of any of the six Mcm proteins during S phase irreversibly prevents further progression of the replication fork suggesting that the helicase cannot be recycled and must be assembled at replication origins. Along with the minichromosome maintenance protein complex helicase activity, the complex also has associated ATPase activity. Studies have shown that within the Mcm protein complex are specific catalytic pairs of Mcm proteins that function together to coordinate ATP hydrolysis. These studies, confirmed by cryo-EM structures of the Mcm2-7 complexes, showed that the Mcm complex is a hexamer with subunits arranged in a ring in the order of Mcm2-Mcm6-Mcm4-Mcm7-Mcm3-Mcm5-. Both members of each catalytic pair contribute to the conformation that allows ATP binding and hydrolysis and the mixture of active and inactive subunits presumably allows the Mcm hexameric complex to complete ATP binding and hydrolysis as a whole to create a coordinated ATPase activity. The nuclear localization of the minichromosome maintenance proteins is regulated in budding yeast cells. The Mcm proteins are present in the nucleus in G1 stage and S phase of the cell cycle, but are exported to the cytoplasm during the G2 stage and M phase. A complete and intact six subunit Mcm complex is required to enter into the cell nucleus. In S. cerevisiae, nuclear export is promoted by cyclin-dependent kinase (CDK) activity. Mcm proteins that are associated with chromatin are protected from CDK export machinery due to the lack of accessibility to CDK. Initiation complex During the G1 stage of the cell cycle, the replication initiation factors, origin recognition complex (ORC), Cdc6, Cdt1, and minichromosome maintenance (Mcm) protein complex, bind sequentially to DNA to form a head-to-head dimer of the MCM ring complex, known as the pre-replication complex (pre-RC). While the yeast pre-RC forms a closed DNA complex, the human pre-RC forms an open complex. At the transition of the G1 stage to the S phase of the cell cycle, S phase–specific cyclin-dependent protein kinase (CDK) and Cdc7/Dbf4 kinase (DDK) transform the inert pre-RC into an active complex capable of assembling two bidirectional replisomes. CryoEM structures showed that two DDKs independently dock onto the interface of the MCM double hexamer straddling across the two rings. The sequential phosphorylation of multiple substrates on the NTEs of Mcm4, Mcm2 and Mcm6 is achieved by a wobble mechanism whereby Dbf4 assumes different wobble states to position Cdc7 over its multiple substrates. Phosphorylation of the MCM double hexamer, the Mcm4-NSD in particular, by DDK is essential for viability in yeast. 
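As a small bookkeeping aid, the ring order quoted above (Mcm2-Mcm6-Mcm4-Mcm7-Mcm3-Mcm5, closing back on Mcm2) fixes which pairs of neighbouring subunits can form the composite ATPase sites. The Python lines below merely enumerate those interfaces from the stated order; the pairing logic is taken from the text, not derived from any structural data.

# Subunit order around the Mcm2-7 ring, as stated in the text.
ring = ["Mcm2", "Mcm6", "Mcm4", "Mcm7", "Mcm3", "Mcm5"]

# Each ATP-binding/hydrolysis site lies at the interface between two adjacent subunits,
# so the ring order determines the six candidate catalytic pairs (wrapping back to Mcm2).
interfaces = [(ring[i], ring[(i + 1) % len(ring)]) for i in range(len(ring))]
print(interfaces)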
The recruitment of Cdc45 and GINS follows after the activation of the MCMs by DDK and CDK. Cdc45 protein Cell division cycle 45 (Cdc45) protein is a critical component for the conversion of the pre-replicative complex to the initiation complex. The Cdc45 protein assembles at replication origins before initiation, is required for replication to begin in Saccharomyces cerevisiae, and has an essential role during elongation. Thus, Cdc45 has central roles in both initiation and elongation phases of chromosomal DNA replication. Cdc45 associates with chromatin after the beginning of initiation in late G1 stage and during the S phase of the cell cycle. Cdc45 physically associates with Mcm5 and displays genetic interactions with five of the six members of the Mcm gene family and the ORC2 gene. The loading of Cdc45 onto chromatin is critical for loading various other replication proteins, including DNA polymerase α, DNA polymerase ε, replication protein A (RPA) and proliferating cell nuclear antigen (PCNA), onto chromatin. Within a Xenopus nucleus-free system, it has been demonstrated that Cdc45 is required for the unwinding of plasmid DNA. The Xenopus nucleus-free system also demonstrates that DNA unwinding and tight RPA binding to chromatin occur only in the presence of Cdc45. Binding of Cdc45 to chromatin depends on Clb-Cdc28 kinase activity as well as functional Cdc6 and Mcm2, which suggests that Cdc45 associates with the pre-RC after activation of S-phase cyclin-dependent kinases (CDKs). As indicated by the timing and the CDK dependence, binding of Cdc45 to chromatin is crucial for commitment to initiation of DNA replication. During S phase, Cdc45 physically interacts with Mcm proteins on chromatin; however, dissociation of Cdc45 from chromatin is slower than that of the Mcm proteins, which indicates that the proteins are released by different mechanisms. GINS The six minichromosome maintenance proteins and Cdc45 are essential during initiation and elongation for the movement of replication forks and for unwinding of the DNA. GINS is essential for the interaction of Mcm and Cdc45 at the origins of replication during initiation and then at DNA replication forks as the replisome progresses. The GINS complex is composed of four small proteins: Sld5 (Cdc105), Psf1 (Cdc101), Psf2 (Cdc102) and Psf3 (Cdc103); the name GINS comes from 'go, ichi, ni, san', which means '5, 1, 2, 3' in Japanese. Cdc45, Mcm2-7 and GINS together form the CMG helicase, the replicative helicase of the replisome. Although the Mcm2-7 complex alone has weak helicase activity, Cdc45 and GINS are required for robust helicase activity. Mcm10 Mcm10 is essential for chromosome replication and interacts with the minichromosome maintenance 2-7 helicase that is loaded in an inactive form at origins of DNA replication. Mcm10 also chaperones the catalytic DNA polymerase α and helps stabilize the polymerase at replication forks. DDK and CDK kinases At the onset of S phase, the pre-replicative complex must be activated by two S phase-specific kinases in order to form an initiation complex at an origin of replication. One kinase is the Cdc7-Dbf4 kinase called Dbf4-dependent kinase (DDK) and the other is cyclin-dependent kinase (CDK). Chromatin-binding assays of Cdc45 in yeast and Xenopus have shown that a downstream event of CDK action is loading of Cdc45 onto chromatin. Cdc6 has been speculated to be a target of CDK action, because of the association between Cdc6 and CDK, and the CDK-dependent phosphorylation of Cdc6. 
The CDK-dependent phosphorylation of Cdc6 has been considered to be required for entry into the S phase. Both the catalytic subunit of DDK, Cdc7, and the activator protein, Dbf4, are conserved in eukaryotes and are required for the onset of S phase of the cell cycle. Both Dbf4 and Cdc7 are required for the loading of Cdc45 onto chromatin origins of replication. The target for binding of the DDK kinase is the chromatin-bound form of the Mcm complex. High resolution cryoEM structures showed that the Dbf4 subunit of DDK straddles across the hexamer interface of the DNA-bound MCM-DH, contacting Mcm2 of one hexamer and Mcm4/6 of the opposite hexamer. Mcm2, Mcm4 and Mcm6 are all substrates of phosphorylation by DDK, but only the N-terminal serine/threonine-rich domain (NSD) of Mcm4 is an essential DDK target. Phosphorylation of the NSD leads to the activation of Mcm helicase activity. Dpb11, Sld3, and Sld2 proteins Sld3, Sld2, and Dpb11 interact with many replication proteins. Sld3 and Cdc45 form a complex that associates with the pre-RC at the early origins of replication even in the G1 phase and with the later origins of replication in the S phase in a mutually Mcm-dependent manner. Dpb11 and Sld2 interact with Polymerase ɛ, and cross-linking experiments have indicated that Dpb11 and Polymerase ɛ coprecipitate in the S phase and associate with replication origins. Sld3 and Sld2 are phosphorylated by CDK, which enables the two replicative proteins to bind to Dpb11. Dpb11 has two pairs of BRCA1 C Terminus (BRCT) domains, which are known as phosphopeptide-binding domains. The N-terminal pair of the BRCT domains binds to phosphorylated Sld3, and the C-terminal pair binds to phosphorylated Sld2. Both of these interactions are essential for CDK-dependent activation of DNA replication in budding yeast. Dpb11 also interacts with GINS and participates in the initiation and elongation steps of chromosomal DNA replication. GINS is one of the replication protein complexes found at replication forks and forms a complex with Cdc45 and Mcm. These phosphorylation-dependent interactions between Dpb11, Sld2, and Sld3 are essential for CDK-dependent activation of DNA replication, and by using cross-linking reagents within some experiments, a fragile complex was identified called the pre-loading complex (pre-LC). This complex contains Pol ɛ, GINS, Sld2, and Dpb11. The pre-LC is found to form before any association with the origins in a CDK-dependent and DDK-dependent manner, and CDK activity regulates the initiation of DNA replication through the formation of the pre-LC. Elongation The formation of the pre-replicative complex (pre-RC) marks the potential sites for the initiation of DNA replication. Consistent with the minichromosome maintenance complex encircling double stranded DNA, formation of the pre-RC does not lead to the immediate unwinding of origin DNA or the recruitment of DNA polymerases. Instead, the pre-RC that is formed during the G1 phase of the cell cycle is only activated to unwind the DNA and initiate replication after the cells pass from the G1 to the S phase of the cell cycle. Once the initiation complex is formed and the cells pass into the S phase, the complex then becomes a replisome. The eukaryotic replisome complex is responsible for coordinating DNA replication. Replication on the leading and lagging strands is performed by DNA polymerase ε and DNA polymerase δ. 
Many replisome factors including Claspin, And1, replication factor C clamp loader and the fork protection complex are responsible for regulating polymerase functions and coordinating DNA synthesis with the unwinding of the template strand by Cdc45-Mcm-GINS complex. As the DNA is unwound the twist number decreases. To compensate for this the writhe number increases, introducing positive supercoils in the DNA. These supercoils would cause DNA replication to halt if they were not removed. Topoisomerases are responsible for removing these supercoils ahead of the replication fork. The replisome is responsible for copying the entire genomic DNA in each proliferative cell. The base pairing and chain formation reactions, which form the daughter helix, are catalyzed by DNA polymerases. These enzymes move along single-stranded DNA and allow for the extension of the nascent DNA strand by "reading" the template strand and allowing for incorporation of the proper purine nucleobases, adenine and guanine, and pyrimidine nucleobases, thymine and cytosine. Activated free deoxyribonucleotides exist in the cell as deoxyribonucleotide triphosphates (dNTPs). These free nucleotides are added to an exposed 3'-hydroxyl group on the last incorporated nucleotide. In this reaction, a pyrophosphate is released from the free dNTP, generating energy for the polymerization reaction and exposing the 5' monophosphate, which is then covalently bonded to the 3' oxygen. Additionally, incorrectly inserted nucleotides can be removed and replaced by the correct nucleotides in an energetically favorable reaction. This property is vital to proper proofreading and repair of errors that occur during DNA replication. Replication fork The replication fork is the junction between the newly separated template strands, known as the leading and lagging strands, and the double stranded DNA. Since duplex DNA is antiparallel, DNA replication occurs in opposite directions between the two new strands at the replication fork, but all DNA polymerases synthesize DNA in the 5' to 3' direction with respect to the newly synthesized strand. Further coordination is required during DNA replication. Two replicative polymerases synthesize DNA in opposite orientations. Polymerase ε synthesizes DNA on the "leading" DNA strand continuously as it is pointing in the same direction as DNA unwinding by the replisome. In contrast, polymerase δ synthesizes DNA on the "lagging" strand, which is the opposite DNA template strand, in a fragmented or discontinuous manner. The discontinuous stretches of DNA replication products on the lagging strand are known as Okazaki fragments and are about 100 to 200 bases in length at eukaryotic replication forks. The lagging strand usually contains longer stretches of single-stranded DNA that is coated with single-stranded binding proteins, which help stabilize the single-stranded templates by preventing a secondary structure formation. In eukaryotes, these single-stranded binding proteins are a heterotrimeric complex known as replication protein A (RPA). Each Okazaki fragment is preceded by an RNA primer, which is displaced by the procession of the next Okazaki fragment during synthesis. RNase H recognizes the DNA:RNA hybrids that are created by the use of RNA primers and is responsible for removing these from the replicated strand, leaving behind a primer:template junction. DNA polymerase α, recognizes these sites and elongates the breaks left by primer removal. 
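The twist and writhe statement near the start of this section reflects the standard conservation relation for a topologically constrained stretch of DNA, reproduced here only as a general reminder rather than as anything specific to the replisome:

\[
Lk = Tw + Wr, \qquad \Delta Lk = 0 \;\Rightarrow\; \Delta Tw = -\Delta Wr .
\]

A local decrease in twist where the helix is opened is therefore compensated by an increase in writhe, which appears as the positive supercoils ahead of the fork that topoisomerases must relax. Processing of the RNA primers and of the flaps they leave behind continues below.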
In eukaryotic cells, a small amount of the DNA segment immediately upstream of the RNA primer is also displaced, creating a flap structure. This flap is then cleaved by endonucleases. At the replication fork, the gap in DNA after removal of the flap is sealed by DNA ligase I, which repairs the nicks that are left between the 3'-OH and 5' phosphate of the newly synthesized strand. Owing to the relatively short nature of the eukaryotic Okazaki fragment, DNA replication synthesis occurring discontinuously on the lagging strand is less efficient and more time-consuming than leading-strand synthesis. DNA synthesis is complete once all RNA primers are removed and nicks are repaired. Leading strand During DNA replication, the replisome will unwind the parental duplex DNA in a 5' to 3' direction, forming a replication fork with two single-stranded DNA templates. The leading strand is the template strand that is being replicated in the same direction as the movement of the replication fork. This allows the newly synthesized strand complementary to the original strand to be synthesized 5' to 3' in the same direction as the movement of the replication fork. Once an RNA primer has been added by a primase to the 3' end of the leading strand, DNA synthesis will continue in a 3' to 5' direction with respect to the leading strand uninterrupted. DNA polymerase ε continuously adds nucleotides complementary to the template strand, so leading-strand synthesis requires only one primer and proceeds with uninterrupted DNA polymerase activity. Lagging strand DNA replication on the lagging strand is discontinuous. In lagging strand synthesis, the movement of DNA polymerase in the opposite direction of the replication fork requires the use of multiple RNA primers. DNA polymerase will synthesize short fragments of DNA called Okazaki fragments, which are added to the 3' end of the primer. These fragments can be anywhere between 100 and 400 nucleotides long in eukaryotes. At the end of Okazaki fragment synthesis, DNA polymerase δ runs into the previous Okazaki fragment and displaces its 5' end containing the RNA primer and a small segment of DNA. This generates an RNA-DNA single strand flap, which must be cleaved, and the nick between the two Okazaki fragments must be sealed by DNA ligase I. This process is known as Okazaki fragment maturation and can be handled in two ways: one mechanism processes short flaps, while the other deals with long flaps. DNA polymerase δ is able to displace up to 2 to 3 nucleotides of DNA or RNA ahead of its polymerization, generating a short "flap" substrate for Fen1, which can remove nucleotides from the flap, one nucleotide at a time. By repeating cycles of this process, DNA polymerase δ and Fen1 can coordinate the removal of RNA primers and leave a DNA nick on the lagging strand. It has been proposed that this iterative process is preferable to the cell because it is tightly regulated and does not generate large flaps that need to be excised. In the event of deregulated Fen1/DNA polymerase δ activity, the cell uses an alternative mechanism to generate and process long flaps by using Dna2, which has both helicase and nuclease activities. The nuclease activity of Dna2 is required for removing these long flaps, leaving a shorter flap to be processed by Fen1. Electron microscopy studies indicate that nucleosome loading on the lagging strand occurs very close to the site of synthesis. Thus, Okazaki fragment maturation is an efficient process that occurs immediately after the nascent DNA is synthesized. 
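As a purely schematic illustration of the leading- and lagging-strand asymmetry described above, the Python sketch below 'replicates' a template string: the leading strand is copied in one continuous pass, while the lagging strand is produced as short Okazaki-fragment-like pieces that are then joined. The sequence, the fragment length and the function names are invented for illustration; this is a cartoon of the logic, not a model of the enzymology.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq):
    # Base pairing: each template base specifies the nucleotide added opposite it.
    return "".join(COMPLEMENT[b] for b in seq)

template = "ATGCGTACGTTAGCCATGAACGT"        # arbitrary example sequence

# Leading strand: synthesized continuously from a single primer.
leading = complement(template)

# Lagging strand: synthesized discontinuously as short Okazaki-like fragments,
# each started from its own primer, then joined ("ligated") into one strand.
fragment_len = 8                            # arbitrary toy fragment size
okazaki = [complement(template[i:i + fragment_len])
           for i in range(0, len(template), fragment_len)]
lagging = "".join(okazaki)

print(leading == lagging)                   # True: same product, built by a different route
print(len(okazaki), "fragments joined on the lagging strand")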
Replicative DNA polymerases After the replicative helicase has unwound the parental DNA duplex, exposing two single-stranded DNA templates, replicative polymerases are needed to generate two copies of the parental genome. DNA polymerase function is highly specialized and accomplish replication on specific templates and in narrow localizations. At the eukaryotic replication fork, there are three distinct replicative polymerase complexes that contribute to DNA replication: Polymerase α, Polymerase δ, and Polymerase ε. These three polymerases are essential for viability of the cell. Because DNA polymerases require a primer on which to begin DNA synthesis, polymerase α (Pol α) acts as a replicative primase. Pol α is associated with an RNA primase and this complex accomplishes the priming task by synthesizing a primer that contains a short 10 nucleotide stretch of RNA followed by 10 to 20 DNA bases. Importantly, this priming action occurs at replication initiation at origins to begin leading-strand synthesis and also at the 5' end of each Okazaki fragment on the lagging strand. However, Pol α is not able to continue DNA replication and must be replaced with another polymerase to continue DNA synthesis. Polymerase switching requires clamp loaders and it has been proven that normal DNA replication requires the coordinated actions of all three DNA polymerases: Pol α for priming synthesis, Pol ε for leading-strand replication, and the Pol δ, which is constantly loaded, for generating Okazaki fragments during lagging-strand synthesis. Polymerase α (Pol α): Forms a complex with a small catalytic subunit (PriS) and a large noncatalytic (PriL) subunit. First, synthesis of an RNA primer allows DNA synthesis by DNA polymerase alpha. Occurs once at the origin on the leading strand and at the start of each Okazaki fragment on the lagging strand. Pri subunits act as a primase, synthesizing an RNA primer. DNA Pol α elongates the newly formed primer with DNA nucleotides. After around 20 nucleotides, elongation is taken over by Pol ε on the leading strand and Pol δ on the lagging strand. Polymerase δ (Pol δ): Highly processive and has proofreading, 3'->5' exonuclease activity. In vivo, it is the main polymerase involved in both lagging strand and leading strand synthesis. Polymerase ε (Pol ε): Highly processive and has proofreading, 3'->5' exonuclease activity. Highly related to pol δ, in vivo it functions mainly in error checking of pol δ. Cdc45–Mcm–GINS helicase complex The DNA helicases and polymerases must remain in close contact at the replication fork. If unwinding occurs too far in advance of synthesis, large tracts of single-stranded DNA are exposed. This can activate DNA damage signaling or induce DNA repair processes. To thwart these problems, the eukaryotic replisome contains specialized proteins that are designed to regulate the helicase activity ahead of the replication fork. These proteins also provide docking sites for physical interaction between helicases and polymerases, thereby ensuring that duplex unwinding is coupled with DNA synthesis. For DNA polymerases to function, the double-stranded DNA helix has to be unwound to expose two single-stranded DNA templates for replication. DNA helicases are responsible for unwinding the double-stranded DNA during chromosome replication. Helicases in eukaryotic cells are remarkably complex. The catalytic core of the helicase is composed of six minichromosome maintenance (Mcm2-7) proteins, forming a hexameric ring. 
Away from DNA, the Mcm2-7 proteins form a single heterohexamer and are loaded in an inactive form at origins of DNA replication as head-to-head double hexamers around double-stranded DNA. The Mcm proteins are recruited to replication origins and then redistributed throughout the genomic DNA during S phase, indicative of their localization to the replication fork. Loading of Mcm proteins can only occur during the G1 phase of the cell cycle, and the loaded complex is then activated during S phase by recruitment of the Cdc45 protein and the GINS complex to form the active Cdc45–Mcm–GINS (CMG) helicase at DNA replication forks. Mcm activity is required throughout the S phase for DNA replication. A variety of regulatory factors assemble around the CMG helicase to produce the ‘Replisome Progression Complex’, which associates with DNA polymerases to form the eukaryotic replisome, the structure of which is still quite poorly defined in comparison with its bacterial counterpart. The isolated CMG helicase and Replisome Progression Complex contain a single Mcm protein ring complex, suggesting that the loaded double hexamer of the Mcm proteins at origins might be broken into two single hexameric rings as part of the initiation process, with each Mcm protein complex ring forming the core of a CMG helicase at the two replication forks established from each origin. The full CMG complex is required for DNA unwinding, and the complex of CDC45-Mcm-GINS is the functional DNA helicase in eukaryotic cells. Ctf4 and And1 proteins The CMG complex interacts with the replisome through the interaction with Ctf4 and And1 proteins. Ctf4/And1 proteins interact with both the CMG complex and DNA polymerase α. Ctf4 is a polymerase α accessory factor, which is required for the recruitment of polymerase α to replication origins. Mrc1 and Claspin proteins Mrc1/Claspin proteins couple leading-strand synthesis with the CMG complex helicase activity. Mrc1 interacts with polymerase ε as well as Mcm proteins. The importance of this direct link between the helicase and the leading-strand polymerase is underscored by results in cultured human cells, where Mrc1/Claspin is required for efficient replication fork progression. These results suggest that efficient DNA replication also requires the coupling of helicases and leading-strand synthesis. Proliferating cell nuclear antigen DNA polymerases require additional factors to support DNA replication. DNA polymerases have a semiclosed 'hand' structure, which allows the polymerase to load onto the DNA and begin translocating. This structure permits DNA polymerase to hold the single-stranded DNA template, incorporate dNTPs at the active site, and release the newly formed double-stranded DNA. However, the structure of DNA polymerases does not allow a continuous stable interaction with the template DNA. To strengthen the interaction between the polymerase and the template DNA, DNA sliding clamps associate with the polymerase to promote the processivity of the replicative polymerase. In eukaryotes, the sliding clamp is a homotrimeric ring structure known as the proliferating cell nuclear antigen (PCNA). The PCNA ring has polarity, with surfaces that interact with DNA polymerases and tether them securely to the DNA template. PCNA-dependent stabilization of DNA polymerases has a significant effect on DNA replication because PCNAs are able to enhance polymerase processivity up to 1,000-fold. 
PCNA is an essential cofactor and has the distinction of being one of the most common interaction platforms in the replisome, accommodating multiple processes at the replication fork, and so PCNA is also viewed as a regulatory cofactor for DNA polymerases. Replication factor C PCNA fully encircles the DNA template strand and must be loaded onto DNA at the replication fork. At the leading strand, loading of PCNA is an infrequent process, because DNA replication on the leading strand is continuous until replication is terminated. However, at the lagging strand, DNA polymerase δ needs to be continually loaded at the start of each Okazaki fragment. This constant initiation of Okazaki fragment synthesis requires repeated PCNA loading for efficient DNA replication. PCNA loading is accomplished by the replication factor C (RFC) complex. The RFC complex is composed of five ATPases: Rfc1, Rfc2, Rfc3, Rfc4 and Rfc5. RFC recognizes primer-template junctions and loads PCNA at these sites. The PCNA homotrimer is opened by RFC by ATP hydrolysis and is then loaded onto DNA in the proper orientation to facilitate its association with the polymerase. Clamp loaders can also unload PCNA from DNA, a mechanism needed when replication must be terminated. Stalled replication fork DNA replication at the replication fork can be halted by a shortage of deoxynucleotide triphosphates (dNTPs) or by DNA damage, resulting in replication stress. This halting of replication is described as a stalled replication fork. A fork protection complex of proteins stabilizes the replication fork until DNA damage or other replication problems can be fixed. Prolonged replication fork stalling can lead to further DNA damage. Stalling signals are deactivated once the problems that caused the replication fork to stall are resolved. Termination Termination of eukaryotic DNA replication requires different processes depending on whether the chromosomes are circular or linear. Unlike linear molecules, circular chromosomes are able to replicate the entire molecule. However, the two DNA molecules will remain linked together. This issue is handled by decatenation of the two DNA molecules by a type II topoisomerase. Type II topoisomerases are also used to separate linear strands as they are intricately folded into nucleosomes within the cell. As previously mentioned, linear chromosomes face another issue that is not seen in circular DNA replication. Because an RNA primer is required for initiation of DNA synthesis, the lagging strand is at a disadvantage in replicating the entire chromosome. While the leading strand can use a single RNA primer to extend the 5' terminus of the replicating DNA strand, multiple RNA primers are responsible for lagging strand synthesis, creating Okazaki fragments. This leads to an issue because DNA polymerase is only able to add to the 3' end of the DNA strand. The 3'-5' action of DNA polymerase along the parent strand leaves a short single-stranded DNA (ssDNA) region at the 3' end of the parent strand when the Okazaki fragments have been repaired. Since replication occurs in opposite directions at opposite ends of parent chromosomes, each strand is a lagging strand at one end. Over time this would result in progressive shortening of both daughter chromosomes. This is known as the end replication problem. The end replication problem is handled in eukaryotic cells by telomere regions and telomerase. Telomeres extend the 3' end of the parental chromosome beyond the 5' end of the daughter strand. 
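The progressive shortening described above can be made concrete with a toy calculation. In the Python sketch below, each round of replication trims a fixed number of terminal bases from each new end (standing in for the gap left where the terminal primer cannot be replaced), and telomerase, when 'active', adds bases back; all of the numbers are invented for illustration rather than measured values.

def telomere_after_rounds(length, rounds, loss_per_round, telomerase_gain=0):
    # Toy end-replication model: shorten each round, optionally restored by telomerase.
    history = [length]
    for _ in range(rounds):
        length = max(0, length - loss_per_round + telomerase_gain)
        history.append(length)
    return history

# Without telomerase the ends shorten every division; with it they are maintained.
print(telomere_after_rounds(1000, 10, loss_per_round=50))
print(telomere_after_rounds(1000, 10, loss_per_round=50, telomerase_gain=50))

The shortening is counteracted in vivo at the single-stranded 3' overhang that telomeres leave at the chromosome end, as described next.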
This single-stranded DNA structure can act as an origin of replication that recruits telomerase. Telomerase is a specialized DNA polymerase that consists of multiple protein subunits and an RNA component. The RNA component of telomerase anneals to the single stranded 3' end of the template DNA and contains 1.5 copies of the telomeric sequence. Telomerase contains a protein subunit that is a reverse transcriptase called telomerase reverse transcriptase or TERT. TERT synthesizes DNA until the end of the template telomerase RNA and then disengages. This process can be repeated as many times as needed with the extension of the 3' end of the parental DNA molecule. This 3' addition provides a template for extension of the 5' end of the daughter strand by lagging strand DNA synthesis. Regulation of telomerase activity is handled by telomere-binding proteins. Replication fork barriers Prokaryotic DNA replication is bidirectional; within a replicative origin, replisome complexes are created at each end of the replication origin and replisomes move away from each other from the initial starting point. In prokaryotes, bidirectional replication initiates at one replicative origin on the circular chromosome and terminates at a site opposed from the initial start of the origin. These termination regions have DNA sequences known as Ter sites. These Ter sites are bound by the Tus protein. The Ter-Tus complex is able to stop helicase activity, terminating replication. In eukaryotic cells, termination of replication usually occurs through the collision of the two replicative forks between two active replication origins. The location of the collision varies on the timing of origin firing. In this way, if a replication fork becomes stalled or collapses at a certain site, replication of the site can be rescued when a replisome traveling in the opposite direction completes copying the region. There are programmed replication fork barriers (RFBs) bound by RFB proteins in various locations, throughout the genome, which are able to terminate or pause replication forks, stopping progression of the replisome. Replication factories It has been found that replication happens in a localised way in the cell nucleus. Contrary to the traditional view of moving replication forks along stagnant DNA, a concept of replication factories emerged, which means replication forks are concentrated towards some immobilised 'factory' regions through which the template DNA strands pass like conveyor belts. Cell cycle regulation DNA replication is a tightly orchestrated process that is controlled within the context of the cell cycle. Progress through the cell cycle and in turn DNA replication is tightly regulated by the formation and activation of pre-replicative complexes (pre-RCs) which is achieved through the activation and inactivation of cyclin-dependent kinases (Cdks, CDKs). Specifically it is the interactions of cyclins and cyclin dependent kinases that are responsible for the transition from G1 into S-phase. During the G1 phase of the cell cycle there are low levels of CDK activity. This low level of CDK activity allows for the formation of new pre-RC complexes but is not sufficient for DNA replication to be initiated by the newly formed pre-RCs. During the remaining phases of the cell cycle there are elevated levels of CDK activity. This high level of CDK activity is responsible for initiating DNA replication as well as inhibiting new pre-RC complex formation. 
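The once-per-cycle logic of the paragraph above, in which pre-RCs can only be assembled while CDK activity is low and licensed origins can only fire once CDK activity is high, amounts to a small pair of conditions. The Python sketch below records that logic schematically; the boolean CDK high/low switch is a placeholder, not a quantitative model of kinase activity.

def can_license_origin(cdk_high):
    # New pre-RCs assemble only while CDK activity is low (G1).
    return not cdk_high

def can_fire_origin(licensed, cdk_high):
    # A licensed origin initiates replication only once CDK (with DDK) activity rises.
    return licensed and cdk_high

licensed = can_license_origin(cdk_high=False)                 # G1: license, but do not fire
print(licensed, can_fire_origin(licensed, cdk_high=False))    # True False
print(can_fire_origin(licensed, cdk_high=True))               # True: the origin fires in S phase
print(can_license_origin(cdk_high=True))                      # False: no re-licensing, so no re-firing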
Once DNA replication has been initiated, the pre-RC complex is broken down. Because CDK levels remain high during the S, G2, and M phases of the cell cycle, no new pre-RC complexes can be formed. This all helps to ensure that no initiation can occur until the cell division is complete. In addition to cyclin-dependent kinases, a new round of replication is thought to be prevented through the downregulation of Cdt1. This is achieved via degradation of Cdt1 as well as through the inhibitory actions of a protein known as geminin. Geminin binds tightly to Cdt1 and is thought to be the major inhibitor of re-replication. Geminin first appears in S-phase and is degraded at the metaphase-anaphase transition, possibly through ubiquitination by the anaphase-promoting complex (APC). Various cell cycle checkpoints are present throughout the course of the cell cycle that determine whether a cell will progress through division entirely. Importantly for replication, the G1, or restriction, checkpoint determines whether initiation of replication will begin or whether the cell will be placed in a resting stage known as G0. Cells in the G0 stage of the cell cycle are prevented from initiating a round of replication because the minichromosome maintenance proteins are not expressed. Transition into the S-phase indicates replication has begun. Replication checkpoint proteins In order to preserve genetic information during cell division, DNA replication must be completed with high fidelity. In order to achieve this task, eukaryotic cells have proteins in place during certain points in the replication process that are able to detect any errors during DNA replication and are able to preserve genomic integrity. These checkpoint proteins are able to stop the cell cycle from entering mitosis in order to allow time for DNA repair. Checkpoint proteins are also involved in some DNA repair pathways, and they stabilize the structure of the replication fork to prevent further damage. These checkpoint proteins are essential to avoid passing down mutations or other chromosomal aberrations to offspring. Eukaryotic checkpoint proteins are well conserved and involve two phosphatidylinositol 3-kinase-related kinases (PIKKs), ATR and ATM. Both ATR and ATM share a target phosphorylation sequence, the SQ/TQ motif, but their individual roles in cells differ. ATM is involved in arresting the cell cycle in response to DNA double-strand breaks. ATR has an obligate checkpoint partner, ATR-interacting-protein (ATRIP), and together these two proteins are responsive to stretches of single-stranded DNA that are coated by replication protein A (RPA). The formation of single-stranded DNA occurs frequently, more often during replication stress. ATR-ATRIP is able to arrest the cell cycle to preserve genome integrity. ATR is found on chromatin during S phase, similar to RPA and claspin. The generation of single-stranded DNA tracts is important in initiating the checkpoint pathways downstream of replication damage. Once single-stranded DNA becomes sufficiently long, single-stranded DNA coated with RPA is able to recruit ATR-ATRIP. In order to become fully active, the ATR kinase relies on sensor proteins that sense whether the checkpoint proteins are localized to a valid site of DNA replication stress. The RAD9-HUS1-Rad1 (9-1-1) heterotrimeric clamp and its clamp loader RFCRad17 are able to recognize gapped or nicked DNA. The RFCRad17 clamp loader loads 9-1-1 onto the damaged DNA. 
The presence of 9-1-1 on DNA is enough to facilitate the interaction between ATR-ATRIP and a group of proteins termed checkpoint mediators, such as TOPBP1 and Mrc1/claspin. TOPBP1 interacts with and recruits the phosphorylated Rad9 component of 9-1-1 and binds ATR-ATRIP, which phosphorylates Chk1. Mrc1/Claspin is also required for the complete activation of ATR-ATRIP that phosphorylates Chk1, the major downstream checkpoint effector kinase. Claspin is a component of the replisome and contains a domain for docking with Chk1, revealing a specific function of Claspin during DNA replication: the promotion of checkpoint signaling at the replisome. Chk1 signaling is vital for arresting the cell cycle and preventing cells from entering mitosis with incomplete DNA replication or DNA damage. Chk1-dependent Cdk inhibition is important for the function of the ATR-Chk1 checkpoint, arresting the cell cycle and allowing sufficient time for completion of DNA repair mechanisms, which in turn prevents the inheritance of damaged DNA. In addition, Chk1-dependent Cdk inhibition plays a critical role in inhibiting origin firing during S phase. This mechanism prevents continued DNA synthesis and is required for the protection of the genome in the presence of replication stress and potential genotoxic conditions. Thus, ATR-Chk1 activity further prevents potential replication problems at the level of single replication origins by inhibiting initiation of replication throughout the genome, until the signaling cascade maintaining cell-cycle arrest is turned off. Replication through nucleosomes Eukaryotic DNA must be tightly compacted in order to fit within the confined space of the nucleus. Chromosomes are packaged by wrapping 147 base pairs of DNA around an octamer of histone proteins, forming a nucleosome. The nucleosome octamer includes two copies of each histone H2A, H2B, H3, and H4. Due to the tight association of histone proteins to DNA, eukaryotic cells have proteins that are designed to remodel histones ahead of the replication fork, in order to allow smooth progression of the replisome. There are also proteins involved in reassembling histones behind the replication fork to reestablish the nucleosome conformation. There are several histone chaperones that are known to be involved in nucleosome assembly after replication. The FACT complex has been found to interact with the DNA polymerase α-primase complex, and the subunits of the FACT complex interact genetically with replication factors. The FACT complex is a heterodimer that does not hydrolyze ATP but is able to facilitate "loosening" of histones in nucleosomes; how the FACT complex relieves the tight association of histones with DNA remains unanswered. Another histone chaperone that associates with the replisome is Asf1, which interacts with the Mcm complex in a manner dependent on histone H3-H4 dimers. Asf1 is able to pass the newly synthesized H3-H4 dimer to deposition factors behind the replication fork, and this activity makes the H3-H4 histone dimers available at the site of histone deposition just after replication. Asf1 (and its partner Rtt109) has also been implicated in inhibiting gene expression from replicated genes during S-phase. The heterotrimeric chaperone chromatin assembly factor 1 (CAF-1) is a chromatin formation protein that is involved in depositing histones onto both newly replicated DNA strands to form chromatin. 
CAF-1 contains a PCNA-binding motif, called a PIP-box, that allows CAF-1 to associate with the replisome through PCNA, enabling it to deposit histone H3-H4 dimers onto newly synthesized DNA. The Rtt106 chaperone is also involved in this process, and associates with CAF-1 and H3-H4 dimers during chromatin formation. These processes load newly synthesized histones onto DNA. After the deposition of histones H3-H4, nucleosomes form by the association of histone H2A-H2B. This process is thought to occur through the FACT complex, since it is already associated with the replisome and is able to bind free H2A-H2B, or there is the possibility of another H2A-H2B chaperone, Nap1. Electron microscopy studies show that this occurs very quickly, as nucleosomes can be observed forming just a few hundred base pairs after the replication fork. Therefore, the entire process of forming new nucleosomes takes place just after replication due to the coupling of histone chaperones to the replisome. Mitotic DNA Synthesis Mitotic DNA synthesis (MiDAS) is a process of irregular DNA replication in which DNA synthesis, naturally occurring in the S phase, takes place in the M phase of the cell cycle. Mitotic DNA synthesis is known to occur when cells are experiencing stress related to DNA replication. Certain loci in the genome, considered common fragile sites (CFS) or sites of ALT-associated replication defects, can induce replication stress that may lead to MiDAS. Mitotic DNA synthesis is enabled by a protein known as RAD52, which then recruits enzymes including MUS81 and POLD3. These enzymes work to promote MiDAS, operating independently of ATR, BRCA2, and RAD51, which are necessary to prevent replication stress at CFS loci throughout S phase. MiDAS has been recorded in mammals and yeast; however, its occurrence in other eukaryotic organisms is yet to be discovered. Comparisons between prokaryotic and eukaryotic DNA replication Compared to prokaryotic DNA replication, as in bacteria, eukaryotic DNA replication is more complex and requires multiple origins of replication and many more replicative proteins to accomplish. Prokaryotic DNA is arranged in a circular shape and has only one origin at which replication starts. By contrast, eukaryotic DNA is linear and, when replicated, uses as many as one thousand origins of replication. Eukaryotic DNA replication is also bidirectional, but here the meaning of the word bidirectional is slightly different. Eukaryotic linear DNA has many origins (called O) and termini (called T). "T" is present to the right of "O". One "O" and one "T" together form one replicon. After the formation of the pre-initiation complex, when one replicon starts elongation, initiation starts in the second replicon. If the first replicon moves in the clockwise direction, the second replicon moves in the anticlockwise direction, until the "T" of the first replicon is reached. At "T", both replicons merge to complete the process of replication. Meanwhile, the second replicon also moves in the forward direction to meet the third replicon. This clockwise and counter-clockwise movement of two replicons is termed bidirectional replication. Eukaryotic DNA replication requires precise coordination of all DNA polymerases and associated proteins to replicate the entire genome each time a cell divides. This process is achieved through a series of steps of protein assemblies at origins of replication, mainly focusing the regulation of DNA replication on the association of the MCM helicase with the DNA. 
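The picture above of many origins firing and of forks from neighbouring origins meeting can be made concrete with a one-dimensional toy. In the Python sketch below, each origin sends two forks outward at the same speed and every position on a linear chromosome is replicated by whichever fork reaches it first, so replication finishes when the slowest-to-be-reached position (a fork-collision point or a chromosome end) is covered. The chromosome length, origin positions, and fork speed are arbitrary illustrative numbers.

def replication_time(length, origins, fork_speed=1):
    # Toy model: time for bidirectional forks from several origins to copy
    # a linear chromosome of `length` positions (all origins fire at t = 0).
    times = []
    for pos in range(length):
        nearest = min(abs(pos - o) for o in origins)   # closest origin decides arrival time
        times.append(nearest / fork_speed)
    return max(times)   # done when the last position (collision point or end) is reached

# More origins mean a shorter replication time, one reason eukaryotic chromosomes use many.
print(replication_time(1000, origins=[500]))             # 500.0
print(replication_time(1000, origins=[100, 400, 800]))   # 200.0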
These origins of replication direct the number of protein complexes that will form to initiate replication. In bacterial DNA replication, regulation focuses on the binding of the DnaA initiator protein to the DNA, with initiation of replication occurring multiple times during one cell cycle. Both prokaryotic and eukaryotic DNA use ATP binding and hydrolysis to direct helicase loading and in both cases the helicase is loaded in the inactive form. However, eukaryotic helicases are double hexamers that are loaded onto double stranded DNA whereas bacterial helicases are single hexamers loaded onto single stranded DNA. Segregation of chromosomes is another difference between prokaryotic and eukaryotic cells. Rapidly dividing cells, such as bacteria, will often begin to segregate chromosomes that are still in the process of replication. In eukaryotic cells chromosome segregation into the daughter cells is not initiated until replication is complete in all chromosomes. Despite these differences, however, the underlying process of replication is similar for both prokaryotic and eukaryotic DNA. Eukaryotic DNA replication protein list List of major proteins involved in eukaryotic DNA replication: See also DNA replication Prokaryotic DNA replication Processivity References DNA replication
Eukaryotic DNA replication
Biology
11,973
49,652,758
https://en.wikipedia.org/wiki/Chiral%20magnetic%20effect
Chiral magnetic effect (CME) is the generation of electric current along an external magnetic field induced by chirality imbalance. Fermions are said to be chiral if they keep a definite projection of spin quantum number on momentum. The CME is a macroscopic quantum phenomenon present in systems with charged chiral fermions, such as the quark–gluon plasma, or Dirac and Weyl semimetals. The CME is a consequence of chiral anomaly in quantum field theory; unlike conventional superconductivity or superfluidity, it does not require a spontaneous symmetry breaking. The chiral magnetic current is non-dissipative, because it is topologically protected: the imbalance between the densities of left-handed and right-handed chiral fermions is linked to the topology of fields in gauge theory by the Atiyah-Singer index theorem. The experimental observation of CME in a Dirac semimetal, zirconium pentatelluride (ZrTe5), was reported in 2014 by a group from Brookhaven National Laboratory and Stony Brook University. The material showed a conductivity increase in the Lorentz force-free configuration of the parallel magnetic and electric fields. In 2015, the STAR detector at Brookhaven's Relativistic Heavy Ion Collider and ALICE at CERN presented experimental evidence for the existence of CME in the quark–gluon plasma. See also Euler–Heisenberg Lagrangian Chiral anomaly References Electricity Condensed matter physics Quantum field theory
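For orientation, the current described above is usually quoted (for a single species of charged chiral fermion, in natural units, with the chiral chemical potential \mu_5 measuring the imbalance between right- and left-handed densities) in the form

\[
\vec{J} \;=\; \frac{e^{2}}{2\pi^{2}}\,\mu_{5}\,\vec{B},
\]

so the chiral magnetic current flows along the applied magnetic field and vanishes when the two chiralities are equally populated (\mu_5 = 0). This is the commonly cited coefficient; conventions and prefactors differ between sources, so it is given here only as an illustrative reference form.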
Chiral magnetic effect
Physics,Chemistry,Materials_science,Engineering
325
44,630,287
https://en.wikipedia.org/wiki/Deconica%20neocaledonica
Deconica neocaledonica is a species of mushroom in the family Strophariaceae. It has been found in New Caledonia and in Mount Halimun Salak National Park in Java, Indonesia. It is very similar to Deconica aureicystidiata. References Strophariaceae Fungi described in 1979 Fungi of Asia Fungi of New Caledonia Taxa named by Gastón Guzmán Fungus species
Deconica neocaledonica
Biology
80
17,052,416
https://en.wikipedia.org/wiki/Quantificational%20variability%20effect
Quantificational variability effect (QVE) is the intuitive equivalence of certain sentences with quantificational adverbs (Q-adverbs) and sentences without these, but with quantificational determiner phrases (DP) in argument position instead. 1. (a) A cat is usually smart. (Q-adverb) 1. (b) Most cats are smart. (DP) 2. (a) A dog is always smart. (Q-adverb) 2. (b) All dogs are smart. (DP) Analysis of QVE is widely cited as entering the literature with David Lewis' "Adverbs of Quantification" (1975), where he proposes QVE as a solution to Peter Geach's donkey sentence (1962). Terminology, and comprehensive analysis, is normally attributed to Stephen Berman's "Situation-Based Semantics for Adverbs of Quantification" (1987). See also David Kellogg Lewis Donkey pronoun Existential closure Irene Heim Notes Literature Core texts Berman, Stephen. The Semantics of Open Sentences. PhD thesis. University of Massachusetts Amherst, 1991. Berman, Stephen. 'An Analysis of Quantifier Variability in Indirect Questions'. In MIT Working Papers in Linguistics 11. Edited by Phil Branigan and others. Cambridge: MIT Press, 1989. Pages 1–16. Berman, Stephen. 'Situation-Based Semantics for Adverbs of Quantification'. In University of Massachusetts Occasional Papers 12. Edited by J. Blevins and Anne Vainikka. Graduate Linguistic Student Association (GLSA), University of Massachusetts Amherst, 1987. Pages 45–68. Select bibliography External links Core text Lewis, David. 'Adverbs of Quantification'. In Formal Semantics of Natural Language. Edited by Edward L Keenan. Cambridge: Cambridge University Press, 1975. Pages 3–15. Other texts available online Endriss, Cornelia and Stefan Hinterwimmer. 'The Non-Uniformity of Quantificational Variability Effects: A Comparison of Singular Indefinites, Bare Plurals and Plural Definites'. Belgian Journal of Linguistics 19 (2005): 93–120. Quantifier (logic) Formal semantics (natural language)
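One standard way of making the intuitive equivalences in (1) and (2) explicit is to assign both variants a tripartite logical form in which the Q-adverb, like the determiner, quantifies over a restrictor and a nuclear scope. The schematic forms below follow the usual Lewis/Heim-style notation and are given as an illustration, not as quotations from the works cited:

\[
\textsc{usually}_x\,[\mathrm{cat}(x)]\,[\mathrm{smart}(x)] \;\approx\; \textsc{most}_x\,[\mathrm{cat}(x)]\,[\mathrm{smart}(x)]
\]
\[
\textsc{always}_x\,[\mathrm{dog}(x)]\,[\mathrm{smart}(x)] \;\approx\; \forall x\,[\mathrm{dog}(x) \rightarrow \mathrm{smart}(x)]
\]

On this view the Q-adverb supplies the quantificational force while the indefinite contributes only the restrictor, which is what produces the variability effect.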
Quantificational variability effect
Mathematics
467
58,975,531
https://en.wikipedia.org/wiki/Dirk%20Ahlborn
Dirk Ahlborn is an entrepreneur, investor, and American businessman. He is from Berlin and currently works in California; he holds U.S. citizenship and is the founder and Chairman of Hyperloop Transportation Technologies and CEO of Jumpstarter, Inc. Career Early career His career started in 1993 as an investment specialist in Berlin, Germany. When he was 19 years old, he quit his job at the bank. In the 1990s, after moving to Italy, Ahlborn founded several companies in the alternative energy and interior design sphere. Ahlborn left Europe and joined the Girvan Institute of Technology, a non-profit incubator and co-working space. The institute, located in Southern California, was created to assist NASA's Ames Research Center. While working on his kitchenware company, he started thinking about ways to apply the co-working concept elsewhere. JumpStarter, Inc He co-founded and became CEO of Jumpstarter in 2013, the company that developed the crowdsourcing platform JumpStartFund in El Segundo, California. References Living people 21st-century American businesspeople American technology chief executives American technology company founders Fellows of the Royal Society Hyperloop American industrial designers Year of birth missing (living people) Place of birth missing (living people)
Dirk Ahlborn
Technology,Engineering
265
11,420,915
https://en.wikipedia.org/wiki/Mucor%20racemosus
Mucor racemosus is a rapidly growing, weedy mould belonging to the division Mucoromycota. It is one of the earliest fungi to be grown in pure culture and was first isolated in 1886. It has a worldwide distribution and colonizes many habitats such as vegetational products, soil and houses. The fungus is mostly known for its ability to exhibit both filamentous and yeast-like morphologies, often referred to as dimorphism. Stark differences are seen between the two forms, and environmental conditions heavily affect which phase M. racemosus adopts. Like many fungi, it also reproduces both sexually and asexually. The dimorphic capacity of this species has been proposed as an important factor in its pathogenicity and has enhanced its industrial importance. This species is considered an opportunistic pathogen, generally limited to immunocompromised individuals. It has also been associated with allergy and inflammations of facial sinuses. Its association with allergy has made it a common fungus used in allergen medical testing. Industrial uses of the fungus include the production of enzymes and the manufacture of certain dairy foods. Morphology The dimorphic species mainly exists and grows vegetatively as either filamentous hyphae (mould form) or spherical yeast cells (yeast form). However, the organism is best known from the mould form, which is characterised by the production of an asexual reproductive state consisting of tall (up to 2 cm) needle-like sporangiophores with an apical swelling enclosed by a large sporangium filled with ellipsoidal, single-celled, smooth-walled, unpigmented sporangiospores. In the laboratory, the fungus forms dark grey or light grey colonies on most common laboratory media. If subjected to anaerobic conditions, the fungus may convert to the yeast-like form. Anaerobic conditions and the presence of 30% carbon dioxide stimulate conversion to the yeast form. Likewise, cultures supplemented with Tween 80 and ergosterol and supplied with 100% nitrogen also converted to the yeast form. Conversely, increasing oxygen concentration will cause conversion of the yeast form to the mould form. Like many zygomycetes, M. racemosus reproduces both sexually and asexually depending on environmental conditions. During sexual reproduction, hyphae of compatible mating types touch and fuse, ultimately giving rise to a thick-walled zygosporangium containing a single zygospore. Germination from the zygospore leads to growth of new hyphae that give rise to asexual spores of both + and - mating type. Germination of these spores produces new haploid hyphae of the same mating type. Physiology and ecology M. racemosus possesses the ability to exhibit multiple morphologies (mainly filamentous and spherical forms) to withstand various environmental stresses. This has given it the ability to survive many conditions, and it has a worldwide distribution, reported most frequently in Europe as well as the Americas. In the tropics, it has been seen at higher altitudes. While the species is primarily soil-based, it has been shown to exist elsewhere such as in horse manure, plant remains, grains, vegetables and nuts. It is typically seen on plant-based materials such as soft fruit, fruit juice and marmalade, but it has also been isolated from non-plant sources like soft camembert cheese. M. racemosus has also been isolated from the human gut microbiome of non-obese individuals. It is the most common mould found in the floor dust in houses and is largely considered an indoor mould. 
M. racemosus is uniquely known for its ability to display multiple morphologies, but most studies focus on the dimorphic form of the species. It is a fast-growing, facultatively anaerobic zygomycete, which gives it the ability to survive in many conditions and locations all over the world. M. racemosus possesses the ability to biosynthesize chitin and chitosan, which has been proposed as a mechanism supporting the ability of the fungus to switch between the yeast and the mould phases. Genomic analysis of M. racemosus has revealed genes similar to human RAS genes, and it is proposed that these genes help with germination and dimorphism. Protein kinase A (PKA) genes such as pkaR are also highly expressed during the dimorphic shift. Human disease M. racemosus is a rare agent of human disease, typically only associated with opportunistic infection of immunocompromised individuals such as children, the elderly, and patients with diseases such as HIV or Ebola. It is an agent of mucormycosis, a potentially life-threatening infection often involving the head airways. Pulmonary, cutaneous, and gastrointestinal (GI) infections have also been observed, leading to an array of clinical presentations in infected individuals. Risk factors such as diabetic ketoacidosis and neutropenia are present in most cases. Treatment of M. racemosus infection can be difficult, partly because of the difficulty of histopathologic differentiation of the fungus. In addition to commonly used antifungal agents, biological compounds such as lovastatin, Aleuria aurantia lectin (AAL) and antimicrobial peptides (AMPs LR14) have been isolated and have shown antimicrobial effects against M. racemosus. Allergies to M. racemosus have been reported to affect immunologically normal individuals in a range of places (the Netherlands, Turkey and Brazil). Allergy to M. racemosus has also been associated with fungal rhinosinusitis, rhinitis and extrinsic allergic alveolitis. Asthmatic patients have also shown elevated sensitization to M. racemosus. Mucor racemosus-specific IgE antibody is commonly used and available for medical as well as laboratory use in allergen assays (ImmunoCAP). Commercial and biotechnological use The capacity of M. racemosus to grow as a yeast and its various abilities to manufacture biochemicals have led to its use in industry. For example, it can produce a high yield of phytase, an important industrial enzyme. It also has increased extracellular protease activity, suggesting its biotechnological suitability for the production of other industrial enzymes. In the manufacture of sufu (a fermented, cheese-like soybean product common in China and Vietnam), fungal fermentation of soybean curd (tofu) results in moulded tofu, pehtze. The final product (sufu) is obtained by maturing pehtze in a brine containing alcohol and salt for several months. The species also possesses the ability to adapt phenotypically to several different antibiotics after exposure to a single drug, which makes it a good model for phenotypic multidrug resistance in lower eukaryotes. It has been shown to adapt to well-known antibiotics such as cycloheximide, trichodermin and amphotericin B. Cells adapted to cycloheximide in particular have been observed to be 40 times more resistant to the drug than non-adapted cells. These adapted cells have been studied to better understand their greater efficiency of membrane transport (efflux of drugs).
Mucor racemosus can biotransform lipids such as 4-ene-3-one steroids and 20(S)-protopanaxatriol into several different products, some of which have anticancer properties (the metabolites resulted in increased intracellular calcium ion content, leading to cell cycle arrest and apoptosis). Two of the products formed from this biotransformation are novel hydroperoxylated metabolites that have been shown to be effective against prostate cancer cells. Secondary metabolites of M. racemosus do not exhibit genotoxic activity, and the species is not known to be a producer of mycotoxins. However, some secondary metabolites of the fungus have been found to have anti-inflammatory activity similar to that of the drug dexamethasone. References Mucoraceae Fungi described in 1791 Taxa named by Jean Baptiste François Pierre Bulliard Fungus species
Mucor racemosus
Biology
1,745
4,757,800
https://en.wikipedia.org/wiki/Thomas%20J.%20Cram
Thomas Jefferson Cram (March 1, 1804 – December 20, 1883) was an American topographical engineer from New Hampshire who served in the United States Army Corps of Topographical Engineers from 1839 to 1863 and the United States Army Corps of Engineers from 1863 to 1869. Cram served as general superintendent for harbor works on Lake Michigan and the construction of roads in Wisconsin Territory. He led surveys to determine the border of Michigan and Wisconsin Territory in the Upper Peninsula, to explore Oregon and Washington Territories, and to determine the feasibility of a water route to the Pacific Ocean through Central America. He served under Major General Zachary Taylor in the Army of Occupation during the Mexican-American War and conducted coastal and river surveys in Texas. Cram participated in the United States Lake Survey and led the survey section between Green Bay, Wisconsin, and Chicago, Illinois. He conducted multiple river, canal, and harbor improvement assessments including for the Fox and Wisconsin Rivers in Wisconsin, the Ohio River in Louisville, Kentucky, and the harbor at St. Louis, Missouri, on the Mississippi River. He assisted the United States Coast Survey in New England from 1847 to 1855 and in North Carolina from 1858 to 1861. During the American Civil War (1861–1865), Cram was promoted to lieutenant colonel and colonel and served as aide-de-camp to Major General John E. Wool. Biography Cram was born in Acworth, New Hampshire. He graduated from the United States Military Academy in 1826 and taught mathematics and natural and experimental philosophy at the Academy from 1829 to 1836. He was commissioned a second lieutenant in the 4th U.S. Artillery Regiment. In 1835 he was promoted to first lieutenant, and he resigned his commission in 1836. Cram worked as an assistant engineer for the railroad industry in Maryland and Pennsylvania for two years and returned to United States Army service as a captain in 1838. In 1839, he was assigned as the general superintendent for harbor works in Lake Michigan and road construction in Wisconsin Territory with Howard Stansbury and Lorenzo Sitgreaves assigned to assist him. He made improvements to the harbors of Chicago, Illinois, St. Joseph, Michigan, and Michigan City, Indiana, and built new harbors at Calumet in Illinois and at Kenosha, Milwaukee, and Racine, Wisconsin. He built seven roads in Wisconsin and used timber truss bridges designed by Stephen Long for all bridge spans greater than in length. As part of the settlement of the Toledo War, between Michigan and Ohio, most of the Upper Peninsula of Michigan was granted to Michigan. The United States Congress created the Wisconsin Territory in 1836 and appropriated funds to conduct a survey to determine the boundary between Wisconsin and Michigan. In 1840, Cram and Douglass Houghton led the boundary survey team up the Menominee River to its source at Brule Lake. A previous map incorrectly listed Lac Vieux Desert as the headwater of the Menominee River and the Montreal River. He negotiated a treaty with the Ojibwa Chief Ca-sha-o-sha which allowed the survey to continue. The survey could not be completed in 1840 due to errors in the map used by Congress to determine the boundary. Cram returned to the Upper Peninsula in 1841 to continue the survey. He identified Lac Vieux Desert as the source of the Wisconsin River and recommended a different boundary between Wisconsin and Michigan. 
Congress used the border Cram recommended when it passed the Wisconsin Enabling Act of 1846 prior to Wisconsin becoming a state in 1848. Michigan disputed the results of the survey and claimed that Cram's interpretation of the boundary cheated Michigan out of land. The case reached the United States Supreme Court in 1926 and was decided in favor of Wisconsin. In 1841, Cram began work with the United States Lake Survey. His portion of its survey began at Green Bay, Wisconsin, and moved south toward Chicago, while William G. Williams began his portion at Green Bay and moved north toward Mackinac Island. In 1843, Cram conducted work in Louisville, Kentucky, to improve navigation of the Falls of the Ohio on the Ohio River. He recommended the expansion of the Louisville and Portland Canal and construction of a second canal to provide two-way river traffic, but Congress did not approve his recommendations and they were not implemented. In 1844, Cram was assigned to improve the harbor works at St. Louis, Missouri. The harbor required improvements because the flow of the Mississippi River had formed sandbars that trapped ships or required long diversions to avoid them. He proposed several works to remedy the situation, but they were deemed too experimental and expensive. The construction of a dam was selected and work began on it, but was interrupted by the outbreak of the Mexican-American War in 1846. In 1845, Cram served as chief topographical engineer in the Army of Occupation under Major General Zachary Taylor during the Mexican-American War. He conducted systematic topographic surveys of the Nueces River, the Laguna de la Madre, and Aransas Bay. He fell ill with dysentery and was replaced by George Meade. From 1847 to 1855, he worked as an assistant in the United States Coast Survey and had responsibility for the New England region. From 1855 to 1858 he was the chief topographical engineer for the Department of the Pacific. He led survey teams on expeditions through the Oregon and Washington Territories and worked to determine the feasibility of a water route to the Pacific Ocean through Central America. The American Civil War broke out in April 1861. Cram was promoted to major in August 1861 and then to lieutenant colonel in September 1861. He served as aide to Brigadier General (from May 1862, Major General) John E. Wool from 1861 to 1863 and was engaged in the campaign to capture Norfolk, Virginia, in May 1862. Cram transferred to the United States Army Corps of Engineers when the Topographical Engineers were disbanded in 1863, and was promoted to colonel at the end of the war in 1865. He was later brevetted to major general to recognize his war service, and served until his retirement in 1869. Cram died in Philadelphia, Pennsylvania, and was interred at Laurel Hill Cemetery. Bibliography Basin of the Mississippi, and its Natural Business Site, Briefly Considered., New York: Narine & Co., 1851 Address of Captain T.J. Cram, U.S. Corps of Topographical Engineers, Delivered at the Board of Trade Rooms, June 28 and Repeated Before the Corn Exchange Association, of Philadelphia, July 11, 1860, Upon Ocean Steam Ships Proposed to Run Between Philadelphia and Europe, and California, In the Lines of a Corporation Titled the "California, Philadelphia, and European Steamship Company.", Philadelphia: Jackson Printer, 1860 Memoir Upon the Northern Inter-Oceanic Route of Commercial Transit, Between Tide Water of the Puget Sound of the Pacific, and, Tide Water of the St.
Lawrence Gulf of the Atlantic Ocean., Detroit: Board of Trade, 1868 Citations Sources Beers, Henry P., "A History of the U.S. Topographical Engineers, 1813-1863." 2 parts, The Military Engineer 34 (Jun 1942): pp. 287–91 & (Jul 1942): pp. 348–52. Available as of April 16, 2006 from https://web.archive.org/web/20140926122419/http://topogs.org/History.htm and https://web.archive.org/web/20110728120914/http://www.topogs.org/History2.htm External links Manitowish Waters Historical Society - Thomas Jefferson Cram Maps 1838-1841 Report of the Secretary of War, communicating, in compliance with a resolution of the Senate of the 5th instant, a copy of the report of Captain Thomas J. Cram, Corps of Topographical Engineers of November, 1856, on the oceanic routes to California Topographical memoir and report of Captain T.J. Cram, on Territories of Oregon and Washington University of Wisconsin Milwaukee - American Geographical Society Library Digital Map Collection - Thomas J. Cram Maps 1804 births 1883 deaths American explorers of the Pacific American military personnel of the Mexican–American War American topographers Burials at Laurel Hill Cemetery (Philadelphia) Explorers of Central America Explorers of Oregon Explorers of Texas Explorers of Washington (state) People from Acworth, New Hampshire People of New Hampshire in the American Civil War People from pre-statehood Wisconsin Union army colonels United States Army Corps of Engineers personnel United States Army Corps of Topographical Engineers United States Coast Survey personnel United States Military Academy alumni Upper Peninsula of Michigan
Thomas J. Cram
Engineering
1,732
31,336,127
https://en.wikipedia.org/wiki/Eutectic%20bonding
Eutectic bonding, also referred to as eutectic soldering, describes a wafer bonding technique with an intermediate metal layer that can produce a eutectic system. These eutectic metals are alloys that transform directly from the solid to the liquid state, or vice versa, at a specific composition and temperature without passing through a two-phase equilibrium of liquid and solid. The fact that the eutectic temperature can be much lower than the melting temperature of the two or more pure elements can be important in eutectic bonding. Eutectic alloys are deposited by sputtering, dual-source evaporation or electroplating. They can also be formed by diffusion reactions of pure materials and subsequent melting of the eutectic composition. Eutectic bonding for transferring epitaxial materials such as GaAs-AlGaAs onto silicon (Si) substrates with high yield, both for the general purpose of integrating optoelectronics with Si electronics and to overcome fundamental issues such as lattice mismatch in hetero-epitaxy, was developed and reported by Venkatasubramanian et al. in 1992, and the performance of eutectic-bonded GaAs-AlGaAs materials for solar cells was further validated and reported by the same group in 1994. Eutectic bonding is able to produce hermetically sealed packages and electrical interconnection within a single process (compare ultrasonic images). This procedure is conducted at low temperatures, which results in low resultant stress in the final assembly, high bonding strength, high fabrication yield and good reliability. These attributes also depend on the difference in the coefficients of thermal expansion of the substrates. The most important parameters for eutectic bonding are bonding temperature, bonding duration, and tool pressure. Overview Eutectic bonding is based on the ability of silicon (Si) to alloy with numerous metals and form a eutectic system. The most established eutectic formations are Si with gold (Au) or with aluminium (Al). This bonding procedure is most commonly used for Si or glass wafers that are coated with an Au/Al film and partly with an adhesive layer (compare with the following image). The Si-Au couple has the advantages of an exceptionally low eutectic temperature, already widespread use in die bonding, and compatibility with Al interconnects. Additionally, other eutectic alloys often used for wafer bonding in semiconductor fabrication are shown in the table. The choice of alloy is determined by the processing temperature and the compatibility of the materials used. Further, the bonding imposes fewer restrictions on substrate roughness and planarity than direct bonding. Compared to anodic bonding, no high voltages are required; such voltages can be detrimental to electrostatic MEMS. Additionally, the eutectic bonding procedure provides better out-gassing behaviour and hermeticity than bonding with organic intermediate layers. Compared to glass frit bonding, eutectic bonding allows a reduction of seal ring geometries, an increase in hermeticity levels and a shrinking of device size. The geometry of eutectic seals is characterized by a thickness of 1–5 μm and a width of > 50 μm. The use of a eutectic alloy brings the advantage of providing electrical conduction and interfacing with redistribution layers. The temperature of the eutectic bonding procedure depends on the materials used. The bond forms at a specific composition (weight-%) and temperature, e.g. 370 °C at 2.85 wt-% Si for an Au intermediate layer (compare to the phase diagram).
The procedure of eutectic bonding is divided into the following steps: Substrate processing Conditioning prior to bonding (e.g. oxide removal) Bonding process (temperature, mechanical pressure for a few minutes) Cooling process Procedural steps Pre-treatment Surface preparation is the most important step for accomplishing a successful eutectic bond. Prior to preparation, the oxide on the silicon surface acts as a diffusion barrier; the eutectic metal bond must be formed against clean silicon. To remove existing native oxide layers, wet chemical etching (HF clean), dry chemical etching or chemical vapor deposition (CVD) with different types of crystals can be used. Also, some applications require a surface pre-treatment using dry oxide removal processes, e.g. H2 plasma or CF4 plasma. An additional method for the removal of unwanted surface films, i.e. oxide, is applying ultrasound during the attachment process. As soon as the tool is lowered, a relative vibration between the wafer and the substrate is applied. Commonly, industrial bonders use ultrasound with a vibration frequency of 60 kHz and a vibration amplitude of 100 μm. Successful oxide removal results in a solid, hermetically tight connection. A second method to ensure that the eutectic metal adheres to the Si wafer is the use of an adhesion layer. This thin intermediate metal layer adheres well to both the oxide and the eutectic metal. Well-suited metals for an Au-Si compound are titanium (Ti) and chromium (Cr), resulting in, e.g., Si-SiO2-Ti-Au or Si-SiO2-Cr-Au stacks. The adhesion layer is used to break up the oxide by diffusion of silicon into the used material. A typical wafer is composed of a silicon wafer with oxide, a 30–200 nm Ti or Cr layer and an Au layer of > 500 nm thickness. In wafer fabrication, a nickel (Ni) or platinum (Pt) layer is added between the gold and the substrate wafer as a diffusion barrier. The diffusion barrier avoids interaction between Au and Ti/Cr but requires higher temperatures to form a reliable and uniform bond. Further, the very limited solubility of silicon in titanium and chromium can prevent the development of the Au-Si eutectic composition, which relies on the diffusion of silicon through the titanium into the gold. The eutectic materials and optional adhesion layers are usually deposited as an alloy in one layer by dual-component electroplating, dual-source evaporation (physical vapor deposition), or composite alloy sputtering. Bonding process The substrates are contacted directly after the pre-treatment of the surfaces to avoid oxide regeneration. The bonding procedure for oxidizing metals (not Au) generally takes place in a reducing atmosphere of 4% hydrogen in an inert carrier gas flow, e.g. nitrogen. The requirements for the bonding equipment lie in the thermal and pressure uniformity across the wafer. This enables uniformly compressed seal lines. The substrate is aligned and fixed on a heated stage and the silicon wafer in a heated tool. The substrates inserted into the bonding chamber are contacted while maintaining the alignment. As soon as the layers are in atomic contact, the reaction between them starts. To support the reaction, mechanical pressure is applied and the stack is heated above the eutectic temperature. The diffusivity and solubility of gold in the silicon substrate increase with rising bonding temperature. A temperature higher than the eutectic temperature is usually preferred for the bonding procedure. This may result in the formation of a thicker Au-Si alloy layer and thus a stronger eutectic bond.
The diffusion starts as soon as the layers are in atomic contact at elevated temperatures. The contacted surface layer containing the eutectic composition melts, forming a liquid-phase alloy, which accelerates further mixing and diffusion until the saturation composition is reached. Other eutectic alloys commonly used for wafer bonding include Au-Sn, Al-Ge, Au-Ge, Au-In and Cu-Sn. The chosen bonding temperature is usually a few degrees higher than the eutectic temperature so that the melt becomes less viscous and, owing to surface roughness, readily flows into surface areas that are not in atomic contact. To prevent the melt from being pressed outside the bonding interface, optimization of the bonding parameters is necessary, e.g. a low force on the wafers. Otherwise, squeezed-out melt may lead to short circuits or malfunctions of the electrical and mechanical components used. The heating of the wafers leads to a change in the surface texture due to the formation of fine silicon microstructures on top of the gold surface. Cooling process The material mix solidifies when the temperature decreases below the eutectic point or the concentration ratio changes (for Si-Au, the eutectic composition of about 2.85 wt-% Si). The solidification leads to epitaxial growth of silicon and gold on top of the silicon substrate, resulting in numerous small silicon islands protruding from a polycrystalline gold alloy (compare to the cross-section image of the bonding interface). This can result in bonding strengths of around 70 MPa. Appropriate process parameters, i.e. sufficient bonding temperature control, are important; otherwise the bond cracks due to stress caused by the mismatch of the thermal expansion coefficients. This stress is able to relax over time. Potential Uses Because of the high bonding strength, this procedure is especially applicable to pressure sensors or fluidics. Micro-mechanical sensors and actuators with electronic or mechanical functions spanning multiple wafers can be fabricated. References Electronics manufacturing Packaging (microfabrication) Semiconductor technology Wafer bonding
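As a rough illustration of the thermal-expansion-mismatch stress mentioned above, the following Python sketch applies the standard biaxial thin-film stress estimate sigma = E/(1 − nu)·Δalpha·ΔT for cooling from the 370 °C bonding temperature. It is an order-of-magnitude illustration only; the material values (modulus, Poisson ratio, CTE difference) are assumed example numbers for an Au-rich layer on Si, not data from this article.

# Illustrative estimate of thermo-mechanical stress in a eutectic bond
# caused by a coefficient-of-thermal-expansion (CTE) mismatch.
# All material values below are assumed example numbers, not article data.

def biaxial_thermal_stress(e_modulus_pa, poisson, delta_cte_per_k, delta_t_k):
    """Biaxial film stress: sigma = E / (1 - nu) * delta_alpha * delta_T."""
    return e_modulus_pa / (1.0 - poisson) * delta_cte_per_k * delta_t_k

delta_t = 370.0 - 25.0            # K, cooling from the bonding temperature to room temperature
e_layer = 79e9                    # Pa, assumed modulus of the gold-rich bond layer
nu_layer = 0.42                   # assumed Poisson ratio
delta_cte = (14.2 - 2.6) * 1e-6   # 1/K, assumed CTE difference between Au and Si

sigma = biaxial_thermal_stress(e_layer, nu_layer, delta_cte, delta_t)
print(f"Estimated thermal stress: {sigma / 1e6:.0f} MPa")   # a few hundred MPa for these assumptions

Such a simple estimate already shows why temperature control and stress relaxation matter for the integrity of the bond.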
Eutectic bonding
Materials_science,Engineering
1,869
72,635,493
https://en.wikipedia.org/wiki/16bit%20Sensation
16bit Sensation is a Japanese manga conceptualized by Misato Mitsumi, Tatsuki Amazuyu, and Tamiki Wakaki and illustrated by Wakaki. It was first launched as a dōjinshi at Comic Market 91 in December 2016; Kadokawa Shoten started publishing it in collected volumes in September 2020, with two volumes released as of November 2021. An anime television series adaptation, titled 16bit Sensation: Another Layer and produced by Studio Silver, aired from October to December 2023. Characters Alcohol Soft The protagonist of the manga series. She works at Alcohol Soft as a concept and line art illustrator, and is learning to become a programmer. She pushes to give Konoha a chance at Alcohol Soft. He works as a programmer at Alcohol Soft, and is the son of the company owner. Some years later, he is still at the company, but refuses to make a game on a computer system using Windows 95, as he is very dedicated to the PC-98 platform. In the seventh episode of the anime, he is transported back in time. In the present timeline, he assists Konoha in her time travels. He is the only staff member at Alcohol Soft who knows that Konoha came from the future. She works at Alcohol Soft as a concept art, line art, and CG illustrator. In 1992, she and Meiko bring Konoha into the building after she asks Mamoru for help. Later, she asks Konoha to bring back Mamoru, who refuses to work on any game except one produced with the PC-98. In the original manga series, and in some episodes of the anime adaptation, she is often seen wearing a cat hat. The manager of Alcohol Soft. He is Mamoru's father. He later gets unknowingly involved in financial fraud, almost bringing down Alcohol Soft. He is a scenario writer at Alcohol Soft and often wears a mask covering his eyes. Alcohol Soft (Another Layer) The protagonist of the anime series. A 19-year-old budding illustrator working at the video game company Blue Bell, who finds herself in the year 1992 and does part-time work for Alcohol Soft. She is very passionate about games. She has a fang, and usually speaks in the third person. Konoha is an anime-original character, although she appears in the final chapter of the manga series. Other characters A shy 19-year-old girl whom Konoha meets when she time-travels to 1996, and who buys games with Konoha's moral support. When she comes upon Konoha later, in 1999, she is a well-known illustrator who is promoting her game. In the future, she becomes the CEO of a large video game company. Media Manga Illustrated by Tamiki Wakaki, in collaboration with Misato Mitsumi and Tatsuki Amazuyu on the story, and based on their real-life experiences at Aquaplus, 16bit Sensation was first distributed as a dōjinshi at Comic Market on December 31, 2016. Kadokawa Shoten started publishing the manga in collected volumes on September 14, 2020. As of November 6, 2021, two volumes have been released. Volumes Anime In December 2022, it was announced that the manga would receive an anime adaptation. The adaptation was later announced by Aniplex at AnimeJapan to be a television series titled 16bit Sensation: Another Layer. It was animated by Studio Silver and directed by Takashi Sakuma, with an original story written by Tatsuya Takahashi and Wakaki, character designs by Masakatsu Sakaki, and music composed by Yashikin. It aired from October 5 to December 28, 2023, on Tokyo MX and other networks. The opening theme song is "65535", performed by Shoko Nakagawa, while the ending theme song is "Link~past and future~", performed by Aoi Koga as her character Konoha Akisato. Crunchyroll licensed the series.
Muse Communication licensed the series in Southeast Asia. Episodes Reception The anime adaptation was received positively. Christopher Farris reviewed the first three episodes for Anime News Network, arguing it is a series steeped in "niche nostalgia" with references that only some would know, arguing that Konoha is an energetic and effective guide to introduce audiences to this nostalgia, and praising the performance of her voice actress, Aoi Koga. Farris also notes the cultural framework of the adaptation, which is partially a spinoff from the original manga which "entirely took place in the 1990s," and notes the unique approach of the series to its material, and to otaku culture. In her review of the first episode, Cy Catwell of Anime Feminist noted that it tells a story of how Konoha gets a "second chance" to use her passion to more than "her current job" in 2023, called it "really cute", and said there was a potential for the series to become a feminist series which engages with "the feminine experience with eroticism and adult media". In the three-episode check-in on Anime Feminist, Alex Henderson said the series was becoming "something entertaining and pretty interesting," as a "love letter" to the 1990s, said they are enjoying the characters, and hoped that Mamoru and Konoha did not become "love interests". Steven Blackburn of Screen Rant said that the series offers "a unique twist on the typical isekai genre," with parallels between her predicament in 2023 and where she ends up after opening a game. Hosts of Anime News Network's "This Week in Anime" noted that Konoha proselytizes about the "storied output" of Key and other works, describing Konoha as a "wondrous recreation of a modern nerd". Notes References Further reading External links Anime and manga about time travel Anime series based on manga Aniplex Comedy anime and manga Comics set in the 1990s Crunchyroll anime Doujinshi Japanese time travel television series Kadokawa Shoten manga Muse Communication Science fiction anime and manga Television series set in 1985 Television series set in 1992 Television series set in 1996 Television series set in 1999 Television series set in 2023 Time loop anime and manga Tokyo MX original programming Works about video games
16bit Sensation
Technology
1,287
2,161,573
https://en.wikipedia.org/wiki/Pax%20Christi
Pax Christi International is an international Catholic peace movement. The Pax Christi International website declares its mission is "to transform a world shaken by violence, terrorism, deepening inequalities, and global insecurity". History Pax Christi (Latin for Peace of Christ) was established in France in March 1945 by Marthe Dortel-Claudot and Bishop Pierre-Marie Théas, after the Germans had been expelled from France but before the end of World War II in Europe. Both were French citizens interested in reconciliation between French and German citizens in the aftermath of the war. Some of the first actions of Pax Christi were the organisation of kindness pilgrimages and other actions fostering reconciliation between France and Germany. Although Pax Christi initially began as a movement for French-German reconciliation, it expanded its focus and spread to other European countries in the 1950s. It grew as “a crusade of prayer for peace among all nations.” Pax Christi was recognized as “the official international Catholic peace movement” by Pope Pius XII in 1952. It also has chapters in the United States. In the 1960s, it became involved in Mississippi in organizing economic boycotts of businesses that discriminated against blacks, in an effort to support protesters in the civil rights movement, who were trying to end discrimination in facilities and employment. It was active in Greenwood, Mississippi, among other places. In 1983, Pax Christi International was awarded the UNESCO Peace Education Prize. The Pax Christi network membership is made up of 18 national sections and 115 Member Organizations working in over 50 countries. Peace work Pax Christi focuses on human rights, human security, disarmament and demilitarisation, nonviolence, nuclear disarmament, extractives in Latin America, and a renewed peace process for Israel-Palestine. Since 1988, the organisation gives out the Pax Christi International Peace Award to peace organisations and peace activists around the world. Organization Pax Christi is made up of national sections of the movement, affiliated organizations and partner organizations. Its International Secretariat is in Brussels. Pax Christi has consultative status as a non-governmental organization at the United Nations. International presidents Maurice Feltin (1950–1965) Bernard Alfrink (1965–1978) Luigi Bettazzi (1978–1985) Franz König (1985–1990) Godfried Danneels (1990–1999) Michel Sabbah (1999–2007) In 2007, a co-presidency was created with a bishop and a lay woman. Laurent Monsengwo (2007–2010) Marie Dennis (2007–2019) Kevin Dowling (C.SS.R.) (2010–2019) Sr. Teresia Wamuyu Wachira, IBVM (2019–present) Marc Stenger (2019–present) See also Catholic peace traditions Religion and peacebuilding Pope Paul VI Teacher of Peace Award List of anti-war organizations References Further reading External links Pax Christi Peace Stories Catholic Nonviolence Initiative International Christian organizations Peace organizations Catholic lay organisations Organizations established in 1945 Anti-nuclear organizations International Campaign to Abolish Nuclear Weapons
Pax Christi
Engineering
628
25,115,605
https://en.wikipedia.org/wiki/Stratocladistics
Stratocladistics is a technique in phylogenetics of making phylogenetic inferences using both geological and morphobiological data. It follows many of the same rules as cladistics, using Bayesian logic to quantify how good a phylogenetic hypothesis is in terms of debt and parsimony. However, in addition to the morphological debt that is used to determine phylogenetic dissimilarities in cladistics, there is also stratigraphic debt which adds the dimension of time to the equation. Although stratocladistics has been viewed with suspicion by some workers, it represents a total evidence approach that has some advantages over traditional cladistic approaches. For example, stratocladistics has been shown to outperform simple parsimony in tests based on simulated data and stratocladistics has better resolution than simple cladistics, with fewer equally parsimonious trees than in a basic cladistic analysis. References Further reading External links — software for stratocladistic reconstructions Phylogenetics
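As a rough illustration of how stratigraphic debt can be combined with morphological debt, the following Python sketch scores a hypothesis by adding character-state changes to the number of stratigraphic intervals in which a lineage is implied by the hypothesis but not yet sampled (a simplified ghost-range count). This is a toy model written for this article, not an implementation of any published stratocladistic software, and all data in it are made up.

# Toy stratocladistic scoring: total debt = morphological debt (character-state
# changes) + stratigraphic debt (unsampled intervals implied by the hypothesis).

def stratigraphic_debt(first_appearance, implied_origin):
    """Count unsampled intervals between each taxon's implied origin and its
    first observed appearance (a simplified ghost-range count)."""
    return sum(max(first_appearance[taxon] - implied_origin[taxon], 0)
               for taxon in first_appearance)

def total_debt(morphological_debt, first_appearance, implied_origin):
    return morphological_debt + stratigraphic_debt(first_appearance, implied_origin)

# Hypothetical data: stratigraphic intervals numbered from oldest (0) upward.
first_appearance = {"A": 0, "B": 2, "C": 3}

hypothesis_1 = {"A": 0, "B": 0, "C": 1}   # implies long ghost ranges for B and C
hypothesis_2 = {"A": 0, "B": 2, "C": 2}   # origins close to the first appearances

print(total_debt(4, first_appearance, hypothesis_1))  # 4 + (2 + 2) = 8
print(total_debt(6, first_appearance, hypothesis_2))  # 6 + (0 + 1) = 7

In this made-up example the second hypothesis wins despite requiring more character-state changes, because it implies far less unsampled history, which is the basic trade-off stratocladistics formalizes.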
Stratocladistics
Biology
213
42,285,920
https://en.wikipedia.org/wiki/Laser%20microprobe%20mass%20spectrometer
A laser microprobe mass spectrometer (LMMS), also laser microprobe mass analyzer (LAMMA), laser ionization mass spectrometer (LIMS), or laser ionization mass analyzer (LIMA), is a mass spectrometer that uses a focused laser for microanalysis. It employs local ionization by a pulsed laser and subsequent mass analysis of the generated ions. Methods In laser microprobe mass analysis, a highly focused laser beam is pulsed onto a micro sample, usually with a volume of approximately 1 microliter. The ions generated by this laser are then analyzed with time-of-flight mass spectrometry to give composition, concentration and, in the case of organic molecules, structural information. Unlike other methods of microprobe analysis, which involve ions or electrons, the LMMS microprobe fires an ultraviolet pulse in order to create ions. Advantages LMMS is relatively simple to operate compared to other methods. Furthermore, its strengths include its ability to analyze biological materials to detect certain compounds (such as metals or organic materials). Sample preparation LAMMA places particular demands on the sample that is used. The sample must be small and thin. Ionization of too much material results in a large microplasma whose time spread and ion energy distribution entering the mass spectrometer can result in undesired peak deformation. See also Matrix-assisted laser desorption/ionization Franz Hillenkamp References Mass spectrometry Scientific techniques
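The time-of-flight step mentioned above rests on the standard relation m/z = 2·e·U·(t/L)², obtained by equating the acceleration energy z·e·U with the kinetic energy ½·m·v² for v = L/t. The short Python sketch below applies this generic relation; the acceleration voltage and flight-tube length are assumed illustrative values, not the parameters of any specific LAMMA instrument.

# Generic time-of-flight (TOF) conversion of a measured flight time into a
# mass-to-charge ratio.  Instrument values below are assumed for illustration.

E_CHARGE = 1.602176634e-19      # C
AMU = 1.66053906660e-27         # kg

def mass_to_charge(flight_time_s, accel_voltage_v, flight_length_m):
    """Return m/z in unified atomic mass units per elementary charge."""
    m_over_z_kg = 2.0 * E_CHARGE * accel_voltage_v * (flight_time_s / flight_length_m) ** 2
    return m_over_z_kg / AMU

# Example: assumed 3 kV acceleration voltage and a 1.0 m flight tube.
print(f"{mass_to_charge(13.1e-6, 3000.0, 1.0):.1f} u")   # roughly 99 u for these assumed values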
Laser microprobe mass spectrometer
Physics,Chemistry
304
23,173,874
https://en.wikipedia.org/wiki/Vienna%20rectifier
The Vienna Rectifier is a pulse-width modulation rectifier, invented in 1993 by Johann W. Kolar at TU Wien, a public research university in Vienna, Austria. Features The Vienna Rectifier provides the following features: Three-phase three-level three-switch PWM rectifier with controlled output voltage Three-wire input, no connection to neutral Ohmic mains behaviour Boost system (continuous input current) Unidirectional power flow High power density Low conducted common-mode electro-magnetic interference (EMI) emissions Simple control to stabilize the neutral point potential Low complexity, low realization effort Reliable behaviour (guaranteeing ohmic mains behaviour) under heavily unbalanced mains voltages and in case of mains failure Topology The Vienna Rectifier is a unidirectional three-phase three-switch three-level pulse-width modulation (PWM) rectifier. It can be seen as a three-phase diode bridge with an integrated boost converter. Applications The Vienna Rectifier is useful wherever six-switch converters are used for achieving sinusoidal mains current and controlled output voltage, when no energy feedback from the load into the mains is available. In practice, use of the Vienna Rectifier is advantageous when space is at a sufficient premium to justify the additional hardware cost. These include: Telecommunications power supplies. Uninterruptible power supplies. Input stages of AC-drive converter systems. Figure 2 shows the top and bottom views of an air-cooled 10 kW Vienna Rectifier (400 kHz PWM), with sinusoidal input current and controlled output voltage. Dimensions are 250 mm x 120 mm x 40 mm, resulting in a power density of 8.5 kW/dm3. The total weight of the converter is 2.1 kg. Current and voltage waveforms Figure 3 shows the system behaviour, calculated using a power-electronics circuit simulator. Between the output voltage midpoint (0) and the mains midpoint (M) the common-mode voltage u0M appears, as is characteristic of three-phase converter systems. Current control and balance of the neutral point at the DC-side It is possible to separately control the input current shape in each branch of the diode bridge by inserting a bidirectional switch into the node, as shown in Figure 3. The switch Ta controls the current by controlling the magnetization of the inductor. When the bidirectional switch is turned on, the input voltage is applied across the inductor and the current in the inductor rises linearly. Turning off the switch causes the voltage across the inductor to reverse and the current to flow through the freewheeling diodes Da+ and Da-, decreasing linearly. By controlling the switch on-time, the topology is able to control the current in phase with the mains voltage, presenting a resistive load behaviour (power-factor correction capability). To generate a sinusoidal input current which is in phase with the mains voltage, the average voltage space vector formed by the rectifier over a pulse period must equal the mains voltage space vector minus the voltage drop across the input inductors, i.e. ū ≈ u_N − L·(di_N/dt). For high switching frequencies or low inductivities the inductor voltage drop is small, and we require approximately ū ≈ u_N. The available voltage space vectors for forming the input voltage are defined by the switching states and the direction of the phase currents. For example, within a given 60° sector of the mains period the signs of the three phase currents do not change, so the phase of the input current space vector lies in a known range and determines which voltage space vectors are available. Fig. 4 shows the conduction states of the system, and from this we get the input space vectors shown in Fig. 5. See also Warsaw rectifier References Electronic circuits Electric power conversion Power electronics Rectifiers 20th-century inventions
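The on-time control of the boost inductor current described above can be illustrated with a very simplified single-phase simulation: during the on-time the rectified mains voltage magnetizes the inductor, during the off-time the current falls against roughly half the DC-link voltage, and the duty cycle is chosen so that the average current tracks a sinusoidal reference. The Python sketch below is a didactic approximation written for this article; all component values and the crude proportional tracking term are assumptions and do not describe the actual three-phase converter or its neutral-point balancing.

# Didactic single-phase sketch of boost-type current shaping as used in a
# Vienna-rectifier input stage.  All values are assumed for illustration.

import math

V_PEAK, V_DC = 325.0, 700.0        # V: 230 V rms mains, assumed DC-link voltage
L, F_SW, F_MAINS = 500e-6, 50e3, 50.0
I_REF_PEAK = 10.0                  # A, amplitude of the sinusoidal current reference

dt, t, i_l = 1.0 / F_SW, 0.0, 0.0
for step in range(int(0.02 / dt)):                     # one 50 Hz mains period
    v_in = abs(V_PEAK * math.sin(2 * math.pi * F_MAINS * t))
    i_ref = I_REF_PEAK * abs(math.sin(2 * math.pi * F_MAINS * t))
    duty = 1.0 - v_in / (V_DC / 2)                     # steady-state boost duty cycle
    # Net current change over one pulse period: rise during the on-time,
    # fall during the off-time; zero on average for this steady-state duty cycle,
    # so the current is steered toward the reference by the correction below.
    di = (v_in * duty - (V_DC / 2 - v_in) * (1.0 - duty)) * dt / L
    i_l = max(0.0, i_l + di + 0.2 * (i_ref - i_l))     # crude tracking of the reference
    if step % 250 == 0:
        print(f"t={t*1e3:5.2f} ms  i_ref={i_ref:5.2f} A  i_L={i_l:5.2f} A")
    t += dt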
Vienna rectifier
Engineering
770
69,732,690
https://en.wikipedia.org/wiki/Quasimorphism
In group theory, given a group G, a quasimorphism (or quasi-morphism) is a function f : G → ℝ which is additive up to bounded error, i.e. there exists a constant D ≥ 0 such that |f(gh) − f(g) − f(h)| ≤ D for all g, h in G. The least positive value of D for which this inequality is satisfied is called the defect of f, written as D(f). For a group G, quasimorphisms form a subspace of the function space ℝ^G. Examples Group homomorphisms and bounded functions from G to ℝ are quasimorphisms. The sum of a group homomorphism and a bounded function is also a quasimorphism, and functions of this form are sometimes referred to as "trivial" quasimorphisms. Let F be a free group over a set S. For a reduced word w in S, we first define the big counting function C_w(g), which returns for g in F the number of copies of w in the reduced representative of g. Similarly, we define the little counting function c_w(g), returning the maximum number of non-overlapping copies of w in the reduced representative of g. For example, C_{aa}(aaaa) = 3 and c_{aa}(aaaa) = 2. Then, a big counting quasimorphism (resp. little counting quasimorphism) is a function of the form g ↦ C_w(g) − C_{w⁻¹}(g) (resp. g ↦ c_w(g) − c_{w⁻¹}(g)), where w⁻¹ denotes the inverse word. The rotation number rot : Homeo⁺(S¹) → ℝ is a quasimorphism, where Homeo⁺(S¹) denotes the orientation-preserving homeomorphisms of the circle. Homogeneous A quasimorphism is homogeneous if f(g^n) = n·f(g) for all g in G and n in ℤ. It turns out the study of quasimorphisms can be reduced to the study of homogeneous quasimorphisms, as every quasimorphism f is a bounded distance away from a unique homogeneous quasimorphism f̄, given by f̄(g) = lim_{n→∞} f(g^n)/n. A homogeneous quasimorphism f̄ has the following properties: It is constant on conjugacy classes, i.e. f̄(g⁻¹hg) = f̄(h) for all g, h in G; If G is abelian, then f̄ is a group homomorphism. The above remark implies that in this case all quasimorphisms are "trivial". Integer-valued One can also define quasimorphisms similarly in the case of a function f : G → ℤ. In this case, the above discussion about homogeneous quasimorphisms does not hold anymore, as the limit lim_{n→∞} f(g^n)/n does not exist in ℤ in general. For example, for α in ℝ, the map ℤ → ℤ sending n to ⌊αn⌋ is a quasimorphism. There is a construction of the real numbers as a quotient of quasimorphisms ℤ → ℤ by an appropriate equivalence relation, see Construction of the real numbers from integers (Eudoxus reals). Notes References Further reading What is a Quasi-morphism? by D. Kotschick Group theory Additive functions
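The big and little counting functions on a free group can be computed directly on reduced words. The following Python sketch, written for this article as an illustration (it is not taken from any reference), represents words as strings over a, b, A, B, where capital letters stand for inverse generators, and also evaluates the corresponding counting quasimorphism C_w − C_{w⁻¹}:

# Counting functions on reduced words in the free group on a and b.
# Words are strings over 'a', 'b', 'A', 'B', with 'A' = a^-1 and 'B' = b^-1.

def big_count(w, g):
    """C_w(g): number of (possibly overlapping) copies of w in g."""
    return sum(1 for i in range(len(g) - len(w) + 1) if g[i:i + len(w)] == w)

def little_count(w, g):
    """c_w(g): maximal number of non-overlapping copies of w in g (greedy scan)."""
    count, i = 0, 0
    while i <= len(g) - len(w):
        if g[i:i + len(w)] == w:
            count += 1
            i += len(w)
        else:
            i += 1
    return count

def inverse(w):
    """Inverse of a reduced word: reverse it and swap generator/inverse letters."""
    return w[::-1].swapcase()

def counting_quasimorphism(w, g):
    """Counting quasimorphism f_w(g) = C_w(g) - C_{w^-1}(g)."""
    return big_count(w, g) - big_count(inverse(w), g)

print(big_count("aa", "aaaa"))                 # 3 overlapping copies
print(little_count("aa", "aaaa"))              # 2 non-overlapping copies
print(counting_quasimorphism("ab", "ababAB"))  # 2 copies of "ab", none of "BA": prints 2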
Quasimorphism
Mathematics
474
11,527,342
https://en.wikipedia.org/wiki/STO-nG%20basis%20sets
STO-nG basis sets are minimal basis sets in which n primitive Gaussian orbitals are fitted to a single Slater-type orbital (STO). n originally took the values 2 – 6. They were first proposed by John Pople. A minimum basis set is one in which only sufficient orbitals are used to contain all the electrons in the neutral atom. Thus for the hydrogen atom, only a single 1s orbital is needed, while for a carbon atom, 1s, 2s and three 2p orbitals are needed. The core and valence orbitals are represented by the same number n of primitive Gaussian functions. For example, in an STO-3G basis set the 1s, 2s and 2p orbitals of the carbon atom are each a linear combination of 3 primitive Gaussian functions. For example, an STO-3G 1s orbital is given by ψ(STO-3G) = c1·g1 + c2·g2 + c3·g3, where each primitive Gaussian has the normalized form gi(r) = (2αi/π)^(3/4)·exp(−αi·r²). The values of c1, c2, c3, α1, α2 and α3 have to be determined. For the STO-nG basis sets, they are obtained by making a least-squares fit of the three Gaussian orbitals to the single Slater-type orbital. (Extensive tables of parameters have been calculated for STO-1G through STO-5G for s orbitals through g orbitals.) This differs from the more common procedure, in which the criterion used is to choose the coefficients (c's) and exponents (α's) to give the lowest energy with some appropriate method for some appropriate molecule. A special feature of this basis set is that common exponents are used for orbitals in the same shell (e.g. 2s and 2p), as this allows more efficient computation. The fit between the Gaussian orbitals and the Slater orbital is good for all values of r, except for very small values near the nucleus. The Slater orbital has a cusp at the nucleus, while Gaussian orbitals are flat at the nucleus. Use of STO-nG basis sets The most widely used basis set of this group is STO-3G, which is used for large systems and for preliminary geometry determinations. This basis set is available for all atoms from hydrogen up to xenon. STO-2G basis set The STO-2G basis set is a linear combination of 2 primitive Gaussian functions. The original coefficients and exponents for first-row and second-row atoms are given as follows. Accuracy The exact energy of the 1s electron of the H atom is −0.5 hartree, given by a single Slater-type orbital with exponent 1.0. The following table illustrates the increase in accuracy as the number of primitive Gaussian functions increases from 3 to 6 in the basis set. See also List of quantum chemistry and solid state physics software References Quantum chemistry
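As an illustration of this contraction, the short Python sketch below (written for this article) evaluates an STO-3G 1s function using exponents and contraction coefficients commonly tabulated for hydrogen; those tabulated values correspond to a scaled Slater exponent of about 1.24 rather than 1.0, and they should be checked against a basis-set database before any serious use. The comparison with the Slater orbital also shows the flat Gaussian behaviour at the nucleus discussed above.

# Evaluate an STO-3G 1s orbital as a contraction of three normalized primitive
# Gaussians: psi(r) = sum_i c_i * (2*alpha_i/pi)**0.75 * exp(-alpha_i*r**2).
# The exponents/coefficients below are commonly tabulated STO-3G values for
# hydrogen (corresponding to a Slater exponent of about 1.24); verify them
# against a basis-set database before relying on them.

import math

ALPHAS = (3.42525091, 0.62391373, 0.16885540)     # exponents (bohr^-2), assumed tabulated values
COEFFS = (0.15432897, 0.53532814, 0.44463454)     # contraction coefficients, assumed tabulated values

def sto3g_1s(r):
    return sum(c * (2.0 * a / math.pi) ** 0.75 * math.exp(-a * r * r)
               for a, c in zip(ALPHAS, COEFFS))

def slater_1s(r, zeta=1.24):
    """Slater-type 1s orbital with exponent zeta (zeta = 1.0 is exact for free hydrogen)."""
    return (zeta ** 3 / math.pi) ** 0.5 * math.exp(-zeta * r)

for r in (0.0, 0.5, 1.0, 2.0):                    # r in bohr
    print(f"r = {r:3.1f}  STO-3G = {sto3g_1s(r):.4f}  Slater = {slater_1s(r):.4f}")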
STO-nG basis sets
Physics,Chemistry
590
31,893,908
https://en.wikipedia.org/wiki/G.9963
Recommendation G.9963 is a home networking standard under development at the International Telecommunication Union standards sector, the ITU-T. It was begun in 2010 by ITU-T to add multiple-input and multiple-output (known as MIMO) capabilities to the G.hn standard originally defined in Recommendation G.9960. The standard is also known as "G.hn-mimo". As part of the family of G.hn standards, G.9963 was endorsed by the HomeGrid Forum. References Networking standards Network protocols Open standards International standards Computer networks Internet Standards ITU-T recommendations ITU-T G Series Recommendations
G.9963
Technology,Engineering
138