Dataset schema:
id: int64 (values 580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (values 3 to 51.8k)
50,984,432
https://en.wikipedia.org/wiki/Comparison%20gallery%20of%20image%20scaling%20algorithms
This gallery shows the results of numerous image scaling algorithms. Scaling methods An image's size can be changed in several ways. Consider resizing a 160x160 pixel photo to a 40x40 pixel thumbnail and then scaling the thumbnail back up to a 160x160 pixel image. Also consider doubling the size of an image containing text. References Image processing Image galleries Image scaling algorithms
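The resize-down-then-up experiment described above is easy to reproduce. Below is a minimal sketch using the Pillow imaging library (an assumed tool choice; the gallery itself names none), with a hypothetical 160x160 input file:

```python
# Minimal sketch of the gallery's experiment, assuming the Pillow library
# (>= 9.1 for Image.Resampling) and a hypothetical 160x160 input "photo.png".
from PIL import Image

FILTERS = {
    "nearest": Image.Resampling.NEAREST,    # blocky, preserves hard edges
    "bilinear": Image.Resampling.BILINEAR,  # smooth, slightly blurry
    "bicubic": Image.Resampling.BICUBIC,    # sharper smooth interpolation
    "lanczos": Image.Resampling.LANCZOS,    # high-quality windowed sinc
}

img = Image.open("photo.png")                            # 160x160 source image
thumb = img.resize((40, 40), Image.Resampling.LANCZOS)   # downscale once

# Upscale the same thumbnail with each algorithm for side-by-side comparison.
for name, resample in FILTERS.items():
    thumb.resize((160, 160), resample).save(f"upscaled_{name}.png")
```

Comparing the saved outputs shows the trade-offs the gallery illustrates: nearest-neighbor keeps hard edges (useful for text) but looks blocky on photos, while the smoother filters blur text but handle photographic detail better.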
Comparison gallery of image scaling algorithms
Technology
79
59,117,359
https://en.wikipedia.org/wiki/Ruth%20R.%20Wexler
Ruth Wexler is an American industrial chemist best known as a co-discoverer of apixaban, a marketed anticoagulant, and losartan, a blood pressure treatment. Education Wexler received her B.A. in chemistry from Boston University in 1977, and a Ph.D. in organic chemistry, working with Amos B. Smith at the University of Pennsylvania, in 1982. Research Wexler started her career at DuPont in 1982, rising to Executive Director in 1998. She then joined Bristol-Myers Squibb as an Executive Director in 2001, eventually moving to New Jersey to head its cardiovascular research unit. She has worked on targets involved in apoptosis, inflammation, obesity, and coagulation. As of 2018, she had over 215 original research publications. Awards 2015 - E.B. Hershberg Award for Discoveries in Medicinally Active Substances 2014 - Inducted into the ACS MEDI Hall of Fame 2011 - BMS Ondetti and Cushman Award 2004 - Outstanding New Jersey Woman in Research References Living people Year of birth missing (living people) American women chemists American organic chemists Boston University College of Arts and Sciences alumni University of Pennsylvania alumni 21st-century American women
Ruth R. Wexler
Chemistry
248
732,444
https://en.wikipedia.org/wiki/Nystatin
Nystatin, sold under the brand name Mycostatin among others, is an antifungal medication. It is used to treat Candida infections of the skin including diaper rash, thrush, esophageal candidiasis, and vaginal yeast infections. It may also be used to prevent candidiasis in those who are at high risk. Nystatin may be used by mouth, in the vagina, or applied to the skin. Common side effects when applied to the skin include burning, itching, and a rash. Common side effects when taken by mouth include vomiting and diarrhea. During pregnancy, use in the vagina is safe, while other formulations have not been studied in this group. It works by disrupting the cell membrane of the fungal cells. Nystatin was discovered in 1950 by Rachel Fuller Brown and Elizabeth Lee Hazen. It was the first polyene macrolide antifungal. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. It is made from the bacterium Streptomyces noursei. In 2022, it was the 236th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Medical uses Skin, vaginal, mouth, and esophageal Candida infections usually respond well to treatment with nystatin. Infections of nails or hyperkeratinized skin do not respond well. When given parenterally, its activity is reduced due to the presence of plasma. Oral nystatin is often used as a preventive treatment in people who are at risk for fungal infections, such as AIDS patients with a low CD4+ count and people receiving chemotherapy. It has been investigated for use in patients after liver transplantation, but fluconazole was found to be much more effective for preventing colonization, invasive infection, and death. It is effective in treating oral candidiasis in elderly people who wear dentures. It is also used in very low birth-weight (less than 1500 g or 3 lb 5 oz) infants to prevent invasive fungal infections, although fluconazole is the preferred treatment. It has been found to reduce the rate of invasive fungal infections and also to reduce deaths when used in these babies. Liposomal nystatin is not commercially available, but investigational use has shown greater in vitro activity than colloidal formulations of amphotericin B, and demonstrated effectiveness against some amphotericin B-resistant forms of fungi. It offers an intriguing possibility for difficult-to-treat systemic infections, such as invasive aspergillosis, or infections that demonstrate resistance to amphotericin B. Cryptococcus is also sensitive to nystatin. Additionally, liposomal nystatin appears to cause fewer and less severe cases of nephrotoxicity than observed with amphotericin B. Adverse effects Bitter taste and nausea are more common than most other adverse effects. The oral suspension form produces a number of adverse effects including but not limited to: Diarrhea Abdominal pain Rarely, tachycardia, bronchospasm, facial swelling, muscle aches Both the oral suspension and the topical form can cause: Hypersensitivity reactions, including Stevens–Johnson syndrome in some cases Rash, itching, burning and acute generalized exanthematous pustulosis Too high a dosage can potentially lead to additional side effects such as: Nephrotoxicity Hypokalemia Chills and skin rash Mechanism of action Like amphotericin B and natamycin, nystatin is an ionophore. It binds to ergosterol, a major component of the fungal cell membrane.
When present in sufficient concentrations, it forms pores in the membrane that lead to K+ leakage, acidification, and death of the fungus. Ergosterol is a sterol unique to fungi, so the drug does not have such catastrophic effects on animals or plants. However, many of the systemic/toxic effects of nystatin in humans are attributable to its binding to mammalian sterols, namely cholesterol. This effect accounts for the nephrotoxicity observed when high serum levels of nystatin are achieved. Despite the molecular similarity of ergosterol and cholesterol, there is currently no consensus on why nystatin binds ergosterol with higher affinity, in part because it remains unclear how the nystatin pores are formed. Researchers have concluded thus far that nystatin pores are formed from 4–12 nystatin molecules together with an unknown number of sterol molecules. Nystatin also impacts cell membrane potential and transport through lipid peroxidation. Conjugated double bonds in nystatin's structure draw electron density from ergosterol in fungal cell membranes. Lipid peroxidation alters the hydrophilicity of the interior of channels in the membrane, which is necessary to transport ions and polar molecules. Disruption of membrane transport by nystatin results in rapid cell death. Lipid peroxidation by nystatin also contributes significantly to K+ leakage through structural modifications of the membrane. Biosynthesis Nystatin A1 (often called simply nystatin) is biosynthesized by a bacterial strain, Streptomyces noursei. The structure of this active compound is characterized as a polyene macrolide with a deoxysugar, D-mycosamine, an aminoglycoside. The genomic sequence of nystatin reveals the presence of the polyketide loading module (nysA), six polyketide synthase modules (nysB, nysC, nysI, nysJ, and nysK) and two thioesterase modules (nysK and nysE). It is evident that the biosynthesis of the macrolide functionality follows the polyketide synthase I pathway. Following the biosynthesis of the macrolide, the compound undergoes post-synthetic modifications, which are aided by the following enzymes: GDP-mannose dehydratase (nysIII), P450 monooxygenase (nysL and nysN), aminotransferase (nysDII), and glycosyltransferase (nysDI). The biosynthetic pathway is thought to proceed through these modifications to yield nystatin. The melting point of nystatin is 44–46 °C. History Like many other antifungals and antibiotics, nystatin has a bacterial origin. It was isolated from Streptomyces noursei in 1950 by Elizabeth Lee Hazen and Rachel Fuller Brown, who were doing research for the Division of Laboratories and Research of the New York State Department of Health. Hazen found a promising micro-organism in the soil of a friend's dairy farm. She named it Streptomyces noursei, after Jessie Nourse, the wife of the farm's owner. Hazen and Brown named nystatin after the New York State Health Department in 1954. The two discoverers patented the drug, and then donated the $13 million in profits to a foundation to fund similar research. Other uses It is also used in cellular biology as an inhibitor of the lipid raft-caveolae endocytosis pathway on mammalian cells, at concentrations around 3 μg/ml. In certain cases, a nystatin derivative has been used to prevent the spread of mold on objects such as works of art. For example, it was applied to wood panel paintings damaged as a result of the Arno River flood of 1966 in Florence, Italy.
Nystatin is also used as a tool by scientists performing "perforated" patch-clamp electrophysiological recordings of cells. When loaded in the recording pipette, it allows measurement of electrical currents without washing out the intracellular contents, because it forms pores in the cell membrane that are permeable only to monovalent ions, primarily cations such as sodium, potassium, lithium, and cesium. Another electrophysiological measurement that can be made is fusion event duration in a nystatin-ergosterol based system. Fusions are measured while the voltage is held constant, and are characterized by a spike in the current that then returns to the baseline current as the nystatin channels close. When present in smaller concentrations, nystatin momentarily forms pores that allow a vesicle fusion to occur more easily; the fusion then disrupts the pore stability, and the nystatin and ergosterol disperse from each other. Conversely, researchers have found that the half-life of these nystatin pores increases with an increased dosage of nystatin to the membrane systems. This indicates a lower energy state of both the lipid membrane and the ionophores when there is a higher concentration of nystatin. Formulations An oral suspension form is used for the prophylaxis or treatment of oropharyngeal thrush, a superficial candidal infection of the mouth and pharynx. A tablet form is preferred for candidal infections in the intestines. Nystatin is available as a topical cream and can be used for superficial candidal infections of the skin. Additionally, a liposomal formulation of nystatin was investigated in the 1980s and into the early 21st century. The liposomal form was intended to resolve problems arising from the poor solubility of the parent molecule and the associated systemic toxicity of the free drug. Nystatin pastilles have been shown to be more effective in treating oral candidiasis than nystatin suspensions. Due to its toxicity profile when high serum levels are reached, no injectable formulations of nystatin are on the US market. However, injectable formulations have been investigated in the past.
Brand names Nyamyc Pedi-Dri Pediaderm AF Complete Candistatin Cazetin (oral drop) Nyaderm Bio-Statin PMS-Nystatin Nystan (oral tablets, topical ointment, and pessaries, formerly from Bristol-Myers Squibb) Infestat Nystalocal from Medinova AG Nystamont Nystop (topical powder, Paddock) Nystex Mykinac Nysert (vaginal suppositories, Procter & Gamble) Nystaform (topical cream, and ointment and cream combined with iodochlorhydroxyquine and hydrocortisone; formerly Bayer now Typharm Ltd) Nilstat (vaginal tablet, oral drops, Lederle) Kandistatin (oral drop) Korostatin (vaginal tablets, Holland Rantos) Mycostatin (vaginal tablets, topical powder, suspension Bristol-Myers Squibb) Mycolog-II (topical ointment, combined with triamcinolone; Apothecon) Mytrex (topical ointment, combined with triamcinolone) Mykacet (topical ointment, combined with triamcinolone) Myco-Triacet II (topical ointment, combined with triamcinolone) Flagystatin II (cream, combined with metronidazole) Timodine (cream, combined with hydrocortisone and dimethicone) Nistatina (oral tablets, Antibiotice Iaşi) Nidoflor (cream, combined with neomycin sulfate and triamcinolone acetonide) Stamicin (oral tablets, Antibiotice Iaşi) Lystin Animax (veterinary topical ointment or cream; combined with neomycin sulfate, thiostrepton and triamcinolone acetonide) Nyata (topical powder) References Antifungals Dermatoxins World Health Organization essential medicines Wikipedia medicine articles ready to translate Polyketides Lactones Polyenes Hemiketals Lactols Lipid methods Vicinal diols Secondary alcohols Amines Carboxylic acids Conjugated dienes Acetals
Nystatin
Chemistry,Biology
2,504
13,532,851
https://en.wikipedia.org/wiki/J.%20H.%20Wilkinson%20Prize%20for%20Numerical%20Software
The James H. Wilkinson Prize for Numerical Software is awarded every four years to honor outstanding contributions in the field of numerical software. The award commemorates the contributions of James H. Wilkinson to the same field. The prize was established by Argonne National Laboratory (ANL), the National Physical Laboratory (NPL), and the Numerical Algorithms Group (NAG). They sponsored the award every four years at the International Congress on Industrial and Applied Mathematics (ICIAM), beginning with the 1991 award. By agreement in 2015 among ANL, NPL, NAG, and the Society for Industrial and Applied Mathematics (SIAM), the prize has been administered by SIAM starting with the 2019 award. Eligibility and selection criteria Candidates must have worked in the field for at most 12 years after receiving their PhD, as of January 1 of the award year. Breaks in continuity are allowed, and the prize committee may make exceptions. The award is given on the basis of: Clarity of the software implementation and documentation. Clarity of the paper accompanying the entry. Portability, reliability, efficiency and usability of the software implementation. Depth of analysis of the algorithm and the software. Importance of the application addressed by the software. Quality of the test software. Winners 1991 The first prize in 1991 was awarded to Linda Petzold for DASSL, a differential algebraic equation solver. This code is available in the public domain. 1995 The 1995 prize was awarded to Chris Bischof and Alan Carle for ADIFOR 2.0, an automatic differentiation tool for Fortran 77 programs. The code is available for educational and non-profit research. 1999 The 1999 prize was awarded to Matteo Frigo and Steven G. Johnson for FFTW, a C library for computing the discrete Fourier transform. 2003 The 2003 prize was awarded to Jonathan Shewchuk for Triangle, a two-dimensional mesh generator and Delaunay triangulator. It is freely available. 2007 The 2007 prize was awarded to Wolfgang Bangerth, Guido Kanschat, and Ralf Hartmann for deal.II, a software library for the computational solution of partial differential equations using adaptive finite elements. It is freely available. 2011 Andreas Waechter (IBM T. J. Watson Research Center) and Carl Laird (Texas A&M University) were awarded the 2011 prize for IPOPT, an object-oriented library for solving large-scale continuous optimization problems. It is freely available. 2015 The 2015 prize was awarded to Patrick Farrell (University of Oxford), Simon Funke (Simula Research Laboratory), David Ham (Imperial College London), and Marie Rognes (Simula Research Laboratory) for the development of dolfin-adjoint, a package which automatically derives and solves adjoint and tangent linear equations from high-level mathematical specifications of finite element discretisations of partial differential equations. 2019 The 2019 prize was awarded to Jeff Bezanson, Stefan Karpinski, and Viral B. Shah for their development of the Julia programming language. 2023 The 2023 prize was awarded to Field Van Zee and Devin Matthews for the development of BLIS, a portable open-source software framework for instantiating high-performance BLAS-like dense linear algebra libraries on modern CPUs. See also List of computer science awards List of mathematics awards References External links Official Website Computer science awards Awards established in 1991 Awards of the Society for Industrial and Applied Mathematics
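Several of the winning packages implement compactly statable computations; FFTW, for example, computes the discrete Fourier transform. As a rough illustration of what such a library computes, here is a Python/NumPy stand-in for exposition only (FFTW itself is a C library, and NumPy's FFT implementation is unrelated to it):

```python
# Illustration of the discrete Fourier transform that libraries like FFTW
# compute quickly: compare a naive O(n^2) DFT against NumPy's fast transform.
# (NumPy stands in for exposition; it does not use FFTW.)
import numpy as np

def naive_dft(x):
    n = len(x)
    k = np.arange(n)
    # DFT matrix: W[j, k] = exp(-2*pi*i*j*k / n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

x = np.random.rand(64)
assert np.allclose(naive_dft(x), np.fft.fft(x))
print("naive DFT matches the fast transform")
```

The point of prize-winning software like FFTW is not the definition above, which is a few lines, but computing it in O(n log n) time portably and at near-peak machine speed.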
J. H. Wilkinson Prize for Numerical Software
Technology
693
69,150,103
https://en.wikipedia.org/wiki/Christa%20Muller-Sieburg
Christa Edith Muller-Sieburg (19 February 1952 – 12 January 2013) was a German-American immunologist and hematologist whose work became central to the understanding of the clonal heterogeneity of hematopoietic stem cells (HSCs). Muller-Sieburg is known for her contributions to the purification of hematopoietic stem cells, the characterization of individual stem cell clones, and her revision of the accepted view of hematopoiesis. Muller-Sieburg was a co-discoverer of the negative marker set of hematopoietic stem cells that led to the modern purification techniques widely used in hematopoietic stem cell research today. She was the first to demonstrate the biased differentiation behavior of individual stem cell clones, thereby sparking an entirely new view of hematopoiesis. Biography Muller-Sieburg received her Abitur in 1972 in Bonn, West Germany. The same year, she moved to Cologne to begin her studies in biology at the University of Cologne. She completed her studies under the guidance of Klaus Rajewsky in 1978 with a diploma thesis in immunology entitled "Investigations concerning the Class Specificity of the Fc-Receptor on Murine Lymphocytes Using Monoclonal Antibodies" at the Institut für Genetik. She received her doctorate in the natural sciences in 1983 with a dissertation entitled "Regulation of the Expression of Idiotopic Antibodies by Isotype Variants of Monoclonal Anti-Idiotopic Antibodies" (advisor: Klaus Rajewsky). Muller-Sieburg married Hans B. Sieburg, a mathematician whom she had met in 1972 while studying at the University of Cologne. Muller-Sieburg died on 12 January 2013 of squamous cell carcinoma, after nine years of illness during which she continued to work actively. Academic career In 1983, Muller-Sieburg and her husband, Hans B. Sieburg, moved to the United States, both as fellows of the Deutsche Forschungsgemeinschaft (German Research Foundation) at Stanford University. There, Muller-Sieburg began her research in the laboratory of Irving Weissman at the Stanford University Medical Center, while H. Sieburg worked and taught in the Stanford Mathematics Department. Muller-Sieburg's research in Weissman's lab was focused on the identification of a common cell precursor for both T cells and B cells. She worked closely with Cheryl Ann Whitlock, who had come to Weissman's lab from Owen Witte's lab, also to work on the B cell precursor problem. The results of their collaboration were reported in a joint paper describing for the first time the isolation of an early committed pre-pre-B cell, along with the discovery of a hematopoietic stem cell population expressing low levels of the Thy-1 antigen. The marker Thy-1(low) was crucial to establishing the exclusion criteria for the purification of HSCs. In 1986, Muller-Sieburg and her husband moved to La Jolla, California, where she continued her work on the characterization and maintenance of hematopoietic stem cells at the Eli Lilly Research Institute led by Dr. Jacques M. Chiller, while Hans Sieburg initially joined the laboratory of Melvin Cohn at the Salk Institute for Biological Studies and later became a faculty member at the University of California, San Diego. In 1989, Muller-Sieburg became an independent group leader at the Medical Biology Institute in La Jolla, where she expanded her work on the purification and maintenance of hematopoietic stem cells via long-term bone marrow cultures – a technique she had developed in collaboration with Cheryl Whitlock, George F. Tidmarsh and Irving Weissman at Stanford.
Using this technique, Muller-Sieburg and Elena Deryugina identified the growth factor macrophage colony-stimulating factor (M-CSF) as a cytokine critical for the maintenance of stromal cell support for hematopoietic stem cells. Muller-Sieburg's recognition as a leading scientist in the field of experimental hematology led to her appointment as a professor and head of the stem cell program at the Sidney Kimmel Cancer Center, La Jolla, in 1998, and, subsequently, as a professor at the Sanford Burnham Medical Research Institute (later Sanford Burnham Prebys), from 2009 until her death. During her research career Muller-Sieburg published more than 50 articles in peer-reviewed journals, wrote several invited book chapters, and co-authored one book on hematopoietic stem cells. Muller-Sieburg was frequently invited to national and international conferences and symposia. She gave her last invited lecture, "The Life of a Hematopoietic Stem Cell", at the Keystone Symposium "The Life of a Stem Cell: From Birth to Death" in March 2012. In 2013, the Christa Muller-Sieburg Award was named after her by the International Society for Experimental Hematology. Research Immunology While working at the University of Cologne, Muller-Sieburg addressed a key element of the idiotype network theory postulated by Niels Kaj Jerne, namely the enigmatic shift from one class of immunoglobulins to another produced by the same clone of B-lymphocytes. By making sequential sub-lines from an original hybridoma line, she discovered immunoglobulin class switching and described it in her 1983 paper published with Klaus Rajewsky. The following year, they co-authored an important paper on the regulation of the isotype switch by anti-idiotype antibodies. This ground-breaking paper was recognized and cited by Niels K. Jerne in his Nobel Prize acceptance lecture on 8 December 1984. Hematology Purification of hematopoietic stem cells Muller-Sieburg accomplished the separation of whole bone marrow into two fractions, the adherent and non-adherent fractions, and demonstrated that the latter fraction was the one that comprised B cell precursors. She found that it was not the B220-positive fraction that contained B-cell precursors, as had been expected, but the B220-negative fraction. She confirmed that B220+ cells were too late in the lineage to make B cells, let alone T cells and myeloid cell types. Importantly, this B220-negative population was enriched for cells that were capable of reconstituting all types of blood cells for life when transplanted into lethally irradiated hosts ("complete repopulation capacity"). Complete repopulation capacity is the property which distinguishes hematopoietic stem cells from all other blood cell types. For their work on hematopoietic stem cell purification, Muller-Sieburg and collaborators were awarded a United States patent. Genetic control of stem cell frequency Muller-Sieburg was one of the first to recognize the need to maintain HSC multi-lineage and self-renewal potentials while propagating HSCs in vitro. A sequence of publications in the 1990s established Muller-Sieburg as a pioneer of stromal-stem cell culture methodology. In the course of this work, Muller-Sieburg noticed that the frequency of HSCs – a measure of proliferative capacity – is under genetic control. In a 1996 landmark study, she and collaborators reported the discovery of the hematopoietic stem cell frequency gene on chromosome 1 in the murine system, which they named Scfr1 (stem cell frequency regulator 1).
In a follow-up study in 2000, Muller-Sieburg and co-workers showed that the genetic control of HSC frequency is mostly cell-autonomous. By 2008, Scfr1 had become integrated into the group of genes and gene networks that specify "stemness" and cell fate decisions. Heterogeneity of the hematopoietic stem cell population For the last 15 years of her life, Muller-Sieburg worked on the clonal fabric of hematopoiesis, making pioneering contributions to the foundations and practice of the science of blood. Based on her 1996 studies of the heterogeneity of the hematopoietic microenvironment, Muller-Sieburg increasingly doubted the then pervasive belief that "all stem cells are created equal", a view that, if true, would imply that blood is mono-clonal. To gain clarity, she followed the kinetics of individual HSCs and showed that blood generated by one individual hematopoietic stem cell differs significantly from the blood generated by another individual HSC in (a) the lifespan of the underlying stem cell population and (b) the composition of blood cell types relative to each other. Her discovery demonstrated that, in fact, the opposite of the dogmatic view of stem cell homogeneity is the case. Namely, she showed that whole blood is the poly-clonal mixture of the hematopoietic systems generated and maintained by the individual stem cells actively functioning during any given period of time. These results, showing that whole blood is composed of many individual "bloods", were obtained by single-cell experiments using limiting dilution for cell sorting and serial transplantation. In this approach, an initial transplant containing one hematopoietic stem cell extracted from lineage-negative (Lin-) blood cells is used to rescue a lethally irradiated host, which then carries mono-clonal blood. The results from these serial transplantation experiments, lasting from seven months up to five years, led Muller-Sieburg to quantitatively analyze sets of stem cell kinetics with H. Sieburg. These analyses led to the discovery of quantitative determinants of clonal heterogeneity and the confirmation of Muller-Sieburg's conjecture that specific purification methods might restrict the repertoire of purified HSCs, emphasizing that caution must be taken before generalizing experimental results from a specific set of HSCs to all HSCs. This work laid the clonal foundations of modern hematology. Quantitative determinants of clonal heterogeneity Based on her experimental data, Muller-Sieburg suggested replacing the dogmatic view of the homogeneity of the stem cell population with the new concept of clonal diversity within the population of hematopoietic stem cells. She showed that the heterogeneity of the differentiation potential of adult hematopoietic stem cells is epigenetically fixed before birth and that no new heterogeneity of differentiation potential is introduced by self-renewal in postnatal hematopoiesis. Muller-Sieburg showed definitively that, therefore, an organism's blood is the mixture of blood cells contributed by distinct hematopoietic stem cell clones during the organism's lifetime. The process of blood formation (hematopoiesis) acts on the fixed repertoire of heterogeneous stem cell clones. Clonal lifespan According to the dogmatic view of stem cell homogeneity, the lifespan of individual HSCs (defined as the time period for which an HSC can divide without differentiation) was assumed to be approximately the same. However, Muller-Sieburg's experiments demonstrated that the longevity of hematopoietic stem cell clones differed dramatically.
Specifically, she showed that clonal bloods became deficient in one or more cell types – a definitive observable of the extinction of their clone-maintaining stem cell population – after significantly different lengths of time. Some of these clone-maintaining hematopoietic stem cells survived multiple sequential in vitro-in vivo transplantations, exceeding the normal life expectancy of the host several times over. These results allowed Muller-Sieburg to establish the clonal lifespan as a quantitative measure of the reliability of self-renewal capacity. At the same time, consistent with clonal heterogeneity, she showed that the differentiation capacity of individual HSCs is (a) limited and (b) dependent on the clone founder. Therefore, Muller-Sieburg also established the variability in differentiation capacity as a quantitative measure of clonal heterogeneity and clonal lifespan. Furthermore, Muller-Sieburg's clonal experiments showed that the life of a hematopoietic stem cell (clone) is highly dependent on the initial conditions given by the epigenetically fixed differentiation and self-renewal capacities of each clone-founding HSC. Lineage bias Muller-Sieburg showed that murine hematopoietic stem cells form a heterogeneous cell population with respect to their differentiation and proliferation behaviors. As a consequence of this clonal heterogeneity principle, whole blood represents a mixture of "bloods" originating from many active stem cell clones. Within each clonal blood, all HSCs form a homogeneous core population whose members have the same lifespan and carry the memory of the differentiation and self-renewal capacities of the founder HSC. By comparing the intra-clonal kinetics of the leukocyte sub-populations, Muller-Sieburg showed that all hematopoietic stem cells belong to, and stay for life in, one of three classes of repopulation kinetics: myeloid-biased (My-bi), balanced, or lymphoid-biased (Ly-bi). Thus, an unexpected organization of HSC differentiation behaviors was discovered, leading to the principle of lineage bias established by Muller-Sieburg in collaboration with Hans Sieburg. Deterministic regulation of hematopoiesis Most theories of hematopoiesis assume that self-renewal and differentiation of hematopoietic stem cells (HSCs) are randomly regulated by intrinsic and environmental influences. In opposition to this "stochastic" view, Muller-Sieburg showed that random regulation is incompatible with the evidence of clonal hematopoiesis involving the heterogeneous core populations of HSCs. Specifically, her data argue that self-renewal does not contribute to the heterogeneity of the adult HSC compartment but, rather, that all HSCs in a clone follow a predetermined fate, consistent with the generation-age hypothesis. By extension, the self-renewal and differentiation behavior of HSCs in adult bone marrow is more predetermined than stochastic. Almost a decade later, in a review paper, Timm Schroeder summarized these essential findings in the succinct phrase "subtypes, not unpredictable behavior". The dependence on epigenetically determined initial conditions placed hematopoiesis mathematically into the category of chaotic systems with deterministic evolution. This view was supported by Muller-Sieburg's finding, in collaboration with H. Sieburg, that the clonal lifespan of HSCs can be predicted from repopulation kinetics. Muller-Sieburg's experimental work therefore establishes hematopoiesis as a new, highly non-trivial challenge in chaos theory.
A new theory of hematopoietic aging Muller-Sieburg expanded her clonal studies to explore the correspondence between the long-term limit behavior of the hematopoietic process and the longevity of the host organism. Specifically, she wondered about the possible dualism of "aged organism" and "old HSCs". Following her own strict biological definition of HSC aging as intrinsic to the hematopoietic system, she showed that the answer to the dualism problem lies in the long-term dynamics of clonal aging of individual HSCs in the context of the clonal composition of an aging hematopoietic system. The clonal analysis of repopulating HSCs demonstrated that lymphoid-biased (Ly-bi) HSCs are lost earlier compared to the longer-lived myeloid-biased HSCs, which accumulate in the aged organism. Importantly, myeloid-biased (My-bi) HSCs from young and aged sources behave similarly in all aspects tested, indicating that organism aging does not change individual HSCs. Rather, aging (defined as "the totality of observable effects in an entity surviving in the long-term time limit relative to the behavior of the same observables at earlier times") changes the clonal composition of the HSC population, as manifested in the shift in bias classes of HSCs. Specifically, the proportion of myeloid-biased HSCs is increased compared to the proportion of lymphoid-biased HSCs, while the proportion of balanced HSCs is nearly unchanged. This important conclusion may have significant implications for understanding the causes of age-related immune deficiencies. Computational research of hematopoiesis Muller-Sieburg was an early adopter and promoter of the use of abstract mathematics in the field of experimental hematology. In collaboration with Hans Sieburg, this approach proved particularly fruitful in her experimental studies of HSC clonality. For example, the classification of kinetics, the prediction of lifespans from short initial kinetics, and the assessment of the reliability of self-renewal required symbolic computation, reliability theory and functional programming. Muller-Sieburg generously provided data for other modeling studies and engaged in discussions by correspondence of deep principles of modeling hematopoiesis. The important outcomes of Muller-Sieburg's clonal diversity experiments are time series, which are invaluable in computational research addressing one of the central open problems in hematopoiesis research, namely HSC "fate decisions". In vivo, at scales of many millions of cells, "fate decisions" must occur reasonably fast and reliably to uphold all blood functions for extended periods of time. Muller-Sieburg's work showed that hematopoietic "decisions" occur on a largely deterministic basis, which is consistent with the demands for speed and reliability expected for host survival. References External links Christa Muller-Sieburg funding history and related publications, grantome.com Christa Muller-Sieburg award, International Society for Experimental Hematology 1952 births 2013 deaths Women immunologists Women hematologists University of California, San Diego faculty University of Cologne alumni Deaths from cancer in California 21st-century American biologists American medical academics Stem cell researchers American people of German-Jewish descent 20th-century women scientists
Christa Muller-Sieburg
Biology
3,771
13,238,669
https://en.wikipedia.org/wiki/International%20Behavioural%20and%20Neural%20Genetics%20Society
The International Behavioural and Neural Genetics Society (IBANGS) is a learned society that was founded in 1996. The goal of IBANGS is to "promote and facilitate the growth of research in the field of neural behavioral genetics". Profile Mission The IBANGS mission is to promote the field of neurobehavioural genetics by: organizing annual meetings to promote excellence in research on behavioural and neural genetics publishing a scholarly journal, Genes, Brain and Behavior, in collaboration with Wiley-Blackwell Awards Each year IBANGS recognizes top scientists in the field of neurobehavioral genetics with: The IBANGS Distinguished Investigator Award, for distinguished lifetime contributions to behavioral neurogenetics The IBANGS Young Scientist Award, for promising young scientists Travel Awards to attend an IBANGS Annual Meeting, for students, postdocs, and junior faculty, financed by a meeting grant from the National Institute on Alcohol Abuse and Alcoholism A Distinguished Service Award, for exceptional contributions to the field, is given on a more irregular basis and has been awarded only three times: to Benson Ginsburg (2001), Wim Crusio (2011), and John C. Crabbe (2015). History IBANGS was founded in 1996 as the European Behavioural and Neural Genetics Society (EBANGS), with Hans-Peter Lipp as its founding president. The name and scope of EBANGS were changed to "International" at the first meeting of the society in Orléans, France in 1997. IBANGS is a founding member of the Federation of European Neuroscience Societies. The current president is Karla Kaun (2022–2025). References External links Behavioral neuroscience Neuroscience organizations Scientific organizations established in 1996 Behavioural genetics societies International scientific organizations
International Behavioural and Neural Genetics Society
Biology
352
6,397,382
https://en.wikipedia.org/wiki/Cantellated%2024-cells
In four-dimensional geometry, a cantellated 24-cell is a convex uniform 4-polytope, being a cantellation (a 2nd-order truncation) of the regular 24-cell. There are 2 unique degrees of cantellation of the 24-cell, including permutations with truncations. Cantellated 24-cell The cantellated 24-cell or small rhombated icositetrachoron is a uniform 4-polytope. The boundary of the cantellated 24-cell is composed of 24 small rhombicuboctahedral cells, 24 cuboctahedral cells and 96 triangular prisms. Together they have 288 triangular faces, 432 square faces, 864 edges, and 288 vertices. Construction When the cantellation process is applied to the 24-cell, each of the 24 octahedra becomes a small rhombicuboctahedron. In addition, since each edge of an octahedron was previously shared with two other octahedra, the separating edges form the three parallel edges of a triangular prism – 96 triangular prisms, since the 24-cell contains 96 edges. Further, since each vertex was previously shared with 12 faces, each vertex splits into 12 new vertices (24 × 12 = 288). Each group of 12 new vertices forms a cuboctahedron. Coordinates The Cartesian coordinates of the vertices of the cantellated 24-cell having edge length 2 are all permutations of coordinates and sign of: (0, √2, √2, 2+2√2) (1, 1+√2, 1+√2, 1+2√2) The permutations of the second set of coordinates coincide with the vertices of an inscribed runcitruncated tesseract. The dual configuration has all permutations and signs of: (0, 2, 2+√2, 2+√2) (1, 1, 1+√2, 3+√2) A short script verifying the vertex count implied by these coordinates appears after the references below. Structure The 24 small rhombicuboctahedra are joined to each other via their triangular faces, to the cuboctahedra via their axial square faces, and to the triangular prisms via their off-axial square faces. The cuboctahedra are joined to the triangular prisms via their triangular faces. Each triangular prism is joined to two cuboctahedra at its two ends. Cantic snub 24-cell A half-symmetry construction of the cantellated 24-cell, also called a cantic snub 24-cell, has an identical geometry, but its triangular faces are further subdivided. The cantellated 24-cell has 2 positions of triangular faces, in a ratio of 96 to 192, while the cantic snub 24-cell has 3 positions of 96 triangles. The difference can be seen in the vertex figures, with edges representing faces in the 4-polytope: Images Related polytopes The convex hull of two cantellated 24-cells in opposite positions is a nonuniform polychoron composed of 864 cells: 48 cuboctahedra, 144 square antiprisms, 384 octahedra (as triangular antipodiums) and 288 tetrahedra (as tetragonal disphenoids); it has 576 vertices. Its vertex figure is a shape topologically equivalent to a cube with a triangular prism attached to one of its square faces. Cantitruncated 24-cell The cantitruncated 24-cell or great rhombated icositetrachoron is a uniform 4-polytope derived from the 24-cell. It is bounded by 24 truncated cuboctahedra corresponding with the cells of a 24-cell, 24 truncated cubes corresponding with the cells of the dual 24-cell, and 96 triangular prisms corresponding with the edges of the first 24-cell. Coordinates The Cartesian coordinates of a cantitruncated 24-cell having edge length 2 are all permutations of coordinates and sign of: (1, 1+√2, 1+2√2, 3+3√2) (0, 2+√2, 2+2√2, 2+3√2) The dual configuration has coordinates as all permutations and signs of: (1, 1+√2, 1+√2, 5+2√2) (1, 3+√2, 3+√2, 3+2√2) (2, 2+√2, 2+√2, 4+2√2) Projections Related polytopes References T. Gosset: On the Regular and Semi-Regular Figures in Space of n Dimensions, Messenger of Mathematics, Macmillan, 1900 H.S.M.
Coxeter, Regular Polytopes, 3rd edition, Dover, New York, 1973, p. 296, Table I (iii): Regular Polytopes, three regular polytopes in n dimensions (n ≥ 5) Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995 (Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things, 2008 (Chapter 26, p. 409: Hemicubes: 1n1) Norman Johnson, Uniform Polytopes, Manuscript (1991) N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. (1966) x3o4x3o - srico, o3x4x3o - grico Uniform 4-polytopes
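As a consistency check on the coordinate patterns given above for the cantellated 24-cell, the stated count of 288 vertices can be verified by direct enumeration. A minimal Python sketch (an illustration using only the standard library, not part of the source article):

```python
# Enumerate all permutations and sign changes of the two coordinate patterns
# of the cantellated 24-cell (edge length 2) and confirm 288 distinct vertices.
from itertools import permutations, product
from math import sqrt

r2 = sqrt(2)
patterns = [
    (0, r2, r2, 2 + 2 * r2),          # 12 distinct permutations -> 96 vertices
    (1, 1 + r2, 1 + r2, 1 + 2 * r2),  # 12 distinct permutations -> 192 vertices
]

vertices = set()
for pattern in patterns:
    for perm in set(permutations(pattern)):
        for signs in product((1, -1), repeat=4):
            # Round to avoid floating-point duplicates (sign flips on the 0).
            vertices.add(tuple(round(s * x, 9) for s, x in zip(signs, perm)))

print(len(vertices))  # 288, matching the article's vertex count
```

The first pattern contributes 96 vertices (its zero coordinate halves the 16 sign combinations) and the second contributes 192, in agreement with the 96:192 split of triangular-face positions mentioned in the cantic snub discussion.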
Cantellated 24-cells
Physics
1,312
51,410,477
https://en.wikipedia.org/wiki/Women%20in%20Antarctica
Women have been exploring the regions around Antarctica for many centuries. The most celebrated "first" for women in Antarctica came in 1935, when Caroline Mikkelsen became the first woman to set foot on one of Antarctica's islands. Early male explorers, such as Richard Byrd, named areas of Antarctica after wives and female heads of state. As Antarctica moved from a place of exploration and conquest to a scientific frontier, women worked to be included in the sciences. The first countries to have female scientists working in Antarctica were the Soviet Union, South Africa and Argentina. Besides exploring and working as scientists, women have also played supportive roles as wives, fund-raisers, publicists, historians, curators and administrators of organizations and services that support Antarctic operations. Many early women in Antarctica were the wives of explorers. Some women worked with Antarctica from afar, crafting policies for a place they had never seen. Women who wished to have larger roles in Antarctica and on the continent itself had to "overcome gendered assumptions about the ice and surmount bureaucratic inertia". As women began to break into fields in Antarctica, they found that it could be difficult to compete against men who already had the "expeditioner experience" needed for permanent science positions. Women who were qualified for expeditions or jobs in Antarctica were less likely to be selected than men, even after a 1995 study by Jane Mocellin showed that women cope better than men with the Antarctic environment. Historic barriers against inclusion Most early policies and practices, including the construction and creation of Antarctic organizations, were created by men. Women were originally excluded from early exploration in Antarctica based on the opinion that women could not handle the extremes in temperature or crisis situations. Vivian Fuchs, who was in charge of the British Antarctic Survey in the 1960s, believed that women could not carry heavy equipment and that Antarctic facilities were unsuitable for women. The United States believed for many years that the climate of Antarctica was too harsh for women. Antarctica was seen by many men as a place where men could imagine themselves heroic conquerors. In Western culture, frontier territories are often associated with masculinity. Antarctica itself was envisioned by many male explorers as a "virginal woman" or "monstrous feminine body" to be conquered by men. Women were often "invoked in terms of place naming and territorial conquest and later even encouraged to have babies in Antarctica." The use of women in territorial conquest was literal in Argentina's case: the country flew pregnant women to Antarctica to give birth and stake a national claim to the area. Silvia Morella de Palma was the first woman to give birth in Antarctica, delivering Emilio Palma at the Argentine Esperanza base on 7 January 1978. Men enjoyed having a space that was free of women and which, in the late 1940s, "allowed them to continue the kind of male companionship and adventure they had enjoyed during the Second World War." In one news article about Antarctica written in 1958, the writer described the use of dazzlement: "On the womanless continent, the purpose of the dazzlement is not to catch the eye of a flirtatious blonde, but to attract spotters in the event that the explorers become lost in the frozen waste." Men's space in Antarctica resisted change.
In the 1980s, there was an attempt by men to memorialize the "Sistine ceiling" of the Weddell hut in Antarctica as an Australian national heritage site of "high significance." The "Sistine ceiling" was covered in 92 different pinups of women from the 1970s and 1980s. This represented a "male's only club" in which participants believed women would spoil the "purity of a homosocial work, and play, environment." In 1983, the San Bernardino County Sun newspaper published an article about Antarctica stating that it "is still one of the last macho redoubts, where men are men and women are superfluous." One scientist, Lyle McGinnis, who had been going to Antarctica since 1957, resented women in the field, saying that "men never grouse." He believed that women complained and needed "comfort." Not all men felt that way. Other men felt that women's presence made life in Antarctica better, and one male engineer stated that without women around, "men are pigs." Sociologist Charles Moskos stated that as more women are introduced to a group, there is less aggression and a "more civil culture develops." Many of the careers in Antarctica are in the sciences, and women faced barriers there as well. As women attempted to work in science, arguments drawing on biological determinism, evolutionary psychology and popular notions of neurobiology were used as explanations for why there were fewer women in the sciences. These arguments held that "women are ill-adapted on evolutionary grounds for science and the competitive environment of the laboratory." Some women described feeling that they were "a bit of a joke" working in Antarctica and felt that men regarded them as incapable. Antarctic exploration and science research was often facilitated by national navies, which often did not want women on their ships. The United States Navy cited the claim that "sanitation facilities were too primitive" in Antarctica as an excuse to bar women. The U.S. Navy also considered Antarctica a "male-only bastion." Admiral George Dufek said in 1956 that "women would join American Teams in the Antarctic over his dead body." He also believed that women's presence in Antarctica "would wreck men's illusions of being heroes and frontiersmen." Military groups were also worried about "sexual misconduct." Change was slow as women began to try to become part of Antarctic exploration and research. An article in The Daily Herald newspaper of Chicago in 1974 described women finally coming to Antarctica as integrating the "land with a definite feminine touch." The article described women's perfumed smells, ways of entertaining guests in Antarctica and the "dainty feet" of Caroline Mikkelsen. Eventually, both the "presence and impact of female Antarctic researchers has increased rapidly." Early women involved in Antarctica Oral records from Oceania indicate that women explorers may have traveled to the Antarctic regions, as the male explorers Ui-te-Rangiora (around 650 CE) and Te Ara-tanga-nuku (1000 CE) did, but this is unconfirmed. The first Western woman to visit the Antarctic region was Louise Séguin, who sailed on the Roland with Yves Joseph de Kerguelen in 1773. The oldest known human remains in Antarctica are a skull that belonged to a young Indigenous Chilean woman, found on Yamana Beach in the South Shetland Islands and dating to between 1819 and 1825. Her remains were found by the Chilean Antarctic Institute in 1985. In the early twentieth century, women were interested in going to Antarctica.
When Ernest Shackleton advertised his 1914 Antarctic expedition, three women wrote to him requesting to join. The women never became part of the journey. In 1919, newspapers reported that women wanted to go to Antarctica, writing that "several women were anxious to join, but their applications were refused." Later, in 1929, twenty-five women applied to the British, Australian and New Zealand Antarctic Research Expedition (BANZARE). They were also rejected. When a privately funded British Antarctic Expedition was proposed in 1937, 1,300 women applied to join. None of those 1,300 were accepted. After three years of attempts at funding, the expedition was cancelled at the onset of the Second World War. The wives of explorers who were left behind "endured years of loneliness and anxiety." Women like Kathleen Scott raised money for their husbands' journeys. The first women involved in the exploration of Antarctica were wives and companions of male travelers and explorers. Women accompanied men as "whaling wives" to Antarctic waters. The first women to see the continent of Antarctica were Norwegian Ingrid Christensen and her companion, Mathilde Wegger, both of whom were traveling with Christensen's husband. The first woman to step onto Antarctic land, an island, was Caroline Mikkelsen in 1935. Mikkelsen went ashore only briefly, and was there with her husband. Later, after her husband died, Mikkelsen remarried and did not talk about her experience in Antarctica in order "to spare his feelings." Christensen went back to Antarctica three times after her first glimpse of the land. She eventually landed at the Scullin monolith, becoming the first woman to set foot on the Antarctic mainland. She was followed by her daughter, Augusta Sofie Christensen, and two other women, Lillemor Rachlew and Solveig Widerøe. Because the women believed the landing wasn't an actual "first," they didn't make much of their accomplishment. In 1946 and 1947, Jackie Ronne and Jennie Darlington were the first women to spend a year in Antarctica. When Ronne and Darlington decided to accompany their husbands to Antarctica in 1946, men on the expedition "signed a petition trying to stop it happening." Ronne worked as the mission's "recorder." Ronne and Darlington both wrote about their experiences on the ice and, in the case of Darlington's book, about how conflict between team members also "strained relations between the two women." One of the ways that Darlington tried to fit in with the men of the group was to make herself as "inconspicuous within the group as possible." One man, on first seeing Darlington arrive at the Antarctic base, "fled in fright, thinking that he'd gone mad." Both women, upon returning from Antarctica, downplayed their own roles, letting "their husbands take most of the honour." In 1948, the British diplomat Margaret Anstee was involved in the Falkland Islands Dependency Survey (FIDS) and helped make policy for the program. Further exploration and science Women scientists first began researching Antarctica from ships. The first woman scientist, Maria V. Klenova of the Soviet Union, worked on the ships Ob and Lena just off the Antarctic coastline from 1955 to 1956. Klenova's work helped create the first Antarctic atlas. Women served on Soviet Union ships going to Antarctica after 1963.
The first women to visit a US station, and the first to fly to Antarctica, were Pat Hepinstall and Ruth Kelley, Pan Am flight attendants who spent four hours on the ground at McMurdo Station on 15 October 1957. Often, women going to Antarctica had to be approved in both official and unofficial ways. An early candidate for becoming one of the first women scientists to go to Antarctica was geologist Dawn Rodley. She had been approved not only by the expedition sponsor, Colin Bull, but also by the wives of the male team members. Rodley was set to go in 1958, but the United States Navy, which was in charge of Operation Deep Freeze, refused to take her to Antarctica. The Navy later decided that sending a four-woman team would be acceptable, and Bull began to build a team including Lois Jones, Kay Lindsay, Eileen McSaveney and Terry Tickhill. These four women were part of the group who became the first women to visit the South Pole. Jones's team worked mainly in Wright Valley. After their return, Bull found that several of his male friends resented the addition of women and even called him a "traitor". The first United States all-female team was led by Jones in 1969. Her team, which included the first women to set foot on the South Pole, was used by the Navy as a publicity stunt. They were "paraded around" and called "Powderpuff explorers". The first United States woman to work in the Antarctic interior, in 1970, was engineer Irene C. Peden, who also faced various barriers to her working on the continent. Peden described how a "mythology had been created about the women who'd gone to the coast – that they had been a problem," and that since they had not published their work within the year, they were "heavily criticized." Men in the Navy in charge of approving her trip to Antarctica were "dragging their feet", citing the lack of available women's bathrooms and the rule that without another female companion, she would not be allowed to go. The admiral in charge of transportation to Antarctica suggested that Peden was trying to go there for adventure, or to find a husband, rather than for her research. Despite her setbacks, including not receiving critical equipment in Antarctica, Peden's research on the continent was successful. The first two U.S. women to winter at a U.S. Antarctic research station were Mary Alice McWhinnie and Mary Odile Cahoon. Mary Alice was the station science leader (chief scientist) at McMurdo Station in 1974, and Mary Odile was a nun and biologist. United States women in 1978 were still using equipment and arctic clothing designed for men, although "officials said that problem is being quickly remedied." American Ann Peoples became the manager of the Berg Field Center in 1986, becoming the first woman to serve in a "significant leadership role". British women had similar problems to the Americans. The director of the British Antarctic Survey (BAS) from 1959 to 1973 was Vivian Fuchs, who "firmly believed that the inclusion of women would disrupt the harmony and scientific productivity of Antarctic stations." British women scientists started working on curating collections as part of the BAS prior to being allowed to visit Antarctica. Women who applied to the BAS were discouraged. A letter from BAS personnel sent to a woman who applied in the 1960s read, "Women wouldn't like it in Antarctica as there are no shops and no hairdresser." The first BAS woman to go to Antarctica was Janet Thomson, in 1983, who described the ban on women as a "rather improper segregation."
Women were still effectively barred from using UK bases and logistics in 1987. Women did not overwinter at the Halley Research Station until 1996, forty years after the British station was established. Argentina sent four women scientists, biologist Irene Bernasconi, bacteriologist María Adela Caría, biologist Elena Martinez Fontes and algae expert Carmen Pujals, to Antarctica in 1968. They were the first group of female scientists to conduct research in Antarctica. Bernasconi was the first woman to lead an Antarctic expedition; she was aged 72 at the time. Later, in 1978, Argentina sent a pregnant woman, Silvia Morella de Palma, to the Esperanza Base to give birth and to "use the baby to stake [their] territorial claims" to Antarctica. Once Australia opened up travel to Antarctica for women, Elizabeth Chipman, who first worked as a typist at Casey Station in 1976, chronicled all of the women who had traveled there up to 1984. Chipman worked to find the names of all women who had ever been to or even near Antarctica and eventually donated 19 folio boxes of her research to the National Library of Australia. Women gain ground The National Science Foundation (NSF) started long-range planning in 1978, looking towards facilities that could accommodate a population made up of 25% women. In the 1979–1980 season, there were only 43 women on the continent. By 1981, there was nearly one woman for every ten men in Antarctica. In 1983, the ratio was back to 20 men for every woman. In the 1980s, Susan Solomon's research in Antarctica on the ozone layer and the "ozone hole" brought her "fame and acclaim." In Spain, Josefina Castellví helped coordinate and also participated in her country's expedition to Antarctica in 1984. Later, after a Spanish base was constructed in 1988, Castellví was put in charge when the leader, Antoni Ballester, had a stroke. The first female station leader in Antarctica was an Australian, Diana Patterson, head of Mawson Station in 1989. The first woman station leader in charge of an American Antarctic station was LT Trina Baldwin, CEC, USN (Civil Engineer Corps, United States Navy). The first all-female overwintering group was from Germany and spent the 1990–1991 winter at Georg von Neumayer. The first German female station leader, also a medical doctor, was Monika Puskeppeleit. In 1991, In-Young Ahn became the first female leader of an Asian research station (King Sejong Station) and the first South Korean woman to step onto Antarctica. There were approximately 180 women in Antarctica during the 1990–1991 season. Women from several different countries were regular members of overwintering teams by 1992. The first all-women expedition reached the South Pole in 1993. Diana Patterson, the first female station leader in Antarctica, saw change coming in 1995. She felt that many of the sexist views of the past had given way, so that women were judged not by the fact that they were women but "by how well you did your job." During the 1994 austral winter, women managed all three of the American Antarctic stations: Janet Phillips at Amundsen-Scott South Pole Station, Karen Schwall at McMurdo Station and Ann Peoples at Palmer Station. The social scientist Robin Burns studied the social structures of Antarctica in the 1995–1996 season. She found that, while many earlier women had struggled, there was more acceptance of women in Antarctica by that season.
One of the station managers, Ann Peoples, felt that a tipping point had been reached during the 1990s and that life for women in Antarctica had become more normal. There were still men in Antarctica who were not afraid to voice their opinion that women should not "be on the ice," but many others enjoyed having "women as colleagues and friends." Women around this time began to feel that it was "taken for granted now that women go to the Antarctic." Studies done in the early 2000s showed that women's inclusion in Antarctic groups was beneficial overall. In the early 2000s, Robin Burns found that the female scientists who enjoyed their experience in Antarctica were the ones who were able to finish their scientific work and complete their projects. Recent history American Lynne Cox swam a mile in Antarctic water in 2003. In 2005, writer Gretchen Legler described how there were many more women in Antarctica that year and that some were lesbians. International Women's Day in 2012 saw more than fifty women celebrating in Antarctica; women made up 70% of the International Antarctic Expedition that year. In 2013, when the Netherlands opened its first Antarctic lab, Corina Brussaard was there to help set it up. Homeward Bound was a 10-year program designed to encourage women's participation in science, and planned to send the first large (78-member) all-women expedition to Antarctica in 2016. The first group, consisting of 76 women, arrived in Antarctica for three weeks in December 2016. Fabian Dattner and Jess Melbourne-Thomas founded the project, and the Dattner Grant provided funding. Each participant contributed $15,000 to the project. Homeward Bound included businesswomen and scientists who looked at climate change and women's leadership. The plan was to create a network of 1,000 women who would become leaders in the sciences. The first voyage departed South America in December 2016. An all-woman team of United Kingdom Army soldiers, called Exercise Ice Maiden, started recruiting members in 2015 to cross the continent under their own power in 2017; it was intended to study women's performance in the extreme Antarctic summer environment. A team of six women completed the journey in 62 days after starting on 20 November 2017. Currently, women make up 55% of membership in the Association of Polar Early Career Scientists (APECS). In 2016, nearly a third of all researchers at the South Pole were women. The Australian Antarctic Program (AAP) makes a "conscious effort to recruit women." A social media network, "Women in Polar Science", has recently been created. It aims to connect women working in the Arctic and Antarctic sciences and provides them with a platform to share and exchange knowledge, experiences and opportunities. Sexual harassment and sexism When heavy equipment operator Julia Uberuaga first went to Antarctica in the late 1970s and early 1980s, she recalled that "the men stared at her, or leered at her, or otherwise let her know she was unwelcome on the job." Rita Matthews, who went to Antarctica during the same period, said that the "men were all over the place. There were some that would never stop going after you." In 1983, Marilyn Woody described living at McMurdo Station and said, "It makes your head spin, all this attention from all these men." Then she said, "You realize you can put a bag over your head and they'll still fall in love with you." 
Another scientist, Cynthia McFee, had been completely shut out of the "male camaraderie" at her location and had to deal with loneliness for long periods of time. Martha Kane, the second woman to overwinter at the South Pole, experienced "negative pressure" from men, with "some viewing her as an interloper who had insinuated herself into a male domain." In the 1990s, some women experienced stigma in Antarctica: women were labeled "whores" for interacting with men, and those who did not interact with men were called "dykes." In the late 1990s and early 2000s, women felt that Antarctic operations were "not at all sympathetic to the needs of mothers and that there is a deep concern lest a pregnant woman give birth in Antarctica." Sexual harassment is still a problem for women working in Antarctica, with many women scientists repeatedly fielding unwanted sexual advances. Women continue to be outnumbered in many careers in Antarctica, including fleet operations and trades. Some organizations, such as the Australian Antarctic Division, have created and adopted policies to combat sexual harassment and discrimination based on gender. The United States Antarctic Program (USAP) encourages women and minorities to apply. Women record-breakers Silvia Morella de Palma was the first woman to give birth in Antarctica, delivering Emilio Palma at the Argentine Esperanza base on 7 January 1978. In 1988, American Lisa Densmore became the first woman to reach the summit of Mount Vinson. In 1993, American Ann Bancroft led the first all-woman expedition to the South Pole. Bancroft and Norwegian Liv Arnesen became the first women to ski across Antarctica in 2001. In 2010, Lt Col Laura Adelia of the U.S. Air Force became the first female chaplain to serve on the continent of Antarctica, serving the people at McMurdo Station. Maria Leijerstam became the first person to cycle to the South Pole from the edge of the continent in 2013, riding a recumbent tricycle. Anja Blacha set the record for the longest solo, unsupported, unassisted polar expedition by a woman in 2020. Honors and awards In 1975, Eleanor Honnywill became the first woman to be awarded the Fuchs Medal from the British Antarctic Survey (BAS). The first woman to receive a Polar Medal was Virginia Fiennes, in 1986. She was honored for her work in the Transglobe Expedition. She was also the first woman to "winter in both polar regions." Denise Allen was the first woman awarded the Australian Antarctic Medal in 1989. See also Arctic exploration European and American voyages of scientific exploration Farthest South First women to fly to Antarctica Heroic Age of Antarctic Exploration History of Antarctica List of Antarctic women List of polar explorers Timeline of women in Antarctica Women in science Women in space References Citations Sources External links Women in Antarctica Guide to the Papers of Elizabeth Chipman Women in Antarctic science editathons Scientific Committee on Antarctic Research Women in Red Women scientists People of Antarctica
Women in Antarctica
Technology
4,820
40,643,899
https://en.wikipedia.org/wiki/Neuroanatomy%20of%20intimacy
Even though intimacy has been broadly defined in terms of romantic love and sexual desire, the neuroanatomy of intimacy needs further explanation in order to fully understand the neurological functions of the different components of intimate relationships: romantic love, lust, attachment, and rejection in love. Known functions of the neuroanatomy involved can also be applied to observations of people experiencing any of the stages of intimacy. Research on these systems provides insight into the biological basis of intimacy, but the neurological aspect must also be considered in areas that require special attention to mitigate problems in intimacy, such as violence against a beloved partner or difficulties with social bonding. Components of intimacy and neuroanatomy Attachment Pair bonding, or intense social attachment, normally initiates partner preference in sexual situations and monogamy in many mammalian species. Monogamous species generally exhibit an exclusive responsibility to each other as well as co-parenting of their offspring. Studies using monogamous prairie voles (Microtus ochrogaster) showed that forming a pair bond stimulated the mesolimbic dopaminergic pathway. In this pathway, dopamine is released from the ventral tegmental area (VTA) to the nucleus accumbens and prefrontal cortex, which then signal the ventral pallidum to complete reward processing in the pathway. Two important neuropeptides that mediated pair bond formation were oxytocin and arginine vasopressin (AVP). Even though both males and females have both molecules, oxytocin was shown to act predominantly in females, while vasopressin predominantly promoted pair bonding in males. Receptor specificity was shown to be essential for mating, through activation of the dopamine D2 receptors in the nucleus accumbens in both male and female prairie voles. Other locations that were also activated in the study were gender specific, such as oxytocin receptors (OTR) in the prefrontal cortex and AVP 1a receptors (V1aR) in the ventral pallidum. Romantic love Romantic love is described as involving an individual who pays closer attention to another individual in special ways, focusing on traits worthy of pursuit. Through functional magnetic resonance imaging (fMRI), studies have shown that the right ventral tegmental area (VTA) is stimulated when subjects are shown a picture of their beloved. As part of the reward mechanism, the VTA signals other parts of the brain, such as the caudate nucleus, to release dopamine for reward. Older studies have generally attributed love to the limbic system, consisting of the temporal lobes, hypothalamus, amygdala, and hippocampus. These functional components of the limbic system are important in emotional processing, motivation, and memory. Specifically, current research also suggests components such as the hypothalamus as playing a role in romantic love, because it drives bonding in mammals by secreting the neuropeptides oxytocin and vasopressin. Other research has implicated nerve growth factor (NGF), a neurotrophin fundamental to neuron survival and development in the nervous system, in early-stage romantic love in subjects experiencing euphoria and emotional dependency, which are often characteristic of romantic love. Lust Lust, also known as libido, is defined as the pursuit of sexual gratification. It is primarily driven by the endocrine system, but the brain is also involved in neural processing. 
Specifically, the hypothalamic–pituitary–gonadal (HPG) and hypothalamic–pituitary–adrenal (HPA) axes play primary roles in the priming for sex and the stress response, respectively. Because intimacy is motivated by the reward system, steroid hormones activate desire to promote partner preference and social attachment in the process of sexual union. Dopamine is then released when an individual is aroused, which marks lust as a product of the dopaminergic reward system. However, sex and romantic love do not have the same goal orientation, which helps to confirm the difference in brain activation patterns. In contrast with the primary goal of romantic love, copulation can occur without two individuals being in romantic love or having a monogamous bond. Sometimes, copulation might not even occur in romantic love relationships. However, it still plays a role in successful reproduction when it is supplemented with romantic love. Rejection in love Rejection in love is considered unrequited or unreciprocated love. Separation from a loved one can cause grief and sometimes lead to an individual expressing characteristics of depression. In a study, symptoms seen in nine women who had experienced a recent breakup suggested involvement of certain neuroanatomy. In these women, eating, sleeping, and neuroendocrine regulation were associated with the hypothalamus, anhedonia with the ventral striatum, and emotional processing with the amygdala. Other neuroanatomy that registered unrequited love included the cerebellum, insular cortex, anterior cingulate cortex, and prefrontal cortex. All of the areas that were activated showed decreased activity when subjects emotionally reflected on the beloved rejecter. In contrast, another study observed significant increases in activation in the VTA as well as the nucleus accumbens. Further, those rejected in love had higher stimulation in the right nucleus accumbens and ventral putamen/pallidum compared to subjects who were in romantic love. This study ultimately showed that areas that are activated in romantic love are also activated in rejection in love. Results from this study suggest that rejected lovers have the same stimulation of brain regions because they are still "in love" with their rejecters. Since romantic love follows the dopaminergic reward system, the anticipatory nature of receiving a reward, as well as weighing losses and gains in decision making, allows the neural circuitry to become adaptable. This allows the rejected to change their behavior through two stages. The first is the "protest" stage, where they try to win back the rejecter. The second, or "rejection", stage is where they feel resignation and despair, eventually leading to continuing life without the rejecter. On the other hand, the involvement of the reward gain/loss pathways intrinsic to survival provides insight into behaviors such as stalking, suicide, obsessiveness and depression. Other neurological implications of intimate brain systems Mother–child pair bond An attachment between a mother and child arises from behavioral changes during birth, including lactation. Release of oxytocin is important in the birthing process for the mother–child pair bond to occur in both individuals. Lactation relies on the constant release of oxytocin for the release of milk in the breast, which strengthens the first social bond of the infant and the mother. 
Although this is considered another type of social attachment that activates the same reward system, maternal attachment activates different regions of the brain compared to those in partner attachment. In one study, the overlap of activated brain regions with romantic love was found to include the nucleus accumbens, putamen, and caudate nucleus, which are important in social attachment. However, the only regions that were specific to maternal love were the orbitofrontal and lateral prefrontal cortex as well as the occipital and lateral fusiform cortex. Moreover, oxytocin is important between the mother and her offspring, so it is suggested that oxytocin deficiency can influence how successfully the offspring is able to form a monogamous pair bond with another individual in the future. This may provide insight into problems with the formation of pair bonds as well as psychological problems arising from an inefficient upbringing. Addictiveness Love activates the same neural circuitry as maladaptive drugs, such as cocaine. Dopaminergic reward pathways are involved in eliciting a response of reward gain and reinforcement, thereby leading some researchers to believe that love is addictive. Love and drugs of abuse stimulate similar levels of dopamine for reward and reinforcement from the VTA. Behaviors in the two mental states are very similar, with those in love experiencing the excessive exhilaration, insomnia, anxiety, and loss of appetite also seen in drug users. Also, brain activity observed through single-photon emission computed tomography (SPECT) showed that dopamine release in the basal ganglia of a subject who was romantically in love appeared similar to that of a subject addicted to cocaine. Although love is suggested to be addictive based on its neurological circuitry, it cannot be reduced to an addiction because it is expressed in different ways across a wide spectrum. Gender differences in the intimate brain Emotional processing The amygdala, a key player in emotional processing, is suggested to differ between men and women. In males, emotions are considered to be primarily directed from the right hemisphere; in females, they are primarily directed from the left hemisphere. One study that tested positively and negatively valenced words on both male and female subjects found that emotional processing was indeed gender specific. In males, positively valenced words activated the left sensorimotor cortex, angular gyrus, left hippocampus, left frontal eye field and the right cerebellum, while females had activations in the right putamen, right superior temporal gyrus, left supramarginal gyrus, left inferior frontal gyrus and the left sensorimotor cortex. By contrast, negatively valenced words stimulated greater activation in the right supramarginal gyrus in males, while females showed greater activation in the left hippocampus. Therefore, different brain regions in males and females could underlie differential responses in emotional processing in intimate situations. Jealousy Known as the insecure feeling of a partner with regard to losing their loved one to another, jealousy can result in extreme situations such as violence and abuse from the insecure partner toward their beloved. In one study, men and women were shown sentences that suggested sexual and emotional infidelity and rated the intensity of jealousy that they felt. 
Sexual infidelity In males, statements suggesting sexual infidelity induced activation in brain areas including the visual cortex, middle temporal gyrus, amygdala, hippocampus, and claustrum. In females, the visual cortex, middle frontal gyrus, thalamus, and cerebellum were activated. It was found that males showed more stimulation in the amygdala in response to sexual infidelity, while females showed greater activation in the visual cortex and thalamus. The regions in the male brain provided insight into neuroanatomy associated with sexual and aggressive behavior. These regions could be studied further in cases of violence against partners, which are commonly due to male aggression. Emotional infidelity In males, the visual cortex, medial frontal gyrus, middle frontal gyrus, precentral gyrus, cingulate cortex, insula, hippocampus, thalamus, caudate, hypothalamus, and cerebellum were shown to be activated. In females, activations in the visual cortex, medial frontal gyrus, middle frontal gyrus, angular gyrus, thalamus, and cerebellum were noted. Male activations were greater in the precentral gyrus, insula, hippocampus, hypothalamus, and cerebellum, while women showed greater activations in the visual cortex, angular gyrus, and thalamus. Regions in the female brain have been implicated in the detection of intention, deception, and trustworthiness of others. It is ultimately suggested that the different emotional processing in males and females contributes to their different responses to issues in intimate relationships. References External links The Chemistry Between Us: Larry Young at TEDxEmory Intimate relationships Interpersonal relationships Sociobiology Neuroanatomy Love
Neuroanatomy of intimacy
Biology
2,478
23,829,925
https://en.wikipedia.org/wiki/Space%20industry%20of%20Russia
Russia's space industry comprises more than 100 companies and employs 250,000 people. Most of the companies are descendants of Soviet design bureaus and state production companies. The industry entered a deep crisis following the dissolution of the Soviet Union, with its fullest effect occurring in the last years of the 1990s. Funding of the space program declined by 80% and the industry lost a large part of its work force before recovery began in the early 2000s. Many companies survived by creating joint ventures with foreign firms and marketing their products abroad. In the mid-2000s, as part of the general improvement in the economy, funding of the country's space program was substantially increased and an ambitious new federal space plan was introduced, resulting in a great boost to the industry. Its largest company is RKK Energiya, the main crewed space flight contractor. Leading launch vehicle producers are Khrunichev and TsSKB-Progress. The largest satellite developer is ISS Reshetnev, while NPO Lavochkin is the main developer of interplanetary probes. As of 2013, a major reorganization of the Russian space industry is underway, with increased state supervision and involvement of the ostensibly private companies formed in the early 1990s following the dissolution of the Soviet Union. History Post-Soviet adjustments The space industry of the Soviet Union was a formidable, capable and well-funded complex, which scored a number of great successes. Spending on the space program peaked in 1989, when its budget totaled 6.9 billion rubles, amounting to 1.5% of the Soviet Union's gross domestic product. During the perestroika period of the late 1980s, the space program's funding began to decrease, and this was seriously accelerated by the economic hardships of the 1990s. The Russian Federation inherited the major part of the infrastructure and companies of the Soviet program (while others, such as Yuzhnoye Design Bureau, became Ukrainian), but found itself unable to continue the appropriate level of financing. By 1998, the space program's funding had been cut by 80%. To coordinate the country's space activities, on 25 February 1992, the Russian Federal Space Agency was created. During Soviet times, there had been no central agency; instead, the design bureaus had been very powerful. To an extent, this continued during the first years of the agency, which suffered from a lack of authority while the design bureaus fought to survive in the difficult environment. The crisis years In 1993, the most prestigious program of the industry, the Buran space shuttle, was canceled. It had been worked on for 20 years by the industry's best companies, and the cancellation immediately resulted in a 30% reduction in the industry's work force. Some 300,000 people worked in the industry at the end of 1994, down from 400,000 in 1987, and the space program's funding now amounted to just 0.23% of the country's budget. The final phase of the space program's contraction took place during the 1998 Russian financial crisis. Much of the budgeted money never arrived at the companies. The space industry continued to shed work force, and soon only 100,000 people remained. Wages were also cut: for example, at the leading rocket engine producer NPO Energomash, the average monthly salary during this time was 3,000 rubles ($104). 
The space industry's physical infrastructure declined greatly; this was symbolised by the collapse of a hangar roof at the Baikonur Cosmodrome in May 2002, which destroyed the Buran shuttle that had flown the program's one and only flight in 1988. No funds had been available to maintain the shuttle's hangar at Baikonur, and it collapsed onto the shuttle. Foreign partnerships During the crisis years, one of the main ways for the industry's companies to survive was to look for foreign partnerships. In this respect, Khrunichev was particularly successful. On 15 April 1993, Khrunichev created the Lockheed-Khrunichev-Energia joint venture with the American company Lockheed. In 1995, due to the merger of Lockheed and Martin Marietta, it was transformed into International Launch Services (ILS). The joint venture marketed launches on both the Proton and the American Atlas rockets. The United States had given permission for the appearance of the Proton on the international launch market, but introduced a quota to protect the launch market from "Russian dumping." Despite this, the Proton, built by Khrunichev, was successful and by the end of 2000 had earned launch contracts worth over $1.5 billion. Since 1994, the Proton has earned $4.3 billion for the Russian space industry as a whole, and by 2011 this figure was expected to rise to $6 billion. Another successful company was NPO Energomash, whose extremely powerful RD-180 engine was installed on American Atlas V rockets. The rocket's manufacturer, Lockheed Martin, initially bought 101 RD-180 engines from Energomash, earning the company $1 billion in hard currency. New federal space plan In the early 2000s, during Vladimir Putin's presidency, the Russian economy started recovering, growing more each year than in all of the previous decade. The funding outlook for Russia's space program started to look more favourable. In 2001, the development of the GLONASS satellite navigation system was made a government priority with the introduction of a new Federal Targeted Program. The main contractor for GLONASS, NPO PM (later renamed ISS Reshetnev), thus received a boost in its finances. In total, 4.8 billion rubles was allocated for the space program in 2001, of which 1.6 billion was earmarked for GLONASS. By 2004, Russia's space spending had grown to 12 billion rubles. In 2005, a new strategy for the development of the country's space program, titled the Federal Space Plan 2006–2015, was approved. It stipulated the completion of the International Space Station, development of the Angara rocket family, introduction of a new crewed spacecraft and completion of the GLONASS constellation, among other goals. In the mid-2000s, funding of the space program continued to improve substantially, amounting to 21.59 billion rubles in 2005 and rising to 23 billion rubles in 2006. In 2007, 24.4 billion rubles was spent on the civilian space program, while the military space program's budget was 11 billion rubles. The industry also continued to receive very substantial funds from exports and foreign partnerships. 2013 reorganization of the Russian space sector As a result of a series of reliability problems, and following the failure of a July 2013 Proton M launch, a major reorganization of the Russian space industry was undertaken. The United Rocket and Space Corporation was formed as a joint-stock corporation by the government in August 2013 to consolidate the Russian space sector. 
Deputy Prime Minister Dmitry Rogozin said "the failure-prone space sector is so troubled that it needs state supervision to overcome its problems." Three days after the Proton M launch failure, the Russian government announced that "extremely harsh measures" would be taken that would "spell the end of the [Russian] space industry as we know it." Structure of the industry The largest company of Russia's space industry is RKK Energiya. It is the country's main human spaceflight contractor, the lead developer of the Soyuz-TMA and Progress spacecraft, and the Russian end of the International Space Station. It employs around 22,000–30,000 people. Progress State Research and Production Rocket Space Center (TsSKB Progress) is the developer and producer of the famous Soyuz launch vehicle. The Soyuz-FG version is used to launch crewed spacecraft, while the international joint venture Starsem markets commercial satellite launches on the other versions. TsSKB Progress is currently leading the development of a new launcher called Rus-M, which is to replace the Soyuz. Moscow-based Khrunichev State Research and Production Space Center is one of the most commercially successful companies of the space industry. It is the developer of the Proton-M rocket and the Briz-M upper stage. The company's new Angara rocket family is expected to be put into service in 2013. The largest satellite manufacturer in Russia is ISS Reshetnev (formerly called NPO PM). It is the main contractor for the GLONASS satellite navigation program and produces the Ekspress series of communications satellites. The company is located in Zheleznogorsk, Krasnoyarsk Krai, and employs around 6,500 people. The leading rocket engine company is NPO Energomash, designer and producer of the famous RD-180 engine. In electric spacecraft propulsion, OKB Fakel, located in Kaliningrad Oblast, is one of the top companies. NPO Lavochkin is Russia's main planetary probe designer. It is responsible for the high-profile Fobos-Grunt mission, Russia's first attempt at an interplanetary probe since Mars 96. List of main companies Launcher manufacturers Progress Rocket Space Centre: Soyuz-2, Soyuz-2-1v Khrunichev: Proton-M, Angara Engines The Russian propulsion industry has gained extensive experience with all types of rocket engines, in particular with oxygen–hydrocarbon propellants and staged-combustion cycles. NPO Energomash Production Corporation Polyot KBKhA KBKhM Kuznetsov Design Bureau Keldysh Research Center OKB Fakel NIIMash TsNIIMash Proton-PM Voronezh Mechanical Plant Spacecraft Energiya: Soyuz MS (crewed), Progress MS (cargo) Interplanetary missions NPO Lavochkin: Fobos-Grunt Satellite developers ISS Reshetnev: GLONASS, Express NPO Lavochkin: Elektro–L Gazprom Space Systems SPUTNIX DAURIA Aerospace Success Rockets Environmental impact Critics claim that Proton rocket fuel (unsymmetrical dimethylhydrazine, UDMH) and debris created by Russia's space programme are poisoning areas of Russia and Kazakhstan. Clusters of cancers have been found in the Republic of Altai, and residents claim that acid rain falls after some launches. Anatoly Kuzin, deputy director of the Khrunichev State Research and Production Space Center, has denied these claims, saying: "We did special research into the issue. The level of acidity in the atmosphere is not affected by the rocket launches [and] there is no data to prove any link between the illnesses [in Altai] and the influence of rocket fuel components or space activity of any kind". 
See also Aircraft industry of Russia Defence industry of Russia Success Rockets — Russian private space company References Literature Economy of Russia Russia
Space industry of Russia
Astronomy
2,203
32,560,034
https://en.wikipedia.org/wiki/Hexasulfur
Hexasulfur is an inorganic chemical with the chemical formula S6. This allotrope was first prepared by M. R. Engel in 1891 by treating thiosulfate with HCl. Cyclo-S6 is orange-red and forms a rhombohedral crystal. It is called ρ-sulfur, ε-sulfur, Engel's sulfur and Aten's sulfur. Another method of preparation involves the reaction of a polysulfane with sulfur monochloride in dilute diethyl ether solution: H2S4 + S2Cl2 → cyclo-S6 + 2 HCl Nomenclature The name hexasulfur, constructed according to the compositional nomenclature, is the most commonly used and preferred IUPAC name; the compound is also called cyclohexasulfane. It is also the final member of the thiane heterocyclic series, in which every carbon atom is substituted with a sulfur atom; thus the systematic name hexathiane, a valid IUPAC name, is constructed according to the substitutive nomenclature. Another valid IUPAC systematic name, cyclo-hexasulfur, is constructed according to the additive nomenclature. Structure This chemical consists of rings of 6 sulfur atoms. It is thus a simple cyclosulfane and an allotrope of sulfur. Hexasulfur adopts a chair configuration similar to that of cyclohexane, with bond angles of 102.2°. The sulfur atoms are equivalent. References Six-membered rings Allotropes of sulfur
Hexasulfur
Chemistry
303
11,570,270
https://en.wikipedia.org/wiki/Inonotus%20arizonicus
Inonotus arizonicus is a plant pathogen. It is a locally common saprotrophic polypore that induces white rot in sycamore trees in southwestern North America. Host species include Platanus wrightii (Arizona sycamore) and Platanus racemosa (California sycamore). The fruiting bodies, which may be hoof-shaped or form a plate or a stack of plates, can appear on trunks, at the base of living trees, or on stumps or snags. In California this species is generally found south of the San Francisco Bay Area. References Sources External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases arizonicus Fungi described in 1969 Fungus species
Inonotus arizonicus
Biology
150
40,031,725
https://en.wikipedia.org/wiki/Optical%20braille%20recognition
Optical braille recognition is technology to capture and process images of braille characters into natural language characters. It is used to convert braille documents into text for people who cannot read them, and for preservation and reproduction of the documents. History In 1984, a group of researchers at the Delft University of Technology designed a braille reading tablet, in which a reading head with photosensitive cells was moved along a set of rulers to capture braille text line-by-line. In 1988, a group of French researchers at the Lille University of Science and Technology developed an algorithm, called Lectobraille, which converted braille documents into plain text. The system photographed the braille text with a low-resolution CCD camera, and used spatial filtering techniques, median filtering, erosion, and dilation to extract the braille. The braille characters were then converted to natural language using adaptive recognition. The Lectobraille technique had an error rate of 1%, and took an average processing time of seven seconds per line. In 1993, a group of researchers from the Katholieke Universiteit Leuven developed a system to recognize braille that had been scanned with a commercially available scanner. The system, however, was unable to handle deformities in the braille grid, so well-formed braille documents were required. In 1999, a group at the Hong Kong Polytechnic University implemented an optical braille recognition technique using edge detection to translate braille into English or Chinese text. In 2001, Murray and Dais created a handheld recognition system that scanned small sections of a document at a time. Because only a small area was scanned at once, grid deformation was less of an issue, and a simpler, more efficient algorithm was employed. In 2003, Morgavi and Morando designed a system to recognize braille characters using artificial neural networks. This system was noted for its ability to handle image degradation more successfully than other approaches. Challenges Many of the challenges to successfully processing braille text arise from the nature of braille documents. Braille is generally printed on solid-color paper, with no ink to produce contrast between the raised characters and the background paper. However, imperfections in the page can appear in a scan or image of the page. Many documents are printed inter-point, meaning they are double-sided. As such, the depressions of the braille on one side appear interleaved with the protruding braille of the other side. Techniques Some optical braille recognition techniques attempt to use oblique lighting and a camera to reveal the shadows of the depressions and protrusions of the braille. Others make use of commercially available document scanners. See also Optical character recognition References Optical character recognition Applications of computer vision Automatic identification and data capture Computational linguistics Braille technology Machine translation Disability software
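The filtering-and-morphology pipeline described above (median filtering, thresholding, then erosion and dilation, as in Lectobraille-style systems) can be sketched in a few lines of Python. The following is a minimal sketch only, assuming an obliquely lit grayscale image in which dots appear as bright blobs; the function name, kernel size, and area thresholds are illustrative assumptions rather than values from any published system.

import cv2
import numpy as np

def find_braille_dots(image_path, min_area=4, max_area=200):
    # Hypothetical pipeline; parameters would be tuned per scanner/camera.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Median filtering suppresses paper-grain noise before thresholding.
    smooth = cv2.medianBlur(gray, 5)
    # Otsu's method separates dot highlights from the solid-color page.
    _, binary = cv2.threshold(smooth, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Erosion followed by dilation (a morphological opening) removes
    # specks and restores dot size, as in the classic pipelines.
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.dilate(cv2.erode(binary, kernel), kernel)
    # Each remaining connected component of plausible size is a candidate dot.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(opened)
    return [tuple(centroids[i]) for i in range(1, n)
            if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]

Mapping the recovered dot centroids onto 2×3 braille cells is then a separate clustering step, which is where the grid-deformation problems mentioned above arise.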
Optical braille recognition
Technology
567
24,150,850
https://en.wikipedia.org/wiki/C11H13NO3
The molecular formula C11H13NO3 may refer to: Hydrastinine Methylone Streptazolin Toloxatone
C11H13NO3
Chemistry
43
21,305,818
https://en.wikipedia.org/wiki/Donlen
Donlen LLC is an American fleet leasing and management company headquartered in Bannockburn, Illinois, a suburb of Chicago. With offices throughout the U.S. and Canada, the company provides consultation, maintenance, and outsourcing for corporate vehicle fleets. Donlen currently has over 165,000 vehicles under lease and management and over 300 employees. History Donlen Corporation was founded in 1965 by Donald Rappeport and Leonard Vine. In September 2011, the company was acquired by Hertz Global Holdings, Inc. for $250 million in cash and the assumption of $770 million in Donlen fleet debt, and operated as a subsidiary of the Hertz Corporation. On March 30, 2021, Hertz completed a sale of the business to Athene Holding. Business overview Donlen is a fleet management provider focusing on the management and consultation of a company's commercial vehicle fleet, including cars, vans, and trucks. Donlen performs fleet management functions such as vehicle financing, vehicle acquisition, vehicle registration services, and vehicle re-marketing. Partnerships Donlen partnered with the EDF in a strategic alliance to enable commercial and municipal vehicle fleets to monitor and reduce their carbon emissions. In addition, Donlen partnered with the EPA's SmartWay Program to display SmartWay Vehicle Certifications on qualifying vehicles to help drivers and fleet managers choose the best performers. References External links Climbing the mountain: How Gary Rappeport put Donlen Corp. on the ascent by creating a plan to retain top talent DonlenGreenKey.com Donlen Announces Release Of Award-Winning FleetWeb 3.0 Transport operations Companies based in Lake County, Illinois Leasing companies 2011 mergers and acquisitions American companies established in 1965 Transport companies established in 1965 Fleet management Companies that filed for Chapter 11 bankruptcy in 2020
Donlen
Physics
354
5,686,025
https://en.wikipedia.org/wiki/Breast%20ironing
Breast ironing, also known as breast flattening, is the pounding and massaging of a pubescent girl's breasts, using hard or heated objects, to try to make them stop developing or disappear. The practice is typically performed by a female figure close to the victim, traditionally a mother, grandmother, aunt, or female guardian, who will say she is trying to protect the girl from sexual harassment and rape, to prevent early pregnancy that would tarnish the family name, to prevent the spread of sexually transmitted infections such as HIV/AIDS, or to allow the girl to pursue education rather than be forced into early marriage. It is mostly practiced in parts of Cameroon, where boys and men may think that girls whose breasts have begun to grow are ready for sex. Evidence suggests that it has spread to the Cameroonian diaspora, for example to Britain, where the law defines it as child abuse. The most widely used implement for breast ironing is a wooden pestle normally used for pounding tubers. Other tools used include leaves, bananas, coconut shells, grinding stones, ladles, spatulas, and hammers heated over coals. The ironing is generally performed around dusk or dawn in a private area, such as the household kitchen, to prevent others, particularly fathers or other male figures, from seeing the victim or becoming aware of the process. The process may continue for anywhere from one week to several months, depending on the girl's resistance and that of the breast tissue; in cases where the breasts appear to be consistently protruding, the ironing may occur more than once a day for weeks or months at a time. History Breast ironing may be derived from the ancient practice of breast massage. Breast massage aims to help even out different breast sizes and reduce the pain of nursing mothers by massaging the breast with warm objects (see treatment for mastitis). Incidence The breast ironing practice has been documented in Nigeria, Togo, Republic of Guinea, Côte d'Ivoire, Kenya, and Zimbabwe. Additionally, it has been found in other African countries, including Burkina Faso, Central African Republic (CAR), Benin, and Guinea-Conakry. Breast "sweeping" has been reported in South Africa. The practice has become commonly associated with Cameroon as a result of media attention and local activism from human rights groups. All of Cameroon's 200 ethnic groups engage in breast ironing, with no known relation to religion, socio-economic status, or any other identifier. A 2006 survey by the German development agency GIZ of more than 5,000 Cameroonian girls and women between the ages of 10 and 82 estimated that nearly one in four had undergone breast ironing, corresponding to four million girls. The survey also reported that it is most commonly practiced in urban areas, where mothers fear their daughters could be more exposed to sexual abuse. Incidence is 53 percent in Cameroon's southeastern Littoral region. Compared with Cameroon's Christian and animist south, breast ironing is less common in the Muslim north, where only 10 percent of women are affected. Some hypothesize that this is related to the practice of early marriage, which is more common in the north, making early sexual development irrelevant or even preferable. Research suggests that 16% of girls, particularly in the far north regions where child marriages are highly common, try to flatten their own breasts in an attempt to delay early sexual maturity and early marriage. 
A 2007 journal article suggested that social norms in Cameroon result in women lacking bodily autonomy, as Cameroonian women are not socialized to negotiate safer sex practices, while Cameroonian men are encouraged to engage in polygyny and to take concubines. This lack of bodily autonomy contributes to an increased incidence of breast ironing, sexual coercion, and the normalization of early marriage practices. In an interview, one human rights activist stated that parents who resist underage marriages "usually point to the fact that the girl's breasts have not grown, meaning that she is not yet ready for sexual intercourse. For parents who practice child marriage, by ironing the breasts of the prospective bride, they can continue receiving goods and services from their in-laws." A 2008 report suggested that the rise in the incidence of breast ironing is due to the earlier onset of puberty, caused by dietary improvements in Cameroon over the previous 50 years. Half of Cameroonian girls who develop under the age of nine have their breasts ironed, as do 38% of those who develop before eleven. Additionally, since 1976, the percentage of women married by the age of 19 has decreased from nearly 50% to 20%, leading to an increasingly long gap between childhood and marriage. The later age of marriage may be due to changed social norms that allow girls and women to attend school through university and to hold jobs in the formal sector; previously, girls entered married life young, wed to an older man without informed consent. Women who delay marriage in pursuit of education and career are more likely to be financially independent later in life, whereas girls who become pregnant are often forced to drop out of school and forgo formal employment. One of the only full-length reports on breast ironing dates from 2011, when a Cameroonian NGO sponsored by GIZ called it "a harmful traditional practice that has been silenced for too long". There are fears that the practice has spread to the Cameroonian diaspora, for example to Britain. A charity, CAME Women and Girls Development Organisation, is working with London's Metropolitan Police Service and social services departments to raise awareness of breast ironing. Health consequences Breast ironing is extremely painful and can cause tissue damage. To date, there have been no medical studies on its effects. However, medical experts warn that it might contribute toward breast cancer, cysts and depression, and perhaps interfere with breastfeeding later. In addition to this, breast ironing puts girls at risk of abscesses, cysts, infections, and permanent tissue damage, resulting in breast pimples, imbalance in breast size, and milk infection from scarring. In extreme cases of damage, ten diagnoses of breast cancer have so far been reported among women who identified themselves as victims of breast ironing. Other possible side effects reported by GIZ include malformed breasts and the eradication of one or both breasts. The practice ranges dramatically in its severity, from using heated leaves to press and massage the breasts, to using a scalding grinding stone to crush the budding gland. Due to this variation, health consequences vary from benign to acute. The Child Rights Information Network (CRIN) reports delayed development of breast milk after giving birth, endangering the lives of newborns. Breast ironing can cause women to fear sexual activity. Men have said that breast loss detracts from women's sexual experiences, although this has not been corroborated by women. 
Many women also suffer mental trauma after undergoing breast ironing. Victims feel as if it is punishment, often internalise blame, and fear breastfeeding in the future. Opposition As well as being dangerous, breast ironing is criticised as being ineffective for stopping early sex and pregnancy. GIZ (then called "GTZ") and the Network of Aunties (RENATA), a Cameroonian non-governmental organization that supports young mothers, campaign against breast ironing, and are supported by the Ministry for the Promotion of Women and the Family. Some have also advocated a law against the practice; however, no such law has been passed. Some consider the practice to be an emerging human rights issue, recognized as an act of gender-based violence, as breast ironing affects women and girls regardless of race, class, religion, socioeconomic background, or age. In 2000, the United Nations (UN) identified breast ironing as one of five overlooked crimes against women involving intersecting forms of discrimination. According to one Cameroonian lawyer, if a medical doctor determines that damage has been caused to the breasts, the perpetrator can be punished by up to three years in prison, provided the matter is reported within a few months. However, it is unclear whether such a law exists, as there are no recorded instances of legal enforcement. The GIZ survey found that in 2006, 39 percent of Cameroonian women opposed breast ironing, with 41 percent expressing support and 26 percent indifferent. Reuters reported in 2014 that nationwide campaigning against the practice had helped reduce the rate of breast ironing by 50 percent in the country. See also Breast reduction Breast binding Female genital mutilation Mastectomy Amazons Thelarche, the stage of pubertal development at which breast buds appear Precocious puberty References External links Breast ironing in the UK – BBC, 2019 Plastic Dream – photographic work and writing of testimonies by Gildas Paré Abuse Body modification Breast Culture of Cameroon Children's rights Violence against women in Cameroon Women's rights in Cameroon Child abuse in Africa Violence against children in Africa Children's rights in Africa Gender-related violence Child sexual abuse Sexual violence in Africa
Breast ironing
Biology
1,842
20,280,494
https://en.wikipedia.org/wiki/Cyclopropenylidene
Cyclopropenylidene, or c-C3H2, is a partially aromatic molecule belonging to a highly reactive class of organic molecules known as carbenes. On Earth, cyclopropenylidene is only seen in the laboratory due to its reactivity. However, cyclopropenylidene is found in significant concentrations in the interstellar medium (ISM) and on Saturn's moon Titan. Its C2v symmetric isomer, propadienylidene (CCCH2), is also found in the ISM, but with abundances about an order of magnitude lower. A third, C2 symmetric isomer, propargylene (HCCCH), has not yet been detected in the ISM, most likely due to its low dipole moment. History The astronomical detection of c-C3H2 was first confirmed in 1985. Four years earlier, several ambiguous lines had been observed in the radio region of spectra taken of the ISM, but the observed lines were not identified at the time. These lines were later matched with a spectrum of c-C3H2 using an acetylene-helium discharge. Surprisingly, c-C3H2 has been found to be ubiquitous in the ISM. Detections of c-C3H2 in the diffuse medium were particularly surprising because of the low densities. It had been believed that the chemistry of the diffuse medium did not allow for the formation of larger molecules, but this discovery, as well as the discovery of other large molecules, continues to illuminate the complexity of the diffuse medium. More recently, observations of c-C3H2 in dense clouds have also found concentrations that are significantly higher than expected. This has led to the hypothesis that the photodissociation of polycyclic aromatic hydrocarbons (PAHs) enhances the formation of c-C3H2. Titan (Moon of Saturn) On 15 October 2020, it was announced that small amounts of cyclopropenylidene had been found in the atmosphere of Titan, the largest moon of Saturn, using the ALMA telescope. Cyclopropenylidene became the fifth C3Hn (three-carbon) hydrocarbon molecule detected on Titan, after previous detections of CH3CCH (propyne) and C3H8 (propane); C3H6 (propene); and CH2CCH2 (propadiene). Formation The formation reaction of c-C3H2 has been speculated to be the dissociative recombination of c-C3H3+: c-C3H3+ + e− → C3H2 + H c-C3H3+ is a product of a long chain of carbon chemistry that occurs in the ISM. Carbon insertion reactions are crucial in this chain for forming C3H3+. However, as for most ion-molecule reactions speculated to be important in interstellar environments, this pathway has not been verified by laboratory studies. The protonation of ammonia by c-C3H3+ is another formation reaction. However, under typical dense cloud conditions, this reaction contributes less than 1% of the formation of C3H2. Crossed molecular beam experiments indicate that the reaction of the methylidyne radical (CH) with acetylene (C2H2) forms cyclopropenylidene plus atomic hydrogen, and also propadienylidene plus atomic hydrogen. The neutral–neutral reaction between atomic carbon and the vinyl radical (C2H3) also forms cyclopropenylidene plus atomic hydrogen. Both reactions are rapid at 10 K and have no entrance barrier, providing efficient formation pathways in cold interstellar environments and the hydrocarbon-rich atmospheres of planets and their moons. Matrix-isolated cyclopropenylidene was prepared by flash vacuum thermolysis of a quadricyclane derivative in 1984. Destruction Cyclopropenylidene is generally destroyed by reactions between ions and neutral molecules. Of these, protonation reactions are the most common. Any species of the type HX+ can react to convert c-C3H2 back to c-C3H3+. 
Due to rate constant and concentration considerations, the most important reactants for the destruction of c-C3H2 are HCO+, H3+, and H3O+: C3H2 + HCO+ → C3H3+ + CO Notice that c-C3H2 is mostly destroyed by converting it back to C3H3+. Since the major destruction pathways only regenerate the major parent molecule, C3H2 is essentially a dead end in terms of interstellar carbon chemistry. However, in diffuse clouds or in the photodissociation region (PDR) of dense clouds, the reaction with C+ becomes much more significant and C3H2 can begin to contribute to the formation of larger organic molecules. Spectroscopy Detections of c-C3H2 in the ISM rely on observations of molecular transitions using rotational spectroscopy. Since c-C3H2 is an asymmetric top, the rotational energy levels are split and the spectrum becomes complicated. Note also that C3H2 has spin isomers, much like the spin isomers of hydrogen. These ortho and para forms exist in a 3:1 ratio and should be thought of as distinct molecules. Although the ortho and para forms look identical chemically, their energy levels are different, meaning that the molecules have different spectroscopic transitions. When observing c-C3H2 in the interstellar medium, only certain transitions can be seen. In general, only a few lines are available for use in astronomical detection. Many lines are unobservable because they are absorbed by the Earth's atmosphere; the only lines that can be observed are those that fall in the radio window. The more commonly observed lines are the 1₁₀–1₀₁ and 2₁₂–1₀₁ transitions of ortho-c-C3H2. See also List of molecules in interstellar space References Carbenes
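The 3:1 ortho-to-para ratio noted above follows from the nuclear spin statistics of the molecule's two equivalent hydrogen nuclei. The short statistical-weight argument below is standard textbook material, included for illustration rather than taken from the cited observations:

% Two equivalent H nuclei (I = 1/2) couple to a total nuclear spin of
% either 1 (ortho) or 0 (para); the degeneracy of each is 2*I_tot + 1.
\[
  g_{\mathrm{ortho}} = 2(1) + 1 = 3,
  \qquad
  g_{\mathrm{para}} = 2(0) + 1 = 1,
\]
\[
  \frac{n_{\mathrm{ortho}}}{n_{\mathrm{para}}}
  \longrightarrow
  \frac{g_{\mathrm{ortho}}}{g_{\mathrm{para}}} = \frac{3}{1}
  \quad \text{when } k_{B}T \text{ far exceeds the relevant level splittings.}
\]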
Cyclopropenylidene
Chemistry
1,228
49,004,087
https://en.wikipedia.org/wiki/AN/FYQ-93
FYQ-93 was a computer system used from 1983 to 2006, built for the Joint Surveillance System (JSS) by the Hughes Aircraft Company. The system consisted of a fault-tolerant central computer complex using a two-string concept, which interfaced with many display consoles and with external radars to provide a region-sector display of air traffic. The system was composed of a suite of computers and peripheral equipment configured to receive plot data from ground radar systems, perform track processing, and present track data both to weapons controllers and to forward and lateral communications links. The HMD-22 consoles displayed data from various radars, including the AN/GSQ-235. The data was routed to the Cheyenne Mountain Complex from installations located in the continental United States (CONUS), Canada, Alaska and Hawaii. The need for the FYQ-93 system became apparent in the 1970s, when the Semi-Automatic Ground Environment (SAGE) system became technologically obsolete and logistically unsupportable. The FYQ-93 system was conceived and specified in the late 1970s. It was manufactured and delivered during the first half of the 1980s, and by the end of 1984 all nine facilities were in place. Enough of the system was in place in mid-1983 for the SAGE system to officially shut down, and the JSS became the air defense system of the United States and Canada. The large network of military long-range radar sites was closed, and a much smaller number (43) of FAA Joint Use sites replaced them. The JSS was a joint USAF/FAA radar-use program. The ACC portion of the JSS was composed of four CONUS SOCCs equipped with FYQ-93 computers and 47 ground-based FPS-93 search radars. FAA equipment was a mix of Air Route Surveillance Radar (ARSR) 1, 2, and 3 systems. Collocated with most radar sites were UHF ground-air-ground (G/A/G) transmitter/receiver (GATR) facilities; fourteen sites also had VHF radios. The GATR facilities provided radio access to fighters and AWACS aircraft from the SOCCs. The JSS radars sent surveillance data to the SOCCs, which then forwarded tracks of interest to the CONUS ROCC and North American Air Defense Command (NORAD). Radar and track data were sent through landlines as TADIL-B data and through HF radio links as TADIL-A data. Both TADIL links were provided by the Radar Data Information Link (RADIL). CONUS SOCCs communicated with the CONUS ROCC and NORAD by voice and data landline circuits. Internally, a single "string" of the FYQ-93 system included one Hughes H5118ME central computer and two Hughes HMP-1116 peripheral computers. Radar data was input and buffered in one 1116 for orderly transfer to the 5118, which then constructed the "air picture". The second 1116 on the string handled program loading, console commands, and data storage. The output of the string fed another 1116, called a "Display Controller" (DC), which sent data to and received switch actions from the HMD-22 consoles. Typically there were two strings and two DCs processing in parallel, one on standby in case of a malfunction in its counterpart. Either string could feed either DC, for further equipment reliability. The software was written in a proprietary version of the programming language JOVIAL termed JSS JOVIAL. The system was updated over time to change tape drives to disk cartridges and single-line printers to multi-line printers. The memory in the H5118ME was expanded at least twice, to the system maximum of 512,000 18-bit words. 
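The two-string arrangement described above (parallel strings, one on standby, with either string able to feed either display controller) can be pictured with a small toy model. The Python sketch below is purely conceptual: the class and function names and the health-check logic are invented for illustration and do not reflect the actual FYQ-93 software, which was written in JSS JOVIAL.

from dataclasses import dataclass

@dataclass
class ProcessingString:
    # Stand-in for one H5118ME plus its two HMP-1116 peripherals.
    name: str
    healthy: bool = True

    def air_picture(self) -> str:
        return f"air picture from {self.name}"

def route_to_displays(strings, controllers):
    # Pair every display controller with the first healthy string,
    # mimicking failover to the standby string after a malfunction.
    active = [s for s in strings if s.healthy]
    if not active:
        raise RuntimeError("no healthy processing string available")
    return {dc: active[0].air_picture() for dc in controllers}

strings = [ProcessingString("string A"), ProcessingString("string B")]
strings[0].healthy = False  # simulate a malfunction in string A
print(route_to_displays(strings, ["DC-1", "DC-2"]))
# Both consoles now draw their picture from string B.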
The H5118ME was eventually upgraded to the H5118M computer, which had 1 megabyte of memory and could handle 1.2 million instructions per second, while the original model had 256 kilobytes of memory and executed about 150,000 instructions per second. Although the H5118M was part of the NATO Integrated Air Defense System, it is unclear if JSS received the same upgrades. Internal to Hughes, the next-generation air defense and air traffic control systems were being developed as JSS was being deployed. The next generation was based on using any computer of a certain processing class to replace the 5118 computer; examples include the DEC VAX and Norsk Data systems. This was driven in part by the needs of different sovereign states that wanted their own computers used for their in-country systems, and also by the great miniaturization of computer hardware. The next-generation Hughes systems used 2K × 2K-resolution, 20-inch × 20-inch color raster displays; touch entry; voice synthesis and recognition consoles; dual-redundant fiber-optic token-ring buses to link all consoles and computers; extensive processing in the consoles, including mission processing; and a move to software written in the programming language Ada. The FYQ-93 was part of a long history of developing air defense systems starting in the 1950s. The FYQ-93 was based on the Combat Grande system, which was one of the first systems to extensively use science and engineering principles to develop software. This allowed for extensive re-use and optimization for the needs of each nation-state installing and using the Hughes systems. Classification of radar systems Under the Joint Electronics Type Designation System (JETDS), all U.S. military radar and tracking systems are assigned a unique identifying alphanumeric designation. The letters "AN" (for Army-Navy) are placed ahead of a three-letter code. The first letter of the three-letter code denotes the type of platform hosting the electronic device, where A=Aircraft, F=Fixed (land-based), S=Ship-mounted, and T=Ground transportable. The second letter indicates the type of equipment, where P=Radar (pulsed), Q=Sonar, R=Radio, and Y=Data Processing. The third letter indicates the function or purpose of the device, where G=Fire control, Q=Special Purpose, R=Receiving, S=Search, and T=Transmitting. Thus, the AN/FYQ-93 represents the 93rd design of an Army-Navy "Fixed, Data Processing, Special Purpose" electronic device. See also Joint Surveillance System NATO Integrated Air Defense System References Computing by computer model Joint Surveillance System radar stations
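The JETDS letter meanings listed above lend themselves to a small lookup-table decoder. The Python sketch below covers only the letters spelled out in this article (a complete JETDS table defines many more), and the function name is an illustrative invention:

# Letter tables taken directly from the article text above.
PLATFORM = {"A": "Aircraft", "F": "Fixed (land-based)",
            "S": "Ship-mounted", "T": "Ground transportable"}
EQUIPMENT = {"P": "Radar (pulsed)", "Q": "Sonar",
             "R": "Radio", "Y": "Data Processing"}
PURPOSE = {"G": "Fire control", "Q": "Special Purpose",
           "R": "Receiving", "S": "Search", "T": "Transmitting"}

def decode_jetds(designation: str) -> str:
    # e.g. "AN/FYQ-93" -> letter code "FYQ" and design number "93".
    code, number = designation.removeprefix("AN/").split("-")
    parts = [PLATFORM[code[0]], EQUIPMENT[code[1]], PURPOSE[code[2]]]
    return f"design {number}: " + ", ".join(parts)

print(decode_jetds("AN/FYQ-93"))
# design 93: Fixed (land-based), Data Processing, Special Purpose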
AN/FYQ-93
Technology
1,302
37,177,749
https://en.wikipedia.org/wiki/HD%2085951
HD 85951 (HR 3923), formally named Felis, is a solitary orange-hued star in the constellation Hydra. It has an apparent magnitude of 4.94, making it faintly visible to the naked eye under ideal conditions. Based on parallax measurements, the object is about 570 light-years away from the Sun and is receding with a heliocentric radial velocity of . Nomenclature HD 85951 was the brightest star in the now-obsolete constellation of Felis. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Felis for this star on 1 June 2018, and it is now included in the List of IAU-approved Star Names. Properties This is an evolved red giant with a stellar classification of K5 III. It is currently on the asymptotic giant branch, generating energy via fusion in hydrogen and helium shells around an inert carbon core. At present, Felis has 6.4 times the mass of the Sun and, owing to its evolved status, an enlarged radius of . It radiates at a bolometric luminosity 721 times that of the Sun from its photosphere at an effective temperature of . Felis has an iron abundance 66% that of the Sun, making it metal-deficient. References Hydra (constellation) K-type giants Durchmusterung objects 085951 048615 3923 Felis
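The radius and effective temperature values for Felis did not survive formatting above, but the relationship between them is standard. As a minimal sketch, the Stefan-Boltzmann law gives the radius from the quoted luminosity (721 times solar) and an effective temperature; the temperature used below (about 3,900 K, typical of a K5 III giant) is an assumption for illustration, not a value from this article.

import math

# Stefan-Boltzmann: L = 4*pi*R^2*sigma*T^4, so in solar units
# R/Rsun = sqrt(L/Lsun) * (Tsun/T)^2
T_SUN = 5772.0   # K, IAU nominal solar effective temperature
L = 721.0        # bolometric luminosity in solar units (from the article)
T = 3900.0       # K -- assumed, typical for a K5 III giant
R = math.sqrt(L) * (T_SUN / T) ** 2
print(f"R is roughly {R:.0f} solar radii")  # ~59 Rsun under this assumed temperature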
HD 85951
Astronomy
308
47,113,160
https://en.wikipedia.org/wiki/RW3%20Technologies
RW3 Technologies is a software company that provides SaaS products for intelligent in-store execution, data-driven field sales, surveys, and dashboards/reporting for the consumer packaged goods (CPG) industry, as well as online and in-store competitive pricing tools for retailers. It is headquartered in Austin, Texas. History RW3 Technologies was founded in the Bay Area by Bruce Nagle in 1992. The company's primary focus was to streamline daily data entry processes for the sales industry. In 1992, RW3 introduced one of the first land-line CPG broker sales systems that allowed for mobile data entry. It was initially used in food brokerage, though eventually expanded to include functionality for the consumer packaged goods industry. In 2000, the company expanded its business model to include business-to-business account management. In 2010, the company began development of its first general SaaS product application since the late 1990s; the SaaS application, called InStore Mobile (now MarketCheck), was released in 2011. The in-store survey application allows for two-way communication between the account rep and broker, allowing organizations to improve and track retail conditions. In 2013, RW3 released the BI Suite, a business intelligence environment that enables organizations to create views across departments and utilize multiple data sources to align sales strategies. In 2014, SmartCall was launched and marketed to the CPG industry. The application enables field sales reps to conduct traditional store calls and manage sales routes. It is packaged with MarketCheck into the company's InStore Execution Suite, providing retail execution and monitoring applications for the consumer goods industry. Products RW3 offers four SaaS products for the retail, wholesale, and B2B industries: MarketCheck - An application designed around the workflow of a field manager; enables users to access sales and analytics and other tools to help manage their brokers. SmartCall - An application designed around the workflow of a direct rep; enables users to organize their routes and daily activities. The BI Suite - A business intelligence environment that provides teams with device-agnostic reports and custom dashboards. PriceCheck - A mobile data collection application that allows reps to collect in-store pricing and validate it with a two-stage data validation process. Markets RW3 serves three primary markets: Small to blue-chip CPG manufacturers CPG marketing merchandisers and brokers Small to blue-chip retailers Awards In 2015, RW3 was recognized by readers of Consumer Goods Technology magazine as a Best-In-Class Service Provider of Retail Execution. In 2012, RW3 Technologies was named the Consumer Goods Technology Readers' Choice Award Winner for #1 CRM Customer Experience. In 2009, RW3 Technologies was ranked by Consumer Goods Technology as one of the Top 27 Companies to Consider. In 2007, RW3 Technologies received Consumer Goods Technology's Outside The Box Industry Award, presented to vendors that "just don't fit within today's business walls". In 2004, RW3 Technologies was ranked in the Top 50 Fastest Growing Companies by the East Bay Business Times. References Customer relationship management software companies Data and information visualization software Customer relationship management software Business intelligence companies Mobile technology companies 1992 establishments in California Software companies based in California Companies established in 1992 Defunct software companies of the United States
RW3 Technologies
Technology
660
5,062,097
https://en.wikipedia.org/wiki/Satellite%20revisit%20period
The satellite revisit period is the time elapsed between observations of the same point on Earth by a satellite. It depends on the satellite's orbit, the target location, and the swath of the sensor. "Revisit" is related to the ground trace, the projection of the satellite's orbit onto the Earth; revisit requires a very close repeat of the ground trace. In the case of polar-orbit or highly inclined low-Earth-orbit reconnaissance satellites, the sensor must have a variable swath, so that it can look sideways (east-west) at a target in addition to direct overflight (nadir) observation. In the case of the Israeli EROS Earth observation satellite, the ground-trace repeat is 15 days, but the actual revisit time is 3 days because of the swath ability of the camera payload. See also Orbit period Satellite watching, spotting satellites in the sky as they pass References Satellites Orbits
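As a rough illustration of the swath effect described above (a simplification, not a formula from this article): if the sideways-looking capability lets the sensor reach a target on several adjacent passes within one repeat cycle, the effective revisit time shrinks proportionally. With the EROS figures quoted above, a 15-day ground-trace repeat combined with access on about five neighbouring passes yields the stated 3-day revisit.

def effective_revisit(repeat_days, accessible_passes):
    # Crude model: the cross-track (sideways) swath lets the sensor see the
    # target on `accessible_passes` adjacent ground tracks per repeat cycle.
    return repeat_days / accessible_passes

print(effective_revisit(15, 5))  # 3.0 days, consistent with the EROS example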
Satellite revisit period
Astronomy
198
31,666,860
https://en.wikipedia.org/wiki/IRAS%2018162%E2%88%922048
IRAS 18162-2048 is a far-infrared source discovered by the IRAS spacecraft in 1983. It is associated with a massive (~10 solar masses) protostar, which accretes gas from a disk that surrounds it. IRAS 18162-2048 emits two collimated radio jets along its axis of rotation. The jets are made of chains of radio sources aligned in a southwest-northeast direction. The northern jet terminates in Herbig–Haro object HH 81N, while the southern one terminates in Herbig–Haro objects HH 80 and HH 81. The total luminosity of IRAS 18162-2048 is about 17,000 solar luminosities. The total extent of this system of jets and radio sources is about 5 pc. In 2010, the HH 80–81 jets of IRAS 18162-2048 were found to emit polarized radio waves, indicating that the emission is produced by relativistic electrons moving along a magnetic field estimated at 20 nT. This observation was the first of its kind, demonstrating that a protostar can have a magnetized jet. References Sagittarius (constellation) Emission nebulae Protostars 18162−2048 Herbig–Haro objects
IRAS 18162−2048
Astronomy
257
1,039,945
https://en.wikipedia.org/wiki/Royal%20Meteorological%20Society
The Royal Meteorological Society is a long-established institution that promotes academic and public engagement in weather and climate science. Fellows of the Society must possess relevant qualifications, but Members can be lay enthusiasts. Its Quarterly Journal is one of the world's leading sources of original research in the atmospheric sciences. The chief executive officer is Liz Bentley. Constitution The Royal Meteorological Society traces its origins back to 3 April 1850, when the British Meteorological Society was formed as "a society the objects of which should be the advancement and extension of meteorological science by determining the laws of climate and of meteorological phenomena in general". Dr John Lee, an astronomer of Hartwell House, near Aylesbury, Buckinghamshire, founded the British Meteorological Society in the library of his house along with nine others, including James Glaisher, John Drew, Edward Joseph Lowe, The Revd Joseph Bancroft Reade, and Samuel Charles Whitbread; this society became the Royal Meteorological Society. It became The Meteorological Society in 1866, when it was incorporated by Royal Charter, and the Royal Meteorological Society in 1883, when Her Majesty Queen Victoria granted the privilege of adding 'Royal' to the title. Along with 74 others, the famous meteorologist Luke Howard joined the original 15 members of the Society at its first ordinary meeting on 7 May 1850. As of 2008, it had more than 3,000 members worldwide. The chief executive of the Society is Professor Liz Bentley. Paul Hardaker previously served as chief executive from 2006 to 2012. Membership There are four membership categories: Honorary Fellow Fellow (FRMetS) Member Corporate member Awards The society regularly awards a number of medals and prizes, of which the Symons Gold Medal (established in 1901) and the Mason Gold Medal (established in 2006) are pre-eminent. The two medals are awarded alternately. Other awards include the Buchan Prize, the Hugh Robert Mill Award, the L F Richardson Prize, the Michael Hunt Award, the Fitzroy Prize, the Gordon Manley Weather Prize, the International Journal of Climatology Prize, the Society Outstanding Service Award and the Vaisala Award. Journals The society has a number of regular publications: Atmospheric Science Letters: a monthly journal that provides a peer-reviewed publication route for new shorter contributions in the field of atmospheric and closely related sciences. Weather: a monthly journal with many full colour illustrations and photos for specialists and general readers with an interest in meteorology. It uses a minimum of mathematics and technical language. Quarterly Journal of the Royal Meteorological Society: one of the world's leading journals for meteorology, publishing original research in the atmospheric sciences. There are eight issues per year. Meteorological Applications: a journal for applied meteorologists, forecasters and users of meteorological services, published since 1994. It is aimed at a general readership, and authors are asked to take this into account when preparing papers. International Journal of Climatology: has 15 issues a year and covers a broad spectrum of research in climatology. WIREs Climate Change: a journal about climate change. Geoscience Data Journal: an online, open-access journal. Climate Resilience and Sustainability: an interdisciplinary, open-access journal. All publications are available online, but a subscription is required for some. 
However, certain "classic" papers are freely available on the Society's website. Local centres and special interest groups The society has several local centres across the UK. There are also a number of special interest groups which organise meetings and other activities to facilitate the exchange of information and views within specific areas of meteorology. These are informal groups of professionals interested in specific technical areas of the profession of meteorology. The groups are primarily a way of communicating at a specialist level. Presidents Source: 1850–1853: Samuel Charles Whitbread, first time 1853–1855: George Leach 1855–1857: John Lee 1857–1858: Robert Stephenson 1859–1860: Thomas Sopwith 1861–1862: Nathaniel Beardmore 1863–1864: Robert Dundas Thomson, died in office 1864: Samuel Charles Whitbread, second time 1865–1866: Charles Brooke 1867–1868: James Glaisher 1869–1870: Charles Vincent Walker 1871–1872: John William Tripe 1873–1875: Robert James Mann 1876–1877: Henry Storks Eaton 1878–1879: Charles Greaves 1880–1881: George James Symons, first time 1882–1883: Sir John Knox Laughton 1884–1885: Robert Henry Scott 1886–1887: William Ellis 1888–1889: William Marcet 1890–1891: Baldwin Latham 1892–1893: Charles Theodore Williams, first time 1894–1895: Richard Inwards 1896–1897: Edward Mawley 1898–1899: Francis Campbell Bayard 1900: George James Symons, second time; died in office 1900: Charles Theodore Williams, second time 1901–1902: William Henry Dines 1903–1904: Captain David W. Barker 1905–1906: Richard Bentley 1907–1908: Hugh Robert Mill 1910–1911: Henry Mellish 1911–1912: Henry Newton Dickson 1913–1914: Charles John Philip Cave, first time 1915–1917: Sir Henry George Lyons 1918–1919: Sir Napier Shaw 1920–1921: Reginald Hawthorn Hooker 1922–1923: Charles Chree 1924–1925: Charles John Philip Cave, second time 1926–1927: Sir Gilbert Walker 1928–1929: Richard Gregory 1930–1931: Rudolf Gustav Karl Lempfert 1932–1933: Sydney Chapman 1934–1935: Ernest Gold 1936–1937: Francis John Welsh Whipple 1938–1939: Sir Bernard A. Keen 1940–1941: Sir George Clarke Simpson 1942–1944: David Brunt 1945–1946: Gordon Manley 1947–1949: G. M. B. Dobson 1949–1951: Sir Robert Alexander Watson-Watt 1951–1953: Sir Charles Normand 1953–1955: Sir Graham Sutton 1955–1957: Reginald Sutcliffe 1957–1959: Percival Albert Sheppard 1959–1961: James Martin Stagg 1961–1963: Howard Latimer Penman 1963–1965: John Stanley Sawyer 1965–1967: G. D. Robinson 1967–1968: F. Kenneth Hare 1968–1970: John Mason 1970–1972: Frank Pasquill 1972–1974: Robert B. Pearce 1974–1976: Raymond Hide 1976–1978: John T. Houghton 1978–1980: John Monteith 1980–1982: Philip Goldsmith 1982–1984: Henry Charnock 1984–1986: Andrew Gilchrist 1986–1988: Richard S. Scorer 1988–1990: Keith Anthony Browning 1990–1992: Stephen Austen Thorpe 1992–1994: Paul James Mason 1994–1996: John E. Harries 1996–1998: David J. Carson 1998–2000: Sir Brian Hoskins 2000–2002: David Burridge 2002–2004: Howard Cattle 2004–2006: Chris Collier 2006–2008: Geraint Vaughan 2008–2010: Julia Slingo 2010–2012: Tim Palmer 2012–2014: Joanna Haigh 2014–2016: Jennie Campbell 2016–2018: Ellie Highwood 2018–2020: David Warrilow 2020–2022: David Griggs Notable fellows John Farrah (1849–1907). 
See also List of atmospheric dispersion models UK Dispersion Modelling Bureau Met Office References External links The RMetS website UK Atmospheric Dispersion Modelling Liaison Committee (ADMLC) web site Meteorological societies Meteorological Scientific organisations based in the United Kingdom Atmospheric dispersion modeling Climatological research organizations Climate of the United Kingdom Geographic societies Learned societies of the United Kingdom Scientific organizations established in 1850 1850 establishments in the United Kingdom
Royal Meteorological Society
Chemistry,Engineering,Environmental_science
1,514
14,245,076
https://en.wikipedia.org/wiki/Hundred-dollar%2C%20Hundred-digit%20Challenge%20problems
The Hundred-dollar, Hundred-digit Challenge problems are 10 problems in numerical mathematics published in 2002 by . A $100 prize was offered to whoever produced the most accurate solutions, measured up to 10 significant digits. The deadline for the contest was May 20, 2002. In the end, 20 teams solved all of the problems perfectly within the required precision, and an anonymous donor aided in producing the required prize monies. The challenge and its solutions were described in detail in the book . The problems From : A photon moving at speed 1 in the xy-plane starts at t = 0 at (x, y) = (0.5, 0.1) heading due east. Around every integer lattice point (i, j) in the plane, a circular mirror of radius 1/3 has been erected. How far from the origin is the photon at t = 10? The infinite matrix A with entries is a bounded operator on . What is ? What is the global minimum of the function Let , where is the gamma function, and let be the cubic polynomial that best approximates on the unit disk in the supremum norm . What is ? A flea starts at (0, 0) on the infinite 2D integer lattice and executes a biased random walk: At each step it hops north or south with probability , east with probability , and west with probability . The probability that the flea returns to (0, 0) sometime during its wanderings is . What is ? Let A be the 20000×20000 matrix whose entries are zero everywhere except for the primes 2, 3, 5, 7, ..., 224737 along the main diagonal and the number 1 in all the positions with . What is the (1, 1) entry of ? A square plate is at temperature . At time , the temperature is increased to along one of the four sides while being held at along the other three sides, and heat then flows into the plate according to . When does the temperature reach at the center of the plate? The integral depends on the parameter α. What is the value of α in [0, 5] at which I(α) achieves its maximum? A particle at the center of a 10×1 rectangle undergoes Brownian motion (i.e., 2D random walk with infinitesimal step lengths) till it hits the boundary. What is the probability that it hits at one of the ends rather than at one of the sides? Solutions 0.3233674316 0.9952629194 1.274224152 −3.306868647 0.2143352345 0.06191395447 0.7250783462 0.4240113870 0.7859336743 3.837587979 × 10−7 These answers have been assigned the identifiers , , , , , , , , , and in the On-Line Encyclopedia of Integer Sequences. References Review (June 2005) from Bulletin of the American Mathematical Society. Numerical analysis Recreational mathematics Mathematics competitions
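For flavour, here is a naive Monte Carlo sketch of the last problem (the Brownian particle in the 10×1 rectangle). It only illustrates the setup: since the answer is of order 4 × 10−7, a direct simulation of any feasible size will almost never observe an end hit, let alone resolve ten digits, which is exactly why the contest rewarded sharper analysis. The time step and trial count below are arbitrary choices.

import random

def hits_end(dt=1e-3, half_len=5.0, half_wid=0.5):
    # One Brownian path started at the centre of a 10 x 1 rectangle
    # (|x| <= 5, |y| <= 0.5); returns True if it exits through an end.
    sigma = dt ** 0.5
    x = y = 0.0
    while True:
        x += random.gauss(0.0, sigma)
        y += random.gauss(0.0, sigma)
        if abs(y) >= half_wid:
            return False   # hit one of the long sides
        if abs(x) >= half_len:
            return True    # hit one of the ends

trials = 10_000
p = sum(hits_end() for _ in range(trials)) / trials
print(p)  # almost surely 0.0 at this sample size; the true value is ~3.84e-7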
Hundred-dollar, Hundred-digit Challenge problems
Mathematics
626
67,303,095
https://en.wikipedia.org/wiki/Simple-As-Possible%20computer
The Simple-As-Possible (SAP) computer is a simplified computer architecture designed for educational purposes and described in the book Digital Computer Electronics by Albert Paul Malvino and Jerald A. Brown. The SAP architecture serves as an example in Digital Computer Electronics for building and analyzing complex logical systems with digital electronics. Digital Computer Electronics successively develops three versions of this computer, designated SAP-1, SAP-2, and SAP-3. Each of the last two builds upon the immediately previous version by adding additional computational, flow-of-control, and input/output capabilities. SAP-2 and SAP-3 are fully Turing-complete. The instruction set architecture (ISA) that the computer's final version (SAP-3) is designed to implement is patterned after, and upward compatible with, the ISA of the Intel 8080/8085 microprocessor family. Therefore, the instructions implemented in the three SAP computer variations are, in each case, a subset of the 8080/8085 instructions. Variants Ben Eater's Design YouTuber and former Khan Academy employee Ben Eater created a tutorial on building an 8-bit Turing-complete SAP computer on breadboards from 7400-series logic chips, capable of running simple programs such as computing the Fibonacci sequence. Eater's design consists of the following modules: An adjustable-speed clock module (limited to a few hundred hertz) that can be put into a "manual mode" to step through the clock cycles. Three register modules (Register A, Register B, and the Instruction Register) that "store small amounts of data that the CPU is processing." An arithmetic logic unit (ALU) capable of adding and subtracting 8-bit 2's complement integers from registers A and B. This module also has a flags register with two possible flags (Z and C). Z stands for "zero," and is activated if the ALU outputs zero. C stands for "carry," and is activated if the ALU produces a carry-out bit. A RAM module capable of storing 16 bytes, meaning the RAM is 4-bit addressable. As Eater's website puts it, "this is by far its [the computer's] biggest limitation". A 4-bit program counter that keeps track of the current processor instruction, corresponding to the 4-bit-addressable RAM. An output register that displays its content on four 7-segment displays, capable of displaying both unsigned and 2's complement signed integers. The 7-segment display outputs are controlled by EEPROMs, which are programmed using an Arduino microcontroller. A bus that connects these components together; the components connect to the bus using tri-state buffers. A "control logic" module that defines "the opcodes the processor recognizes and what happens when it executes each instruction," and makes the computer Turing-complete. The CPU microcode is programmed into EEPROMs using an Arduino microcontroller. Ben Eater's design has inspired multiple variants and improvements, primarily on Eater's Reddit forum. Some examples of improvements are: An expanded RAM module capable of storing 256 bytes, utilizing the entire 8-bit address space. With the help of segmentation registers, the RAM module can be further expanded to a 16-bit address space, matching the standard for 8-bit computers. A stack register that allows incrementing and decrementing the stack pointer. 
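To give a feel for how small the SAP-1 machine is, here is a minimal Python sketch of a fetch-decode-execute loop over 16 bytes of RAM, one accumulator, and a 4-bit program counter. The five instructions (LDA, ADD, SUB, OUT, HLT) follow the classic SAP-1 set described by Malvino and Brown; the specific opcode numbers below are illustrative choices.

# Each instruction byte: high nibble = opcode, low nibble = RAM address.
LDA, ADD, SUB, OUT, HLT = 0x0, 0x1, 0x2, 0xE, 0xF

def run(ram):
    a, pc = 0, 0                                 # accumulator, 4-bit program counter
    while True:
        instr = ram[pc]; pc = (pc + 1) & 0xF     # fetch and bump the counter
        op, addr = instr >> 4, instr & 0xF       # decode
        if op == LDA: a = ram[addr]
        elif op == ADD: a = (a + ram[addr]) & 0xFF
        elif op == SUB: a = (a - ram[addr]) & 0xFF
        elif op == OUT: print(a)
        elif op == HLT: return

# Program: load ram[13], add ram[14], subtract ram[15], output, halt.
ram = [0x0D, 0x1E, 0x2F, 0xE0, 0xF0] + [0] * 8 + [20, 30, 8]
run(ram)  # prints 42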
References External links SAP-1 online simulator (in English, Spanish and Catalan) Design and Implementation of a Simple-As-Possible 1 (SAP-1) Computer using an FPGA and VHDL An implementation of Simple As Possible computer - SAP1, written in VHDL (in English and Portuguese) SAP-1 simulation using Digital Works (in English and Portuguese) Some of Ben Eater's computer videos including the 8-bit computer. Computer architecture
Simple-As-Possible computer
Technology,Engineering
804
8,166,145
https://en.wikipedia.org/wiki/HP%20Compaq%20tc4400
The HP Compaq TC4400 is a tablet-style personal computer. It can be used in the position of a normal laptop or the screen can be turned and folded down for writing. Specifications As with many manufactured tablets, there are multiple pre-configured models with various options, as well as the ability to customize a model. The following is a list of common specs on current models: Pricing As of November 2006, prices on pre-configured models range from US$1,449 to US$1,849. Creating a custom model can bring the price over US$3,000. Notes Further reading Laptop Magazine's Review Use of the Compaq at a suburban intermediate school References Compaq TC1100 Microsoft Tablet PC Computer-related introductions in 2006
HP Compaq tc4400
Technology
161
43,133,368
https://en.wikipedia.org/wiki/Operation%20Dominic
Operation Dominic was a series of 31 nuclear test explosions ("shots") with a total yield conducted in 1962 by the United States in the Pacific. This test series was scheduled quickly, in order to respond in kind to the Soviet resumption of testing after the tacit 1958–1961 test moratorium. Most of these shots were conducted with free-fall bombs dropped from B-52 bomber aircraft. Twenty of these shots were to test new weapons designs; six were to test weapons effects; and several were to confirm the reliability of existing weapons. The Thor missile was also used to lift warheads into near-space to conduct high-altitude nuclear explosion tests; these shots were collectively called Operation Fishbowl. Operation Dominic occurred during a period of high Cold War tension between the United States and the Soviet Union, since the Bay of Pigs Invasion of Cuba had occurred not long before. Nikita Khrushchev announced the end of a three-year moratorium on nuclear testing on 30 August 1961, and Soviet tests recommenced on 1 September, initiating a series of tests that included the detonation of the Tsar Bomba. President John F. Kennedy responded by authorizing Operation Dominic. It was the largest nuclear weapons testing program (by total yield) ever conducted by the United States, and the last atmospheric test series conducted by the U.S., as the Limited Test Ban Treaty was signed in Moscow the following year. The operation was undertaken by Joint Task Force 8. The U.S. Atomic Energy Commission (AEC) performed Operation DOMINIC II, an atmospheric nuclear test series, at the Nevada Test Site (NTS) from July 7 to 17, 1962. The test series included four low-yield shots, three of which were near-surface detonations and one a tower shot. Exercise IVY FLATS included one of the near-surface shots, fired from a DAVY CROCKETT rocket launcher. Shots Sunset The shot report lists the yield as  ±20% measured from a Bhangmeter and  ±10% from fireball analysis. Other sources give the yield as . Full list of shots Gallery See also List of United States nuclear weapons tests References External links Joint Task Force 8 video report on Operation Dominic Operation Dominic at Carey Sublette's NuclearWeaponArchive.org More info on U.S. testing Operation Dominic at RECA UK (Radiation Exposure Compensation Act) Operation DOMINIC I Fact Sheet Defense Threat Reduction Agency Explosions in 1962 Dominic Exoatmospheric nuclear weapons testing Johnston Atoll American nuclear explosive tests Kiritimati 1962 in military history 1962 in Oceania 1962 in the environment Dominic Military projects of the United States
Operation Dominic
Engineering
529
31,376,465
https://en.wikipedia.org/wiki/PottersWheel
PottersWheel is a MATLAB toolbox for mathematical modeling of time-dependent dynamical systems that can be expressed as chemical reaction networks or ordinary differential equations (ODEs). It allows the automatic calibration of model parameters by fitting the model to experimental measurements. CPU-intensive functions are written in C or – in the case of model-dependent functions – dynamically generated in C. Modeling can be done interactively using graphical user interfaces or based on MATLAB scripts using the PottersWheel function library. The software is intended to support the work of a mathematical modeler as a real potter's wheel eases the shaping of pottery. Seven modeling phases The basic use of PottersWheel covers seven phases from model creation to the prediction of new experiments. Model creation The dynamical system is formalized into a set of reactions or differential equations using a visual model designer or a text editor. The model is stored as a MATLAB *.m ASCII file. Modifications can therefore be tracked using a version control system like Subversion or Git. Model import and export is supported for SBML. Custom import templates may be used to import custom model structures. Rule-based modeling is also supported, where a pattern represents a set of automatically generated reactions. Example of a simple model definition file for a reaction network A → B → C → A with observed species A and C:

function m = getModel()
   % Starting with an empty model
   m = pwGetEmptyModel();
   % Adding reactions
   m = pwAddR(m, 'A', 'B');
   m = pwAddR(m, 'B', 'C');
   m = pwAddR(m, 'C', 'A');
   % Adding observables
   m = pwAddY(m, 'A');
   m = pwAddY(m, 'C');
end

Data import External data saved in *.xls or *.txt files can be added to a model, creating a model-data-couple. A mapping dialog allows data column names to be connected to observed species names. Meta information in the data files comprises information about the experimental setting. Measurement errors are either stored in the data files, calculated using an error model, or estimated automatically. Parameter calibration To fit a model to one or more data sets, the corresponding model-data-couples are combined into a fitting-assembly. Parameters like initial values, rate constants, and scaling factors can be fitted in an experiment-wise or global fashion. The user may select from several numerical integrators, optimization algorithms, and calibration strategies like fitting in normal or logarithmic parameter space. Interpretation of the goodness-of-fit The quality of a fit is characterized by its chi-squared value. As a rule of thumb, for N fitted data points and p calibrated parameters, the chi-squared value should be comparable to N − p, or at least N. Statistically, this is expressed using a chi-squared test resulting in a p-value above a significance threshold of e.g. 0.05. For lower p-values, the model is either not able to explain the data and has to be refined, the standard deviation of the data points is actually larger than specified, or the fitting strategy was not successful and the fit was trapped in a local minimum. Apart from further chi-squared-based characteristics like AIC and BIC, data-model-residual analyses exist, e.g. to investigate whether the residuals follow a Gaussian distribution. Finally, parameter confidence intervals may be estimated using either the Fisher information matrix approximation or the profile-likelihood function, if parameters are not unambiguously identifiable. 
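A minimal Python sketch of the rule-of-thumb check described above (generic chi-squared statistics, not PottersWheel's own API):

def goodness_of_fit(residuals, sigmas, n_params):
    # chi^2 = sum of squared, error-weighted residuals; for an acceptable
    # fit it should be comparable to N - p (the degrees of freedom).
    chi2 = sum((r / s) ** 2 for r, s in zip(residuals, sigmas))
    dof = len(residuals) - n_params
    return chi2, chi2 / dof    # reduced chi^2 should be near 1

chi2, reduced = goodness_of_fit([0.5, -1.2, 0.3, 0.9], [1.0, 1.0, 0.5, 1.0], 2)
print(f"chi2 = {chi2:.2f}, reduced chi2 = {reduced:.2f}")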
If the fit is not acceptable, the model has to be refined and the procedure continues with step 2. Otherwise, the dynamic model properties can be examined and predictions calculated. Model refinement If the model structure is not able to explain the experimental measurements, a set of physiologically reasonable alternative models should be created. In order to avoid redundant model paragraphs and copy-and-paste errors, this can be done using a common core model which is the same for all variants. Then, daughter models are created and fitted to the data, preferably using batch-processing strategies based on MATLAB scripts. As a starting point for envisioning suitable model variants, the PottersWheel equalizer may be used to understand the dynamic behavior of the original system. Model analysis and prediction A mathematical model may serve to display the concentration time-profile of unobserved species, to determine sensitive parameters representing potential targets within a clinical setting, or to calculate model characteristics like the half-life of a species. Each analysis step may be stored in a modeling report, which may be exported as a LaTeX-based PDF. Experimental design An experimental setting corresponds to specific characteristics of driving input functions and initial concentrations. In a signal transduction pathway model, the concentration of a ligand like EGF may be controlled experimentally. The driving input designer allows investigating the effect of a continuous, ramp, or pulse stimulation in combination with varying initial concentrations using the equalizer. In order to discriminate between competing model hypotheses, the designed experiment should produce observable time-profiles that differ as much as possible. Parameter identifiability Many dynamical systems can only be observed partially, i.e. not all system species are accessible experimentally. For biological applications the amount and quality of experimental data are often limited. In this setting parameters can be structurally or practically non-identifiable. Then, parameters may compensate for each other, and fitted parameter values strongly depend on initial guesses. In PottersWheel non-identifiability can be detected using the profile likelihood approach. For characterizing functional relationships between the non-identifiable parameters, PottersWheel applies random and systematic fit sequences. References External links Profile Likelihood Approach Applied mathematics Cross-platform software Mathematical modeling Numerical software Pharmacokinetics Visual programming languages Statistical software Simulation software
PottersWheel
Chemistry,Mathematics
1,245
19,829,595
https://en.wikipedia.org/wiki/2%2C2%2C2-Trichloroethoxycarbonyl%20chloride
Trichloroethyl chloroformate is used in organic synthesis for the introduction of the 2,2,2-trichloroethoxycarbonyl (Troc) protecting group for amines, thiols and alcohols. It is cleaved readily relative to other carbamates and can be used as part of an overall protecting-group strategy. The Troc group is traditionally removed via zinc insertion in the presence of acetic acid, resulting in elimination and decarboxylation. Amine protection – 2,2,2-Trichloroethoxycarbonyl (Troc) The 2,2,2-trichloroethoxycarbonyl (Troc) group is largely used as a protecting group for amines in organic synthesis. Most common amine protection methods: 2,2,2-Trichloroethyl chloroformate with pyridine or aqueous sodium hydroxide at ambient temperature Electrolysis Deprotection using zinc metal References Chloroformates Protecting groups Reagents for organic chemistry
2,2,2-Trichloroethoxycarbonyl chloride
Chemistry
219
78,055,482
https://en.wikipedia.org/wiki/Intent-based%20network
Intent-Based Networking (IBN) is an approach to network management that shifts the focus from manually configuring individual devices to specifying desired outcomes or business objectives, referred to as "intents". Description Rather than relying on low-level commands to configure the network, administrators define these high-level intents, and the network dynamically adjusts itself to meet these requirements. IBN simplifies the management of complex networks by ensuring that the network infrastructure aligns with the desired operational goals. For example, an implementer can explicitly state a network purpose with a policy such as "Allow hosts A and B to communicate with X bandwidth capacity" without the need to understand the detailed mechanisms of the underlying devices (e.g. switches), topology or routing configurations. Architecture Advances in Natural Language Understanding (NLU) systems, along with neural network-based algorithms like BERT, RoBERTa, GLUE, and ERNIE, have enabled the conversion of user queries into structured representations that can be processed by automated services. This capability is crucial for managing the increasing complexity of network services. Intent-Based Networking (IBN) leverages these advancements to simplify network management by abstracting network services, reducing operational complexity, and lowering costs. A proposed three-layered architecture integrates intent-based automation into network management systems. In the business layer, intents are based on Key Performance Indicators (KPIs) and Service Level Agreements (SLAs), reflecting business objectives. The intent layer evaluates and re-plans actions dynamically, where a Knowledge module abstracts and reasons about intents, while an Agent interfaces with network objects to execute actions. The data layer observes network objects, updates topology information, and interacts with the Knowledge and Agent modules to ensure accurate and timely responses to network changes. At the bottom, the network layer contains the physical infrastructure, transforming network data into a usable format for the intent layer to act upon. References Network management Computer networks
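To make the compilation step concrete, here is a small illustrative Python sketch; the intent schema and the device "commands" are invented for illustration and do not correspond to any particular IBN product.

# A declarative intent is translated into low-level, device-oriented steps.
intent = {
    "allow": {"from": "hostA", "to": "hostB"},
    "bandwidth_mbps": 100,
}

def compile_intent(intent):
    src, dst = intent["allow"]["from"], intent["allow"]["to"]
    bw = intent["bandwidth_mbps"]
    # A real system would compute a path from live topology data; here we
    # pretend both hosts hang off a single hypothetical switch "sw1".
    return [
        f"sw1: permit {src} -> {dst}",
        f"sw1: police flow {src} -> {dst} at {bw} Mbps",
    ]

for step in compile_intent(intent):
    print(step)

In a full IBN system, the intent layer would also monitor the network and re-run this translation whenever the observed state drifts from the declared intent.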
Intent-based network
Engineering
397
3,264,764
https://en.wikipedia.org/wiki/UMIST%20linear%20system
The UMIST Linear System (ULS) is a gas-target divertor simulator located on the former UMIST campus of the University of Manchester in the UK. It enables physicists to study the recombination processes of a detached plasma in a hydrogen target chamber. Research on detached plasma and on its recombination modes is of primary importance for designing an appropriate divertor region in a future nuclear fusion power plant, where huge amounts of energy will be deposited by the fast-moving particles generated in the main reactor. The major goal of the ULS, as with many other linear divertor simulators, is to reproduce the temperature and density conditions of the scrape-off layer of a tokamak in a linear environment and thereby make the study of its properties easier. The ULS has been used to analyze the molecular activation and electron-ion recombination modes and to determine the conditions for their activation; diffusive processes have also been considered. Research on these subjects is still ongoing, and understanding of the elementary processes involved in a detached plasma is still far from satisfactory. References Further reading Kay, Michael J. (1998) A Study of Plasma Attenuation and Recombination in the Gas Target Chamber of a Divertor Simulator. UMIST Ph.D. thesis. Plasma physics facilities Research institutes in Manchester University of Manchester
UMIST linear system
Physics
278
37,887,259
https://en.wikipedia.org/wiki/Uniformat
Uniformat is a standard for classifying building specifications, cost estimating, and cost analysis in the U.S. and Canada. The elements are major components common to most buildings. The system can be used to provide consistency in the economic evaluation of building projects. It was developed through an industry and government consensus and has been widely accepted as an ASTM standard. History Hanscomb Associates, a cost consultant, developed a system called MASTERCOST in 1973 for the American Institute of Architects (AIA). The U.S. General Services Administration (GSA), which is responsible for government buildings, was also developing a system. The AIA and GSA agreed on a system and named it UNIFORMAT. The AIA included it in their practice on construction management, and the GSA included it in their project estimating requirements. In 1989, ASTM International began developing a standard for classifying building elements, based on UNIFORMAT. It was renamed to UNIFORMAT II. In 1995, the Construction Specifications Institute (CSI) and Construction Specifications Canada (CSC) began to revise Uniformat. UniFormat is now a registered trademark of CSI and CSC and was most recently published in 2010. A new strategy to classify the built environment, named OmniClass, incorporates the elemental building classification in its Table 21 Elements. The numbering system is changed in OmniClass. Uniformat level 1 categories A SUBSTRUCTURE B SHELL C INTERIORS D SERVICES E EQUIPMENT AND FURNISHINGS F SPECIAL CONSTRUCTION AND DEMOLITION G BUILDING SITEWORK Z GENERAL Uniformat levels 2 and 3 categories An example of how the numbering system expands to provide additional detail below level 1 is shown for A SUBSTRUCTURE A10 FOUNDATIONS A1010 Standard Foundations A1020 Special Foundations A40 SLABS-ON-GRADE A4010 Standard Slabs-on-Grade A4020 Structural Slabs-on-Grade A4030 Slab Trenches A4040 Pits and Bases A4090 Slab-on-Grade Supplementary Components CSI/CSC UniFormat level 1 numbers and titles PROJECT DESCRIPTION A SUBSTRUCTURE B SHELL C INTERIORS D SERVICES E EQUIPMENT AND FURNISHINGS F SPECIAL CONSTRUCTION AND DEMOLITION G BUILDING SITEWORK Z GENERAL References External links Construction Specifications Canada (CSC) Specifications Institute (CSI) Construction documents
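A small sketch of how the hierarchical numbering above can be used in software; the titles are taken from the level 1-3 examples in this article, and the prefix-based parsing convention is an assumption for illustration.

# Level-3 codes such as "A1010" nest under a level-2 code ("A10")
# and a level-1 code ("A").
TITLES = {
    "A": "SUBSTRUCTURE",
    "A10": "FOUNDATIONS",
    "A1010": "Standard Foundations",
    "A1020": "Special Foundations",
    "A40": "SLABS-ON-GRADE",
    "A4010": "Standard Slabs-on-Grade",
}

def ancestors(code):
    return [code[:1], code[:3], code]

for c in ancestors("A1010"):
    print(c, "-", TITLES[c])
# A - SUBSTRUCTURE
# A10 - FOUNDATIONS
# A1010 - Standard Foundations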
Uniformat
Engineering
456
24,516,888
https://en.wikipedia.org/wiki/Paleobiota%20of%20the%20Hell%20Creek%20Formation
This is an overview of the fossil flora and fauna of the Maastrichtian-Danian Hell Creek Formation. Invertebrates Insects Insects from the groups Diptera, Zygoptera, and possibly Hemiphlebiidae have been unearthed in Hell Creek in amber. Fossils found in the Hell Creek Formation and the Fort Union Formation show that these insects went extinct during the K-T event. Molluscs Amphibians Fish Bony fish Cartilaginous fish Dinosaurs A paleo-population study is one of the most difficult analyses to conduct in field paleontology. Here is the most recent estimate of the proportions of the eight most common dinosaurian families in the Hell Creek Formation, based on detailed field studies by White, Fastovsky and Sheehan. Ceratopsidae 61% Hadrosauridae 23% Ornithomimidae 5% Tyrannosauridae 4% Hypsilophodontidae 3% Dromaeosauridae 2% Pachycephalosauridae 1% Ankylosauridae 1% Troodontidae 1% (represented only by teeth) Outcrops sampled by the Hell Creek Project were divided into three sections: lower, middle and upper slices. The top and bottom sections were the focus of the PLoS One report, and within each portion many remains of Triceratops, Edmontosaurus, and Tyrannosaurus were found. Triceratops was the most common in each section but, surprisingly, Tyrannosaurus was as common as, if not slightly more common than, the hadrosaur Edmontosaurus. In the upper Hell Creek section, for example, the census included twenty-two Triceratops, five Tyrannosaurus, and five Edmontosaurus. The dinosaurs Thescelosaurus, Ornithomimus, Pachycephalosaurus and Ankylosaurus were also included in the breakdown, but were relatively rare. Other dinosaurs, such as Sphaerotholus, Denversaurus, Torosaurus, Struthiomimus, Acheroraptor, Dakotaraptor, Pectinodon, Richardoestesia, Paronychodon, Anzu, Leptorhynchos and Troodon (more likely Pectinodon), were reported as being rare and are not included in the breakdown. The dinosaur collections made over the past decade during the Hell Creek Project yielded new information from an improved genus-level collecting schema and a robust data set that revealed relative dinosaur abundances that were unexpected, and ontogenetic age classes previously considered rare. We recognize a much higher percentage of Tyrannosaurus than previous surveys. Tyrannosaurus equals Edmontosaurus in U3 and in L3 comprises a greater percentage of the large dinosaur fauna as the second-most abundant taxon after Triceratops, followed by Edmontosaurus. This is surprisingly consistent in (1) the two major lag deposits (MOR loc. HC-530 and HC-312) in the Apex sandstone and Jen-rex sand, where individual bones were counted, and (2) in two thirds of the formation reflected in L3 and U3 records of dinosaur skeletons only. Triceratops is by far the most common dinosaur at 40% (n = 72), Tyrannosaurus is second at 24% (n = 44), Edmontosaurus is third at 20% (n = 36), followed by Thescelosaurus at 8% (n = 15) and Ornithomimus at 5% (n = 9), while Pachycephalosaurus and Ankylosaurus, both at 1% (n = 2), are relatively rare. Fossil footprints of dinosaurs from the Hell Creek Formation are very rare. As of 2017, there is only one find of a possible Tyrannosaurus rex footprint, dating from 2007 and described a year later. A trackway made by a mid-sized theropod, possibly a small tyrannosaurid individual, was discovered in South Dakota in 1997, and in 2014 these footprints were named Wakinyantanka styxi. Ornithischians Ankylosaurs Indeterminate nodosaur remains have been unearthed in the Hell Creek Formation and other nearby areas. 
Pachycephalosaurs An undescribed and unnamed pachycephalosaur is present in North Dakota. Pachycephalosaur remains have been unearthed in Montana, as in the case of Platytholus and the now-invalid genus Stenotholus kohleri, which is a junior synonym of Pachycephalosaurus. Ceratopsians Indeterminate ceratopsid teeth and some identifiable bones from Triceratops can be extremely common; 8.31% of all vertebrate remains from the Hell Creek Formation are unassigned ceratopsids. In 2012, a new unidentified species of chasmosaurine ceratopsian with noticeable differences from Triceratops was unearthed in South Dakota by a fossil hunter named John Carter. Ornithopods and relatives Indeterminate hadrosaurid remains are very common in the Hell Creek Formation. Theropods Theropod tracks have been found in South Dakota. A trackway from South Dakota, named Wakinyantanka, was made by a mid-sized theropod with three slender toes, possibly a small tyrannosaurid. A second footprint that may have been made by a specimen of Tyrannosaurus was first reported in 2007 by British paleontologist Phil Manning, from the Hell Creek Formation of Montana. This second track measures long, shorter than the track described by Lockley and Hunt. Whether or not the track was made by Tyrannosaurus is unclear, though Tyrannosaurus is the only large theropod known to have existed in the Hell Creek Formation; in the past, albertosaurine remains have been described here, but it is most likely that they are remains of Tyrannosaurus rex. Theropod remains are very common in Hell Creek, some of which belong to indeterminate species of maniraptorans. Alvarezsaurs Tyrannosaurids Ornithomimosaurs Ornithomimid remains are not uncommon in the Hell Creek Formation. Fifteen specimens from the Hell Creek Formation are undetermined ornithomimids. Oviraptorosaurs Oviraptorosaur fossils have been found at the Hell Creek Formation for many years, mostly known from isolated elements until the discovery of Anzu. In the past, oviraptorosaur fossils found there were thought to have belonged to Caenagnathus, Chirostenotes, and Elmisaurus. In 2016, an undescribed large-bodied caenagnathid was unearthed in Montana. Eumaniraptorans Historically, numerous teeth have been attributed to various dromaeosaurid and troodontid taxa with known body fossils from only older formations, including Saurornithoides, Zapsalis, Dromaeosaurus, Saurornitholestes, and Troodon. However, in a 2013 study, Evans et al. concluded that there is little evidence for more than a single dromaeosaurid taxon, Acheroraptor, in the Hell Creek-Lance assemblages, which would render these taxa invalid for this formation. This was contradicted in a 2015 study by DePalma et al., who described the new genus Dakotaraptor, a large species of dromaeosaur. Fossilized teeth of various troodontids and coelurosaurs are common throughout the Hell Creek Formation; the best-known examples include Paronychodon, Pectinodon and Richardoestesia, respectively. Teeth belonging to possible intermediate species between Dromaeosaurus and Saurornitholestes have been unearthed at the Hell Creek Formation and the nearby Lance Formation. Pterosaurs Undescribed pterosaur remains have been reported from North Dakota. A specimen of an azhdarchid pterosaur from Montana likely belongs to Quetzalcoatlus, though it is not diagnostic to the species level. 
Crocodylomorphs Turtles Squamata Choristoderans Mammals Multituberculates Metatherians Eutherians Flora The Hell Creek Formation was a low floodplain before the sea retreated, and in the wet ground of the dense woodland a diversity of angiosperms and conifers was present. An endless variety of herbaceous flowering plants, ferns and mosses grew in the forest understory. On the exposed point bars of large river systems there were shrubs and vines. The evidence of the forested environment is overwhelmingly supported by petrified wood, rooted gley paleosols, and ubiquitous tree leaves. The presence of simple and lobed leaves, combined with an extremely high dicot diversity, the extinct cycadeoid Nilssoniocladus, Ginkgo, many types of monocots, and several types of conifers, is different from any modern plant community. There are numerous types of leaves, seeds, flowers and other structures from angiosperms, or flowering plants. The Hell Creek Formation of this layer contains over 300 plant morphotypes, of which angiosperms are by far the most diverse and dominant group, about 90 percent, followed by about 5% conifers, 4% ferns, and others. Whereas the flora of the Hell Creek area today is prairie, the Hell Creek flora of that time was hardwood forest mixed with deciduous and evergreen forest. In sharp contrast to the Great Plains today, the presence of many thermophilous taxa such as palm trees and gingers means the climate was then warmer and wetter. The plants of the Hell Creek Formation generally represent angiosperm-dominated riparian forests of variable diversity, depending on stratigraphic position and sedimentary environment. There appear to be floral transitions visible on a stratigraphic range from the lower to the upper Hell Creek Formation. For this reason, Kirk Johnson and Leo Hickey divided it into five zones, described as HCIa, HCIb, HCIIa, HCIIb, and HCIII, as a reflection of floral change through time. For example, the HCIa zone is dominated by "Dryophyllum" subfalcatum, Leepierceia preartocarpoides, "Vitis" stantonii, and "Celastrus" taurenensis, and is located 55 to 105 meters below the K-Pg boundary layer. Although the HCIb zone is a very thin layer, about 5 meters of rock, it bears an unusually high diversity of herbaceous and shrubby plants, including Urticaceae, Ranunculaceae, Rosaceae, and Cannabaceae. There is evidence of transitional floras in the middle of the Hell Creek Formation, as shown by the HCII and HCIII zones. The HCII flora represents a transitional period in which taxa from the lower Hell Creek are replaced by the HCIII flora. The diversity of the HCIII zone is very high, and its composition is more uniform than that of HCII; many of its members were rare or absent from the zones below, and some others that used to be common below became rarer in the HCIII zone. These forms include Elatides longifolia, "Dryophyllum" tennessensis, Liriodendrites bradacii, and many members of the Laurales including Bisonia niemii, "Ficus" planicostata, and Marmarthia trivialis, while "Celastrus" taurenensis, Leepierceia preartocarpoides, and many cupressaceous conifers became rarer. This pattern suggests that the global temperature was warming during the last 300,000-500,000 years of the Cretaceous period. Johnson claims that there are no grasses, oaks, maples, beeches, figs, or willows in the Hell Creek Formation. There is no evidence of fern prairie either. 
However, there was an extremely high angiosperm diversity — common plane trees, "Dryophyllum" subfalcatum, Leepierceia preartocarpoides, and palm trees — along with the extinct cycadeoid Nilssoniocladus, Ginkgo, and araucariaceous, podocarpaceous, and cupressaceous conifers. This represents a mixed deciduous and evergreen broad-leaved forest as the Hell Creek landscape. The nature of these forests is uncertain because Johnson found that the majority of the angiosperm and conifer genera are now extinct. He also believes that very roughly 80% of the terrestrial plant taxa died out in what is now the Great Plains at the K-Pg boundary. On the other hand, there is a great increase in the abundance of fossil fern spores in the two centimeters of rock that directly overlie the impact fallout layer (the famous K-Pg boundary layer). This increase in fern spore abundance is commonly referred to as "the fern spike" (meaning that if the abundance of spores as a function of stratigraphic position were plotted out, the graph would show a spike just above the impact fallout layer). Many of the modern plant affinities in the Hell Creek Formation (e.g., those with the prefix "aff." or with quotes around the genus name) may not in reality belong to these genera; instead they could be entirely different plants that resemble modern genera. Therefore, there is some question regarding whether the modern Ficus or Juglans, as two examples, actually lived in the Late Cretaceous. Compared to the rich Hell Creek Formation fossil plant localities of the Dakotas, relatively few plant specimens have been collected from Montana. A few taxa were collected at Brownie Butte, Montana, by Shoemaker, but most plants were collected from North Dakota (Slope County) and from South Dakota. Among the localities, the Mud Buttes, located in Bowman County, North Dakota, is probably the richest megaflora assemblage known and the most diverse leaf quarry from the Hell Creek Formation. "TYPE" after the binomial means that it is represented by a type specimen found in the Yale-Peabody Museum collections. "YPM" is the prefix for the Yale-Peabody Museum specimen number; "DMNH" is for the Denver Museum of Nature & Science; "USNM" is for the Smithsonian National Museum of Natural History; and so on. The majority of Hell Creek megafloral specimens are held at the Denver Museum of Nature & Science. Overview (from Johnson, 2002) 302 plant morphotypes based on leaf only, including: 1 bryophyte (mosses and liverworts) 11 ferns 1 sphenopsid 10 conifers 1 ginkgo (uncommon) 278 angiosperms (roughly 92% of all taxa found) Paleoflora Liverworts Ferns Cycadophytes Ginkgoales Conifers Angiosperms Palynology See also List of fossil sites (with link directory) Lists of dinosaur-bearing stratigraphic units Paleobiota of the Morrison Formation Lance fauna Cretaceous-Paleogene formations Tremp Formation, Spain Lefipán Formation, Argentina Lopez de Bertodano Formation, Antarctica References Bibliography General Geology Paleontology External links Cretaceous Hell Creek Faunal Facies provides a faunal list Phillip Bigelow, "Hell Creek life: Fossil Flora & Fauna, a Paleoecosystem" Paleobiology Database: MPM locality 3850 (Hell Creek Formation): Maastrichtian, Montana Maastrichtian life Cretaceous–Paleogene boundary Danian life Natural history of Montana Natural history of North Dakota Natural history of South Dakota Natural history of Wyoming Paleontology in the United States National Natural Landmarks in Montana 
Hell Creek Paleobiota Cretaceous fossil record Hell Creek Formation Dinosaur-related lists
Paleobiota of the Hell Creek Formation
Biology
3,268
15,030,520
https://en.wikipedia.org/wiki/EMP1
Epithelial membrane protein 1 is a protein that in humans is encoded by the EMP1 gene. References Further reading Biomarkers
EMP1
Biology
29
12,566,358
https://en.wikipedia.org/wiki/V%20Aquilae
V Aquilae (V Aql) is a carbon star and semiregular variable star in the constellation Aquila. It has an apparent magnitude which varies between 6.6 and 8.4 and is located around away. V Aquilae belongs to a class of stars whose spectra are dominated by strong absorption bands of the molecules C2 and CN, hence known as carbon stars. The enhanced levels of carbon in the atmosphere originate from recently nucleosynthesized material that has been dredged up to the surface by deep convection during temporary shell-burning events known as thermal pulses. Published spectral types for the star vary somewhat, from C5,4 to C6,4, or N6 under an older system of classification. The subscript 4 refers to the strength of the molecular carbon bands in the spectrum, an indicator of the relative abundance of carbon in the atmosphere. V Aquilae is a variable star of type SRb. Its variability was first announced by George Knott in 1871. It has a published period of 400 days, but other periods are found, including 350 days and 2,270 days. References External links Image V Aquilae Aquila (constellation) Semiregular variable stars 177336 Carbon stars Aquilae, V Asymptotic-giant-branch stars 7220 093666 Durchmusterung objects
V Aquilae
Astronomy
276
4,216,735
https://en.wikipedia.org/wiki/Analyser
An analyser (British English) or analyzer (American English; see spelling differences) is a tool used to analyze data. For example, a gas analyzer is used to analyze gases. It examines the given data and tries to find patterns and relationships. An analyser can be a piece of hardware or software. Autoanalysers are machines that perform their work with little human involvement. Operation Analysis can be done directly on samples, or the analyser can process data acquired from a remote sensor. The source of samples for automatic sampling is commonly some kind of industrial process. Analysers that are connected to a process and conduct automatic sampling can be called online (or on-line) analysers or sometimes inline (or in-line) analysers. For inline analysis, a sensor can be placed in a process vessel or stream of flowing material. Another method of online analysis is allowing a sample stream to flow from the process equipment into an analyser, sometimes conditioning the sample stream, e.g. by reducing pressure or changing the sample temperature. Many analysers are not designed to withstand high pressure. Such sampling is typically for fluids (either liquids or gases). If the sample stream is not substantially modified by the analyser, it can be returned to the process. Otherwise, the sample stream is discarded, for example if reagents were added. Pressure can be lowered by a pressure-reducing valve. Such valves may be used to control the flow rate to the online analyser. The temperature of a hot sample may be lowered by use of an online sample cooler. Analysis can be done periodically (for example, every 15 minutes) or continuously. For periodic sampling, valves (or other devices) can be switched open to allow a fluid sample stream to flow to the analyser and shut when not sampling. Some methods of inline analysis, such as electrical conductivity or pH, are so simple that the instruments are usually not even called analysers. Salinity determined from simple online analysis is often derived from a conductivity measurement whose output signal is calibrated in terms of salinity concentration (for example, ppm of NaCl). Various other types of analysis can be devised. Measured physical properties can include electrical conductivity (or, equivalently, electrical resistivity), refractive index, and radioactivity. Simple processes that use inline electrical conductivity determination include water purification processes, which test how effectively salts have been removed from the output water. Electrical conductivity variations include cation and anion conductivity. Chromatography such as ion chromatography or HPLC often tests the output stream continuously by measuring electrical conductivity, particularly cation or anion conductivity, refractive index, colorimetry, or ultraviolet/visible absorbance at a certain wavelength. Online and offline analysers are available for other types of analytes. Many of these add reagents to the samples or sample streams. Types of analysers Automated analyser Breathalyzer (breath analyzer) Bus analyser Differential analyser – early analogue computer Electron microprobe Lexical analyser Logic analyser Network analyser Protocol analyser (packet sniffer) Quadrupole mass analyser Spectrum analyser Vector signal analyser References Measuring instruments
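As an illustration of the calibration idea mentioned above, the sketch below converts a measured conductivity to a salinity concentration by interpolating a calibration table; the numbers are invented for illustration, not a real NaCl calibration.

# Hypothetical calibration table: (conductivity in microsiemens/cm, ppm NaCl).
CALIBRATION = [(0.0, 0.0), (100.0, 47.0), (200.0, 95.0), (400.0, 195.0)]

def conductivity_to_ppm(us_cm):
    # Linear interpolation between neighbouring calibration points.
    for (x0, y0), (x1, y1) in zip(CALIBRATION, CALIBRATION[1:]):
        if x0 <= us_cm <= x1:
            return y0 + (y1 - y0) * (us_cm - x0) / (x1 - x0)
    raise ValueError("conductivity outside calibrated range")

print(conductivity_to_ppm(150.0))  # 71.0 ppm under this invented table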
Analyser
Technology,Engineering
682
43,295,831
https://en.wikipedia.org/wiki/Local%20attraction
In compass surveying, the magnetic needle is sometimes disturbed from its normal position under the influence of external attractive forces. Such a disturbing influence is called local attraction. The external forces are produced by sources of local attraction, which may be current-carrying wires, magnetic materials, or metal objects. The term is also used to denote the amount of deviation of the needle from its normal position. It causes errors in observations while surveying, and suitable methods are therefore employed to negate these errors. Sources The sources of local attraction may be natural or artificial. Natural sources include iron ores and magnetic rocks, whereas artificial sources include steel structures, iron pipes, and current-carrying conductors. Iron surveying instruments such as metric chains, ranging rods and arrows should also be kept at a safe distance from the compass. Detection Local attraction at a place can be detected by observing bearings from both ends of a line in the area. If the fore bearing and back bearing of a line differ by exactly 180°, there is no local attraction at either station. If this difference is not equal to 180°, then local attraction exists at one or both ends of the line, as illustrated in the sketch below. Remedies There are two common methods of correcting observed bearings of lines taken in an area affected by local attraction. The first method corrects the bearings with the help of corrected included angles; the second corrects the bearings of the traverse starting from one correct bearing (one in which the difference between fore bearing and back bearing is exactly 180°) by distributing the error to the other bearings. References Surveying Civil engineering
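A minimal Python sketch of the detection rule and the error-distribution remedy described above; the tolerance value and function names are illustrative assumptions:

```python
def local_attraction_free(fore_bearing, back_bearing, tol=0.01):
    """True when the fore and back bearings of a line differ by exactly
    180 degrees, i.e. neither station is affected by local attraction."""
    diff = (back_bearing - fore_bearing) % 360.0
    return abs(diff - 180.0) <= tol

def correct_station_bearings(bearings, correction):
    """Apply a station's local-attraction correction (in degrees) to all
    bearings observed from that station, as in the second remedy."""
    return [(b + correction) % 360.0 for b in bearings]

print(local_attraction_free(45.0, 225.0))  # True: no local attraction
print(local_attraction_free(45.0, 227.5))  # False: 2.5 degrees of error
print(correct_station_bearings([227.5, 91.0], -2.5))  # [225.0, 88.5]
```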
Local attraction
Engineering
318
30,334,381
https://en.wikipedia.org/wiki/Fictitious%20domain%20method
In mathematics, the fictitious domain method is a method to find the solution of a partial differential equation on a complicated domain , by substituting a given problem posed on a domain , with a new problem posed on a simple domain containing . General formulation Assume that in some area we want to find the solution of the equation: with boundary conditions: The basic idea of the fictitious domain method is to substitute the given problem posed on a domain , with a new problem posed on a simply shaped domain containing (). For example, we can choose an n-dimensional parallelotope as . The problem in the extended domain for the new solution : It is necessary to pose the problem in the extended domain so that the following condition is fulfilled: Simple example, 1-dimensional problem Prolongation by leading coefficients The solution of the problem: The discontinuous coefficient and the right-hand side of the previous equation are obtained from the expressions: Boundary conditions: Connection conditions at the point : where means: Equation (1) has an analytical solution, therefore we can easily obtain the error: Prolongation by lower-order coefficients The solution of the problem: where we take the same as in (3), and the expression for Boundary conditions for equation (4) are the same as for (2). Connection conditions at the point : Error: Literature P.N. Vabishchevich, The Method of Fictitious Domains in Problems of Mathematical Physics, Izdatelstvo Moskovskogo Universiteta, Moskva, 1991. Smagulov S. Fictitious Domain Method for Navier–Stokes equation, Preprint CC SA USSR, 68, 1979. Bugrov A.N., Smagulov S. Fictitious Domain Method for Navier–Stokes equation, Mathematical model of fluid flow, Novosibirsk, 1978, p. 79–90 Domain decomposition methods Applied mathematics
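The equations in this article were lost in extraction; the following is a schematic LaTeX restatement of the general formulation under common conventions for the method. The operator L, data f and g, and the penalty parameter ε are assumed notation, not the article's original symbols:

```latex
% Original problem, posed on a complicated domain \omega with boundary \partial\omega:
%   L u = f   in \omega,      u = g   on \partial\omega .
% Auxiliary problem, posed on a simple enclosing domain \Omega \supseteq \omega
% (for example, an n-dimensional parallelotope):
\begin{aligned}
  L_{\varepsilon} u_{\varepsilon} &= f_{\varepsilon} && \text{in } \Omega,\\
  u_{\varepsilon} &= 0 && \text{on } \partial\Omega,
\end{aligned}
% with the extended coefficients and right-hand side chosen so that
%   u_{\varepsilon} \to u  in  \omega  as  \varepsilon \to 0 .
```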
Fictitious domain method
Mathematics
365
864,149
https://en.wikipedia.org/wiki/Arrangement%20of%20lines
In geometry, an arrangement of lines is the subdivision of the Euclidean plane formed by a finite set of lines. An arrangement consists of bounded and unbounded convex polygons (the cells of the arrangement), line segments and rays (the edges of the arrangement), and points where two or more lines cross (the vertices of the arrangement). When considered in the projective plane rather than in the Euclidean plane, every two lines cross, and an arrangement is the projective dual to a finite set of points. Arrangements of lines have also been considered in the hyperbolic plane, and generalized to pseudolines, curves that have similar topological properties to lines. The initial study of arrangements has been attributed to an 1826 paper by Jakob Steiner. An arrangement is said to be simple when at most two lines cross at each vertex, and simplicial when all cells are triangles (including the unbounded cells, as subsets of the projective plane). There are three known infinite families of simplicial arrangements, as well as many sporadic simplicial arrangements that do not fit into any known family. Arrangements have also been considered for infinite but locally finite systems of lines. Certain infinite arrangements of parallel lines can form simplicial arrangements, and one way of constructing the aperiodic Penrose tiling involves finding the dual graph of an arrangement of lines forming five parallel subsets. The maximum numbers of cells, edges, and vertices, for arrangements with a given number of lines, are quadratic functions of the number of lines. These maxima are attained by simple arrangements. The complexity of other features of arrangements has been studied in discrete geometry; these include zones, the cells touching a single line, and levels, the polygonal chains having a given number of lines passing below them. Roberts's triangle theorem and the Kobon triangle problem concern the minimum and maximum number of triangular cells in a Euclidean arrangement, respectively. Algorithms in computational geometry are known for constructing the features of an arrangement in time proportional to the number of features, and space linear in the number of lines. Researchers have also studied efficient algorithms for constructing smaller portions of an arrangement, and for problems such as the shortest path problem on the vertices and edges of an arrangement. Definition As an informal thought experiment, consider cutting an infinite sheet of paper along finitely many lines. These cuts would partition the paper into convex polygons. Their edges would be one-dimensional line segments or rays, with vertices at the points where two cut lines cross. This can be formalized mathematically by classifying the points of the plane according to which side of each line they are on. Each line produces three possibilities per point: the point can be in one of the two open half-planes on either side of the line, or it can be on the line. Two points can be considered to be equivalent if they have the same classification with respect to all of the lines. This is an equivalence relation, whose equivalence classes are subsets of equivalent points. These subsets subdivide the plane into shapes of the following three types: The cells or chambers of the arrangement are two-dimensional regions not part of any line. They form the interiors of bounded convex polygons or unbounded convex regions. These are the connected components of the points that would remain after removing all points on lines. 
The edges or panels of the arrangement are one-dimensional regions belonging to a single line. They are the open line segments and open infinite rays into which each line is partitioned by its crossing points with the other lines. That is, if one of the lines is cut by all the other lines, these are the connected components of its uncut points. The vertices of the arrangement are isolated points belonging to two or more lines, where those lines cross each other. The boundary of a cell is the system of edges that touch it, and the boundary of an edge is the set of vertices that touch it (one vertex for a ray and two for a line segment). The system of objects of all three types, linked by this boundary operator, form a cell complex covering the plane. Two arrangements are said to be isomorphic or combinatorially equivalent if there is a one-to-one boundary-preserving correspondence between the objects in their associated cell complexes. The same classification of points, and the same shapes of equivalence classes, can be used for infinite but locally finite arrangements, defined as arrangements in which every bounded subset of the plane is crossed by finitely many lines. In this case the unbounded cells may have infinitely many sides. Complexity of arrangements It is straightforward to count the maximum numbers of vertices, edges, and cells in an arrangement, all of which are quadratic in the number of lines: An arrangement with n lines has at most n(n - 1)/2 vertices (a triangular number), one per pair of crossing lines. This maximum is attained for simple arrangements, those in which each two lines cross at a vertex that is disjoint from all the other lines. The number of vertices is smaller when some lines are parallel, or when some vertices are crossed by more than two lines. An arrangement can be rotated, if necessary, to avoid axis-parallel lines. After this step, each ray that forms an edge of the arrangement extends either upward or downward from its endpoint; it cannot be horizontal. There are n downward rays, one per line, and these rays separate n + 1 cells of the arrangement that are unbounded in the downward direction. The remaining cells all have a unique bottommost vertex (again, because there are no axis-parallel lines). For each pair of lines, there can be only one cell where the two lines meet at the bottom vertex, so the number of downward-bounded cells is at most the number of pairs of lines, n(n - 1)/2. Adding the unbounded and bounded cells, the total number of cells in an arrangement can be at most 1 + n + n(n - 1)/2. These are the numbers of the lazy caterer's sequence. The number of edges of the arrangement is at most n², as may be seen either by using the Euler characteristic to calculate it from the numbers of vertices and cells, or by observing that each line is partitioned into at most n edges by the other n - 1 lines. Simple arrangements have exactly n² edges. More complex features go by the names of "zones", "levels", and "many faces": The zone of a line ℓ in a line arrangement is the collection of cells having edges belonging to ℓ. The zone theorem states that the total number of edges in the cells of a single zone is linear. More precisely, the total number of edges of the cells belonging to a single side of ℓ is O(n), as is the total number of edges of the cells belonging to both sides. More generally, the total complexity of the cells of a line arrangement that are intersected by any convex curve is O(n α(n)), where α denotes the inverse Ackermann function, as may be shown using Davenport–Schinzel sequences. 
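A brute-force check of the vertex and edge counts above, in a hypothetical Python sketch: for n random lines in general position, the number of crossings should be n(n - 1)/2 and the number of edges n², matching the formulas; the line representation and rounding tolerance are illustrative assumptions.

```python
import itertools
import random

def arrangement_counts(lines):
    """Count vertices and edges of an arrangement of lines y = a*x + b,
    assumed in general position (no parallels, no triple crossings)."""
    vertices = set()
    crossings_per_line = [0] * len(lines)
    for (i, (a1, b1)), (j, (a2, b2)) in itertools.combinations(enumerate(lines), 2):
        x = (b2 - b1) / (a1 - a2)          # crossing point of lines i and j
        vertices.add((round(x, 9), round(a1 * x + b1, 9)))
        crossings_per_line[i] += 1
        crossings_per_line[j] += 1
    # k crossings cut a line into k + 1 edges (segments and rays)
    edges = sum(k + 1 for k in crossings_per_line)
    return len(vertices), edges

n = 8
lines = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(n)]
v, e = arrangement_counts(lines)
assert v == n * (n - 1) // 2 and e == n * n
print(v, e)  # 28 64
```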
The sum of squares of cell complexities in an arrangement is O(n²), as can be shown by summing the zones of all lines. The k-level of an arrangement is the polygonal chain formed by the edges that have exactly k other lines directly below them. The ≤k-level is the portion of the arrangement below the k-level. Finding matching upper and lower bounds for the complexity of a k-level remains a major open problem in discrete geometry. The best upper bound known is O(n(k + 1)^(1/3)), while the best lower bound known is n · 2^(Ω(√log k)). In contrast, the maximum complexity of the ≤k-level is known to be Θ(n(k + 1)). A k-level is a special case of a monotone path in an arrangement; that is, a sequence of edges that intersects any vertical line in a single point. However, monotone paths may be much more complicated than k-levels: there exist arrangements and monotone paths in these arrangements where the number of points at which the path changes direction can be as large as n^(2 - o(1)). Although a single cell in an arrangement may be bounded by all n lines, it is not possible in general for m different cells to all be bounded by n lines. Rather, the total complexity of m cells is at most O(m^(2/3) n^(2/3) + n), almost the same bound as occurs in the Szemerédi–Trotter theorem on point-line incidences in the plane. A simple proof of this follows from the crossing number inequality: if m cells have a total of x edges, one can form a graph with m nodes (one per cell) and about x edges (one per pair of consecutive cells on the same line). The edges of this graph can be drawn as curves that do not cross within the cells corresponding to their endpoints, and then follow the lines of the arrangement. Therefore, there are O(n²) crossings in this drawing. However, by the crossing number inequality, there are Ω(x³/m²) crossings. In order to satisfy both bounds, x must be O(m^(2/3) n^(2/3)), up to lower-order terms. Projective arrangements and projective duality It is convenient to study line arrangements in the projective plane as every pair of lines has a crossing point. Line arrangements cannot be defined using the sides of lines, because a line in the projective plane does not separate the plane into two distinct sides. One may still define the cells of an arrangement to be the connected components of the points not belonging to any line, the edges to be the connected components of sets of points belonging to a single line, and the vertices to be points where two or more lines cross. A line arrangement in the projective plane differs from its Euclidean counterpart in that the two Euclidean rays at either end of a line are replaced by a single edge in the projective plane that connects the leftmost and rightmost vertices on that line, and in that pairs of unbounded Euclidean cells are replaced in the projective plane by single cells that are crossed by the projective line at infinity. Due to projective duality, many statements about the combinatorial properties of points in the plane may be more easily understood in an equivalent dual form about arrangements of lines. For instance, the Sylvester–Gallai theorem, stating that any non-collinear set of points in the plane has an ordinary line containing exactly two points, transforms under projective duality to the statement that any projective arrangement of finitely many lines with more than one vertex has an ordinary point, a vertex where only two lines cross. The earliest known proof of the Sylvester–Gallai theorem, by Eberhard Melchior, uses the Euler characteristic to show that such a vertex must always exist. Triangles in arrangements An arrangement of lines in the projective plane is said to be simplicial if every cell of the arrangement is bounded by exactly three edges. Simplicial arrangements were first studied by Melchior. 
Three infinite families of simplicial line arrangements are known: A near-pencil consisting of n - 1 lines through a single point, together with a single additional line that does not go through the same point, The family of lines formed by the sides of a regular polygon together with its axes of symmetry, and The sides and axes of symmetry of an even regular polygon, together with the line at infinity. Additionally there are many other examples of sporadic simplicial arrangements that do not fit into any known infinite family. As Branko Grünbaum writes, simplicial arrangements "appear as examples or counterexamples in many contexts of combinatorial geometry and its applications." For instance, simplicial arrangements form counterexamples to a conjecture on the relation between the degree of a set of differential equations and the number of invariant lines the equations may have. The two known counterexamples to the Dirac–Motzkin conjecture (which states that any arrangement of n lines has at least n/2 ordinary points) are both simplicial. The dual graph of a line arrangement has one node per cell and one edge linking any pair of cells that share an edge of the arrangement. These graphs are partial cubes, graphs in which the nodes can be labeled by bitvectors in such a way that the graph distance equals the Hamming distance between labels. In the case of a line arrangement, each coordinate of the labeling assigns 0 to nodes on one side of one of the lines and 1 to nodes on the other side. Dual graphs of simplicial arrangements have been used to construct infinite families of 3-regular partial cubes, isomorphic to the graphs of simple zonohedra. It is also of interest to study the extremal numbers of triangular cells in arrangements that may not necessarily be simplicial. Any arrangement of n lines in the projective plane must have at least n triangles. Every arrangement that has only n triangles must be simple. For Euclidean rather than projective arrangements, the minimum number of triangles is n - 2, by Roberts's triangle theorem. The maximum possible number of triangular faces in a simple arrangement is known to be upper bounded by n(n - 1)/3 and lower bounded by n(n - 3)/3; the lower bound is achieved by certain subsets of the diagonals of a regular 2n-gon. For projective arrangements that are not required to be simple, there exist arrangements with triangles for all , and all arrangements with have at most triangles. The closely related Kobon triangle problem asks for the maximum number of non-overlapping finite triangles in an arrangement in the Euclidean plane, not counting the unbounded faces that might form triangles in the projective plane. Again, the arrangements are not required to be simple. For some but not all values of n there exist arrangements with ⌊n(n - 2)/3⌋ triangles. Multigrids and rhombus tilings The dual graph of a simple line arrangement may be represented geometrically as a collection of rhombi, one per vertex of the arrangement, with sides perpendicular to the lines that meet at that vertex. These rhombi may be joined together to form a tiling of a convex polygon in the case of an arrangement of finitely many lines, or of the entire plane in the case of a locally finite arrangement with infinitely many lines. This construction is sometimes known as a Klee diagram, after a publication of Rudolf Klee in 1938 that used this technique. Not every rhombus tiling comes from lines in this way, however. N. G. de Bruijn investigated special cases of this construction in which the line arrangement consists of sets of equally spaced parallel lines. 
For two perpendicular families of parallel lines this construction gives the square tiling of the plane, and for three families of lines at 120-degree angles from each other (themselves forming a trihexagonal tiling) this produces the rhombille tiling. However, for more families of lines this construction produces aperiodic tilings. In particular, for five families of lines at equal angles to each other (or, as de Bruijn calls this arrangement, a pentagrid) it produces a family of tilings that include the rhombic version of the Penrose tilings. There also exist three infinite simplicial arrangements formed from sets of parallel lines. The tetrakis square tiling is an infinite arrangement of lines forming a periodic tiling that resembles a multigrid with four parallel families, but in which two of the families are more widely spaced than the other two, and in which the arrangement is simplicial rather than simple. Its dual is the truncated square tiling. Similarly, the triangular tiling is an infinite simplicial line arrangement with three parallel families, which has as its dual the hexagonal tiling, and the bisected hexagonal tiling is an infinite simplicial line arrangement with six parallel families and two line spacings, dual to the great rhombitrihexagonal tiling. These three examples come from three affine reflection groups in the Euclidean plane, systems of symmetries based on reflection across each line in these arrangements. Algorithms Constructing an arrangement means, given as input a list of the lines in the arrangement, computing a representation of the vertices, edges, and cells of the arrangement together with the adjacencies between these objects. For instance, these features may be represented as a doubly connected edge list. Arrangements can be constructed efficiently by an incremental algorithm that adds one line at a time to the arrangement of the previously added lines. Each new line can be added in time proportional to the size of its zone, which is linear by the zone theorem. This results in a total construction time of O(n²). The memory requirements of this algorithm are also O(n²). It is possible instead to report the features of an arrangement without storing them all at once, in O(n²) time and O(n) space, by an algorithmic technique known as topological sweeping. Computing a line arrangement exactly requires a numerical precision several times greater than that of the input coordinates: if a line is specified by two points on it, the coordinates of the arrangement vertices may need four times as much precision as these input points. Therefore, computational geometers have also studied algorithms for constructing arrangements with limited numerical precision. As well, researchers have studied efficient algorithms for constructing smaller portions of an arrangement, such as zones, or the set of cells containing a given set of points. The problem of finding the arrangement vertex with the median x-coordinate arises (in a dual form) in robust statistics as the problem of computing the Theil–Sen estimator of a set of points, as sketched below. Marc van Kreveld suggested the algorithmic problem of computing shortest paths between vertices in a line arrangement, where the paths are restricted to follow the edges of the arrangement, more quickly than the quadratic time that it would take to apply a shortest path algorithm to the whole arrangement graph. 
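The Theil–Sen connection mentioned above is easy to state in code. A brute-force Python sketch (not the efficient arrangement-based algorithm): the estimator is the median of the pairwise slopes, and under point-line duality each slope is the x-coordinate of an arrangement vertex, so this finds the vertex with the median x-coordinate.

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(points):
    """Median of pairwise slopes over all O(n^2) pairs of points.
    In the dual line arrangement, each slope is the x-coordinate of a
    vertex, so this picks out the vertex with the median x-coordinate."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(points, 2)
              if x2 != x1]
    return median(slopes)

print(theil_sen_slope([(0, 0.1), (1, 1.2), (2, 1.9), (3, 3.05)]))  # ~0.95
```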
An approximation algorithm is known, and the problem may be solved efficiently for lines that fall into a small number of parallel families (as is typical for urban street grids), but the general problem remains open. Non-Euclidean line arrangements A pseudoline arrangement is a family of curves that share similar topological properties with a line arrangement. These can be defined in the projective plane as simple closed curves any two of which meet in a single crossing point. A pseudoline arrangement is said to be stretchable if it is combinatorially equivalent to a line arrangement. Determining stretchability is a difficult computational task: it is complete for the existential theory of the reals to distinguish stretchable arrangements from non-stretchable ones. Every arrangement of finitely many pseudolines can be extended so that they become lines in a "spread", a type of non-Euclidean incidence geometry in which every two points of a topological plane are connected by a unique line (as in the Euclidean plane) but in which other axioms of Euclidean geometry may not apply. Another type of non-Euclidean geometry is the hyperbolic plane, and arrangements of lines in this geometry have also been studied. Any finite set of lines in the Euclidean plane has a combinatorially equivalent arrangement in the hyperbolic plane (e.g. by enclosing the vertices of the arrangement by a large circle and interpreting the interior of the circle as a Klein model of the hyperbolic plane). However, parallel (non-crossing) pairs of lines are less restricted in hyperbolic line arrangements than in the Euclidean plane: in particular, the relation of being parallel is an equivalence relation for Euclidean lines but not for hyperbolic lines. The intersection graph of the lines in a hyperbolic arrangement can be an arbitrary circle graph. The corresponding concept to hyperbolic line arrangements for pseudolines is a weak pseudoline arrangement, a family of curves having the same topological properties as lines such that any two curves in the family either meet in a single crossing point or have no intersection. History In a survey on arrangements, Pankaj Agarwal and Micha Sharir attribute the study of arrangements to Jakob Steiner, writing that "the first paper on this topic is perhaps" an 1826 paper of Steiner. In this paper, Steiner proved bounds on the maximum number of features of different types that an arrangement may have. After Steiner, the study of arrangements turned to higher-dimensional arrangements of hyperplanes, focusing on their overall structure and on single cells in these arrangements. The study of arrangements of lines, and of more complex features such as zones within these arrangements, returned to interest beginning in the 1980s as part of the foundations of computational geometry. See also Configuration (geometry), an arrangement of lines and a set of points with all lines containing the same number of points and all points belonging to the same number of lines Arrangement (space partition), a partition of the plane given by overlaid curves or of a higher dimensional space by overlaid surfaces, without requiring the curves or surfaces to be flat Mathematical Bridge, a bridge in Cambridge, England whose beams form an arrangement of tangent lines to its arch Notes References External links Database of Combinatorially Different Line Arrangements Discrete geometry Euclidean plane geometry
Arrangement of lines
Mathematics
4,107
32,096,489
https://en.wikipedia.org/wiki/British%20Constructional%20Steelwork%20Association
BCSA Ltd is a trade association for the structural steel industry in the UK and Ireland. It lobbies on behalf of its members, and provides them with education and technical services. A subsidiary, Steel Construction Certification Scheme Ltd, runs the UKAS accredited Steel Construction Certificate Scheme (SCCS). It provides certification for steelwork contracting organisations under ISO 9001, ISO 3834, ISO 14001 and ISO 45001. The association, its marketing initiative Steel for Life Ltd, and the Steel Construction Institute manage an online resource, Steel Construction Info. In addition to its London headquarters, it maintains offices near Doncaster Sheffield Airport. History The association arose from a series of mergers involving regional and sector specific associations. Five steelwork contractors in Manchester began to collaborate in 1906, and then formally established the Steelwork Society in 1908. The Rules were only finalised in 1911. Steel producers had benefited from trade associations as a forum to collude on pricing, and steelwork contractors sought the same advantages. Similar groups established themselves around the country, and joint meetings were held. In the early 1930s the British Steelwork Association operated from London as a national, federated association funded by, and representing, the local associations. The British Constructional Steelwork Association was formed, in 1936, to succeed the British Steelwork Association. In return for recognition from the steel manufacturers in raw material negotiations, their fabrication subsidiaries were permitted to join the new association. Membership immediately jumped from 92 to 159. In 1966 The British Constructional Steelwork Association Ltd incorporated to take over all the activities of the British Constructional Steelwork Association, Bridge and Constructional Ironwork Association, London Constructional Engineers Association, Midland Structural Association, Scottish Structural Steel Association, Steelwork Society, Northern Ireland Steelwork Association, and Structural Export Association. The name changed to BCSA Ltd in 1990 though it commonly operates under the name of a subsidiary called the British Constructional Steelwork Association Ltd, incorporated at that time. Membership of the association was initially limited to structural steel contractors until 1987, when other companies that shared the association's objects began to be admitted as associates. The rules of the association were amended accordingly in 1994. The British Constructional Steelwork Association Ltd purchased a 99 year lease on its Whitehall Court headquarters in 1989 for £610,000. It previously operated from nearby premises at 35 Old Queen Street. Price fixing Collusion on pricing had been an important part of early trade associations in the iron and steel industries. Trade associations of structural steel contractors were no different, and even then this was controversial. The British Constructional Steelwork Association identify instances of members of their predecessor organisations, cautious about the legality of these schemes, hiding behind code names and numbers. Association practice was to share tender lists for contracts, and where that consisted wholly of members, to add % to the tender price of the chosen contractor, to be shared amongst the other members on the tender list. During the 1920s, economic pressures encouraged almost all structural steel contractors to join the associations. 
Tenders were routinely member only, significantly curtailing competition. Some contractors were alleged to have joined tender lists with no intention of bidding, merely to claim their share of the %. Government imposed prohibitive tariffs on imported fabricated steel in 1932. Real competition to the structural steel contractors came only from domestic steel manufacturers with their own, in house, fabrication capability, and emerging construction techniques with reinforced concrete. The 1936 arrangement to admit fabrication subsidiaries of steel manufacturers to the association drew them also into the cartel. During the Second World War the Ministry of Supply enforced control on maximum structural steel prices through an Iron and Steel Control department. Post-war, it was common for structural steel contractors to submit identical bids in response to tenders. Government became more concerned with anti-competitive behaviour, and the structural steel industry's highly developed, overt bid rigging received particular attention. The Monopolies and Restrictive Practices Commission launched an investigation and the industry was required to register its practices under the Restrictive Trade Practices Act 1956. Registration provided for further scrutiny. The Registrar promptly challenged restrictions on trade, and price fixing, imposed by the British Constructional Steelwork Association upon its members, under the new Restrictive Practices Court Act 1958. The judgement rejected arguments that the measures offered useful protections, and held them to be void. The association undertook thenceforth to engage only in co-operation between its members, rather than price fixing and collusion. In 1995, the association launched their Register of Qualified Steelwork Contractors with a stated aim to readily enable identification of appropriate steelwork contractors, and thereby ensure competition takes place. Structural Steel Design Awards In 1969 the association set up its Structural Steel Design Awards. Recent recipients include: Coat of arms The association was granted a coat of arms in 1987. The shield is a helmet on a background of red lines representing a framework of girders, and the crest is a red lion symbolising the strength of steel, and also British nationality. The lion is dotted with gold bezants representing fair dealing in commerce; the yellow, blazing torch, held aloft by the lion, represents the association's enlightening message that structures should be of steel, not concrete, and the crest, atop a red and gold torse, is set within a circle of steel ingots. The motto depicted on the arms is Strength and Stability, intended as reference to both the association and structural steel. The crest is used in the association's logo. Membership Full members Full members are contractors that pay a levy to the association based on their sales of relevant steelwork in the prior year. Present full members include: Past full members include: Associate members Associate members are suppliers to structural steel contractors, and others with an interest in the industry's operation. 
Recent associate members include: See also British Iron and Steel Federation References External links Official website Steel Construction Info Organisations based in the City of Westminster Steel companies of the United Kingdom Construction trade groups based in the United Kingdom Structural steel Organizations established in 1936 1936 establishments in the United Kingdom Metallurgical industry of the United Kingdom Price fixing convictions Private companies limited by guarantee of the United Kingdom Private companies limited by guarantee of England
British Constructional Steelwork Association
Chemistry,Engineering
1,230
23,213,003
https://en.wikipedia.org/wiki/Stage%20%28hydrology%29
In hydrology, stage is the water level in a river or stream with respect to a chosen reference height. It is commonly measured in units of feet. Stage is important because direct measurements of river discharge are very difficult while water surface elevation measurements are comparatively easy. In order to convert stage into discharge, a rating curve is needed. To construct or check such a curve, hydrologists can use a combination of tracer studies, observations of high water marks, numerical modeling, and satellite or aerial photography. See also Hydraulic head Stream gauge References Hydrology Vertical position
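Rating curves are often fitted as power laws. A minimal Python sketch of such a conversion; the coefficients a and b and the zero-flow stage h0 below are illustrative assumptions, not values from any real gauge:

```python
def discharge_from_stage(h, a=25.0, b=1.6, h0=0.4):
    """Power-law rating curve Q = a * (h - h0)**b, a common fitted form.
    h  : stage (water level) in feet
    h0 : stage at which flow ceases (datum offset)
    Returns discharge Q in cubic feet per second."""
    if h <= h0:
        return 0.0
    return a * (h - h0) ** b

for stage_ft in (1.0, 2.0, 5.0):
    print(stage_ft, round(discharge_from_stage(stage_ft), 1))
```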
Stage (hydrology)
Physics,Chemistry,Engineering,Environmental_science
107
897,102
https://en.wikipedia.org/wiki/Fill%20trestle
A fill trestle or filling trestle is a temporary construction trestle that is built to provide a scaffolding for the placement of fill or an earthen dam. Typically, the trestle is built across a valley and a railway is laid across it. Specially designed side-dumping railroad cars filled with earth or gravel are pushed onto it and dumped, burying the trestle. Typically, a fill trestle is constructed out of wood, which remains buried in the fill and eventually decomposes. Advances in construction technology, particularly the development of the dump truck, have rendered the fill trestle technique obsolete. References External links "Toils on Weak Soils: A Photo-essay on the Construction of the Stockwood Fill (1906 - 1909), Part I" on the Northern Pacific Railway Building engineering Railway bridges
Fill trestle
Engineering
165
1,243,189
https://en.wikipedia.org/wiki/USB%20mass%20storage%20device%20class
The USB mass storage device class (also known as USB MSC or UMS) is a set of computing communications protocols, specifically a USB Device Class, defined by the USB Implementers Forum that makes a USB device accessible to a host computing device and enables file transfers between the host and the USB device. To a host, the USB device acts as an external hard drive; the protocol set interfaces with a number of storage devices. Uses Devices connected to computers via this standard include: External magnetic hard drives External optical drives, including CD and DVD reader and writer drives USB flash drives Solid-state drives Adapters between standard flash memory cards and USB connections Digital cameras Portable media players Card readers PDAs Mobile phones Devices supporting this standard are known as MSC (Mass Storage Class) devices. While MSC is the original abbreviation, UMS (Universal Mass Storage) has also come into common use. Operating system support Most mainstream operating systems include support for USB mass storage devices; support on older systems is usually available through patches. Microsoft Windows Microsoft Windows has supported MSC since Windows 2000. There is no support for USB supplied by Microsoft in Windows before Windows 95 and Windows NT 4.0. Windows 95 OSR2.1, an update to the operating system, featured limited support for USB. During that time no generic USB mass-storage driver was produced by Microsoft (including for Windows 98), and a device-specific driver was needed for each type of USB storage device. Third-party, freeware drivers became available for Windows 98 and Windows 98SE, and third-party drivers are also available for Windows NT 4.0. Windows 2000 has support (via a generic driver) for standard USB mass-storage devices; Windows Me and all later Windows versions also include support. Windows Mobile supports accessing most USB mass-storage devices formatted with FAT on devices with USB Host. However, portable devices typically cannot provide enough power for hard-drive disk enclosures (a hard drive typically requires the maximum 2.5 W in the USB specification) without a self-powered USB hub. A Windows Mobile device cannot display its file system as a mass-storage device unless the device implementer adds that functionality. However, third-party applications add MSC emulation to most WM devices (commercial Softick CardExport and free WM5torage). Only memory cards (not internal-storage memory) can generally be exported, due to file-system issues; see device access, below. The AutoRun feature of Windows worked on all removable media, allowing USB storage devices to become a portal for computer viruses. Beginning with Windows 7, Microsoft limited AutoRun to CD and DVD drives, updating previous Windows versions. MS-DOS MS-DOS and most compatible operating systems did not include support for USB. Third-party generic drivers, such as Duse, USBASPI and DOSUSB, are available to support USB mass-storage devices. FreeDOS supports USB mass storage as an Advanced SCSI Programming Interface (ASPI) interface. Classic Mac OS and macOS Apple's Mac OS 9 and macOS support USB mass storage; Mac OS 8.6 supported USB mass storage through an optional driver. Linux The Linux kernel has supported USB mass-storage devices since version 2.3.47 (2001, backported to kernel 2.2.18). 
This support includes quirks and silicon/firmware bug workarounds as well as additional functionality for devices and controllers (vendor-enabled functions such as ATA command pass-through for ATA-USB bridges, used for S.M.A.R.T. or temperature monitoring, controlling the spin-up and spin-down of hard disk drives, and other options). Mobile devices running Android 6 or higher also support USB mass storage through dual-role USB on USB-C ports, and USB-OTG on older ports. Other Unix-related systems Solaris has supported USB mass-storage devices since its version 2.8 (1998), NetBSD since its version 1.5 (2000), FreeBSD since its version 4.0 (2000) and OpenBSD since its version 2.7 (2000). Digital UNIX (later known as Tru64 UNIX) has supported USB and USB mass-storage devices since its version 4.0E (1998). AIX has supported USB mass-storage devices since its 5.3 T9 and 6.1 T3 versions; however, it is not well-supported and lacks features such as partitioning and general blocking. Game consoles and embedded devices The Xbox 360 and PlayStation 3 support most mass-storage devices for the data transfer of media such as pictures and music. As of April 2010, the Xbox 360 used a mass-storage device for saved games and the PS3 allowed transfers between devices on a mass-storage device. Independent developers have released drivers for the TI-84 Plus and TI-84 Plus Silver Edition to access USB mass-storage devices. In these calculators, the usb8x driver supports the msd8x user-interface application. Device access The USB mass-storage specification provides an interface to a number of industry-standard command sets, allowing a device to disclose its subclass. In practice, there is little support for specifying a command set via its subclass; most drivers only support the SCSI transparent command set, designating their subset of the SCSI command set with their SCSI Peripheral Device Type (PDT). Subclass codes specify the following command sets: Reduced Block Commands (RBC) SFF-8020i, MMC-2 (used by ATAPI-style CD and DVD drives) QIC-157 (tape drives) Uniform Floppy Interface (UFI) SFF-8070i (used by ARMD-style devices) SCSI transparent command set (use "inquiry" to obtain the PDT) The specification does not require a particular file system on conforming devices. Based on the specified command set and any subset, it provides a means to read and write sectors of data (similar to the low-level interface used to access a hard drive). Operating systems may treat a USB mass-storage device like a hard drive; users may partition it in any format (such as MBR and GPT), and format it with any file system. Because of its relative simplicity, the most common file system on embedded devices such as USB flash drives, cameras, or digital audio players is Microsoft's FAT or FAT32 file system (with optional support for long filenames). However, USB mass storage devices may be formatted with any other file system, such as NTFS on Windows NT, HFS Plus on macOS, ext2 on Linux, or Unix File System on Solaris or BSD. This choice may limit (or prevent) access to a device's contents by equipment using a different operating system. OS-dependent storage options include LVM, partition tables and software encryption. In cameras, MP3 players and similar devices which must access a file system independent of an external host, the FAT32 file system is preferred by manufacturers. 
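As a concrete illustration of how a host recognizes the command sets listed above, here is a hypothetical Python sketch interpreting a USB interface descriptor triple. The class, subclass, and protocol constants (0x08 for mass storage, 0x06 for the SCSI transparent command set, 0x50 for bulk-only transport, 0x62 for UAS) are standard values, but the function and its tables are illustrative only:

```python
SUBCLASS_COMMAND_SETS = {
    0x01: "Reduced Block Commands (RBC)",
    0x02: "SFF-8020i / MMC-2 (ATAPI CD/DVD)",
    0x03: "QIC-157 (tape)",
    0x04: "Uniform Floppy Interface (UFI)",
    0x05: "SFF-8070i (ARMD)",
    0x06: "SCSI transparent command set",
}

def describe_msc_interface(cls, subclass, protocol):
    """Interpret the (bInterfaceClass, bInterfaceSubClass,
    bInterfaceProtocol) triple from a USB interface descriptor."""
    if cls != 0x08:
        return "not a mass-storage interface"
    command_set = SUBCLASS_COMMAND_SETS.get(subclass, "unknown command set")
    transport = {0x50: "bulk-only transport",
                 0x62: "USB Attached SCSI (UAS)"}.get(protocol,
                                                      "other/legacy transport")
    return f"{command_set} over {transport}"

# A typical USB flash drive reports (0x08, 0x06, 0x50):
print(describe_msc_interface(0x08, 0x06, 0x50))
```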
All such devices halt (dismount) their file system before making it available to a host operating system to prevent file-system corruption or other damage (although it is theoretically possible for both devices to use read-only mode or a cluster file system). Some devices have a write-protection switch (or option) allowing them to be used in read-only mode. Two main partitioning schemes are used by vendors of pre-formatted devices. One puts the file system (usually FAT32) directly on the device without partitioning, making it start from sector 0 without additional boot sectors, headers or partitions. The other uses a DOS partition table (and MBR code), with one partition spanning the entire device. This partition is often aligned to a high power of two of the sectors (such as 1 or 2 MB), common in solid state drives for performance and durability. Some devices with embedded storage resembling a USB mass-storage device (such as MP3 players with a USB port) will report a damaged (or missing) file system if they are reformatted with a different file system. However, most default-partition devices may be repartitioned (by reducing the first partition and file system) with additional partitions. Such devices will use the first partition for their own operations; after connecting to the host system, all partitions are available. Devices connected by a single USB port may function as multiple USB devices, one of which is a USB mass-storage device. This simplifies distribution and access to drivers and documentation, primarily for the Microsoft Windows and Mac OS X operating systems. Such drivers are required to make full use of the device, usually because it does not fit a standard USB class or has additional functionality. An embedded USB mass-storage device makes it possible to install additional drivers without CD-ROM disks, floppies or Internet access to a vendor website; this is important, since many modern systems are supplied without optical or floppy drives. Internet access may be unavailable because the device provides network access (wireless, GSM or Ethernet cards). The embedded USB mass storage is usually made permanently read-only by the vendor, preventing accidental corruption and use for other purposes (although it may be updated with proprietary protocols when performing a firmware upgrade). Advantages of this method of distribution are lower cost, simplified installation and ensuring driver portability. Design Some advanced hard disk drive commands, such as Tagged Command Queuing and Native Command Queuing (which may increase performance), ATA Secure Erase (which allows all data on the drive to be securely erased) and S.M.A.R.T. (accessing indicators of drive reliability) exist as extensions to low-level drive command sets such as SCSI, ATA and ATAPI. These features may not work when the drives are placed in a disk enclosure that supports a USB mass-storage interface. Some USB mass-storage interfaces are generic, providing basic read-write commands; although that works well for basic data transfers with devices containing hard drives, there is no simple way to send advanced, device-specific commands to such USB mass-storage devices (though devices may create their own communication protocols over a standard USB control interface). The USB Attached SCSI (UAS) protocol, introduced in USB 3.0, fixes several of these issues, including command queuing, command pipes for hardware requiring them, and power management. 
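A hypothetical Python sketch of reading the second partitioning scheme described earlier in this article (a DOS/MBR partition table). The 512-byte sector layout used here (four 16-byte partition entries at offset 446, 0x55AA signature at offset 510, little-endian LBA fields) is the standard MBR format; the image file name is illustrative:

```python
import struct

def parse_mbr(sector0):
    """Parse the four primary partition entries from an MBR boot sector."""
    if len(sector0) < 512 or sector0[510:512] != b"\x55\xaa":
        raise ValueError("missing 0x55AA boot signature: not an MBR")
    partitions = []
    for i in range(4):
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        status, ptype = entry[0], entry[4]
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        if ptype != 0 and num_sectors > 0:
            partitions.append({"index": i, "bootable": status == 0x80,
                               "type": hex(ptype), "lba_start": lba_start,
                               "sectors": num_sectors})
    return partitions

# Example usage against a disk image (illustrative file name):
with open("usb_drive.img", "rb") as f:
    print(parse_mbr(f.read(512)))
```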
Specific USB 2.0 chipsets had proprietary methods of achieving SCSI pass-through, which could be used to read S.M.A.R.T. data from drives using tools such as smartctl (using the -d option followed by the chipset type). More recent USB storage chipsets support the SCSI / ATA Translation (SAT) as a generic protocol for interacting with ATA (and SATA) devices. Using esoteric ATA or SCSI pass-through commands (such as secure-erase or password protection) when a drive is connected via a USB bridge may cause drive failure, especially with the hdparm utility. See also List of USB Device Classes Disk encryption software Media Transfer Protocol Picture Transfer Protocol SCSI / ATA Translation References Further reading From the USB Implementers Forum website: Mass Storage Class Specification Overview 1.4 Mass Storage Bootability Specification 1.0 "Mass Storage Bulk Only 1.0" External links USB Mass Storage Device source code in FreeBSD What actually happens when you plug in a USB device? Linux kernel internals Computer storage buses Computer storage devices Mass storage device class
USB mass storage device class
Technology
2,370
75,187,709
https://en.wikipedia.org/wiki/Cagrilintide
Cagrilintide is a long-acting analogue of amylin. It is being tested to treat obesity and type 2 diabetes by itself and in combination with semaglutide as cagrilintide/semaglutide. Research A systematic review and meta-analysis of CagriSema, published in 2024, found that CagriSema may provide weight loss benefits. References Amylin receptor agonists Experimental diabetes drugs Cyclic peptides
Cagrilintide
Chemistry
96
66,541,188
https://en.wikipedia.org/wiki/Bovista%20paludosa
Bovista paludosa is a species of fungus belonging to the family Lycoperdaceae. It is native to Eurasia. References Lycoperdaceae Taxa named by Joseph-Henri Léveillé Fungi of Europe Fungi of Asia Fungi described in 1846 Fungus species
Bovista paludosa
Biology
57
19,287,081
https://en.wikipedia.org/wiki/Clearing%20House%20Automated%20Transfer%20System
The Clearing House Automated Transfer System, or CHATS, is a real-time gross settlement (RTGS) system for the transfer of funds in Hong Kong. It is operated by Hong Kong Interbank Clearing Limited (HKICL), a limited-liability private company jointly owned by the Hong Kong Monetary Authority (HKMA) and the Hong Kong Association of Banks. Transactions in four currency denominations may be settled using CHATS: Hong Kong dollar, renminbi, euro, and US dollar. In 2005, the value of Hong Kong dollar CHATS transactions averaged HK$467 billion per day, which amounted to a third of Hong Kong's annual Gross Domestic Product (GDP); the total value of transactions that year was 84 times the GDP of Hong Kong. CHATS has been referred to by authors at the Bank for International Settlements as "the poster child of multicurrency offshore systems". History Prior to the launch of CHATS as an RTGS system, interbank settlements in Hong Kong relied on a multi-tier system which settled on a daily net basis. About 170 banks settled with ten clearing banks. These ten banks, in turn, settled with Hongkong Bank, which then settled with the HKMA on a one-to-one basis. Hongkong Bank acted as the clearing house under this system, settling payments across its books on a net basis on the day following the transactions. The HKMA decided that this did not meet international standards as set by G10's Committee on Payment and Settlement Systems; following a six-month feasibility study, in June 1994, it decided to develop CHATS as an RTGS system. After two years of development, CHATS for Hong Kong dollars was launched on 9 December 1996. CHATS for US dollars and euros were launched on 21 August 2000 and 28 April 2003, respectively. In July 2007, the Regional CHATS Payment Services was also launched to link all participants in the three different CHATS versions for transactions involving currency exchange. Features CHATS, like other RTGS systems, settles payment transactions on a "real time" and "gross" basis: payments are not subject to any waiting period and each transaction is settled in a one-to-one manner such that it is not bunched with other transactions. It is a single-tier system where participants settle with one central clearing house. Payments are final, irrevocable, and settled immediately if the participant's settlement account with the clearing house has sufficient funds. Daylight overdraft is not offered in CHATS; payments that cannot be settled due to insufficient funds are queued. Banks are able to alter, cancel, and re-sequence payments in their queues, as illustrated in the sketch below. Banks can obtain interest-free intraday liquidity through repurchase agreements (repos). This prevents banks from having to maintain large balances in their settlement accounts, which accrue no interest, to cover their payments. Intraday repos that are not reversed at the end of the business day are carried into overnight borrowing. To access RTGS system functions, banks must connect to the SWIFT network to initiate/receive payment instructions, and access eMBT provided by HKICL for performing administrative functions on the respective payment instructions. Hong Kong dollar CHATS The HKMA, which is Hong Kong's central banking institution, acts as the clearing house for Hong Kong dollar (HKD) CHATS. All licensed banks in Hong Kong maintain HKD settlement accounts with the HKMA, and as of June 2000, restricted license banks "with a clear business need" may also open settlement accounts with the HKMA. 
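A toy Python sketch of the gross-settlement-with-queueing behaviour described above; the bank names, balances, and amounts are invented for illustration, and real CHATS logic (repos, re-sequencing, overnight carry-over) is far richer:

```python
from collections import deque

def settle_rtgs(balances, payments):
    """Settle payments one by one on a gross basis; a payment that cannot
    be covered by the sender's balance is queued and retried after each
    successful settlement (no netting, no overdraft)."""
    queue = deque(payments)
    stalled = 0
    while queue and stalled < len(queue):
        sender, receiver, amount = queue.popleft()
        if balances[sender] >= amount:
            balances[sender] -= amount            # final and irrevocable
            balances[receiver] += amount
            stalled = 0                           # retry queued payments
        else:
            queue.append((sender, receiver, amount))  # insufficient funds
            stalled += 1
    return balances, list(queue)                  # leftovers stay queued

balances = {"A": 50, "B": 10}
payments = [("B", "A", 45), ("A", "B", 40)]       # B's payment must wait
print(settle_rtgs(balances, payments))            # ({'A': 55, 'B': 5}, [])
```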
The volume of transactions in HKD CHATS in 2007, in raw number of transactions, totaled 5,499,494. The total value of all transactions conducted in the same year was about HK$217 trillion. As of 28 December 2015, there are 156 participating banks with HKD CHATS. US dollar CHATS Unlike HKD CHATS, the clearing house for US dollar (USD) CHATS is a commercial bank, Hongkong and Shanghai Banking Corporation (HSBC). In addition to obtaining intraday repos for payment settlement, participating banks may also obtain intraday liquidity via an overdraft facility provided by HSBC. Banks in Hong Kong are not required to participate in USD CHATS; they may choose to join as direct participants or indirect participants. Direct participants maintain USD settlement accounts with HSBC for payment transactions in CHATS. Indirect participants must conduct their payment transactions through direct participants. Additionally, a membership category called Indirect CHATS Users exists, whose member banks also conduct their payment transactions through direct participants. The volume of transactions in USD CHATS in 2007, in raw number of transactions, totaled 2,121,058. The total value of all transactions conducted in the same year was about US$2,127 billion. As of 28 December 2015, USD CHATS has 100 direct participant member banks, 24 indirect participant member banks, 87 Indirect CHATS User member banks, and eight Third Party User member banks. Euro CHATS Euro CHATS is structured similarly to USD CHATS. Its clearing house is also a commercial bank, Standard Chartered Hong Kong. Participating banks may obtain intraday liquidity via an overdraft facility provided by Standard Chartered Bank. Like USD CHATS, banks in Hong Kong are not required to participate in Euro CHATS. Euro CHATS has two categories of membership, Direct Participants and Indirect CHATS Users; they function in the same manner as the categories in USD CHATS. The volume of transactions in Euro CHATS in 2007, in raw number of transactions, totaled 18,169. The total value of all transactions conducted in the same year was about €280 billion. As of 28 December 2015, Euro CHATS has 37 Direct Participant member banks and 18 Indirect CHATS User member banks. Renminbi CHATS CHATS uses Bank of China (Hong Kong) as its clearing house for settling renminbi payments. Bank of China has a settlement account on the China National Advanced Payment System (CNAPS), allowing renminbi CHATS to effectively work as an extension of the Chinese system. See also Faster Payment System, a newer, cheaper and round-the-clock RTGS system aimed at the general public in Hong Kong, which also supports fund transfers from, to, or between electronic payment and digital wallet operators. Fedwire (US) Clearing House Interbank Payments System (US) Clearing House Automated Payments System (UK) TARGET Services (EU) Indian Settlement Systems Society for Worldwide Interbank Financial Telecommunication References Banking in Hong Kong Real-time gross settlement
Clearing House Automated Transfer System
Technology
1,339
60,851,083
https://en.wikipedia.org/wiki/Maria%20Falkenberg
Maria Falkenberg is a professor of medical biochemistry at the Sahlgrenska Academy of the University of Gothenburg, Sweden. She has made important contributions to understanding how the mitochondrial genome is maintained in health and disease. Career Falkenberg received her doctorate from the University of Gothenburg in 2000 for her work on the molecular aspects of DNA replication in Herpes Simplex virus type 1 in the Per Elias group in Gothenburg and the Robert Lehman group at Stanford University. She studied the mechanisms of mitochondrial DNA replication during her postdoctoral fellowship from 2001 to 2003 in the Nils-Göran Larsson laboratory at the Karolinska Institute, where she then established her own group in 2003. She was the first to set up a cell-free system to study human mitochondrial DNA replication. Her research is funded by, among others, the European Research Council (ERC), the Swedish Research Council (VR), and the Knut and Alice Wallenberg Foundation (KAW). Awards 2019 Appointed Wallenberg Scholar by the Knut and Alice Wallenberg Foundation (KAW) 2017 Election to the Royal Society of Arts and Sciences in Gothenburg (KVVS) 2015 Election to the Royal Swedish Academy of Sciences (KVA) 2012 The Edlunska Prize 2009 Sven och Ebba-Christina Hagbergs Prize in Chemistry 2009 The Fernström Prize 2008 Research Fellow of the Royal Swedish Academy of Sciences 2008 Future Research Leader Award, Swedish Foundation for Strategic Research (SSF) Selected publications Copy-choice recombination during mitochondrial L-strand synthesis causes DNA deletions. Persson Ö, Muthukumar Y, Basu S, Jenninger L, Uhler JP, Berglund AK, McFarland R, Taylor RW, Gustafsson CM, Larsson E, Falkenberg M. Nat Commun. 2019 Feb 15;10(1):759. doi: 10.1038/s41467-019-08673-5. Nucleotide pools dictate the identity and frequency of ribonucleotide incorporation in mitochondrial DNA. Berglund AK, Navarrete C, Engqvist MK, Hoberg E, Szilagyi Z, Taylor RW, Gustafsson CM, Falkenberg M, Clausen AR. PLoS Genet. 2017 Feb 16;13(2):e1006628. doi: 10.1371/journal.pgen.1006628. Maintenance and Expression of Mammalian Mitochondrial DNA. Gustafsson CM, Falkenberg M, Larsson NG. Annu Rev Biochem. 2016 Jun 2;85:133-60. doi: 10.1146/annurev-biochem-060815-014402. In vitro-reconstituted nucleoids can block mitochondrial DNA replication and transcription Géraldine Farge, Majda Mehmedovic, Marian Baclayon, Siet MJL van den Wildenberg, Wouter H Roos, Claes M Gustafsson, Gijs JL Wuite, Maria Falkenberg Cell Rep. 2014 Jul 10;8(1):66-74. doi: 10.1016/j.celrep.2014.05.046. In vivo occupancy of mitochondrial single-stranded DNA binding protein supports the strand displacement mode of DNA replication. Miralles Fusté J, Shi Y, Wanrooij S, Zhu X, Jemt E, Persson Ö, Sabouri N, Gustafsson CM, Falkenberg M. PLoS Genet. 2014 Dec 4;10(12):e1004832. doi: 10.1371/journal.pgen.1004832. References External links Year of birth missing (living people) Living people Academic staff of the University of Gothenburg University of Gothenburg alumni Swedish women scientists Swedish biochemists Women biochemists 21st-century Swedish women scientists
Maria Falkenberg
Chemistry
805
36,736,192
https://en.wikipedia.org/wiki/Japanese%20design%20law
Japanese design law is determined by the Design Act. Under this Act, only registered designs are legally protected, and it stipulates the procedure for obtaining a design registration in the Japan Patent Office. Protection for unregistered designs is provided by the Unfair Competition Prevention Act. The Act was amended in 2019 to expand its scope of protection to graphic images and to the interior and exterior designs of architectural structures, to extend the protection term to 25 years from the filing date, and to accept multiple-design filings. Definitions A design is defined as any of the following subject matters "which creates an aesthetic impression through the eye": the shape, patterns or colors, or any combination thereof, of an article (including a part of an article), the shape of an architectural structure (including a part of an architectural structure), or the graphic image, which is provided for use in the operation of the article or is displayed as a result of performance of the article. Designs may be subject to protection if: they are novel, that is, if no identical design has been made available to the public before the filing date, and they are not easily created on the basis of publicly known designs or motifs. English translation An official English-language translation of the law does not exist, but the Japanese Ministry of Justice's website, under the Japanese Law Translation section, provides users with Japanese laws and their unofficial English translation. IP laws such as the Patent Act, Copyright Act, Trademark Act, Design Act and the Unfair Competition Prevention Act are included there. In addition, the J-PlatPat offers the public access to IP Gazettes of the Japan Patent Office (JPO) free of charge through the internet. Reliable information on Japanese IP law in English is also provided by the websites of the Intellectual Property High Court, Japan Patent Office, Transparency of Japanese Law Project, European Patent Office, and the Institute of Intellectual Property (IIP) of Japan. See also Japanese patent law Japanese copyright law Japanese trademark law Japanese law References External links Japanese Law Translation - The website of the Ministry of Justice Japan, in which you can search for Japanese laws and their English translation. Intellectual Property laws such as Patent Act, Copyright Act, Trademark Act, Design Act, Unfair Competition Prevention Act, etc. are included. Intellectual Property High Court Jurisdiction Statistics Summary of Cases - You can search for English summaries of IP cases in all the instances. Publications - Presentation and theses on IP in English by Japanese judges. Japan Patent Office - Handling not only patent and utility models but also designs and trademarks. The website contains the information on procedures for obtaining those IP rights. JPlatPat - Offering the public access to IP Gazettes of the Japan Patent Office (JPO) free of charge. Japanese Copyright Law and Japanese Patent Law - As part of the Transparency of Japanese Law Project, provides overviews and explanations of Japanese copyrights and patents. The website also contains information on corporate law, contract law, finance law, insolvency law, arbitration law and civil litigation law in Japan. Institute of Intellectual Property (IIP) of Japan Translated Books - Free access to English-translated Japanese literature regarding Japanese Patent Law and Trademark Law. Patent information from Japan - On the European Patent Office web site. Japanese intellectual property law Industrial design
Japanese design law
Engineering
656
5,284,746
https://en.wikipedia.org/wiki/Sustainable%20city
A sustainable city, eco-city, or green city is a city designed with consideration for social, economic, and environmental impact (commonly referred to as the triple bottom line), as well as to be a resilient habitat for existing populations, in a way that does not compromise the ability of future generations to experience the same. The UN Sustainable Development Goal 11 defines sustainable cities as those dedicated to achieving green, social, and economic sustainability. They are committed to this objective by facilitating opportunities for all through a design that prioritizes inclusivity, as well as by maintaining sustainable economic growth. A further objective is to minimize the inputs of energy, water, and food, and to drastically reduce waste, as well as the outputs of heat, air pollution (including CO2 and methane), and water pollution. Richard Register, a visual artist, first coined the term ecocity in his 1987 book Ecocity Berkeley: Building Cities for a Healthy Future, in which he offers innovative city planning solutions that would work anywhere. Other leading figures who envisioned sustainable cities are the architect Paul F Downton, who later founded the company Ecopolis Pty Ltd, and the authors Timothy Beatley and Steffen Lehmann, who have written extensively on the subject. The field of industrial ecology is sometimes used in planning these cities. The UN Environment Programme notes that most cities today are struggling with environmental degradation, traffic congestion, and inadequate urban infrastructure, in addition to a lack of basic services such as water supply, sanitation, and waste management. A sustainable city should promote economic growth and meet the basic needs of its inhabitants, while creating sustainable living conditions for all. Ideally, a sustainable city is one that creates an enduring way of life across the four domains of ecology, economics, politics, and culture. The European Investment Bank is assisting cities in the development of long-term strategies in fields including renewable transportation, energy efficiency, sustainable housing, education, and health care, and has spent more than €150 billion on improving cities over the last eight years. Cities occupy just 3 percent of the Earth's land but account for 60 to 80 percent of energy consumption and at least 70 percent of carbon emissions. Creating safe, resilient, and sustainable cities is therefore one of the top priorities of the Sustainable Development Goals. The Adelaide City Council states that socially sustainable cities should be equitable, diverse, connected, and democratic, and should provide a good quality of life. Priorities of a sustainable city include the ability to feed itself with a sustainable reliance on the surrounding natural environment, and the ability to power itself with renewable sources of energy, while creating the smallest conceivable ecological footprint and the lowest quantity of pollution achievable. All of this is to be accomplished through efficient land use, composting of organic matter, recycling of used materials, and/or conversion of waste to energy. The idea is that these contributions will lead to a decrease in the city's impact on climate change.
Today, an estimated 55 percent of the world's population lives in urban areas, and the United Nations estimates that by 2050 that number will rise to 70 percent. By 2050 there may be nearly 2.5 billion more people living in urban areas, which may make it more difficult to create sustainable communities. These large communities provide both challenges and opportunities for environmentally conscious developers, and there are distinct advantages to further defining and working towards the goals of sustainable cities. Humans thrive in urban spaces that foster social connections. Richard Florida, an urban studies theorist, focuses on the social impact of sustainable cities and states that cities need more than a competitive business climate; they should promote a great people climate that appeals to individuals and families of all types. A shift to denser urban living would therefore provide an outlet for social interaction and conditions under which humans can prosper. These types of urban areas would also promote the use of public transit, walking, and cycling, which would benefit citizens' health as well as the environment.
Practical methods to create sustainable cities
Different agricultural systems, such as agricultural plots within the city (suburbs or centre). This reduces the distance food has to travel from field to fork. It may be done through either small-scale/private farming plots or larger-scale agriculture (e.g. farmscrapers).
Renewable energy sources, such as wind turbines, solar panels, or bio-gas created from sewage, to reduce and manage pollution. Cities provide economies of scale that make such energy sources viable.
Various methods to reduce the need for air conditioning (a massive energy demand), such as passive daytime radiative cooling applications, planting trees and lightening surface colors, natural ventilation systems, an increase in water features, and green spaces equaling at least 20% of the city's surface. These measures counter the "heat island effect" caused by an abundance of tarmac and asphalt, which can make urban areas several degrees warmer than surrounding rural areas, by as much as six degrees Celsius during the evening.
Improved public transport and an increase in pedestrianization to reduce car emissions. This requires a radically different approach to city planning, with integrated business, industrial, and residential zones. Roads may be designed to make driving difficult.
Optimal building density to make public transport viable while avoiding the creation of urban heat islands.
Green roofs, which alter the surface energy balance and can help mitigate the urban heat island effect. Incorporating eco roofs or green roofs in a design helps with air quality, climate, and water runoff.
Zero-emission transport.
Zero-energy building, to reduce energy consumption and greenhouse gas emissions through the use of renewable energy sources.
Sustainable urban drainage systems (SUDS), in addition to other systems to reduce and manage waste.
Energy conservation systems and devices.
Xeriscaping – garden and landscape design for water conservation.
Sustainable transport, incorporating five elements: fuel economy, occupancy, electrification, pedal power, and urbanization.
Circular economy, to combat inefficient resource patterns and ensure a sustainable production and consumption roadmap.
Increased cycling infrastructure, which would increase cycling within cities, reduce the number of cars being driven and, in turn, reduce car emissions. This would also benefit the health of citizens, who would be able to get more exercise through cycling.
Key performance indicators – a development and operational management tool providing guidance and measurement and verification (M&V) to help city administrators monitor and evaluate energy savings in various facilities.
Sustainable Sites Initiative (SSI) – voluntary national guidelines and performance benchmarks for sustainable land design, construction and maintenance practices. Key areas of focus are soil, vegetation, hydrology, materials, and human health and well-being.
Sustainable cities create safe spaces for their inhabitants through various means, such as:
Solutions to decrease urban sprawl by seeking new ways of allowing people to live closer to the workplace. Since the workplace tends to be in the city, downtown, or urban center, planners seek to increase density by changing the antiquated attitudes many suburbanites have towards inner-city areas. One of the new ways to achieve this is through the solutions worked out by the Smart Growth Movement.
Educating city residents about the importance and positive impacts of living in a more sustainable city, in order to boost the initiative for sustainable developments and encourage people to live in a more sustainable and environmentally friendly way.
Policy and planning changes to meet the unmet demand for urban services (water, energy, transport).
With regard to methods of emissions counting, cities can be challenging because the production of goods and services within their territory can be related either to domestic consumption or to exports; conversely, citizens also consume imported goods and services. To avoid double counting, any emissions calculation should make clear where the emissions are to be counted: at the site of production or at the site of consumption. This may be complicated given the long production chains of a globalized economy. Moreover, the embodied energy and the consequences of the large-scale raw material extraction required for renewable energy systems and electric vehicle batteries are likely to present their own complications: local emissions at the site of utilization are likely to be very small, but life-cycle emissions can still be significant. The sketch below makes the two accounting conventions concrete.
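The following Python sketch is purely illustrative: the figures are invented for a hypothetical city, and the `CityEmissions` structure is an assumption rather than any standard reporting API. It shows the usual relationship between the two conventions, in which consumption-based emissions equal territorial (production-based) emissions minus the emissions embodied in exports plus the emissions embodied in imports.

```python
from dataclasses import dataclass

@dataclass
class CityEmissions:
    """Annual emissions flows for a hypothetical city, in Mt CO2e (invented figures)."""
    produced_in_territory: float  # everything physically emitted inside the city boundary
    embodied_in_exports: float    # share of territorial emissions tied to exported goods/services
    embodied_in_imports: float    # emissions released elsewhere to make the city's imports

    def production_based(self) -> float:
        # Territorial convention: count all emissions released inside the boundary.
        return self.produced_in_territory

    def consumption_based(self) -> float:
        # Consumption convention: remove emissions embodied in exports and add
        # those embodied in imports; counting a flow under both conventions at
        # once would be exactly the double counting the text warns about.
        return (self.produced_in_territory
                - self.embodied_in_exports
                + self.embodied_in_imports)

city = CityEmissions(produced_in_territory=10.0,
                     embodied_in_exports=4.0,
                     embodied_in_imports=6.0)

print(f"Production-based:  {city.production_based():.1f} Mt CO2e")   # 10.0
print(f"Consumption-based: {city.consumption_based():.1f} Mt CO2e")  # 12.0
```

Under these invented numbers, a city importing more embodied emissions than it exports looks worse under the consumption convention, while a manufacturing-heavy exporter would show the opposite pattern; this is why a city's climate plan should state which convention it uses.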
Architecture
Buildings provide the infrastructure for a functioning city and allow many opportunities to demonstrate a commitment to sustainability. A commitment to sustainable architecture encompasses all phases of building, including planning, construction, and restructuring. The Sustainable Sites Initiative is used by landscape architects, designers, engineers, architects, developers, policy-makers, and others to align land development and management with innovative sustainable design.
Eco-industrial park
UNIDO (the United Nations Industrial Development Organization) defines an eco-industrial park as a community of businesses located on a common property in which the businesses seek to achieve enhanced environmental, economic, and social performance through collaboration in managing environmental and resource issues. This is an industrial symbiosis in which companies gain an added benefit by physically exchanging materials, energy, water, and by-products, thus enabling sustainable development. The collaboration reduces environmental impact while simultaneously improving the economic performance of the area. The components of an eco-industrial park include natural systems, more efficient use of energy, and more efficient material and water flows. Industrial parks should be built to fit into their natural settings in order to reduce environmental impacts, which can be accomplished through plant design, landscaping, and choice of materials. For instance, an industrial park in Michigan built by Phoenix Designs is made almost entirely from recycled materials; its landscaping includes native trees, grasses, and flowers, and the landscaping design also acts as a climate shelter for the facility. In choosing the materials for an eco-industrial park, designers must consider the life-cycle analysis of each medium that goes into the building to assess its true impact on the environment. More efficient energy use can include cascading energy from one plant to another, steam connections from firms to provide heating for homes in the area, and the use of renewable energy such as wind and solar power. In terms of material flows, the companies in an eco-industrial park may have common waste treatment facilities or a means of transporting by-products from one plant to another, or the park may be anchored around resource recovery companies that are recruited to the location or started from scratch. To create more efficient water flows, the processed water from one plant can be reused by another, and the park's infrastructure can include a way to collect and reuse stormwater runoff.
Examples
Recycled Park in Rotterdam, the Netherlands
The Recycled Park in Rotterdam, the second-largest city in the Netherlands, is an initiative of the Recycled Island Foundation, a Netherlands-based organization focused on recycling littered waste through its iconic island-parks, among other sustainable projects. Rotterdam's Recycled Park is a cluster of floating, green hexagonal "islands" composed of reused litter, which the group collects from the Maas River using a system of passive litter traps. The park's location on the Maas River reflects a circular process aimed at creating a more sustainable city. On the underside of the park are materials that support the growth of plants and wildlife indigenous to the area. This interest in growing the biodiversity of Rotterdam's natural elements is reflected in other cities as well: Chicago's Urban Rivers organization is similarly building and growing the Wild Mile of floating parks and forests along the Chicago River with the goal of revegetation. Both organizations' interest in improving their areas' biodiversity reflects an interest in greening the built urbanism of the surrounding city. Rotterdam's Recycled Park may suggest a greater trend towards floating structures in response to climate-change-motivated impacts. The Floating Farm in Rotterdam sustainably approaches food production and transport, and other floating structures include renewable-energy-powered houseboats and luxury residences some 800 meters from the coast. The Dutch city of Amsterdam likewise boasts a neighbourhood of artificial floating islands in the suburb of IJburg. The idea of expanding both commercial enterprise and residential development onto the water often reflects the demand to limit land usage in urban areas.
This has various wide-reaching environmental impacts: it can reduce the aggregation of the urban heat-island effect, the zoning effort expended on engineering and regulating the floodplain (and, potentially, the capacity of waste-water reservoirs), and the demands of automobile infrastructure. The Recycled Park is a holistic approach to limiting the expense of waste. Its greenery has air-purifying effects that reduce pollution, and the modular, hexagonal design allows each "island" to be reconfigured; the space thus offers environmental sustainability as well as open space for community growing and other social opportunities.
Urban farming
Urban farming is the process of growing and distributing food, as well as raising animals, in and around a city or in urban areas. According to the RUAF Foundation, urban farming differs from rural agriculture because it is integrated into the urban economic and ecological system: urban agriculture is embedded in, and interacts with, the urban ecosystem. Such linkages include the use of urban residents as the key workers; the use of typical urban resources (such as organic waste as compost, or urban wastewater for irrigation); direct links with urban consumers; direct impacts on urban ecology (positive and negative); being part of the urban food system; competing for land with other urban functions; and being influenced by urban policies and plans. One motivation for urban agriculture in sustainable cities is saving the energy that would otherwise be used in food transportation. Urban farming infrastructure can include common areas for community gardens or farms, as well as common areas for farmers' markets at which the food grown within the city can be sold to residents of the urban system. Tiny forests, or miniature forests, are a new concept in which many trees are grown on a small patch of land. These forests are said to grow ten times faster and thirty times denser than larger forests, with one hundred times the biodiversity, and to be completely organic. The ratios of the shrub layer, sub-tree layer, tree layer, and canopy layer of a miniature forest, along with the percentage of each tree species, are planned and fixed before planting so as to promote biodiversity.
New Urbanism
The most clearly defined form of walkable urbanism is known as the Charter of New Urbanism. It is an approach for successfully reducing environmental impacts by altering the built environment to create and preserve smart cities that support sustainable transport. Residents of compact urban neighbourhoods drive fewer miles and have significantly lower environmental impacts, across a range of measures, than those living in sprawling suburbs. The concept of circular flow land use management has also been introduced in Europe to promote sustainable land use patterns that strive for compact cities and a reduction in the greenfield land taken by urban sprawl. Sustainable architecture, a recent movement within New Classical Architecture, promotes a sustainable approach to construction that appreciates and develops smart growth, walkability, vernacular tradition, and classical design. It stands in contrast to modernist and globally uniform architecture, and opposes solitary housing estates and suburban sprawl. Both trends started in the 1980s.
Individual buildings (LEED)
The Leadership in Energy and Environmental Design (LEED) Green Building Rating System, an internationally recognized green building certification system, encourages and accelerates the global adoption of sustainable green building and development practices through the creation and implementation of universally understood and accepted tools and performance criteria. LEED recognizes whole-building sustainable design by identifying key areas of excellence, including Sustainable Sites, Water Efficiency, Energy and Atmosphere, Materials and Resources, Indoor Environmental Quality, Locations & Linkages, Awareness and Education, Innovation in Design, and Regional Priority. For a building to become LEED certified, sustainability needs to be prioritized in design, construction, and use. One example of sustainable design is the use of a certified wood such as bamboo, which is fast growing and regenerates quickly after being harvested. By far the most credits are awarded for optimizing energy performance, which promotes innovative thinking about alternative forms of energy and encourages increased efficiency. A new district in Helsinki, Finland, is being built almost entirely from timber, in the form of laminated veneer lumber (LVL) that meets high standards of fire resistance. The idea is that wood construction has a much smaller footprint than concrete and steel construction; the project is thus expected to take Finland's timber architecture to new heights of sustainability.
Sustainable Sites Initiative (SSI)
The Sustainable Sites Initiative, a combined effort of the American Society of Landscape Architects, the Lady Bird Johnson Wildflower Center at The University of Texas at Austin, and the United States Botanic Garden, is a voluntary national guideline and performance benchmark for sustainable land design, construction, and maintenance practices. The building principles of SSI are to design with nature and culture; to use a decision-making hierarchy of preservation, conservation, and regeneration; to use a systems-thinking approach; to provide regenerative systems; to support a living process; to use a collaborative and ethical approach; to maintain integrity in leadership and research; and to foster environmental stewardship. All of these help promote solutions to common environmental issues such as greenhouse gases, urban climate issues, water pollution and waste, energy consumption, and the health and wellbeing of site users. The main foci are hydrology, soils, vegetation, materials, and human health and well-being. In SSI, the main goals for hydrology are to protect and restore existing hydrologic functions, to design stormwater features that are accessible to site users, and to manage and clean water on site. In the site design of soil and vegetation, many steps can be taken during the construction process to minimize urban heat island effects and to reduce building heating requirements through the use of plants.
Regenerative Architecture
Regenerative architecture is usually applied to remediate brownfield sites, but it can encompass a broader mindset of helping an ecosystem, region, or site recover during the lifetime of a structure, during both construction and operation. Regenerative architecture tends to require buildings to sustain themselves, including generating their own sources of power and water.
However, it is essential to acknowledge that a structure should only consume what it can recover, while also facilitating an area for regeneration. This design mindset differs from sustainability in that it seeks to contribute the most to an environment instead of reducing the most harm (an efficiency paradigm). This calls for a more holistic engagement with a singular site rather than broad assumptions about a general ecology. Regenerative architecture also extends beyond ecological concerns and can encompass improving social value. Since brownfields typically lie near or within human settlements, regenerative design can enhance human well-being as a site for engagement while also considering ecological needs. It is a way of synchronizing stewardship towards recovery and resilience through design, while also considering the social and economic dimensions of these problems. Regenerative "refers to a process that repairs, recreates or revitalizes its own sources of energy or air, water or any other matter." For design, this means considering the impacts of products (or by-products) from cradle to grave and the cycle of resource consumption throughout these processes. A positive-impact building is a regenerative one. Examples include producing "more energy & treated water than the building consumes . . . the ability to provide habitat for lost wildlife and plant species, restore the natural hydrology by recharging the groundwater system, compost waste, and create opportunities for urban agriculture." Since these designs are capable of creating sustenance, they can be considered more economically viable, less dependent, and more resilient. Converting unused industrial spaces into accessible green parks is a modest way of achieving regeneration, as in the Phra Pok Klao Sky Park (a green park in the congested city of Bangkok) and the New York High Line.
The Regenerative Paradigm
The Anthropocene era encompasses the detrimental effects that humans have had on pollution, biodiversity, and climate. In the building sector, structures contributed "40% of carbon emission, 14% of water consumption and 60% of waste production worldwide" in 2006. The term sustainability, largely publicized in the 1987 Brundtland Report, was a vital yardstick for institutions and governments in acknowledging the impact humans have made, and it generated a stream of thought in which ecosystems became considerations in national agendas. The design lexicon has expanded over time "from issues of ecology, habitat, energy or pollution to address waste, lifecycle, community, sustainability and climate change", with notions of "organic or natural design . . . replaced by green, environmental, sustainable or resilient building." Still, the definition under which sustainable development "meets the needs of the present without compromising the ability of future generations to meet their own needs" gears towards harm reduction, while offering enough flexibility for regions to develop their own specific guidelines. The 2013 Intergovernmental Panel on Climate Change (IPCC) report made the scientific and public communities aware that the sustainable-efficiency paradigm is leading towards a degenerative cycle. The Anthropocene era calls for action leading toward regeneration, to reverse the impacts humans have caused instead of merely minimizing harm and maximizing efficiency. Since regenerative architecture seeks to restore an ecological site, it acknowledges that recovery and remediation are ongoing.
Indigenous peoples and their methods of vernacular architecture have achieved perspectives on material sourcing similar to those of regenerative architecture, and the regenerative mindset includes bridging the human-nature paradox across the scope, complexity, and diversity of needs of modern structures.
Principles
Regenerative architecture can implement various standards such as Life Cycle Assessments and Building Environmental Assessments (like LEED); however, regeneration is an ongoing activity, so it remains contingent on ecological results. Regenerative architecture can use existing standards and principles to situate regeneration in a contemporary sustainability context, but it should extend beyond these frameworks to quantify various ecological impacts during the lifetime of a building. Sustainability manifests in various forms of standardization and testing, from frameworks such as Life Cycle Analysis (LCA), which assesses the entire life cycle of materials, to industry-specific systems such as Building Environmental Assessments (BEAs), which consider broader areas of building and living performance in order to simplify integration within the industry. BEAs reflect comprehensive (often esoteric) LCA principles through a simplified credit-weighing scale encompassing building environments and living performance; these areas apply more directly to architecture and are more accessible to decision-makers. Such frameworks are very helpful in the design and construction phases, and regenerative frameworks can help extend these concepts towards future ecological resilience and evolution. Considerations include the safety and accountability of material sourcing, the reusability of materials, renewable energy and carbon management, water impact, and social fairness.
Eco-cities
Eco-cities are rooted in various urban planning traditions, including the early garden city movement initiated by Ebenezer Howard. These early efforts sought self-contained, green, and interconnected communities. In the latter 20th century, a broader understanding of ecological systems prompted the need for cities to address their ecological impact both locally and globally, and concepts such as "urban metabolism" and McHarg's ecological site planning emerged. The term "ecocity" was coined by Richard Register in the 1980s during the rise of sustainability concerns, as outlined in the Brundtland Commission Report. Sustainability in urban planning focuses on inter-generational equity, environmental protection, and more. In the 2000s, resilience became a key perspective, highlighting the importance of ecological and social resilience in cities facing climate change challenges.
Transportation
As a major focus of sustainable cities, sustainable transportation attempts to reduce a city's reliance on and use of greenhouse-gas-emitting transport through eco-friendly urban planning, low-environmental-impact vehicles, and residential proximity, creating an urban center with greater environmental responsibility and social equity. Poor transportation systems lead to traffic jams and high levels of pollution. Given the significant impact that transportation services have on a city's energy consumption, the last decade has seen an increasing emphasis on sustainable transportation among development experts. Transportation systems currently account for nearly a quarter of the world's energy consumption and carbon dioxide emissions.
To reduce the environmental impact of transportation in metropolitan areas, sustainable transportation rests on three widely agreed-upon pillars for creating healthier and more productive urban centers. The Carbon Trust states that there are three main ways cities can innovate to make transport more sustainable without increasing journey times: better land use planning, modal shift to encourage people to choose more efficient forms of transport, and making existing transport modes more efficient.
Car free city
The concept of a car-free city, or a city with large pedestrian areas, is often part of the design of a sustainable city; since a large part of a city's carbon footprint is generated by cars, the car-free concept is often considered integral to it. Large parts of London are to be made car-free to allow people to walk and cycle safely following the COVID-19 lockdown. Similarly, 47 miles of bike lanes are planned to be opened in Bogotá, Colombia, in addition to the existing 75-mile network of streets that was recently made traffic-free all week. New urbanism frees residents of Masdar City, UAE, from automobiles and makes walkable, sustainable communities possible by integrating everyday facilities such as plazas and sidewalks into the neighborhoods. Public transit systems such as the Group Rapid Transit and the Metro provide direct access to wide areas of Masdar, as well as to Abu Dhabi's CBD and other parts of the city. The COVID-19 pandemic gave rise to proposals for radical change in the organisation of the city, such as the Manifesto for the Reorganisation of the City after COVID19, published in Barcelona by the architecture and urban theorist Massimo Paolini and signed by 160 academics and 300 architects, with the elimination of the car as one of its key elements.
Emphasis on proximity
Created by eco-friendly urban planning, the concept of urban proximity is an essential element of current and future sustainable transportation systems. It requires that cities be built and extended with appropriate population and landmark density so that destinations are reached with less time in transit. Reduced time in transit lowers fuel expenditure and also opens the door to alternative means of transportation such as cycling and walking. Furthermore, close proximity between residents and major landmarks allows for efficient public transportation by eliminating long, sprawled-out routes and reducing commute times. This in turn decreases the social cost to residents who choose to live in these cities, allowing them more time with family and friends by eliminating part of their commute. Melbourne is leading the way in creating the 20-minute neighbourhood, in which residents can reach work, shops, or a government agency within 20 minutes by bike, on foot, or by public transport. Paris is experimenting with a similar concept in the Rue de Rivoli area, where travel time to any destination is capped at 15 minutes.
Diversity in modes of transportation
Sustainable transportation emphasizes the use of a diversity of fuel-efficient vehicles in order to reduce greenhouse emissions and diversify fuel demand. Given the increasingly expensive and volatile cost of energy, this strategy has become very important because it makes city residents less susceptible to swings in energy prices.
Among the different modes of transportation, the use of alternative-energy cars and the widespread installation of refueling stations have gained increasing importance, while the creation of centralized bike and walking paths remains a staple of the sustainable transportation movement. Tesla is one of the pioneers of electric vehicles, which are said to reduce the carbon footprint of cars, and more companies globally are developing their own versions of electric cars and public transport to promote sustainable transportation.
Access to transportation
To maintain the social responsibility inherent in the concept of sustainable cities, implementing sustainable transportation must include access to transportation for all levels of society. Because cars and fuel are often too expensive for lower-income urban residents, this often revolves around efficient and accessible public transportation. Social inclusion is a key goal of United Nations Sustainable Development Goal 11 – Sustainable Cities and Communities. To make public transportation more accessible, the cost of rides must be affordable and stations must be located no more than walking distance away in each part of the city. As studies have shown, this accessibility greatly increases social and productive opportunity for city residents. By giving lower-income residents cheap and available transportation, it allows individuals to seek employment opportunities all over the urban center rather than only in the area in which they live. This in turn reduces unemployment and a number of associated social problems such as crime, drug use, and violence.
Smart transportation
In this age of smart cities, many smart solutions are being trialled to regulate transportation and make public transport more efficient. Israel is reinventing commuting through a public-private partnership that uses algorithms to route public transport according to need. Using the concept of mobility as a service (MaaS), people are encouraged to enter their destination in a mobile application; this data is then processed to reroute transportation according to demand, and options across different modes of transportation are suggested to commuters to choose from. This decreases wasted trips and helps the government regulate the number of people in a train or bus at a time, which is especially useful during a pandemic such as COVID-19. A toy illustration of demand-based allocation appears in the sketch below.
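The text does not describe the actual routing algorithms used, so the following Python sketch is only a minimal, hypothetical illustration of the general idea: rider destination requests are aggregated per zone, and a fixed vehicle fleet is allocated roughly in proportion to demand. All names and numbers are invented; real MaaS systems solve far richer vehicle-routing problems.

```python
from collections import Counter

def allocate_vehicles(requested_zones, fleet_size):
    """Toy demand-responsive allocation: count destination requests per zone,
    then split a fixed fleet proportionally (largest-remainder rounding).
    `requested_zones` is a list of zone names submitted via a MaaS app."""
    demand = Counter(requested_zones)
    total = sum(demand.values())
    # Proportional share per zone, rounded down.
    allocation = {zone: count * fleet_size // total for zone, count in demand.items()}
    # Hand any leftover vehicles to the zones with the largest rounding remainders.
    leftover = fleet_size - sum(allocation.values())
    by_remainder = sorted(demand, key=lambda z: demand[z] * fleet_size % total, reverse=True)
    for zone in by_remainder[:leftover]:
        allocation[zone] += 1
    return allocation

# Hypothetical requests logged during one dispatch interval.
requests = ["station"] * 53 + ["hospital"] * 28 + ["university"] * 19
print(allocate_vehicles(requests, fleet_size=10))
# {'station': 5, 'hospital': 3, 'university': 2}
```

The proportional split is just one way of turning aggregated demand into a dispatch decision; the point is that trips are planned from observed requests rather than fixed timetables, which is what lets such systems cut empty runs.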
Urban strategic planning
Although there is no international policy regarding sustainable cities and no established international standards, the organization United Cities and Local Governments (UCLG) is working to establish universal urban strategic guidelines. The UCLG is a democratic and decentralized structure that operates through sections in Africa, Eurasia, Latin America, North America, the Middle East and West Asia, and a Metropolitan section, all working to promote a more sustainable society. The 60 members of the UCLG committee evaluate urban development strategies and debate these experiences in order to make the best recommendations; the UCLG also accounts for differences in regional and national context. All of these organizations are making a great effort to promote the concept through the media and the Internet, and at conferences and workshops. An international conference called 'Green Urbanism' was held in Italy at the Università del Salento and the Università degli Studi della Basilicata from 12 to 14 October 2016.
Development
Recently, local and national governments and regional bodies such as the European Union have recognized the need for a holistic understanding of urban planning. This is instrumental to establishing an international policy that focuses on cities' challenges and on the role of local authorities' responses. The sustainable development of urban areas is crucial, since more than 56% of the world's population lives in cities, and cities lead climate action while being responsible for an estimated 75% of the world's carbon emissions. Generally, in terms of urban planning, the responsibility of local governments is limited to land use and infrastructure provision, excluding inclusive urban development strategies. The advantages of urban strategic planning include an increase in governance and cooperation that aids local governments in establishing performance-based management; clearer identification of the challenges facing the local community, with more effective responses at the local rather than the national level; and improved institutional responses and local decision-making. Additionally, it increases dialogue between stakeholders and develops consensus-based solutions, establishing continuity between sustainability plans and changes in local government; it makes environmental issues the priority for the sustainable development of cities; and it serves as a platform for developing concepts and new models of housing, energy, and mobility.
Obstacles
City Development Strategies (CDS) address new challenges and provide space for innovative policies that involve all stakeholders. Inequality in spatial development and between socio-economic classes, paired with concerns of poverty reduction and climate change, are factors in achieving globally sustainable cities, as highlighted by United Nations Sustainable Development Goal 11. According to the UCLG, there are differences between regional and national conditions, frameworks, and practices that are overcome through the international commitment to communication and negotiation with other governments, communities, and the private sector, continuing to develop through innovative and participatory approaches to strategic decisions, building consensus, monitoring performance management, and raising investment.
Social factors of sustainable cities
According to the United Nations Development Programme (UNDP), over half of the world's population is concentrated in cities, a proportion expected to rise to two-thirds by 2050. United Cities and Local Governments has identified 13 global challenges to establishing sustainable cities: demographic change and migration; globalisation of the job market; poverty and unmet Millennium Development Goals; segregation; spatial patterns and urban growth; metropolisation and the rise of urban regions; more political power for local authorities; new actors for developing a city and providing services; decline in public funding for development; the environment and climate change; new and accessible building technologies; preparing for uncertainty and the limits of growth; and global communications and partnerships.
Social equity
Gender
Gender associates an individual with a set of traits and behaviors that are construed by society as female and/or male. Gender is a key part of a person's identity and can influence their experiences and opportunities as they navigate through life; the same is true of how they navigate the built environment.
Men and women experience the built environment differently. For over two decades, professionals in urban planning have called for the routine consideration of gender relations and gendered experiences in the urban design process. Specifically, city planners emphasize the need to account for systemic differences in people's lived experiences by gender when designing built environments that are safe and equitable. This applies to the development of climate-resilient cities. Women represent 80% of the people who have been displaced by the climate crisis, and they are more vulnerable to the impacts of climate change because of the roles they are socially assigned by gender. For instance, women are primarily responsible for food provision in the household; unprecedented patterns in the frequency and magnitude of floods and droughts due to climate change directly impact the caregiving responsibilities of many women, causing them to suffer disproportionately from the consequences of these natural disasters. The inequitable distribution of the burden of climate change by gender is unjust and can be addressed in the design of sustainable cities. Achieving gender equality is not only ethically important but economically smart, since supporting female development benefits economic growth. Moreover, it is socially and economically relevant to design sustainable cities not only for women but by women. Notable women spearheading the sustainable city movement include the mayors Anne Hidalgo, Ada Colau Ballano, Claudia Lopez, Yvonne Aki-Sawyerr, Muriel Bowser, Patricia de Lille, Helen Fernandez, and Clover Moore. Other female leaders include Christina Figueres, Patricia Espinosa, Laurence Tubiana, and Hakima El Haite.
Race and income
Mobility, or the ability to move from place to place, is essential to daily life, and it is primarily determined by the transportation infrastructure that surrounds us. Throughout US history, mobility and the right to place have been regulated through codified social rules about who can go where, and how; many of these rules were drawn along racial/ethnic and nationalistic lines. Discriminatory housing and transit policies, such as redlining, have compounded the oppressive living conditions that marginalized racial groups have been subjected to for centuries, and have limited the socioeconomic opportunities of subsequent generations. The legacies of these discriminatory policies are responsible for many environmental injustices seen today. Environmental injustice refers to the unequal distribution of risk from environmental threats, with vulnerable populations – e.g., people of low and middle income (LMI) and people of color (POC) – experiencing the greatest exposure and the least protection. Environmental injustice is pervasive and manifests in many ways, from contaminated drinking water to mold-infested housing stock. One example is the varying burden of heat exposure on different racial and socioeconomic groups. Urban areas often experience higher surface temperatures than less developed regions because concentrated impermeable surfaces are good at absorbing heat, creating the "heat-island" effect mentioned earlier. The risk of adverse health effects caused by the heat island effect is, and will continue to be, compounded by the increasing frequency of heat waves due to the climate crisis. This threat is especially dangerous for vulnerable populations – including infants and the elderly – who lack access to air conditioning and/or tree coverage to cool down.
This limited adaptive capacity to urban heat is concentrated in LMI and historically segregated neighborhoods. Specifically, neighborhoods in cities that were historically targeted by redlining and divestment experience higher average land surface temperatures than surrounding areas. These differences in surface temperature embody the legacy of discriminatory housing policies in the US, and highlight how historic urban planning practices will interact with the effects of the climate crisis; the sustainable cities of the future must be created with these historic practices in mind. The heat island effect also exacerbates the impacts of another form of environmental injustice that disproportionately affects minority and low-income groups: air pollution. Urban infrastructure projects that produce environmental toxins – such as industrial plants and highways – are frequently built near or in LMI and POC communities because of favorable zoning codes, cheaper land prices, and less political backlash. This is not because residents do not care, but because they often lack the time, resources, and connections necessary to prevent such construction. In turn, pollutant-producing operations disproportionately impact LMI and POC communities, harming these groups' health outcomes. A study by the University of Minnesota found that if nitrogen dioxide levels (NO2, a product of the combustion of fossil fuels) in non-white communities were reduced to equal those in white communities, there would be around 7,000 fewer deaths from heart disease per year. This mortality disparity highlights the health impacts of discriminatory zoning and urban planning policies, which disproportionately expose LMI and POC communities to air pollution; it also shows how much there is to gain from sustainable transportation reform that eliminates combustion-engine vehicles. The inequitable breakdown of exposure to environmental risks by race and income reinforces the understanding that the climate crisis is a social issue, and that environmental justice depends upon racial justice. There is no one right way to address these issues. Proposed solutions include eliminating single-family zoning, pricing a minimum proportion of housing units for LMI households, and requiring community engagement in future urban planning projects. To select the best combination of solutions for sustainable cities tailored to their environments, each city must be designed for all community members, by all community members. Leaders in the environmental justice movement include Robert Bullard, Benjamin Chavis, Peggy Shepard, Kandi Moseett-White, Mustafa Santiago Ali, Jamie Margolin, Elizabeth Yeampierre, LeeAnne Walters, and Dana Johnson.
Examples
Australia
Adelaide
Urban forests
In Adelaide, South Australia (a city of 1.3 million people), Premier Mike Rann (2002 to 2011) launched an urban forest initiative in 2003 to plant 3 million native trees and shrubs by 2014 on 300 project sites across the metropolitan area. The projects range from large habitat restoration projects to local biodiversity projects, and thousands of Adelaide citizens have participated in community planting days. Sites include parks, reserves, transport corridors, schools, water courses, and coastline. Only trees native to the local area are planted, to ensure genetic integrity. Premier Rann said the project aimed to beautify and cool the city and make it more liveable, to improve air and water quality, and to reduce Adelaide's greenhouse gas emissions by 600,000 tonnes of CO2 a year.
He said it was also about creating and conserving habitat for wildlife and preventing species loss.
Solar power
The Rann government also launched an initiative for Adelaide to lead Australia in the take-up of solar power. In addition to Australia's first 'feed-in' tariff to stimulate the purchase of solar panels for domestic roofs, the government committed millions of dollars to placing arrays of solar panels on the roofs of public buildings such as the museum, the art gallery, Parliament, Adelaide Airport, and 200 schools, as well as Australia's biggest rooftop array on the roof of the Adelaide Showgrounds' convention hall, which was registered as a power station.
Wind power
South Australia went from zero wind power in 2002 to wind power making up 26% of its electricity generation by October 2011. In the five years preceding 2011 there was a 15% drop in emissions, despite strong economic growth.
Waste recycling
For Adelaide, the South Australian government also embraced a Zero Waste recycling strategy, achieving a recycling rate of nearly 80% by 2011, with 4.3 million tonnes of materials diverted from landfill to recycling. On a per capita basis this was the best result in Australia, the equivalent of preventing more than a million tonnes of CO2 from entering the atmosphere. Container-deposit legislation was introduced in the 1970s: consumers are paid a 10-cent rebate on each bottle, can, or container they return for recycling. In 2009 non-reusable plastic bags used at supermarket checkouts were banned by the Rann government, preventing 400 million plastic bags per year from entering the litter stream. In 2010 Zero Waste SA was commended by a UN-Habitat report entitled 'Solid Waste Management in the World Cities'.
Melbourne
City of Merri-bek. The City of Merri-bek, in Melbourne's north, has programs for becoming carbon neutral, one of which is 'Zero Carbon Merri-bek', amongst other existing sustainable implementations and proposals.
City of Melbourne. Over the past 10 years, various methods of improving public transport have been implemented, and car-free zones and entire car-free streets have also been introduced.
Sydney
Sydney was ranked the most sustainable city in Australia by the 2018 Arcadis Sustainable Cities Index. While most cities in Australia ranked low in the green sustainability categories, many of them have made a remarkable shift towards improving social sustainability by being more inclusive and supporting culture and general happiness among their people.
City of Greater Taree, New South Wales
The City of Greater Taree, north of Sydney, has developed a masterplan for Australia's first low-to-no-carbon urban development.
Austria
Vienna is aiming for only 20% of trips to be made by automobile.
Brazil
Belo Horizonte, founded in 1897, is the third-largest metropolis in Brazil, with 2.4 million inhabitants. The Strategic Plan for Belo Horizonte (2010–2030) is being prepared by external consultants based on similar cities' infrastructure, incorporating the roles of local government, state government, and city leaders and encouraging citizen participation. The push for environmentally sustainable development is led by the initiative of the new government, following planning processes from the state government. Overall, the development of the metropolis depends on the land regularization and infrastructure improvement that will better support the cultural, technological, and economic landscape. Despite Brazil being a developing or newly industrialized nation, it is home to two sustainable cities.
The southern cities of Porto Alegre and Curitiba are often cited as examples of urban sustainability.
Cameroon
Bafut is a town and traditional kingdom that has been working towards becoming an eco-city by 2020 through the Bafut Council Eco-city Project.
Canada
Since 2016 the Green Score City Index has been studying the urban footprints of Canadian cities, using recognized governmental and institutional data to calculate the urban footprints of 50 cities. Vancouver had 2018's highest green score for large cities, Burlington the highest for medium cities, and Victoria the highest for small cities. Most cities in Canada have sustainability action plans that are easily found and downloaded from city websites. In 2010, Calgary ranked as the top eco-city on the planet for its "excellent level of service on waste removal, sewage systems, and water drinkability and availability, coupled with relatively low air pollution." The survey was performed in conjunction with the reputable Mercer Quality of Living Survey.
China
The Chinese government has launched three sustainable city programs to promote pilot projects and foster innovation. Beginning in the early 2000s, China acknowledged the importance of sustainable development in addressing the challenges brought about by rapid urbanization and industrialization. As a result, hundreds of eco-city projects have been initiated throughout the country, making China home to the world's largest eco-city program.
Tianjin: The Sino-Singapore Tianjin Eco-city, covering an area of 31.23 km², is a large eco-city and one of the first eco-city collaboration projects, created through cooperation between China and Singapore in November 2007. Located at Binhai, Tianjin, it was rated in 2018 as the eco-city offering the best living experience.
Dongtan Eco-city, Shanghai: The project, located in the east of Chongming Island and developed by Arup and partners, was scheduled to accommodate 50,000 residents by 2010, but its developer has put construction on hold. An additional project was begun in this area in 2007: an eco-village based on a concept by an Italian professor from the School of Architecture of Tianjin University.
Huangbaiyu, Benxi, Liaoning is a small village of 42 homes that has come under great criticism: most of the homes are unoccupied by villagers.
Nanjing: As of April 2008, an eco-city collaboration project has been proposed here.
Rizhao, Shandong mandates solar water heaters for households and has been designated an Environmental Model City by China's SEPA.
Chengdu Tianfu District Great City is a planned city just outside Chengdu intended to be self-sustaining and to discourage the use of cars.
Dalian, Liaoning: The 100 MW Dalian Flow Battery Energy Storage Peak-shaving Power Station, with the largest power and capacity in the world so far, was connected to the grid in Dalian on 29 September 2022 and put into operation in mid-October.
Denmark
Two comprehensive studies were carried out for the whole of Denmark in 2010 (the IDA Climate Plan 2050) and 2011 (the Danish Commission on Climate Change Policy). The studies analysed the benefits of, and obstacles to, running Denmark entirely on renewable energy from the year 2050. There is also a larger, ambitious plan in action: the Copenhagen 2025 Climate Plan. On a more local level, the industrial park in Kalundborg is often cited as a model for industrial ecology.
However, projects promoting 100% renewable energy have been carried out in several Danish cities, including Aalborg, Ballerup, and Frederikshavn. Aalborg University has launched a master's education program on sustainable cities (Sustainable Cities @ Aalborg University Copenhagen).
Copenhagen: Cycling in Copenhagen: one of the most bicycle-friendly cities in the world, where over 50% of the population get around by bike. The city has infrastructure that caters to cycling, with hundreds of kilometres of curb-segregated bike lanes separating cyclists from car traffic. A notable feature is the Cycle Superhighways, which include elevated bike lanes that ensure fast, unhindered travel between destinations. The city is aiming for just 25% of trips to be made by automobile.
Ecuador
Loja, Ecuador won three international prizes for the sustainability efforts begun by its mayor, Dr. Jose Bolivar Castillo.
Estonia
Oxford Residences for four seasons in Estonia, which won a prize for Sustainable Company of the Year, is arguably one of the most advanced sustainable developments, aiming not merely to be carbon neutral but to be carbon negative.
Finland
The Finnish city of Turku has adopted a "Carbon Neutral Turku by 2040" strategy to achieve carbon neutrality by combining that goal with a circular economy. VTT Technical Research Centre of Finland has formulated an EcoCity concept tailored to the particular requirements of developing countries and emerging economies. Prominent reference examples include EcoCity Miaofeng in China, EcoNBC in Egypt, EcoGrad in St. Petersburg, Russia, UN Gigiri in Kenya, and MUF2013 in Tanzania.
France
In Paris, bike lanes are being doubled and electric car incentives created, and the French capital is banning the most polluting automobiles from key districts.
Germany
Freiburg im Breisgau often refers to itself as a green city. It is one of the few cities with a Green mayor and is known for its strong solar energy industry. Vauban, Freiburg is a sustainable model district in which all houses are built to a low-energy-consumption standard and the whole district is designed to be car-free. Another green district in Freiburg is Rieselfeld, where houses generate more energy than they consume. There are several other green sustainable city projects, such as Kronsberg in Hannover, and current developments around Munich, Hamburg, and Frankfurt.
Berlin: The Tiergarten is a large park covering 520 acres and an example of social sustainability: it is a green space that is also used for transportation. The Tiergarten has interconnecting paths where people can safely bike and walk without the disturbance of cars, and the paths connect to notable areas within the city, such as government buildings, shopping areas, and monuments. Berlin is mimicking London's "superhighways" for cyclists.
Hong Kong
The government portrays the proposed Hung Shui Kiu New Town as an eco-city. The same happened with the urban development plan for the site of the former Kai Tak Airport.
Iran
Isfahan's dedicated smart city office began architectural sustainability programs for buildings in May 2022.
Ireland
South Dublin County Council announced plans in late 2007 to develop Clonburris, a new suburb of Dublin with up to 15,000 new homes, designed to achieve the highest international standards.
The plans for Clonburris include numerous green innovations, such as high levels of energy efficiency, mandatory renewable energy for heating and electricity, the use of recycled and sustainable building materials, a district heating system for distributing heat, the provision of allotments for growing food, and even a ban on tumble driers, with natural drying areas provided instead. In 2012 an energy plan was drawn up by the Danish Aalborg University for the municipalities of Limerick and County Clare. The project was a short-term 2020 renewable energy strategy giving a 20% reduction in CO2 emissions, while ensuring that short-term actions would be beneficial to the long-term goal of 100% renewable energy.
India
India is developing Gujarat International Finance Tec-City (GIFT), an under-construction world-class city in the Indian state of Gujarat. It is being built on 500 acres (2.0 km2) of land and is also intended to be a first-of-its-kind fully sustainable city. Auroville was founded in 1968 with the intention of realizing human unity, and is now home to approximately 2,000 individuals from over 45 nations around the world. Its focus is its vibrant community culture and its expertise in renewable energy systems, habitat restoration, ecological skills, mindfulness practices, and holistic education. The new capital of Andhra Pradesh is also planned to become a sustainable city. As part of the UN global Sustainable Development Goals (SDG) cities initiative, Noida in Uttar Pradesh was selected in 2018 as one of 25 cities in the world intended to become models of the SDGs by 2025.
Indonesia
The Indonesian cities of Bandung, Cimahi, and Soreang became world leaders in the zero-waste cities program after significantly reducing the amount of waste they produce and improving its management.
Korea
Songdo IBD is a planned city in Incheon that has incorporated a number of eco-friendly features, including a central park irrigated with seawater, a subway line, bicycle lanes, rainwater catchment systems, and a pneumatic waste collection system. 75% of the waste generated by the construction of the city will be recycled. Gwanggyo City Centre is another planned sustainable city.
Malaysia
As of 2014 a Low Carbon Cities programme is being piloted in Malaysia by KeTTHA (the Malaysian Ministry of Energy, Green Technology and Water), the Malaysian Green Technology Corporation (GreenTech Malaysia), and the Carbon Trust. Malacca has a stated ambition to become a carbon-free city and is taking steps towards creating a smart electricity grid. This is being done as part of an initiative to create a Green Special Economic Zone, in which it is intended that as many as 20 research and development centers will be built focusing on renewable energy and clean technology, creating up to 300,000 new green jobs. The Federal Department of Town and Country Planning (FDTCP) in peninsular Malaysia is a focal point for the implementation of the Malaysian Urban Rural National Indicators Network for Sustainable Development (MURNInets), which includes 36 sets of compulsory indicators grouped into 21 themes under six dimensions. Most of the targets and standards for the selected indicators were adjusted according to the hierarchy of local authorities. MURNInets introduces at least three main new features. These include the Happiness Index, an indicator under the quality-of-life theme that responds to the current development trend emphasizing the well-being of the community.
Another new feature is an indicator of customer (public) satisfaction with local authorities' services. Through the introduction of these indicators, a bottom-up approach to measuring sustainability is adopted. Morocco Planned for 2023, Zenata is the first African city to be awarded the Eco-City Label. It will include a total of 470 hectares of green spaces. It will also have water-retention basins and promote groundwater recharge and afforestation of the site. The naturally irrigated parks leading to the sea are designed as ecological corridors. New Zealand Waitakere City, a local body that formerly existed in West Auckland, was New Zealand's first eco-city, working from the Greenprint, a guiding document that the City Council developed in the early 1990s. Norway Oslo was ranked first in the 2019 SDG Index and Dashboards Report for European Cities with a high score of 74.8. In order to achieve its ambitious targets for reducing carbon emissions in the European Green City Index, Oslo plans to convert to biofuels and has considerably reduced traffic, by 4–7%, by introducing a congestion charge. Its aim is to cut emissions by 50 per cent relative to 1990 levels, and it has taken a number of transportation, waste recycling, energy consumption and green space measures, among others, to meet its target. Philippines Clark Freeport Zone is a former United States Air Force base in the Philippines. It is located on the northwest side of Angeles City and on the west side of Mabalacat City in the province of Pampanga, about 40 miles (60 km) northwest of Metro Manila. A multi-billion project will convert the former Clark Air Force Base into a green mix of industrial, commercial and institutional areas. The heart of the project is a 9,450-hectare metropolis dubbed the "Clark Green City". Builders will use green building systems for environmentally friendly structures. Its facilities will tap renewable energy such as solar and hydro power. Portugal The organization Living PlanIT is currently constructing a city from scratch near Porto, Portugal. Buildings will be electronically connected to vehicles, giving users a sense of personal eco-friendliness. Pakistan Islamabad, the capital of Pakistan, is full of green spaces and is an eco-friendly city. Spain Bilbao: The city faced economic turmoil following the decline of the steel and port industries, but through communication between stakeholders and authorities to create inner-city transformation, the local government benefited from the increase in land value in old port areas. The Strategic Plan for the Revitalisation of Metropolitan Bilbao was launched in 1992 and has flourished, regenerating the old steel and port areas. The conversion from depleted steel and port industries to one of Europe's most flourishing markets is a prime example of a sustainable project in action. Barcelona: The city is planning an urban redesign around civic superblocks, converting nine-block areas into unified mega-block neighbourhoods. The aim is to decrease car-related traffic, noise and pollution by over 20% and to free up to 60% of road areas for reuse as citizen spaces. This is being done because residents of Barcelona die prematurely due to poor air quality, and everyday noise levels are deemed harmful. Converting roads to spaces for festivals, farmers' markets, cycling, and walking promotes a healthier and potentially happier lifestyle.
In 2020, the European Investment Bank approved a €95 million loan to assist Barcelona in the completion of approximately 40 projects, with an emphasis on climate change and social inequity. The city plans to redevelop streets to create more space for pedestrians and bicyclists, enhance building energy efficiency, and expand social, cultural, and recreational opportunities. Madrid: In 2018, Madrid banned all non-resident vehicles from its downtown areas. Saudi Arabia Saudi Arabia recently unveiled one of the most ambitious proposed eco-city projects: Neom. Development is planned in the northwest region of the country along the Red Sea and would cover over 26,500 km2 (10,230 sq mi). Some of the most notable aspects of this development are The Line and Oxagon. The Line is advertised as a smart city that will stretch for 170 km with easily accessible amenities throughout. Oxagon is a planned floating city off the coast; if built, it would be the largest floating structure in the world. Sweden Norra Älvstranden, in Gothenburg by the river Göta älv, is an example of a sustainable city in Sweden. It has low environmental impact, and contains passive houses, a waste-recycling system, etc. Hammarby Sjöstad Västra Hamnen or Bo01, Malmö Stockholm Royal Seaport United Arab Emirates Masdar City, Abu Dhabi is a planned city that relies entirely on solar energy and other renewable energy sources, with a sustainable, zero-carbon, zero-waste ecology. Dubai: The Sustainable City, Dubai United Kingdom London has committed to reaching net-zero carbon emissions by 2050. To do so, it aims to drastically reduce the proportion of trips made by cars and also ban all new petrol and diesel cars by 2035. Similarly, according to the UK Green Building Council, 40 per cent of the UK's total carbon footprint comes from the built environment. Steel, which is used to make skyscrapers, is responsible for 7 per cent of global emissions. Timber, especially cross-laminated timber (CLT), is being considered as an alternative to reduce construction-based emissions. London Borough of Sutton is the first One Planet Region in the United Kingdom, with significant targets for reducing the ecological footprint of residents and creating the UK's greenest borough. Middlesbrough is another One Planet Region in the United Kingdom. Milton Keynes' original design concept aimed for a "forest city", and its designers' foresters planted millions of trees from the city's own nursery in Newlands in the following years. Parks, lakes and green spaces cover about 25% of Milton Keynes; there are 22 million trees and shrubs in public open spaces. St Davids, the smallest city in the United Kingdom, aims to be the first carbon-neutral city in the world. Leicester is the United Kingdom's first environment city. United States Arcosanti, Arizona. Coyote Springs, Nevada, the largest planned city in the United States. Babcock Ranch, Florida, a proposed solar-powered city. Douglass Ranch in Buckeye, Arizona. Mesa del Sol in Albuquerque, New Mexico. San Francisco, California is ranked the most sustainable city in the United States according to the 2019 US Cities Sustainable Development Report. Treasure Island, San Francisco, is a project that aims to create a small eco-city.
Sonoma Mountain Village in Rohnert Park, California. See also 2000-watt society BedZED Blue roof Carfree city Circles of Sustainability Covenant of Mayors Cyclability Eco hotel Eco-cities Ecodistrict Ecological engineering Environmental economics Floating ecopolis Freeway removal Global Ecovillage Network Green infrastructure Green retrofit Green urbanism Greening Land recycling Pedestrian village Roof garden Street reclamation Sustainable design Sustainable urbanism Transition town Tree Cities of the World Urban design Urban forest inequity Urban forestry Urban green space Urban park Urban prairie Urban reforestation Urban vitality Walking audit Zero-carbon city Notes Further reading Helmut Bott, Gregor Grassl, Stephan Anders (2019) Sustainable Urban Planning: Vibrant Neighbourhoods – Smart Cities – Resilience, DETAIL Publishers, Volume 1. Celso N. Santos, Kristen S. Cetin, Hadi Salehi (2022) Energy-efficient technology retrofit investment behavior of Midwest households in lower and higher income regions, Sustainable Cities and Society. Stanislav E. Shmelev and Irina A. Shmeleva (2009) "Sustainable cities: problems of integrated interdisciplinary research", International Journal of Sustainable Development, Volume 12, Number 1, 2009, pp. 4–23. Richard Register (2006) Ecocities: building cities in balance with nature, New Society Publishers. Shannon May (2008) "Ecological citizenship and a plan for sustainable development", City, 12:2, 237–244. Timothy Beatley (1997) Eco-city dimensions: healthy communities, healthy planet, New Society Publishers. Richard Register (1987) Ecocity Berkeley: building cities for a healthy future, North Atlantic Books. Sim Van der Ryn and Peter Calthorpe (1986) Sustainable communities: a new design synthesis for cities, suburbs, and towns, Sierra Club Books. Paolo Soleri (1973) Arcology: the city in the image of man, MIT Press. Ian L. McHarg (1969) Design with nature, Published for the American Museum of Natural History [by] the Natural History Press. Federico Caprotti (2014) Eco-urbanism and the Eco-city, or, Denying the Right to the City?, Antipode, Volume 46, Issue 1, pp. 1285–1303. Simon Joss (2015) Eco-cities and Sustainable Urbanism, International Encyclopedia of the Social & Behavioral Sciences (Second Edition). External links Eco Cities in China Publications by Anthropologist Shannon May on the transformation of Huangbaiyu, China into an Eco Village Ecocity Summit 2009 ISTANBUL – TURKIYE ECOPOLIS Green Score City Index, GreenScore.eco Ecotopia 2121. An Atlas of 100 "Visionary Super-Green" cities of the future from around the world. Los Angeles: A History of the Future Resource Guide on Sprawl and the New Urbanism edited by Deborah Sommer, Environmental Design Library, University of California, Berkeley. Vattenfall Sustainable Cities Manifesto for the Reorganisation of the City after COVID19 | author: Massimo Paolini [20 April 2020] Sustainable Cities, Terrain.org Which way China? Herbert Girardet, October 2006, chinadialogue. Discusses the emergence of ecocities in China. Working Group for Sustainable Cities at Harvard University Landscape Urban planning Sustainability Types of cities Environment by city
Sustainable city
Engineering
12,796
9,183,354
https://en.wikipedia.org/wiki/Docomo%20Pacific
Docomo Pacific is a wholly owned subsidiary of Japanese mobile phone operator NTT Docomo headquartered in Tamuning, Guam. It is the largest provider of mobile, television, internet and telephone services to the United States territories of Guam and the Northern Mariana Islands. The company was formed through the merger of cell phone carriers Guamcell Communications and HafaTel and was acquired in December 2006 by NTT Docomo, a spin-off of Japanese communication company Nippon Telegraph and Telephone. In October 2008, Docomo Pacific was the first company on Guam to introduce an HSDPA network. In November 2011, Docomo Pacific launched 4G HSPA+ service on Guam, followed by the launch of advanced 4G LTE service in October 2012. In May 2013, Docomo Pacific acquired cable company MCV Broadband (Marianas Cable Vision Broadband) from Seaport Capital, an investment company based in New York City. Incidents March 17, 2023 A cyberattack occurred early in the morning, disabling access to Docomo's call hotline center, website, and internet services. Immediate failsafe protocols were initiated by Docomo's cybersecurity technicians, shutting down the affected servers and isolating the intrusion. During the outage of the call center and internet services, Guam residents voiced complaints on Facebook. Docomo subscribers still had access to the mobile network, as well as voice, SMS, and fiber internet services. Subscribers of other telecommunication companies were not affected. Docomo Pacific posted updates on its Facebook page because its website was offline; after March 17 the post was deleted and the company returned to normal operations. Some internet services were restored the following day in parts of Guam, while other villages remained without internet; the day after, internet services across Guam were restored. See also Communications in Guam References Cable television companies of the United States Broadband Companies of Guam Mass media in Guam Nippon Telegraph and Telephone NTT Docomo
Docomo Pacific
Technology
412
1,435,047
https://en.wikipedia.org/wiki/Agate%20%28rocket%29
VE 110 Agate is the designation of an unguided French test rocket developed in the late 1950s and early 1960s. It was part of the Pierres précieuses (French: gemstones) program, which included five prototypes, Agate, Topaze, Emeraude, Rubis and Saphir, leading up to the Diamant orbital rocket. The Agate has a length of 8.50 metres, a diameter of 0.80 metres, a launch mass of 3.2 tonnes, a takeoff thrust of 186 kN and a ceiling of 20 km. It used an NA801 Mammouth solid propellant rocket engine (the same as the Rubis VE-210). The initial version was designated VE (Véhicule Expérimental) 110, while the VE 110RR version was used to develop recovery procedures at sea. The designation indicates that it is a "Véhicule Expérimental" (Experimental Vehicle) with one stage, using solid propulsion (code 1), and not guided (code 0). Launches The Agate was launched from the CIEES test site in Hammaguir, French Algeria, and the Île du Levant test site, in order to test instrument capsules and recovery systems. See also French space program Rubis Diamant Nuclear dissuasion References Pierres précieuses
Agate (rocket)
Astronomy
271
5,489,731
https://en.wikipedia.org/wiki/Reflex%20hammer
A reflex hammer is a medical instrument used by practitioners to test deep tendon reflexes, the best known possibly being the patellar reflex. Testing for reflexes is an important part of the neurological physical examination in order to detect abnormalities in the central or peripheral nervous system. Reflex hammers can also be used for chest percussion. Models of reflex hammer Prior to the development of specialized reflex hammers, hammers specific for percussion of the chest were used to elicit reflexes. However, this proved to be cumbersome, as the weight of the chest percussion hammer was insufficient to generate an adequate stimulus for a reflex. Starting in the late 19th century, several models of specific reflex hammers were created: The Taylor or tomahawk reflex hammer was designed by John Madison Taylor in 1888 and is the most well known reflex hammer in the USA. It consists of a triangular rubber component attached to a flat metallic handle. The traditional Taylor hammer is significantly lighter than the heavier European hammers. The Queen Square reflex hammer was designed for use at the National Hospital for Nervous Diseases (now the National Hospital for Neurology and Neurosurgery) in Queen Square, London. It was originally made with a bamboo or cane handle of varying length, averaging 25 to 40 centimetres (10 to 16 inches), attached to a 5-centimetre (2-inch) metal disk with a plastic bumper. The Queen Square hammer is also now made with plastic molds, and often has a sharp tapered end to allow for testing of plantar reflexes, though this is no longer recommended due to tightened infection control. It is the reflex hammer of choice of UK neurologists. The Babinski reflex hammer was designed by Joseph Babiński in 1912 and is similar to the Queen Square hammer, except that it has a metallic handle that is often detachable. Babinski hammers can also be telescoping, allowing for compact storage. Babinski's hammer was popularized in clinical use in America by the neurologist Abraham Rabiner, who was given the instrument as a peace offering by Babinski after the two brawled at a black tie affair in Vienna. The Trömner reflex hammer was designed by Ernst Trömner. This model is shaped like a two-headed mallet. The larger mallet is used to elicit tendon stretch reflexes, and the smaller mallet is used to elicit percussion myotonia. Other reflex hammer types include the Buck, Berliner and Stookey reflex hammers. There are numerous models available from various commercial sources. Method of use The strength of a reflex is used to gauge central and peripheral nervous system disorders, with the former resulting in hyperreflexia, or exaggerated reflexes, and the latter resulting in hyporeflexia, or diminished reflexes. However, the strength of the stimulus used to elicit the reflex also affects the magnitude of the response. Attempts have been made to determine the force required to elicit a reflex, but results vary depending on the hammer used and are difficult to quantify. The Taylor hammer is usually held at the end by the physician, and the entire device is swung in an arc-like motion onto the tendon in question. The Queen Square and Babinski hammers are usually held perpendicular to the tendon in question, and are passively swung with gravity assistance onto the tendon. The Jendrassik maneuver, which entails interlocking of flexed fingers to distract a patient and prime the reflex response, can also be used to accentuate reflexes.
In cases of hyperreflexia, the physician may place a finger on top of the tendon and tap the finger with the hammer. Sometimes a reflex hammer may not be necessary to elicit hyperreflexia, with finger tapping over the tendon being a sufficient stimulus. See also Physical examination Neurology References Hammers History of neurology Medical equipment 1888 introductions
Reflex hammer
Biology
812
73,273,718
https://en.wikipedia.org/wiki/AVN-322
AVN-322 is a 5-hydroxytryptamine subtype 6 (5-HT6) receptor antagonist manufactured by Avineuro Pharmaceuticals Inc. that could potentially be used to combat Alzheimer's disease and schizophrenia. AVN-322 also reverses the negative effects of scopolamine and MK-801. The compound is a sister drug to AVN-101 and AVN-211, two similar compounds under trial for treating Alzheimer's. Phase I trials for the drug were initiated in 2009 by Avineuro and completed in the spring of 2010. The trials showed that AVN-322 was tolerated across a range of doses without any adverse effects, and Avineuro released plans to commence Phase II trials later the same year. The plan for further trials was discontinued in 2013. References Abandoned drugs Sulfones Pyrazolopyrimidines Secondary amines
AVN-322
Chemistry
176
74,170,779
https://en.wikipedia.org/wiki/Toroidal%20solenoid
The toroidal solenoid was an early 1946 design for a fusion power device by George Paget Thomson and Moses Blackman of Imperial College London. It proposed to confine a deuterium fuel plasma to a toroidal (donut-shaped) chamber using magnets, and then heat it to fusion temperatures using radio frequency energy in the fashion of a microwave oven. It is notable for being the first such design to be patented; a secret patent application was filed on 8 May 1946 and granted in 1948. A critique by Rudolf Peierls noted several problems with the concept. Over the next few years, Thomson continued to suggest starting an experimental effort to study these issues, but was repeatedly denied as the underlying theory of plasma diffusion was not well developed. When Peter Thonemann suggested similar concepts that included a more practical heating arrangement, John Cockcroft began to take the concept more seriously, establishing small study groups at Harwell. Thomson adopted Thonemann's concept, abandoning the radio frequency system. When the patent had still not been granted in early 1948, the Ministry of Supply inquired about Thomson's intentions. Thomson explained the problems he had getting a program started and that he did not want to hand off the rights until that was clarified. As the directors of the UK nuclear program, the Ministry quickly forced Harwell's hand to provide funding for Thomson's program. Thomson then released his rights to the patent, which was granted late that year. Cockcroft also funded Thonemann's work, and with that, the UK fusion program began in earnest. After the news furor over the Huemul Project in February 1951, significant funding was released, leading to rapid growth of the program in the early 1950s and ultimately to the ZETA reactor of 1958. Conceptual development The basic understanding of nuclear fusion was developed during the 1920s as physicists explored the new science of quantum mechanics. George Gamow's 1928 work on quantum tunnelling demonstrated that nuclear reactions could take place at lower energies than classical theory predicted. Using this theory, in 1929 Fritz Houtermans and Robert Atkinson demonstrated that expected reaction rates in the core of the Sun supported Arthur Eddington's 1920 suggestion that the Sun is powered by fusion. In 1934, Mark Oliphant, Paul Harteck and Ernest Rutherford were the first to achieve fusion on Earth, using a particle accelerator to shoot deuterium nuclei into a metal foil containing deuterium, lithium or other elements. This allowed them to measure the nuclear cross section of various fusion reactions, and determined that the deuterium-deuterium reaction occurred at a lower energy than other reactions, peaking at about 100,000 electronvolts (100 keV). This energy corresponds to the average energy of particles in a gas heated to a billion kelvin. Materials heated beyond a few tens of thousands of kelvin dissociate into their electrons and nuclei, producing a gas-like state of matter known as plasma. In any gas the particles have a wide range of energies, normally following Maxwell–Boltzmann statistics. In such a mixture, a small number of particles will have much higher energy than the bulk. This leads to an interesting possibility; even at temperatures well below 100,000 eV, some particles will randomly have enough energy to undergo fusion. Those reactions release huge amounts of energy.
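The quoted correspondence between particle energy and temperature can be checked numerically. The following is a minimal sketch in Python (the helper name is illustrative, not from the source), assuming the "average energy" refers to the mean kinetic energy (3/2)kT of a Maxwell–Boltzmann gas:

# Sanity check of the 100 keV ~ "a billion kelvin" correspondence above.
BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant k in eV per kelvin

def temperature_for_mean_energy(energy_ev: float) -> float:
    # Invert E = (3/2) k T to obtain the temperature in kelvin.
    return 2.0 * energy_ev / (3.0 * BOLTZMANN_EV_PER_K)

print(temperature_for_mean_energy(100_000.0))  # ~7.7e8 K

With these figures, 100 keV corresponds to roughly 8 × 10⁸ K, consistent with the order-of-magnitude figure given above.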
If that energy can be captured back into the plasma, it can heat other particles to that energy as well, making the reaction self-sustaining. In 1944, Enrico Fermi calculated this would occur at about 50,000,000 K. Confinement Taking advantage of this possibility requires the fuel plasma to be held together long enough that these random reactions have time to occur. Like any hot gas, the plasma has an internal pressure and thus tends to expand according to the ideal gas law. For a fusion reactor, the problem is keeping the plasma contained against this pressure; any known physical container would melt at temperatures in the thousands of kelvin, far below the millions needed for fusion. A plasma is electrically conductive, and is subject to electric and magnetic fields. In a magnetic field, the electrons and nuclei orbit the magnetic field lines. A simple confinement system is a plasma-filled tube placed inside the open core of a solenoid. The plasma naturally wants to expand outwards to the walls of the tube, as well as move along it, towards the ends. The solenoid creates a magnetic field running down the centre of the tube, which the particles will orbit, preventing their motion towards the sides. Unfortunately, this arrangement does not confine the plasma along the length of the tube, and the plasma is free to flow out the ends. Initial design The obvious solution to this problem is to bend the tube, and solenoid, around to form a torus (a ring or doughnut shape). Motion towards the sides remains constrained as before, and while the particles remain free to move along the lines, in this case, they will simply circulate around the long axis of the tube. But, as Fermi pointed out, when the solenoid is bent into a ring, the electrical windings of the solenoid would be closer together on the inside than the outside. This would lead to an uneven field across the tube, and the fuel will slowly drift out of the centre. Some additional force needs to counteract this drift, providing long-term confinement. Thomson began development of his concept in February 1946. He noted that this arrangement caused the positively charged fuel ions to drift outward more rapidly than the negatively charged electrons. This would result in a negatively charged region in the center of the chamber developing over a short period. This net negative charge would then produce an attractive force on the ions, keeping them from drifting too far from the center, and thus preventing them from drifting to the walls. It appeared this could provide long-term confinement. This leaves the issue of how to heat the fuel to the required temperatures. Thomson proposed injecting a cool plasma into the torus and then heating it with radio frequency signals beamed into the chamber. The electrons in the plasma would be "pumped" by this energy, transferring it to the ions through collisions. If the chamber held a plasma with densities on the order of 10¹⁴ to 10¹⁵ nuclei/cm³, it would take several minutes to reach the required temperatures. Filing a patent In early March, Thomson sent a copy of his proposal to Rudolf Peierls, then at the University of Birmingham. Peierls immediately pointed out a concern; both Peierls and Thomson had been to meetings at Los Alamos in 1944 where Edward Teller held several informal talks, including the one in which Fermi outlined the basic conditions needed for fusion. This was in the context of an H-bomb, or "the super" as it was then known.
Peierls noted that the US might claim priority on such information and consider it highly secret, which meant that while Thomson was privy to the information, it was unlikely others at Imperial were. Considering the problem, Thomson decided to attempt to file a patent on the concept. This would ensure the origins of the concepts would be recorded, and prove that the ideas were due to efforts in the UK and not his previous work on the atom bomb. At the time, Thomson was not concerned with establishing personal priority for the concept nor with generating income from it. At his suggestion, on 26 March 1946 they met with Arthur Block of the Ministry of Supply (MoS), which led to B.L. Russel, the MoS' patent agent, beginning to write a patent application that would be owned entirely by the government. Peierls' concerns Peierls then followed up with a lengthy critique of the concept, noting three significant issues. The major concern was that the system as a whole used a toroidal field to confine the electrons, and the resulting electric field to confine the ions. Peierls pointed out that this "crossed field" would cause the particles to be forced across the magnetic lines due to the right hand rule, causing the electrons to orbit around the chamber in the poloidal direction, eliminating the region of increased electron density in the center, and thereby allowing the ions to drift to the walls. Using Thomson's own figures for the conditions in an operating reactor, Peierls demonstrated that the resulting neutralized region would extend nearly all the way to the walls, to within less than the orbital radius of the electrons in the field. There would be no confinement of the ions. He also included two additional concerns. One involved the issue of the deuterium fuel ions impacting the walls of the chamber and the effects that would have, and the other that having electrons leave the plasma would cause an ion to be forced out to maintain charge balance, which would quickly "clean up" all of the gas in the chamber. Pinch emerges Thomson was not terribly concerned about the two minor problems but accepted that the primary one about the crossed fields was a serious issue. Considering the issue, a week later he wrote back with a modified concept. In this version, the external magnets producing the toroidal field were removed, and confinement was instead provided by running a current through the plasma. He proposed inducing this current using radio signals injected through slots cut into the torus at intervals that would create a wave moving around the torus, similar to the system used in linear accelerators to accelerate electrons. A provisional patent was filed on 8 May 1946, updated to use the new confinement system. In the patent, Thomson noted that the primary problem would be overcoming energy losses through bremsstrahlung. He calculated that a plasma density of 10¹⁵ would remain stable long enough for the energy of the pumped electrons to heat the deuterium fuel to the required 100 keV over a period of several minutes. Although the term "pinch effect" is not mentioned, the description, apart from the current-generation scheme, was similar to the pinch machines that would become widespread in the 1950s. Further criticism Thomson was then sent to New York City as part of the British delegation to the United Nations Atomic Energy Commission and did not return until late in the year.
After he returned, in January 1947, John Cockcroft called a meeting at Harwell to discuss his ideas with a group including Peierls, Moon and Sayers from Birmingham University, Tuck from the Clarendon Laboratory at Oxford University, and Skinner, Frisch, Fuchs, French and Bretscher from Harwell. Thomson described his concept, including several possible ways to drive the current. Peierls reiterated his earlier concerns, mentioning the observations by Mark Oliphant and Harrie Massey, who had worked with David Bohm on isotopic separation at Berkeley. Bohm had observed greatly increased rates of diffusion well beyond what classical diffusion would suggest, today known as Bohm diffusion. If this was inherent to such designs, Peierls suggested there was no way the device would work. He then added a highly prescient statement that there might be further unknown instabilities that would ruin confinement. Peierls concluded by suggesting initial studies on the pinch effect be carried out by Moon in Birmingham, where Moon had some experience in these sorts of devices and especially because Sayers was already planning experiments with powerful spark discharges in deuterium. There is no record that this work was carried out, although theoretical studies on the behaviour of plasma in a pinch were pursued. Early experiments The main outcome of the meeting was to introduce Thomson to the wirbelrohr, a new type of particle accelerator built in 1944 in Germany. The wirbelrohr used a cyclotron-like arrangement to accelerate the electrons in a plasma, which its designer, Max Steenbeck, believed would cause them to "break away" from the ions and accelerate to very high speeds. The parallels between this device and Thomson's concept were obvious, but Steenbeck's acceleration mechanism was novel and presented a potentially more efficient heating system. When he returned to London after the meeting, Thomson had two PhD students put on the project, with Alan Ware tasked with building a wirbelrohr and Stanley Cousins starting a mathematical study on diffusion of plasma in a magnetic field. Ware built a device using a 3 cm tube bent around into a 25 cm wide torus. Using a wide variety of gas pressures and currents up to 13,000 amps, Ware was able to show some evidence of the pinching of the plasma, but failed, as had the Germans, to find any evidence of the breakaway electrons. With this limited success, Ware and Cousins built a second device at 40 cm and up to 27,000 amps. Once again, no evidence of electron breakaway was seen, but this time a new high-speed rotating-mirror camera was able to directly image the plasma during the discharge and was able to conclusively show the plasma was indeed being pinched. Classification concerns While Cousins and Ware began their work, in April 1947 Thomson filed a more complete patent application. This described a larger, wider torus with many ports for injecting and removing gas and for injecting the radio frequency energy to drive the current. The entire system was then placed within a large magnet that produced a moderate 0.15 T vertical magnetic field across the entire torus, which kept the electrons confined. He predicted that a power input of 1.9 MW would be needed and calculated that the D-D and D-T reactions would generate 9 MW of fusion energy, of which 1.9 MW was in the form of neutrons.
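As a numerical aside, the patent figures just quoted imply what would today be called a fusion gain of roughly 4.7; the gain terminology is a modern convention applied here for illustration, not Thomson's own. A minimal sketch in Python:

# Power balance implied by the figures quoted above (illustrative only).
def fusion_gain(fusion_power_mw: float, heating_power_mw: float) -> float:
    # Gain = fusion power released / external heating power supplied.
    return fusion_power_mw / heating_power_mw

heating_mw = 1.9   # predicted radio-frequency input power
fusion_mw = 9.0    # calculated D-D and D-T fusion output power
neutron_mw = 1.9   # portion of the output carried by neutrons

print(round(fusion_gain(fusion_mw, heating_mw), 1))  # 4.7
print(round(neutron_mw / fusion_mw, 2))              # 0.21, the neutron fraction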
He suggested that the neutrons could be used as a power source, but also that if the system was surrounded by natural uranium, mostly 238U, the neutrons would transmute it into plutonium-239, a major component of atomic bombs. It was this last part that raised new concerns. If, as Thomson described, one could make a relatively simple device that could produce plutonium, there was an obvious nuclear security concern, and such work would need to be secret. Neither Thomson nor Harwell was happy performing secret work at the university. Considering the problem, Thomson suggested moving this work to RAF Aldermaston. Associated Electrical Industries (AEI) was outgrowing their existing labs in Rugby and Trafford Park, and had already suggested building a new secure lab at Aldermaston. AEI was looking to break into the emerging nuclear power field, and its director of research, Thomas Allibone, was a friend of Thomson's. Allibone strongly supported Thomson's suggestion, and further backing was received from Nobel winner James Chadwick. Cockcroft, on the other hand, believed it was too early to start the large program Thomson was suggesting, and continued to delay. Thonemann's concept Around the same time, Cockcroft learned of similar work carried out independently by Peter Thonemann at Clarendon, triggering a small theoretical program at Harwell to consider it. But all suggestions of a larger development program continued to be rejected. Thonemann's concept was to replace the radio frequency injection used by Thomson and arrange the reactor like a betatron, that is, wrapping the torus in a large magnet and using its field to induce a current in the torus in a fashion similar to an electrical transformer. Betatrons had a natural limitation: the number of electrons in them was limited by their mutual repulsion, known as the space charge limit. Some had suggested introducing a gas to the chamber; when ionized by the accelerated electrons, the leftover ions would produce a positive charge that would help neutralize the chamber as a whole. Experiments to this end instead showed that collisions between the electrons and ions would scatter the electrons so rapidly that the number remaining was actually lower than before. This effect, however, was precisely what was desired in a fusion reactor, where the collisions would heat the deuterium ions. At an accidental meeting at Clarendon, Thonemann ended up describing his idea to Thomson. Thonemann was not aware he was talking to Thomson, nor of Thomson's work on similar ideas. Thomson followed up with Skinner, who strongly supported Thonemann's concept over Thomson's. Skinner then wrote a paper on the topic, "Thermonuclear Reactions by Electrical Means", and presented it to the Atomic Energy Commission on 8 April 1948. He clearly pointed out where the unknowns were in the concepts, and especially the possibility of destructive instabilities that would ruin confinement. He concluded that it would be "useless to do much further planning" before further study on the instability issues. It was at this point that a curious bit of legality came into the events. In February 1948, Thomson's original patent filing had not been granted, as the Ministry of Supply was not sure about his intentions on assigning the rights. Blackman was ill with malaria in South Africa, and the issue was put off for a time. It was raised again in May when he returned, resulting in a mid-July meeting.
Thomson complained that Harwell was not supporting their efforts, and that as none of this was classified, he wanted to remain open to turning to private funding. In that case, he was hesitant to assign the rights to the Ministry. The Ministry, which was in charge of the nuclear labs including Harwell, quickly arranged for Cockcroft to fund Thomson's development program. The program was approved in November, and the patent was assigned to the Ministry by the end of the year. Move to AEI The work on fusion at Harwell and Imperial remained relatively low-level until the early 1950s, when two events occurred that changed the nature of the program significantly. The first was the January 1950 confession by Klaus Fuchs that he had been passing atomic information to the Soviets. His confession led to immediate and sweeping classification of almost anything nuclear-related. This included all fusion-related work, as the previous fears about the possibility of using fusion as a neutron source to produce plutonium now seemed like a serious issue. The earlier plans to move the team from Imperial were put into effect immediately, with the AEI labs being set up at the former Aldermaston and opening in April. This lab soon became the Atomic Weapons Research Establishment. The second was the February 1951 announcement that Argentina had successfully produced fusion in its Huemul Project. Physicists around the world quickly dismissed it as impossible, which was revealed to be the case by 1952. However, it also had the effect of making politicians aware of the concept of fusion and its potential as an energy source. Physicists working on the concept suddenly found themselves able to talk to high-ranking politicians, who proved rather receptive to increasing their budgets. Within weeks, programs in the US, UK and USSR were seeing dramatic expansion. By the summer of 1952, the UK fusion program was developing several machines based on Thonemann's overall design, and Thomson's original RF concept was put aside. Notes References Citations Bibliography Magnetic confinement fusion devices Nuclear power in the United Kingdom Nuclear technology in the United Kingdom Physics
Toroidal solenoid
Chemistry
3,876
78,214
https://en.wikipedia.org/wiki/Permaculture
Permaculture is an approach to land management and settlement design that adopts arrangements observed in flourishing natural ecosystems. It includes a set of design principles derived using whole-systems thinking. It applies these principles in fields such as regenerative agriculture, town planning, rewilding, and community resilience. The term was coined in 1978 by Bill Mollison and David Holmgren, who formulated the concept in opposition to modern industrialized methods, instead adopting a more traditional or "natural" approach to agriculture. Permaculture has been criticised as being poorly defined and unscientific. Critics have pushed for less reliance on anecdote and extrapolation from ecological first principles, in favor of peer-reviewed research to substantiate productivity claims and to clarify methodology. Peter Harper from the Centre for Alternative Technology suggests that most of what passes for permaculture has no relevance to real problems. Defenders of permaculture reply that researchers have concluded it to be a "sustainable alternative to conventional agriculture", that it "strongly" enhances carbon stocks, soil quality, and biodiversity, making it "an effective tool to promote sustainable agriculture, ensure sustainable production patterns, combat climate change and halt and reverse land degradation and biodiversity loss". They further point out that many of permaculture's most common methods, such as agroforestry, polycultures, and water harvesting features, are also backed by peer-reviewed research. Background History In 1911, Franklin Hiram King wrote Farmers of Forty Centuries: Or Permanent Agriculture in China, Korea and Japan, describing farming practices of East Asia designed for "permanent agriculture". In 1929, Joseph Russell Smith appended King's term as the subtitle for Tree Crops: A Permanent Agriculture, which he wrote in response to widespread deforestation, plow agriculture, and erosion in the eastern mountains and hill regions of the United States. He proposed the planting of tree fruits and nuts as human and animal food crops that could stabilize watersheds and restore soil health. Smith saw the world as an inter-related whole and suggested mixed systems of trees with understory crops. This book inspired individuals such as Toyohiko Kagawa, who pioneered forest farming in Japan in the 1930s. Another pioneer, George Washington Carver, advocated for practices now common in permaculture, including the use of crop rotation to restore nitrogen to the soil and repair damaged farmland, in his work at the Tuskegee Institute between 1896 and his death in 1947. In his 1964 book Water for Every Farm, the Australian agronomist and engineer P. A. Yeomans advanced a definition of permanent agriculture as one that can be sustained indefinitely. Yeomans had introduced an observation-based approach to land use in Australia in the 1940s and, in the 1950s, the Keyline Design as a way of managing the supply and distribution of water in semi-arid regions. Other early influences include Stewart Brand's works, Ruth Stout and Esther Deans, who pioneered no-dig gardening, and Masanobu Fukuoka who, in the late 1930s in Japan, began advocating no-till orchards and gardens and natural farming.
In the late 1960s, Bill Mollison, senior lecturer in Environmental Psychology at the University of Tasmania, and David Holmgren, a graduate student at the then Tasmanian College of Advanced Education, started developing ideas about stable agricultural systems on the southern Australian island of Tasmania. Their recognition of the unsustainable nature of modern industrialized methods and their inspiration from Tasmanian Aboriginal and other traditional practices were critical to their formulation of permaculture. In their view, industrialized methods were highly dependent on non-renewable resources, and were additionally poisoning land and water, reducing biodiversity, and removing billions of tons of topsoil from previously fertile landscapes. They responded with permaculture. This term was first made public with the publication of their 1978 book Permaculture One. Following the publication of Permaculture One, Mollison responded to widespread enthusiasm for the work by traveling and teaching a three-week program that became known as the Permaculture Design Course. It addressed the application of permaculture design to growing in major climatic and soil conditions, to the use of renewable energy and natural building methods, and to "invisible structures" of human society. He found ready audiences in Australia, New Zealand, the USA, Britain, and Europe, and from 1985 also reached the Indian subcontinent and southern Africa. By the early 1980s, the concept had broadened from agricultural systems towards sustainable human habitats, and at the first International Permaculture Convergence, a gathering of graduates of the PDC held in Australia, the curriculum was formalized and its format shortened to two weeks. After Permaculture One, Mollison further refined and developed the ideas while designing hundreds of properties. This led to the 1988 publication of his global reference work, Permaculture: A Designers' Manual. Mollison encouraged graduates to become teachers and set up their own institutes and demonstration sites. Critics suggest that this success weakened permaculture's social aspirations of moving away from industrial social forms. They argue that the self-help model (akin to franchising) has had the effect of creating market-focused social relationships that the originators initially opposed. Foundational ethics The ethics on which permaculture builds are: "Care of the Earth: Provision for all life systems to continue and multiply". "Care of people: Provision for people to access those resources necessary for their existence". "Setting limits to population and consumption: By governing our own needs, we can set resources aside to further the above principles". Mollison's 1988 formulation of the third ethic was restated by Holmgren in 2002 as "Set limits to consumption and reproduction, and redistribute surplus" and is elsewhere condensed to "share the surplus". Permaculture emphasizes patterns of landscape, function, and species assemblies. It determines where these elements should be placed so they can provide maximum benefit to the local environment. Permaculture maximizes the synergy of the final design. The focus of permaculture, therefore, is not on individual elements, but rather on the relationships among them. The aim is for the whole to become greater than the sum of its parts, minimizing waste, human labour, and energy input, and maximizing benefits through synergy.
Permaculture design is founded on replicating or imitating natural patterns found in ecosystems, because these solutions have emerged through evolution over thousands of years and have proven to be effective. As a result, the implementation of permaculture design will vary widely depending on the region of the Earth it is located in. Because permaculture's implementation is so localized and place-specific, scientific literature for the field is lacking or not always applicable. Design principles derive from the science of systems ecology and the study of pre-industrial examples of sustainable land use. A core theme of permaculture is the idea of "people care". Seeking prosperity begins within a local community or culture that can apply the tenets of permaculture to sustain an environment that supports them, and vice versa. This is in contrast to typical modern industrialized societies, where locality and generational knowledge are often overlooked in the pursuit of wealth or other forms of societal leverage. Theory Design principles Holmgren articulated twelve permaculture design principles in his Permaculture: Principles and Pathways Beyond Sustainability: Observe and interact: Take time to engage with nature to design solutions that suit a particular situation. Catch and store energy: Develop systems that collect resources at peak abundance for use in times of need. Obtain a yield: Emphasize projects that generate meaningful rewards. Apply self-regulation and accept feedback: Discourage inappropriate activity to ensure that systems function well. Use and value renewable resources and services: Make the best use of nature's abundance: reduce consumption and dependence on non-renewable resources. Produce no waste: Value and employ all available resources: waste nothing. Design from patterns to details: Observe patterns in nature and society and use them to inform designs, later adding details. Integrate rather than segregate: Proper designs allow relationships to develop between design elements, allowing them to work together to support each other. Use small and slow solutions: Small and slow systems are easier to maintain, make better use of local resources, and produce more sustainable outcomes. Use and value diversity: Diversity reduces system-level vulnerability to threats and fully exploits its environment. Use edges and value the marginal: The border between things is where the most interesting events take place. These are often the system's most valuable, diverse, and productive elements. Creatively use and respond to change: A positive impact on inevitable change comes from careful observation, followed by well-timed intervention. Guilds A guild is a mutually beneficial group of species that form a part of the larger ecosystem. Within a guild, each species of insect or plant provides a unique set of diverse services that work in harmony. Plants may be grown for food production, for drawing nutrients from deep in the soil through tap roots, for balancing nitrogen levels in the soil (legumes), for attracting beneficial insects to the garden, or for repelling undesirable insects or pests. There are several types of guilds, such as community function guilds, mutual support guilds, and resource partitioning guilds. Community function guilds group species based on a specific function or niche that they fill in the garden. Examples of this type of guild include plants that attract a particular beneficial insect or plants that restore nitrogen to the soil.
These types of guilds are aimed at solving specific problems which may arise in a garden, such as infestations of harmful insects and poor nutrition in the soil. Establishment guilds are commonly used when working to establish target species (the primary vegetables, fruits, herbs, and other plants intended for the garden) with the support of pioneer species (plants that will help the target species succeed). For example, in temperate climates, plants such as comfrey (as a weed barrier and dynamic accumulator), lupine (as a nitrogen fixer), and daffodil (as a gopher deterrent) can together form a guild for a fruit tree. As the tree matures, the support plants will likely eventually be shaded out and can be used as compost. Mature guilds form once the target species are established. For example, if the tree layer of the landscape closes its canopy, sun-loving support plants will be shaded out and die. Shade-loving medicinal herbs such as ginseng, black cohosh, and goldenseal can be planted as an understory. Mutual support guilds group species together that are complementary, working together and supporting each other. This guild may include a plant that fixes nitrogen, a plant that hosts insects that are predators to pests, and another plant that attracts pollinators. Resource partitioning guilds group species based on their abilities to share essential resources with one another through a process of niche differentiation. An example of this type of guild includes placing a fibrous- or shallow-rooted plant next to a tap-rooted plant so that they draw from different levels of soil nutrients. Zones Zones intelligently organize design elements in a human environment based on the frequency of human use and plant or animal needs; a schematic encoding of the zones follows the descriptions below. Frequently manipulated or harvested elements of the design are located close to the house in zones 1 and 2. Manipulated elements located further away are used less frequently. Zones are numbered from 0 to 5 based on positioning. Zone 0 The house, or home center. Here permaculture principles aim to reduce energy and water needs by harnessing natural resources such as sunlight to create a harmonious, sustainable environment in which to live and work. Zone 0 is an informal designation, not specifically defined in Mollison's book. Zone 1 The zone nearest to the house, the location for those elements in the system that require frequent attention, or that need to be visited often, such as salad crops, herb plants, soft fruit like strawberries or raspberries, greenhouse and cold frames, propagation area, worm compost bin for kitchen waste, etc. Raised beds are often used in Zone 1 in urban areas. Zone 2 This area is used for siting perennial plants that require less frequent maintenance, such as occasional weed control or pruning, including currant bushes and orchards, pumpkins, sweet potato, etc. Also a good place for beehives, larger-scale composting bins, etc. Zone 3 The area where main crops are grown, both for domestic use and for trade purposes. After establishment, care and maintenance required are fairly minimal (provided mulches and similar things are used), such as watering or weed control maybe once a week. Zone 4 A semi-wild area, mainly used for forage and collecting wild plants as well as production of timber for construction or firewood. Zone 5 A wilderness area. Humans do not intervene in zone 5 apart from observing natural ecosystems and cycles. This zone hosts a natural reserve of bacteria, molds, and insects that can aid the zones above it.
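The zone system above is essentially a placement heuristic keyed to how often an element needs attention. A schematic encoding in Python; the visit-frequency labels and threshold values are illustrative interpretations of the descriptions above, not definitions from the source:

# Schematic lookup table for the permaculture zones described above.
ZONES = {
    0: ("house or home centre", "continuous occupation"),
    1: ("salad crops, herbs, soft fruit, propagation area", "visited daily"),
    2: ("perennials, orchards, beehives, larger composting", "visited every few days"),
    3: ("main crops for domestic use and trade", "tended about weekly"),
    4: ("semi-wild foraging and timber", "visited occasionally"),
    5: ("wilderness, observation only", "left to natural cycles"),
}

def suggested_zone(visits_per_week: float) -> int:
    # Map required attention frequency to a zone (hypothetical heuristic).
    if visits_per_week >= 7:
        return 1
    if visits_per_week >= 2:
        return 2
    if visits_per_week >= 1:
        return 3
    return 4

use, frequency = ZONES[suggested_zone(7.0)]
print(use, "-", frequency)  # daily-attention elements belong nearest the house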
Edge effect The edge effect in ecology is the increased diversity that results when two habitats meet. Permaculturists argue that these places can be highly productive. An example of this is a coast. Where land and sea meet is a rich area that meets a disproportionate percentage of human and animal needs. This idea is reflected in permacultural designs by using spirals in herb gardens, or creating ponds that have wavy, undulating shorelines rather than a simple circle or oval (thereby increasing the amount of edge for a given area). On the other hand, in a keyhole bed, edges are minimized to avoid wasting space and effort. Common practices Hügelkultur Hügelkultur is the practice of burying wood to increase soil water retention. The porous structure of wood acts like a sponge when decomposing underground. During the rainy season, sufficient buried wood can absorb enough water to sustain crops through the dry season. This technique is a traditional practice that has been developed over centuries in Europe and has recently been adopted by permaculturists. The Hügelkultur technique can be implemented by building mounds on the ground as well as in raised garden beds. In raised beds, the practice "imitates natural nutrient cycling found in wood decomposition and the high water-holding capacities of organic detritus, while also improving bed structure and drainage properties." This is done by placing wood material (e.g. logs and sticks) in the bottom of the bed before piling organic soil and compost on top. A study comparing the water retention capacities of Hügel raised beds to non-Hügel beds determined that Hügel beds are both lower maintenance and more efficient in the long term, requiring less irrigation. Sheet mulching Mulch is a protective cover placed over soil. Mulch material includes leaves, cardboard, and wood chips. These absorb rain, reduce evaporation, provide nutrients, increase soil organic matter, create habitat for soil organisms, suppress weed growth and seed germination, moderate diurnal temperature swings, protect against frost, and reduce erosion. Sheet mulching or lasagna gardening is a gardening technique that attempts to mimic the leaf cover that is found on forest floors. No-till gardening Edward Faulkner's 1943 book Plowman's Folly, F. C. King's 1946 pamphlet "Is Digging Necessary?", A. Guest's 1948 book "Gardening without Digging", and Fukuoka's "do-nothing farming" all advocated forms of no-till or no-dig gardening. No-till gardening seeks to minimise disturbance to the soil community so as to maintain soil structure and organic matter. Cropping practices Low-effort permaculture favours perennial crops, which do not require tilling and planting every year. Annual crops inevitably require more cultivation. They can be incorporated into permaculture by using traditional techniques such as crop rotation, intercropping, and companion planting, so that pests and weeds of individual annual crop species do not build up, and minerals used by specific crop plants do not become successively depleted. Companion planting aims to make use of beneficial interactions between species of cultivated plants. Such interactions include pest control, pollination, providing habitat for beneficial insects, and maximizing use of space; all of these may help to increase productivity. Rainwater harvesting Rainwater harvesting is the accumulation and storage of rainwater for reuse before it runs off or reaches the aquifer.
It has been used to provide drinking water, water for livestock, and water for irrigation, as well as other typical uses. Rainwater collected from the roofs of houses and local institutions can make an important contribution to the availability of drinking water. It can supplement the water table and increase urban greenery. Water collected from the ground, sometimes from areas which are specially prepared for this purpose, is called stormwater harvesting. Greywater is wastewater generated from domestic activities such as laundry, dishwashing, and bathing, which can be recycled for uses such as landscape irrigation and constructed wetlands. Greywater is largely sterile, but not potable (drinkable). Keyline design is a technique for maximizing the beneficial use of water resources. It was developed in Australia by farmer and engineer P. A. Yeomans. Keyline refers to a contour line extending in both directions from a keypoint. Plowing above and below the keyline provides a watercourse that directs water away from a purely downhill course to reduce erosion and encourage infiltration. It is used in designing drainage systems. Compost production Vermicomposting is a common practice in permaculture. The practice involves using earthworms, such as red wigglers, to break down green and brown waste. The worms produce worm castings, which can be used to organically fertilize the garden. Worms are also introduced to garden beds, helping to aerate the soil and improve water retention. Worms may multiply quickly if conditions are ideal. For example, a permaculture farm in Cuba began with 9 tiger worms in 2001 and 15 years later had a population of over 500,000. The worm castings are particularly useful as part of a seed-starting mix and as a regular fertilizer. Worm castings are reportedly more successful than conventional compost for seed starting. Sewage or blackwater contains human or animal waste. It can be composted, producing biogas and manure. Human waste can be sourced from a composting toilet, outhouse or dry bog (rather than a plumbed toilet). Economising on space Space can be saved in permaculture gardens with techniques such as herb spirals, which group plants closely together. A herb spiral, invented by Mollison, is a round cairn of stones packed with earth at the base and sand higher up; sometimes there is a small pond on the south side (in the northern hemisphere). The result is a series of microclimate zones, wetter at the base, drier at the top, warmer and sunnier on the south side, cooler and drier to the north. Each herb is planted in the zone best suited to it. Domesticated animals Domesticated animals are often incorporated into site design. Activities that contribute to the system include: foraging to cycle nutrients, clearing fallen fruit, weed maintenance, spreading seeds, and pest maintenance. Nutrients are cycled by animals, transformed from their less digestible form (such as grass or twigs) into more nutrient-dense manure. Multiple animals can contribute, including cows, goats, chickens, geese, turkeys, rabbits, and worms. An example is chickens, which can be used to scratch over the soil, breaking up the topsoil while their droppings serve as manure. Factors such as timing and habits are critical. For example, animals require much more daily attention than plants.
Fruit trees Masanobu Fukuoka experimented with no-pruning methods on his family farm in Japan, finding that trees which were never pruned could grow well, whereas previously-pruned trees often died when allowed to grow without further pruning. He felt that this reflected the Taoist philosophy of wú wéi, meaning no action against nature, or "do-nothing" farming. He claimed yields comparable to intensive arboriculture with pruning and chemical fertilisation. Applications Agroforestry Agroforestry uses the interactive benefits of combining trees and shrubs with crops or livestock. It combines agricultural and forestry technologies to create more diverse, productive, profitable, healthy and sustainable land-use systems. Trees or shrubs are intentionally used within agricultural systems, or non-timber forest products are cultured in forest settings. Forest gardens Forest gardens or food forests are permaculture systems designed to mimic natural forests. Forest gardens incorporate processes and relationships that the designers understand to be valuable in natural ecosystems. A mature forest ecosystem is organised into layers with constituents such as trees, understory, ground cover, soil, fungi, insects, and other animals. Because plants grow to different heights, a diverse community of organisms can occupy a relatively small space, each at a different layer. Rhizosphere: Root layers within the soil. The major components of this layer are the soil and the organisms that live within it, such as plant roots and rhizomes (including root crops such as potatoes and other edible tubers), fungi, insects, nematodes, and earthworms. Soil surface/groundcover: Overlaps with the herbaceous layer and the groundcover layer; however, plants in this layer grow much closer to the ground, densely fill bare patches, and typically can tolerate some foot traffic. Cover crops retain soil and lessen erosion, along with green manures that add nutrients and organic matter, especially nitrogen. Herbaceous layer: Plants that die back to the ground every winter, if cold enough. No woody stems. Many beneficial plants such as culinary and medicinal herbs are in this layer, whether annuals, biennials, or perennials. Shrub layer: Woody perennials of limited height. Includes most berry bushes. Understory layer: Trees that flourish under the canopy. The canopy: The tallest trees. Large trees dominate, but typically do not saturate the area, i.e., some patches are devoid of trees. Vertical layer: Climbers or vines, such as runner beans and lima beans (vine varieties). Suburban and urban permaculture The fundamental element of suburban and urban permaculture is the efficient utilization of space. The journal Wildfire suggests using methods such as the keyhole garden, which requires little space. Neighbors can collaborate to increase the scale of transformation, using sites such as recreation centers, neighborhood associations, city programs, faith groups, and schools. Columbia Ecovillage, in Portland, Oregon, consisting of 37 apartment condominiums, influenced its neighbors to implement permaculture principles, including in front-yard gardens. Suburban permaculture sites such as one in Eugene, Oregon, include rainwater catchment, edible landscaping, removing paved driveways, turning a garage into living space, and changing a south-side patio into passive solar. Vacant lot farms are community-managed farm sites, but are often seen by authorities as temporary rather than permanent.
For example, Los Angeles' South Central Farm (1994–2006), one of the largest urban gardens in the United States, was bulldozed with approval from property owner Ralph Horowitz, despite community protest. The possibilities and challenges for suburban or urban permaculture vary with the built environment around the world; for example, land is used more ecologically in Jaisalmer, India, than in American planned cities such as Los Angeles. Marine systems Permaculture originated in agriculture, but the same principles, especially its foundational ethics, can also be applied to mariculture, particularly seaweed farming. In marine permaculture, artificial upwelling of cold, deep ocean water is induced. When an attachment substrate is provided in association with such an upwelling, and kelp sporophytes are present, a kelp forest ecosystem can be established (since kelp needs the cool temperatures and abundant dissolved macronutrients present in such an environment). Microalgae proliferate as well. Marine forest habitat is beneficial for many fish species, and the kelp is a renewable resource for food, animal feed, medicines and various other commercial products. It is also a powerful tool for carbon fixation. The upwelling can be powered by renewable energy on location. Vertical mixing has been reduced due to ocean stratification effects associated with climate change. Reduced vertical mixing and marine heatwaves have decimated seaweed ecosystems in many areas. Marine permaculture mitigates this by restoring some vertical mixing, preserving these important ecosystems. By preserving and regenerating habitat offshore on a platform, marine permaculture employs natural processes to regenerate marine life. Grazing Grazing is often blamed for extensive ecological destruction. However, when grazing is modeled after nature, it can have the opposite effect. Cell grazing is a system of grazing in which herds or flocks are regularly and systematically moved to fresh range with the intent to maximize forage quality and quantity. Sepp Holzer and Joel Salatin have shown how grazing can start ecological succession or prepare ground for planting. Allan Savory's holistic management technique has been likened to "a permaculture approach to rangeland management". One variation is conservation grazing, where the primary purpose of the animals is to benefit the environment; the animals are not necessarily used for meat, milk or fiber. Sheep can replace lawn mowers. Goats and sheep can eat invasive plants. Natural building Natural building involves using a range of building systems and materials that apply permaculture principles. The focus is on durability and the use of minimally processed, plentiful, or renewable resources, as well as those that, while recycled or salvaged, produce healthy living environments and maintain indoor air quality. For example, the production of cement, a common building material, emits carbon dioxide and harms the environment, while natural building works with the environment, using biodegradable materials such as cob, adobe, rammed earth (unburnt clay), and straw bale (which insulates as well as modern synthetic materials). Issues Intellectual property Trademark and copyright disputes surround the word permaculture. Mollison's books claimed on the copyright page, "The contents of this book and the word PERMACULTURE are copyright." Eventually Mollison acknowledged that he was mistaken and that no copyright protection existed.
In 2000, Mollison's U.S.-based Permaculture Institute sought a service mark for the word permaculture when used in educational services such as conducting classes, seminars, or workshops. The service mark would have allowed Mollison and his two institutes to set enforceable guidelines regarding how permaculture could be taught and who could teach it, particularly in relation to the PDC, despite the fact that he had been certifying teachers since 1993. This attempt failed and was abandoned in 2001. Mollison's applications in Australia to trademark the terms "Permaculture Design Course" and "Permaculture Design" were withdrawn in 2003. In 2009 he sought trademarks for "Permaculture: A Designers' Manual" and "Introduction to Permaculture", the names of two of his books. These applications were withdrawn in 2011. Australia has never authorized a trademark for the word permaculture. Methodology Permaculture has been criticised as being poorly defined and unscientific. Critics have pushed for less reliance on anecdote and extrapolation from ecological first principles, in favor of peer-reviewed research to substantiate productivity claims and to clarify methodology. Peter Harper from the Centre for Alternative Technology suggests that most of what passes for permaculture is irrelevant to real problems. Harper notes that British organic farmers are "embarrassed or openly derisive" of permaculture, while the permaculture expert Robert Kourik found the supposed advantages of "less- or no-work gardening, bountiful yields, and the soft fuzzy glow of knowing that the garden will ... live on without you" were often illusory. Harper found "many permacultures" are based on ideas ranging from practical farming techniques to "bullshit ... no more than charming cultural graces." Defenders respond that permaculture is not yet a mainstream scientific tradition and lacks the resources of mainstream industrial agriculture. Rafter Ferguson and Sarah Lovell point out that permaculturists rarely engage with mainstream research in agroecology, agroforestry, or ecological engineering, and claim that mainstream science has an elitist or pro-corporate bias. Julius Krebs and Sonja Bach argue in Sustainability that there is "scientific evidence for all twelve [of Holmgren's] principles". In 2017, Ferguson and Lovell presented sociological and demographic data from 36 self-described American permaculture farms. The farms were well diversified, with a median effective number of enterprises per farm of 3.6 (out of a maximum of 6 in the analysis method used). Business strategies included small mixed farms, integrated producers of perennial and animal crops, mixes of production and services, livestock, and service-based businesses. Median household income ($38,750) was less than either the national median household income ($51,017) or the national median farm household income ($68,680). A 2019 study by Hirschfeld and Van Acker found that adopting permaculture consistently encouraged the cultivation of perennials, crop diversity, landscape heterogeneity, and nature conservation. They discovered that grass-roots adopters were "remarkably consistent" in their implementation of permaculture, leading them to conclude that the movement could exert influence over positive agroecological transitions.
In 2024, Reiff and colleagues stated that permaculture is a "sustainable alternative to conventional agriculture", and that it "strongly" enhances carbon stocks, soil quality, and biodiversity, making it "an effective tool to promote sustainable agriculture, ensure sustainable production patterns, combat climate change and halt and reverse land degradation and biodiversity loss." They point out that most of permaculture's most common methods, such as agroforestry, polycultures, and water harvesting features, are backed by peer-reviewed research. See also References Sources Ferguson, Rafter Sass; Lovell, Sarah Taylor. "Permaculture for agroecology: design, movement, practice, and worldview. A review", Agronomy for Sustainable Development 34 (2014): 251–274. The first systematic review of the permaculture literature, from the perspective of agroecology. Jacke, Dave, with Eric Toensmeier. Edible Forest Gardens. Volume I: Ecological Vision and Theory for Temperate-Climate Permaculture; Volume II: Ecological Design and Practice for Temperate-Climate Permaculture. Edible Forest Gardens (US), 2005. Loofs, Mona. Permaculture, Ecology and Agriculture: An investigation into Permaculture theory and practice using two case studies in northern New South Wales. Honours thesis, Human Ecology Program, Department of Geography, Australian National University, 1993. Macnamara, Looby. People and Permaculture: caring and designing for ourselves, each other and the planet. Permanent Publications (UK), 2012. Odum, H. T., Jorgensen, S. E., and Brown, M. T. "Energy hierarchy and transformity in the universe", Ecological Modelling 178 (2004): 17–28. Paull, J. "Permanent Agriculture: Precursor to Organic Farming", Journal of Bio-Dynamics Tasmania, no. 83 (2006): 19–21. Organic eprints. Shepard, Mark. Restoration Agriculture – Redesigning Agriculture in Nature's Image. Acres U.S.A., 2013. Woodrow, Linda. The Permaculture Home Garden. Penguin Books (Australia). Yeomans, P. A. Water for Every Farm: A practical irrigation plan for every Australian property. K. G. Murray, Sydney, NSW, Australia, 1973. External links Ethics and principles of permaculture (Holmgren's) Permaculture Commons – collection of permaculture material under free licenses The 15 pamphlets based on the 1981 Permaculture Design Course given by Bill Mollison (co-founder of permaculture) all in 1 PDF file The Permaculture Research Institute – Permaculture Forums, Courses, Information, News and Worldwide Reports The Worldwide Permaculture Network – Database of permaculture people and projects worldwide Australian inventions Environmental design Environmental social science concepts Horticulture Landscape architecture Rural community development Sustainable agriculture Sustainable design Sustainable food system Sustainable gardening Systems ecology
Permaculture
Engineering,Environmental_science
6,748
2,880,139
https://en.wikipedia.org/wiki/Tau2%20Arietis
Tau2 Arietis, Latinized from τ2 Arietis, is the Bayer designation for a binary star in the northern constellation of Aries. The combined apparent visual magnitude of this system is +5.09, which is bright enough to be seen with the naked eye. With an annual parallax shift of 10.27 mas, it is located at a distance of approximately 318 light-years from Earth, give or take a 20 light-year margin of error. At this distance, the brightness of the star is diminished by 0.18 in magnitude because of extinction from interstellar gas and dust. The primary component is an evolved giant star with a stellar classification of K3 III. It has expanded to 19 times the radius of the Sun, from which it is radiating 120 times the Sun's luminosity. This energy is being emitted into outer space from its outer atmosphere at an effective temperature of 4,406 K, giving it the cool orange glow of a K-type star. At an angular separation of 0.53 arcseconds is a magnitude 8.50 companion. References External links Aladin previewer Aladin sky atlas HR 1015 HD 020893 HIP 015737 Arietis, Tau2 Arietis, 63 Aries (constellation) K-type giants Durchmusterung objects
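The distance and luminosity figures above follow directly from the quoted parallax, radius, and effective temperature. The short Python sketch below is an editorial illustration (not part of the source article) that re-derives them, assuming only the standard relation d(pc) = 1000 / parallax(mas) and the Stefan–Boltzmann scaling L/Lsun = (R/Rsun)^2 (T/Tsun)^4, with a nominal solar effective temperature of 5,772 K taken as an assumption.

```python
# Rough check of the distance and luminosity figures quoted above.
# Stellar inputs come from the article text; solar values are standard assumptions.

PARALLAX_MAS = 10.27      # annual parallax, milliarcseconds
RADIUS_SOLAR = 19.0       # stellar radius, solar radii
T_EFF = 4406.0            # effective temperature, kelvin
T_SUN = 5772.0            # nominal solar effective temperature, kelvin
LY_PER_PARSEC = 3.26156   # light-years per parsec

# Distance: d (parsecs) = 1000 / parallax (mas)
distance_pc = 1000.0 / PARALLAX_MAS
distance_ly = distance_pc * LY_PER_PARSEC

# Luminosity via the Stefan-Boltzmann law in solar units:
# L/Lsun = (R/Rsun)**2 * (T/Tsun)**4
luminosity_solar = RADIUS_SOLAR**2 * (T_EFF / T_SUN)**4

print(f"Distance: {distance_pc:.1f} pc = {distance_ly:.0f} ly")  # ~97.4 pc = ~318 ly
print(f"Luminosity: {luminosity_solar:.0f} Lsun")                # ~123 Lsun, i.e. ~120 Lsun
```

Running this reproduces the article's figures to within rounding: about 318 light-years and roughly 120 times the Sun's luminosity.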
Tau2 Arietis
Astronomy
283
22,577,850
https://en.wikipedia.org/wiki/Berge%27s%20theorem
In graph theory, Berge's theorem states that a matching M in a graph G is maximum (contains the largest possible number of edges) if and only if there is no augmenting path (a path that starts and ends on free (unmatched) vertices, and alternates between edges in and not in the matching) with respect to M. It was proven by French mathematician Claude Berge in 1957 (though already observed by Petersen in 1891 and Kőnig in 1931). Proof To prove Berge's theorem, we first need a lemma. Take a graph G and let M and M′ be two matchings in G. Let G′ be the resultant graph from taking the symmetric difference of M and M′; i.e. (M − M′) ∪ (M′ − M). G′ will consist of connected components that are one of the following: An isolated vertex. An even cycle whose edges alternate between M and M′. A path whose edges alternate between M and M′, with distinct endpoints. The lemma can be proven by observing that each vertex in G′ can be incident to at most 2 edges: one from M and one from M′. Graphs where every vertex has degree less than or equal to 2 must consist of isolated vertices, cycles, and paths. Furthermore, each path and cycle in G′ must alternate between M and M′. In order for a cycle to do this, it must have an equal number of edges from M and M′, and therefore be of even length. Let us now prove the contrapositive of Berge's theorem: G has a matching larger than M if and only if G has an augmenting path. Clearly, an augmenting path P of G can be used to produce a matching M′ that is larger than M: just take M′ to be the symmetric difference of P and M (M′ contains exactly those edges of G that appear in exactly one of P and M). Hence, the reverse direction follows. For the forward direction, let M′ be a matching in G larger than M. Consider D, the symmetric difference of M and M′. Observe that D consists of paths and even cycles (as observed by the earlier lemma). Since M′ is larger than M, D contains a component that has more edges from M′ than from M. Such a component is a path in G that starts and ends with an edge from M′, so it is an augmenting path. Corollaries Corollary 1 Let M be a maximum matching and consider an alternating chain such that the edges in the chain alternate between being and not being in M. If the alternating chain is a cycle or a path of even length starting on an unmatched vertex, then a new maximum matching can be found by interchanging the edges found in M and not in M. For example, if the alternating chain is (m1, n1, m2, n2, ...), where mi ∈ M and ni ∉ M, interchanging them would make all ni part of the new matching and make all mi not part of it. Corollary 2 An edge is considered "free" if it belongs to a maximum matching but does not belong to all maximum matchings. An edge e is free if and only if, in an arbitrary maximum matching M, edge e belongs to an even alternating path starting at an unmatched vertex or to an alternating cycle. By the first corollary, if edge e is part of such an alternating chain, then a new maximum matching M′ must exist, and e would exist either in M or in M′, and therefore be free. Conversely, if edge e is free, then e is in some maximum matching M but not in some other maximum matching M′. Since e is not part of both M and M′, it must still exist after taking the symmetric difference of M and M′. The symmetric difference of M and M′ results in a graph consisting of isolated vertices, even alternating cycles, and alternating paths. Suppose to the contrary that e belongs to some odd-length path component.
But then one of M and M′ must have one fewer edge than the other in this component, meaning that the component as a whole is an augmenting path with respect to that matching. By the theorem proved above, that matching (whether M or M′) cannot be a maximum matching, which contradicts the assumption that both M and M′ are maximum. So, since e cannot belong to any odd-length path component, it must be either in an alternating cycle or in an even-length alternating path. References Matching (graph theory) Theorems in graph theory
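Berge's theorem is the engine behind augmenting-path algorithms for maximum matching: repeatedly find an augmenting path and flip its edges; when no augmenting path remains, the theorem guarantees the matching is maximum. As an editorial illustration, the Python sketch below applies this idea in the bipartite case (which avoids the blossom handling that general graphs require); the function names and the small example graph are illustrative, not taken from the source.

```python
# Augmenting-path search for maximum bipartite matching (Kuhn's algorithm).
# By Berge's theorem, once no augmenting path exists, the matching is maximum.

def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists right-side vertices adjacent to left vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v, or -1

    def try_augment(u, visited):
        # DFS for an alternating path from the free left vertex u to a free
        # right vertex; flipping that path augments the matching by one edge.
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or v's current partner can be rematched elsewhere.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matching_size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            matching_size += 1
    return matching_size, match_right

# Example: left vertices {0, 1, 2}, right vertices {0, 1}.
size, match = max_bipartite_matching({0: [0], 1: [0, 1], 2: [1]}, 3, 2)
print(size, match)  # prints: 2 [0, 1], a maximum matching of size 2
```

In the example, the search for an augmenting path from left vertex 2 fails, so by the theorem the size-2 matching already found is maximum.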
Berge's theorem
Mathematics
910
17,401,600
https://en.wikipedia.org/wiki/Necessity%20good
In economics, a necessity good or a necessary good is a type of normal good. Necessity goods are products and services that consumers will buy regardless of changes in their income levels, which makes demand for these products relatively insensitive to income change. As for any other normal good, an income rise will lead to a rise in demand, but the increase for a necessity good is less than proportional to the rise in income, so the proportion of expenditure on these goods falls as income rises. If the income elasticity of demand is positive but lower than unity, the good is a necessity good. This observation for food, known as Engel's law, states that as income rises, the proportion of income spent on food falls, even if absolute expenditure on food rises. This makes the income elasticity of demand for food between zero and one. Some necessity goods are produced by public utilities. According to Investopedia, stocks of private companies producing necessity goods are known as defensive stocks. Defensive stocks are stocks that provide a constant dividend and stable earnings regardless of the state of the overall stock market. See also Basic needs Income elasticity of demand Wealth (economics) References Goods (economics)
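As an editorial illustration of the elasticity test described above, the Python sketch below computes an arc (midpoint) income elasticity and classifies the good accordingly; all of the numbers are hypothetical, chosen only to show the calculation.

```python
# Classify a good by its income elasticity of demand.
# All figures below are hypothetical, for illustration only.

def income_elasticity(q_old, q_new, income_old, income_new):
    """Arc (midpoint) elasticity: % change in quantity / % change in income."""
    pct_quantity = (q_new - q_old) / ((q_new + q_old) / 2)
    pct_income = (income_new - income_old) / ((income_new + income_old) / 2)
    return pct_quantity / pct_income

def classify(elasticity):
    if elasticity < 0:
        return "inferior good"
    if elasticity < 1:
        return "necessity good (normal, income-inelastic)"
    return "luxury good (normal, income-elastic)"

# Hypothetical household: income rises ~20%, food purchases rise only ~5%.
e = income_elasticity(q_old=100, q_new=105, income_old=50_000, income_new=60_000)
print(f"elasticity = {e:.2f} -> {classify(e)}")  # ~0.27 -> necessity good
```

The example mirrors Engel's law: spending on food rises in absolute terms, but far less than proportionally to income, giving an elasticity between zero and one.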
Necessity good
Physics
237
30,411,167
https://en.wikipedia.org/wiki/History%20of%20water%20supply%20and%20sanitation
The history of water supply and sanitation is one of a logistical challenge to provide clean water and sanitation systems since the dawn of civilization. Where water resources, infrastructure or sanitation systems were insufficient, diseases spread and people fell sick or died prematurely. Major human settlements could initially develop only where fresh surface water was plentiful, such as near rivers or natural springs. Throughout history, people have devised systems to make getting water into their communities and households, and disposing of (and later also treating) wastewater, more convenient. The historical focus of sewage treatment was on the conveyance of raw sewage to a natural body of water, e.g. a river or ocean, where it would be diluted and dissipated. Early human habitations were often built next to water sources. Rivers would often serve as a crude form of natural sewage disposal. Over the millennia, technology has dramatically increased the distances across which water can be relocated. Furthermore, treatment processes to purify drinking water and to treat wastewater have been improved. Prehistory During the Neolithic era, humans dug the first permanent water wells, from where vessels could be filled and carried by hand. Wells dug around 8500 BC have been found on Cyprus, and around 6500 BC in the Jezreel Valley. The size of human settlements was largely dependent on the amount of water available nearby. A primitive indoor fresh- and wastewater system, consisting of two stone channels lined with tree bark, appears to have featured in the houses of Skara Brae and the Barnhouse Settlement in Orkney from around 3000 BC. Combined with a cell-like enclave in a number of houses at Skara Brae, it has been suggested that these may have functioned as an early indoor latrine. Wastewater reuse activities Wastewater reuse is an ancient practice connected to the development of sanitation provision. Reuse of untreated municipal wastewater has been practiced for many centuries with the objective of diverting human waste outside of urban settlements. Likewise, land application of domestic wastewater is an old and common practice, which has gone through different stages of development. Domestic wastewater was used for irrigation by ancient civilizations (e.g. Mesopotamian, Indus valley, and Minoan) since the Bronze Age (ca. 3200–1100 BC). Thereafter, wastewater was used for disposal, irrigation, and fertilization by Hellenic civilizations and later by Romans in areas surrounding cities (e.g. Athens and Rome). Bronze and early Iron Ages Ancient Americas In ancient Peru, the Nazca people employed a system of interconnected wells and an underground watercourse known as puquios. The Mayans were the third-earliest civilization to have employed a system of indoor plumbing using pressurized water. Ancient Near East Mesopotamia The Mesopotamians introduced clay sewer pipes around 4000 BC, with the earliest examples found in the Temple of Bel at Nippur and at Eshnunna; these were used to remove wastewater from sites and to capture rainwater in wells. The city of Uruk also demonstrates the first examples of brick-constructed latrines, from 3200 BC. Clay pipes were later used in the Hittite city of Hattusa. They had easily detachable and replaceable segments, and allowed for cleaning. Ancient Persia The first sanitation systems within prehistoric Iran were built near the city of Zabol. Persian qanats and ab anbars have been used for water supply and cooling.
Ancient Egypt The Pyramid of Sahure and the adjoining temple complex at Abusir were discovered to have a network of copper drainage pipes. Ancient East Asia Ancient China Some of the earliest evidence of water wells is located in China. The Neolithic Chinese discovered and made extensive use of deep drilled groundwater for drinking. The Chinese text The Book of Changes, originally a divination text of the Western Zhou dynasty (1046–771 BC), contains an entry describing how the ancient Chinese maintained their wells and protected their sources of water. Archaeological evidence and old Chinese documents reveal that the prehistoric and ancient Chinese had the aptitude and skills for digging deep water wells for drinking water as early as 6000 to 7000 years ago. A well excavated at the Hemudu excavation site was believed to have been built during the Neolithic era. The well was cased with four rows of logs, with a square frame attached to them at the top of the well. Sixty additional tile wells southwest of Beijing are also believed to have been built around 600 BC for drinking and irrigation. Plumbing is also known to have been used in East Asia since the Qin and Han Dynasties of China. Indus Valley Civilization The Indus Valley civilization in Asia shows early evidence of public water supply and sanitation. The system the Indus developed and managed included a number of advanced features. An exceptional example is the Indus city of Lothal (c. 2350–1810 BC). In Lothal, the ruler's house had its own private bathing platform and latrine, which was connected to an open street drain that discharged into the town's dock. A number of the other houses of the acropolis had burnished-brick bathing platforms that drained into a covered brick sewer, held together with a gypsum-based mortar, that ran to a soak pit outside the town's walls, while the lower town offered soak jars (large buried urns, with a hole in the bottom to permit liquids to drain), the latter of which were regularly emptied and cleaned. Water was supplied from two wells in the town, one in the acropolis, and the other on the edge of the dock. The urban areas of the Indus Valley civilization included public and private baths. Sewage was disposed of through underground drains built with precisely laid bricks, and a sophisticated water management system with numerous reservoirs was established. In the drainage systems, drains from houses were connected to wider public drains. Many of the buildings at Mohenjo-daro had two or more storeys. Water from the roof and upper-storey bathrooms was carried through enclosed terracotta pipes or open chutes that emptied out onto the street drains. The earliest evidence of urban sanitation was seen in Harappa, Mohenjo-daro, and the recently discovered Rakhigarhi of the Indus Valley civilization. This urban plan included the world's first urban sanitation systems. Within the city, individual homes or groups of homes obtained water from wells. From a room that appears to have been set aside for bathing, waste water was directed to covered drains, which lined the major streets. Devices such as shadoofs were used to lift water to ground level. Ruins from the Indus Valley Civilization like Mohenjo-daro in Pakistan and Dholavira in Gujarat in India had settlements with some of the ancient world's most sophisticated sewage systems. They included drainage channels, rainwater harvesting, and street ducts. Stepwells have mainly been used in the Indian subcontinent.
Ancient Mediterranean Ancient Greece The ancient Greek civilization of Crete, known as the Minoan civilization, built advanced underground clay pipes for sanitation and water supply. Their capital, Knossos, had a well-organized water system for bringing in clean water and taking out waste water, with storm sewage canals for overflow when there was heavy rain. People constructed flush toilets in ancient Crete, as in ancient Egypt and, before them, at places of the Indus Civilization; the facilities on Crete possibly included a first flush installation, into which water was poured, dating back to the 16th century BC. These Minoan sanitation facilities were connected to stone sewers that were regularly flushed by rain, flowing in through the collection system. In addition to sophisticated water and sewer systems, they devised elaborate heating systems. The Ancient Greeks of Athens and Asia Minor also used an indoor plumbing system, used for pressurized showers. The Greek inventor Heron used pressurized piping for firefighting purposes in the city of Alexandria. An inverted siphon system, along with glass-covered clay pipes, was used for the first time in the palaces of Crete, Greece. It is still in working condition, after about 3,000 years. Roman Empire In ancient Rome, the Cloaca Maxima, considered a marvel of engineering, discharged into the Tiber. Public latrines were built over the Cloaca Maxima. Beginning in the Roman era, a water wheel device known as a noria supplied water to aqueducts and other water distribution systems in major cities in Europe and the Middle East. The Roman Empire had indoor plumbing, meaning a system of aqueducts and pipes that terminated in homes and at public wells and fountains for people to use. Rome and other ancient societies used lead pipes; while commonly thought to be the cause of lead poisoning in the Roman Empire, the combination of running water which did not stay in contact with the pipe for long and the deposition of precipitation scale actually mitigated the risk from lead pipes. Roman towns and garrisons in Britain between 46 AD and 400 AD had complex sewer networks, sometimes constructed out of hollowed-out elm logs, which were shaped so that they butted together, with the downstream pipe providing a socket for the upstream pipe. Medieval and early modern ages Nepal In Nepal the construction of water conduits like drinking fountains and wells is considered a pious act. A drinking water supply system was developed starting at least as early as 550 AD. This dhunge dhara or hiti system consists of carved stone fountains through which water flows uninterrupted from underground sources. These are supported by numerous ponds and canals that form an elaborate network of water bodies, created as a water resource during the dry season and to help alleviate the water pressure caused by the monsoon rains. After the introduction of modern, piped water systems, starting in the late 19th century, this old system has fallen into disrepair and some parts of it are lost forever. Nevertheless, many people of Nepal still rely on the old hitis on a daily basis. In 2008 the dhunge dharas of the Kathmandu Valley produced 2.95 million litres of water per day. Of the 389 stone spouts found in the Kathmandu Valley in 2010, 233 were still in use, serving about 10% of Kathmandu's population. Another 68 had gone dry, 45 were lost entirely, and 43 were connected to the municipal water supply instead of their original source.
Islamic world Islam stresses the importance of cleanliness and personal hygiene. Islamic hygienical jurisprudence, which dates back to the 7th century, has a number of elaborate rules. Taharah (ritual purity) involves performing wudu (ablution) for the five daily salah (prayers), as well as regularly performing ghusl (bathing), which led to bathhouses being built across the Islamic world. Islamic toilet hygiene also requires washing with water after using the toilet, for purity and to minimize germs. In the Abbasid Caliphate (8th–13th centuries), its capital city of Baghdad (Iraq) had 65,000 baths, along with a sewer system. Cities of the medieval Islamic world had water supply systems powered by hydraulic technology that supplied drinking water along with much greater quantities of water for ritual washing, mainly in mosques and hammams (baths). Bathing establishments in various cities were rated by Arabic writers in travel guides. Medieval Islamic cities such as Baghdad, Córdoba (Islamic Spain), Fez (Morocco) and Fustat (Egypt) also had sophisticated waste disposal and sewage systems. The city of Fustat also had multi-storey tenement buildings (with up to six floors) with flush toilets, which were connected to a water supply system, and flues on each floor carrying waste to underground channels. Al-Karaji (c. 953–1029) wrote a book, The Extraction of Hidden Waters, which presented ground-breaking ideas and descriptions of hydrological and hydrogeological perceptions such as components of the hydrological cycle, groundwater quality, and driving factors of groundwater flow. He also gave an early description of a water filtration process. As the Islamic Golden Age waned and in later times, both Arab and European scholars criticised the condition of canals, streets and waterways at certain urban locations in Egypt. The Egyptian physician Ali ibn Ridwan wrote in the 11th century "the people of al-Fustat are in the habit of throwing whatever dies in their homes... out into the streets and alleys where they decay, and their corruption mixes with the air.... The sewers from their latrines also empty into the Nile. When the flow of water is cut off, the people drink this corruption mingled with the water". The 18th-century French Consul in Egypt, De Pauw, blamed the abandonment of the embalming practices of the Ancient Egyptians and the unsuitability of modern burial practices for the Nile delta for the area becoming "a hotbed of the plague". Some colonial commentary of this kind seemed informed by attitudes underpinning the ruling powers. For instance, the British doctor J. W. Simpson wrote in 1883 "the residents of Damietta [Egypt] have little respect for water, contaminating the Nile and its canals... Arabs do not know mud from clean water"; the historians Schultz, Hipwood and Lee, writing in 2023, conclude that the tenor of Simpson's report "reinforces the British colonial view of Egyptians as inferior to European colonisers". Sub-Saharan Africa In post-classical Kilwa, plumbing was prevalent in the stone homes of the inhabitants. The Husuni Kubwa Palace, as well as other buildings for the ruling elite and wealthy, included the luxury of indoor plumbing. In the Ashanti Empire, toilets were housed in two-storey buildings and were flushed with gallons of boiling water. Medieval Europe Christianity places an emphasis on hygiene.
Early Christian clergy denounced the mixed bathing style of Roman pools, as well as the pagan custom of women bathing naked in front of men, but this did not stop the Church from urging its followers to go to public baths for bathing, which contributed to hygiene and good health according to the Church Fathers Clement of Alexandria and Tertullian. The Church built public bathing facilities that were separate for both sexes near monasteries and pilgrimage sites; also, the popes situated baths within church basilicas and monasteries from the early Middle Ages. Pope Gregory the Great urged his followers on the value of bathing as a bodily need. Contrary to popular belief, bathing and sanitation were not lost in Europe with the collapse of the Roman Empire. Public bathhouses were common in the larger towns and cities of medieval Christendom, such as Constantinople, Paris, Regensburg, Rome and Naples, and great bathhouses were built in Byzantine centers such as Constantinople and Antioch. There is little record of other sanitation systems (apart from sanitation in ancient Rome) in most of Europe until the High Middle Ages. Unsanitary conditions and overcrowding were widespread throughout Europe and Asia during the Middle Ages. This resulted in pandemics such as the Plague of Justinian (541–542) and the Black Death (1347–1351), which killed tens of millions of people. Very high infant and child mortality prevailed in Europe throughout medieval times, due partly to deficiencies in sanitation. In medieval European cities, small natural waterways used for carrying off wastewater were eventually covered over and functioned as sewers. London's River Fleet is such a system. Open drains, or gutters, for waste water run-off ran along the center of some streets. These were known as "kennels" (i.e., canals, channels), and in Paris were sometimes known as "split streets", as the waste water running along the middle physically split the streets into two halves. The first closed sewer constructed in Paris was designed by Hugues Aubriot in 1370 on Rue Montmartre (Montmartre Street), and was 300 meters long. The original purpose of designing and constructing a closed sewer in Paris was less about waste management than about holding back the stench coming from the odorous waste water. In Dubrovnik, then known by its Latin name Ragusa, the Statute of 1272 set out the parameters for the construction of septic tanks and channels for the removal of dirty water. The sewage system was built throughout the 14th and 15th centuries, and it is still operational today, with minor changes and repairs done in recent centuries. Pail closets, outhouses, and cesspits were used to collect human waste. The use of human waste as fertilizer was especially important in China and Japan, where cattle manure was less available. However, most cities did not have a functioning sewer system before the Industrial era, relying instead on nearby rivers or occasional rain showers to wash away the sewage from the streets. In some places, waste water simply ran down the streets, which had stepping stones to keep pedestrians out of the muck, and eventually drained as runoff into the local watershed. In the 16th century, Sir John Harington invented a flush toilet, as a device for Queen Elizabeth I (his godmother), that released wastes into cesspools. After the adoption of gunpowder, municipal outhouses became an important source of raw material for the making of saltpeter in European countries.
In London, the contents of the city's outhouses were collected every night by commissioned wagons and delivered to the nitre beds, where the waste was laid into specially designed soil beds to produce earth rich in mineral nitrates. The nitrate-rich earth would then be further processed to produce saltpeter, or potassium nitrate, an important ingredient of black powder used in the making of gunpowder. Classic and early modern Mesoamerica The Classic Maya at Palenque had underground aqueducts and flush toilets; the Classic Maya even used household water filters using locally abundant limestone carved into a porous cylinder, made so as to work in a manner strikingly similar to modern ceramic water filters. In Spain and Spanish America, a community-operated watercourse known as an acequia, combined with a simple sand filtration system, provided potable water. Sewage farms for disposal and irrigation "Sewage farms" (i.e. wastewater application to the land for disposal and agricultural use) were operated in Bunzlau (Silesia) in 1531, in Edinburgh (Scotland) in 1650, in Paris (France) in 1868, in Berlin (Germany) in 1876 and in different parts of the US since 1871, where wastewater was used for beneficial crop production. From the 16th to the 18th centuries, in many rapidly growing countries and cities of Europe (e.g. Germany, France) and in the United States, "sewage farms" were increasingly seen as a solution for the disposal of large volumes of wastewater, and some of them are still in operation today. Irrigation with sewage and other wastewater effluents has a long history also in China and India, and a large "sewage farm" was established in Melbourne, Australia, in 1897. Modern age Water supply Until the Enlightenment era, little progress was made in water supply and sanitation. London's water supply infrastructure developed over many centuries, from early medieval conduits, through major 19th-century treatment works built in response to cholera threats, to modern, large-scale reservoirs. The practice of water treatment became mainstream during the 19th century, and its virtues were made starkly apparent after the investigations of the physician John Snow during the 1854 Broad Street cholera outbreak demonstrated the role of the water supply in spreading the cholera epidemic. Sewer systems A significant development was the construction of a network of sewers to collect wastewater. In some cities, including Rome, Istanbul (Constantinople) and Fustat, networked ancient sewer systems continue to function today as collection systems for those cities' modernized sewer systems. Instead of flowing to a river or the sea, the pipes have been re-routed to modern sewer treatment facilities. Before modern sewers were invented, cesspools that collected human waste were the most widely used sanitation system. In ancient Mesopotamia, vertical shafts carried waste away into cesspools.
Similar systems existed in the Indus Valley civilization in modern-day Pakistan and in Ancient Crete and Greece. In the Middle Ages, waste was collected into cesspools that were periodically emptied by workers known as 'rakers', who would often sell it as fertilizer to farmers outside the city. Archaeological discoveries have shown that some of the earliest sewer systems were developed in the third millennium BC in the ancient cities of Harappa and Mohenjo-daro in present-day Pakistan. These primitive sewers were carved in the ground alongside buildings. This discovery reveals the conceptual understanding of waste disposal by early civilizations. The tremendous growth of cities in Europe and North America during the Industrial Revolution quickly led to crowding, which acted as a constant source of disease outbreaks. As cities grew in the 19th century, concerns were raised about public health. As part of a trend of municipal sanitation programs in the late 19th and 20th centuries, many cities constructed extensive gravity sewer systems to help control outbreaks of disease such as typhoid and cholera. Storm and sanitary sewers were necessarily developed along with the growth of cities. By the 1840s the luxury of indoor plumbing, which mixes human waste with water and flushes it away, eliminated the need for cesspools. Modern sewerage systems were first built in the mid-nineteenth century as a reaction to the exacerbation of sanitary conditions brought on by heavy industrialization and urbanization. Baldwin Latham, a British civil engineer, contributed to the rationalization of sewerage and house drainage systems and was a pioneer in sanitary engineering. He developed the concept of the oval sewer pipe to facilitate sewer drainage and to prevent sludge deposition and flooding. Due to the contaminated water supply, cholera outbreaks occurred in London in 1832, 1849 and 1855, killing tens of thousands of people. This, combined with the Great Stink of 1858, when the smell of untreated human waste in the River Thames became overpowering, and the report into sanitation reform of the Royal Commissioner Edwin Chadwick, led to the Metropolitan Commission of Sewers appointing Joseph Bazalgette to construct a vast underground sewage system for the safe removal of waste. Contrary to Chadwick's recommendations, Bazalgette's system, and others later built in Continental Europe, did not pump the sewage onto farm land for use as fertilizer; it was simply piped to a natural waterway away from population centres, and pumped back into the environment. Liverpool, London and other cities in the UK As recently as the late 19th century, sewerage systems in some parts of the rapidly industrializing United Kingdom were so inadequate that water-borne diseases such as cholera and typhoid remained a risk. From as early as 1535, there were efforts to stop polluting the River Thames in London, beginning with an Act passed that year prohibiting the dumping of excrement into the river. Leading up to the Industrial Revolution, the River Thames was identified as being thick and black due to sewage, and it was even said that the river "smells like death". As Britain was the first country to industrialize, it was also the first to experience the disastrous consequences of major urbanization and was the first to construct a modern sewerage system to mitigate the resultant unsanitary conditions.
During the early 19th century, the River Thames was effectively an open sewer, leading to frequent outbreaks of cholera epidemics. Proposals to modernize the sewerage system had been made during 1856, but were neglected due to lack of funds. However, after the Great Stink of 1858, Parliament realized the urgency of the problem and resolved to create a modern sewerage system. Liverpool However, ten years earlier and 200 miles to the north, James Newlands, a Scottish engineer, was one of a celebrated trio of pioneering officers appointed under a private Act, the Liverpool Sanitory Act, by the Borough of Liverpool Health of Towns Committee. The other officers appointed under the Act were William Henry Duncan, Medical Officer for Health, and Thomas Fresh, Inspector of Nuisances (an early antecedent of the environmental health officer). One of five applicants for the post, Newlands was appointed Borough Engineer of Liverpool on 26 January 1847. He made a careful and exact survey of Liverpool and its surroundings, involving approximately 3,000 geodetical observations and resulting in the construction of a contour map of the town and its neighbourhood on a scale of one inch to 20 feet (6.1 m). From this elaborate survey Newlands proceeded to lay down a comprehensive system of outlet and contributory sewers, and main and subsidiary drains, to an aggregate extent of nearly 300 miles (480 km). He presented the details of this projected system to the Corporation in April 1848. In July 1848, James Newlands' sewer construction programme began, and over the next 11 years 86 miles (138 km) of new sewers were built. Between 1856 and 1862, another 58 miles (93 km) were added. This programme was completed in 1869. Before the sewers were built, life expectancy in Liverpool was 19 years, and by the time Newlands retired it had more than doubled. London Joseph Bazalgette, a civil engineer and Chief Engineer of the Metropolitan Board of Works, was given responsibility for similar work in London. He designed an extensive underground sewerage system that diverted waste to the Thames Estuary, downstream of the main center of population. Six main interceptor sewers, totaling almost 100 miles (160 km) in length, were constructed, some incorporating stretches of London's 'lost' rivers. Three of these sewers were north of the river, the southernmost, low-level one being incorporated in the Thames Embankment. The Embankment also allowed new roads, new public gardens, and the Circle Line of the London Underground. The intercepting sewers, constructed between 1859 and 1865, were fed by 450 miles (720 km) of main sewers that, in turn, conveyed the contents of some 13,000 miles (21,000 km) of smaller local sewers. Construction of the interceptor system required 318 million bricks, 2.7 million cubic metres of excavated earth and 670,000 cubic metres of concrete. Gravity allowed the sewage to flow eastwards, but in places such as Chelsea, Deptford and Abbey Mills, pumping stations were built to raise the water and provide sufficient flow. Sewers north of the Thames fed into the Northern Outfall Sewer, which led to a major treatment works at Beckton. South of the river, the Southern Outfall Sewer extended to a similar facility at Crossness. With only minor modifications, Bazalgette's engineering achievement remains the basis for sewerage design up to the present day.
Other cities in the UK In Merthyr Tydfil, a large town in South Wales, most houses discharged their sewage to individual cesspits, which persistently overflowed, causing the pavements to be awash with foul sewage. Paris, France In 1802, Napoleon built the Ourcq canal, which brought 70,000 cubic meters of water a day to Paris, while the Seine river received large volumes of wastewater each day. The Paris cholera epidemic of 1832 sharpened the public awareness of the necessity for some sort of drainage system to deal with sewage and wastewater in a better and healthier way. Between 1865 and 1920 Eugene Belgrand led the development of a large-scale system for water supply and wastewater management. Between these years approximately 600 kilometers of aqueducts were built to bring in potable spring water, which freed the poor-quality water to be used for flushing streets and sewers. By 1894 laws were passed which made drainage mandatory. The treatment of Paris sewage, though, was left to natural devices, as 5,000 hectares of land were used to spread the waste out to be naturally purified. Further, the lack of sewage treatment left Parisian sewage pollution to become concentrated downstream in the town of Clichy, effectively forcing residents to pack up and move elsewhere. The 19th-century brick-vaulted Paris sewers serve as a tourist attraction nowadays. Hamburg and Frankfurt, Germany The first comprehensive sewer system in a German city was built in Hamburg, Germany, in the mid-19th century. In 1863, work began on the construction of a modern sewerage system for the rapidly growing city of Frankfurt am Main, based on design work by William Lindley. Twenty years after the system's completion, the death rate from typhoid had fallen from 80 to 10 per 100,000 inhabitants. United States The first sewer systems in the United States were built in the late 1850s in Chicago and Brooklyn. Sewage treatment Initially, the gravity sewer systems discharged sewage directly to surface waters without treatment. Later, cities attempted to treat the sewage before discharge in order to prevent water pollution and waterborne diseases. Application on agricultural land Early techniques for sewage treatment involved applying the sewage to agricultural land. One of the first attempts at diverting sewage for use as a fertilizer on the farm was made by the cotton mill owner James Smith in the 1840s. He experimented with a piped distribution system initially proposed by James Vetch that collected sewage from his factory and pumped it into the outlying farms, and his success was enthusiastically followed by Edwin Chadwick and supported by organic chemist Justus von Liebig. The idea was officially adopted by the Health of Towns Commission, and various schemes (known as sewage farms) were trialled by different municipalities over the next 50 years.
At first, the heavier solids were channeled into ditches on the side of the farm and were covered over when full, but soon flat-bottomed tanks were employed as reservoirs for the sewage; the earliest patent was taken out by William Higgs in 1846 for "tanks or reservoirs in which the contents of sewers and drains from cities, towns and villages are to be collected and the solid animal or vegetable matters therein contained, solidified and dried..." Improvements to the design of the tanks included the introduction of the horizontal-flow tank in the 1850s and the radial-flow tank in 1905. These tanks had to be manually de-sludged periodically, until the introduction of automatic mechanical de-sludgers in the early 1900s. Chemical treatment and sedimentation As pollution of water bodies became a concern, cities attempted to treat the sewage before discharge. In the late 19th century some cities began to add chemical treatment and sedimentation systems to their sewers. In the United States, the first sewage treatment plant using chemical precipitation was built in Worcester, Massachusetts, in 1890. During the half-century around 1900, these public health interventions succeeded in drastically reducing the incidence of water-borne diseases among the urban population, and were an important cause in the increases of life expectancy experienced at the time. Odor was considered the main problem in waste disposal, and to address it, sewage could be drained to a lagoon, or "settled" and the solids removed, to be disposed of separately. This process is now called "primary treatment" and the settled solids are called "sludge". At the end of the 19th century, since primary treatment still left odor problems, it was discovered that bad odors could be prevented by introducing oxygen into the decomposing sewage. This was the beginning of the biological aerobic and anaerobic treatments which are fundamental to wastewater processes. The precursor to the modern septic tank was the cesspool, in which the water was sealed off to prevent contamination and the solid waste was slowly liquified due to anaerobic action; it was invented by L. H. Mouras in France in the 1860s. Donald Cameron, as City Surveyor for Exeter, patented an improved version in 1895, which he called a 'septic tank', septic having the meaning of 'bacterial'. These are still in worldwide use, especially in rural areas unconnected to large-scale sewage systems. Biological treatment It was not until the late 19th century that it became possible to treat sewage by biologically decomposing the organic components through the use of microorganisms and removing the pollutants. Land treatment was also steadily becoming less feasible, as cities grew and the volume of sewage produced could no longer be absorbed by the farmland on the outskirts. Edward Frankland conducted experiments at the sewage farm in Croydon, England, during the 1870s and was able to demonstrate that filtration of sewage through porous gravel produced a nitrified effluent (the ammonia was converted into nitrate) and that the filter remained unclogged over long periods of time. This established the then revolutionary possibility of biological treatment of sewage using a contact bed to oxidize the waste. This concept was taken up by the chief chemist for the London Metropolitan Board of Works, William Dibdin, in 1887: ...in all probability the true way of purifying sewage...will be first to separate the sludge, and then turn into neutral effluent...
retain it for a sufficient period, during which time it should be fully aerated, and finally discharge it into the stream in a purified condition. This is indeed what is aimed at and imperfectly accomplished on a sewage farm. From 1885 to 1891, filters working on this principle were constructed throughout the UK, and the idea was also taken up in the US at the Lawrence Experiment Station in Massachusetts, where Frankland's work was confirmed. In 1890 the LES developed a 'trickling filter' that gave a much more reliable performance. Contact beds were developed in Salford, Lancashire, and by scientists working for the London City Council in the early 1890s. According to Christopher Hamlin, this was part of a conceptual revolution that replaced the philosophy that saw "sewage purification as the prevention of decomposition" with one that tried to facilitate the biological processes that destroy sewage naturally. Contact beds were tanks containing an inert substance, such as stones or slate, that maximized the surface area available for the microbial growth to break down the sewage. The sewage was held in the tank until it was fully decomposed and it was then filtered out into the ground. This method quickly became widespread, especially in the UK, where it was used in Leicester, Sheffield, Manchester and Leeds. The bacterial bed was simultaneously developed by Joseph Corbett as Borough Engineer in Salford, and experiments in 1905 showed that his method was superior in that greater volumes of sewage could be purified better for longer periods of time than could be achieved by the contact bed. The Royal Commission on Sewage Disposal published its eighth report in 1912, which set what became the international standard for sewage discharge into rivers: the '20:30 standard', which allowed "2 parts per hundred thousand" of biochemical oxygen demand and "3 parts per hundred thousand" of suspended solids. Activated sludge process Most cities in the Western world added more effective systems for sewage treatment in the early 20th century, after scientists at the University of Manchester discovered the sewage treatment process of activated sludge in 1912. Toilets With the onset of the Industrial Revolution and related advances in technology, the flush toilet began to emerge into its modern form in the late 18th century (see Development of the modern flush toilet). In urban areas, toilets are typically connected to a municipal sanitary sewer system, while in more rural areas they are usually connected to an onsite sewage facility (septic system). Where this is not feasible or desired, dry toilets are an alternative option. Water supply An ambitious engineering project to bring fresh water from Hertfordshire to London was undertaken by Hugh Myddleton, who oversaw the construction of the New River between 1609 and 1613. The New River Company became one of the largest private water companies of the time, supplying the City of London and other central areas. The first civic system of piped water in England was established in Derby in 1692, using wooden pipes, as was common for several centuries. The Derby Waterworks included waterwheel-powered pumps for raising water out of the River Derwent and storage tanks for distribution. It was in the 18th century that a rapidly growing population fueled a boom in the establishment of private water supply networks in London.
The Chelsea Waterworks Company was established in 1723 "for the better supplying the City and Liberties of Westminster and parts adjacent with water". The company created extensive ponds in the area bordering Chelsea and Pimlico using water from the tidal Thames. Other waterworks were established in London, including at West Ham in 1743, at Lea Bridge before 1767, Lambeth Waterworks Company in 1785, West Middlesex Waterworks Company in 1806 and Grand Junction Waterworks Company in 1811. The S-bend pipe was invented by Alexander Cummings in 1775 but became known as the U-bend following the introduction of the U-shaped trap by Thomas Crapper in 1880. The first screw-down water tap was patented in 1845 by Guest and Chrimes, a brass foundry in Rotherham. Water treatment Sand filter Sir Francis Bacon attempted to desalinate sea water by passing the flow through a sand filter. Although his experiment did not succeed, it marked the beginning of a new interest in the field. The first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in Paisley, Scotland, John Gibb, installed an experimental filter, selling his unwanted surplus to the public. This method was refined in the following two decades by engineers working for private water companies, and it culminated in the first treated public water supply in the world, installed by engineer James Simpson for the Chelsea Waterworks Company in London in 1829. This installation provided filtered water for every resident of the area, and the network design was widely copied throughout the United Kingdom in the ensuing decades. The Metropolis Water Act 1852 introduced the regulation of the water supply companies in London, including minimum standards of water quality for the first time. The Act "made provision for securing the supply to the Metropolis of pure and wholesome water", and required that all water be "effectually filtered" from 31 December 1855. This was followed up with legislation for the mandatory inspection of water quality, including comprehensive chemical analyses, in 1858. This legislation set a worldwide precedent for similar state public health interventions across Europe. The Metropolitan Commission of Sewers was formed at the same time, water filtration was adopted throughout the country, and new water intakes on the Thames were established above Teddington Lock. Automatic pressure filters, in which the water is forced under pressure through the filtration system, were introduced in England in 1899. Water chlorination In what may have been one of the first attempts to use chlorine, William Soper used chlorinated lime to treat the sewage produced by typhoid patients in 1879. In a paper published in 1894, Moritz Traube formally proposed the addition of chloride of lime (calcium hypochlorite) to water to render it "germ-free." Two other investigators confirmed Traube's findings and published their papers in 1895. Early attempts at implementing water chlorination at a water treatment plant were made in 1893 in Hamburg, Germany, and in 1897 the town of Maidstone, in Kent, England, was the first to have its entire water supply treated with chlorine. Permanent water chlorination began in 1905, when a faulty slow sand filter and a contaminated water supply led to a serious typhoid fever epidemic in Lincoln, England. Dr. Alexander Cruickshank Houston used chlorination of the water to stem the epidemic. His installation fed a concentrated solution of chloride of lime to the water being treated. 
The chlorination of the water supply helped stop the epidemic and, as a precaution, the chlorination was continued until 1911, when a new water supply was instituted. The first continuous use of chlorine in the United States for disinfection took place in 1908 at Boonton Reservoir (on the Rockaway River), which served as the supply for Jersey City, New Jersey. Chlorination was achieved by controlled additions of dilute solutions of chloride of lime (calcium hypochlorite) at doses of 0.2 to 0.35 ppm. The treatment process was conceived by Dr. John L. Leal and the chlorination plant was designed by George Warren Fuller. Over the next few years, chlorine disinfection systems using chloride of lime were rapidly installed in drinking water systems around the world. The technique of purification of drinking water by use of compressed liquefied chlorine gas was developed by a British officer in the Indian Medical Service, Vincent B. Nesfield, in 1903. According to his own account, "It occurred to me that chlorine gas might be found satisfactory ... if suitable means could be found for using it.... The next important question was how to render the gas portable. This might be accomplished in two ways: By liquefying it, and storing it in lead-lined iron vessels, having a jet with a very fine capillary canal, and fitted with a tap or a screw cap. The tap is turned on, and the cylinder placed in the amount of water required. The chlorine bubbles out, and in ten to fifteen minutes the water is absolutely safe. This method would be of use on a large scale, as for service water carts." U.S. Army Major Carl Rogers Darnall, Professor of Chemistry at the Army Medical School, gave the first practical demonstration of this in 1910. Shortly thereafter, Major William J. L. Lyster of the Army Medical Department used a solution of calcium hypochlorite in a linen bag to treat water. For many decades, Lyster's method remained the standard for U.S. ground forces in the field and in camps, implemented in the form of the familiar Lyster Bag (also spelled Lister Bag). This work became the basis for present day systems of municipal water purification. Fluoridation Water fluoridation is the practice of adding fluoride to drinking water in order to decrease tooth decay. The architect of the first fluoride studies was Dr. H. Trendley Dean, head of the Dental Hygiene Unit at the National Institute of Health (NIH). Dean began investigating the epidemiology of fluorosis in 1931. By the late 1930s, he and his staff had made a critical discovery: fluoride levels of up to 1.0 ppm in drinking water did not cause enamel fluorosis in most people, and caused only mild enamel fluorosis in a small percentage of people. This finding sent Dean's research in a new direction. He recalled from reading McKay's and Black's studies on fluorosis that mottled tooth enamel is unusually resistant to decay. Dean wondered whether adding fluoride to drinking water at physically and cosmetically safe levels would help fight tooth decay. This hypothesis, Dean told his colleagues, would need to be tested. In 1944, Dean got his wish. That year, the City Commission of Grand Rapids, Michigan, after numerous discussions with researchers from the PHS, the Michigan Department of Health, and other public health organizations, voted to add fluoride to its public water supply the following year. In 1945, Grand Rapids became the first city in the world to fluoridate its drinking water. 
The Grand Rapids water fluoridation study was originally sponsored by the U.S. Surgeon General, but was taken over by the National Institute of Dental Research (NIDR) shortly after that institute's inception in 1948. Trends The Sustainable Development Goal 6, formulated in 2015, includes targets on access to water supply and sanitation at a global level. In developing countries, self-supply of water and sanitation is used as an approach to the incremental improvement of water and sanitation services, which are mainly financed by the user. Decentralized wastewater systems are also growing in importance to achieve sustainable sanitation. Understanding of health aspects The Greek historian Thucydides (c. 460 – c. 400 BC) was the first person to write, in his account of the plague of Athens, that diseases could spread from an infected person to others. The Mosaic Law, within the first five books of the Hebrew Bible, contains the earliest recorded thoughts on contagion in the spread of disease. Specifically, it presents instructions on quarantine and washing in relation to leprosy and venereal disease. One theory held that contagious diseases not transmitted by direct contact were spread by spore-like "seeds" (Latin: semina) that were present in and dispersible through the air. In his poem De rerum natura (On the Nature of Things, c. 56 BC), the Roman poet Lucretius (c. 99 BC – c. 55 BC) stated that the world contained various "seeds", some of which could sicken a person if they were inhaled or ingested. The Roman statesman Marcus Terentius Varro (116–27 BC) wrote, in his Rerum rusticarum libri III (Three Books on Agriculture, 36 BC): "Precautions must also be taken in the neighborhood of swamps ... because there are bred certain minute creatures which cannot be seen by the eyes, which float in the air and enter the body through the mouth and nose and there cause serious diseases." The Greek physician Galen (AD 129 – c. 200/216) speculated in his On Initial Causes (c. 175 AD) that some patients might have "seeds of fever". In his On the Different Types of Fever (c. 175 AD), Galen speculated that plagues were spread by "certain seeds of plague", which were present in the air. And in his Epidemics (c. 176–178 AD), Galen explained that patients might relapse during recovery from fever because some "seed of the disease" lurked in their bodies, which would cause a recurrence of the disease if the patients did not follow a physician's therapeutic regimen. The fiqh scholar Ibn al-Haj al-Abdari (c. 1250–1336), while discussing Islamic diet and hygiene, gave advice and warnings about impurities that contaminate water, food, and garments, and could spread through the water supply. Long before studies had established the germ theory of disease, or any advanced understanding of the nature of water as a vehicle for transmitting disease, traditional beliefs had cautioned against the consumption of water, rather favoring processed beverages such as beer, wine and tea. For example, in the camel caravans that crossed Central Asia along the Silk Road, the explorer Owen Lattimore noted (in 1928), "The reason we drank so much tea was because of the bad water. Water alone, unboiled, is never drunk. There is a superstition that it causes blisters on the feet." One of the earliest understandings of waterborne disease in Europe arose during the 19th century, as the Industrial Revolution transformed the continent. 
Waterborne diseases, such as cholera, were once wrongly explained by the miasma theory, the theory that bad air causes the spread of diseases. However, people started to find a correlation between water quality and waterborne diseases, which led to different water purification methods, such as sand filtration and the chlorination of drinking water. The founders of microscopy, Antonie van Leeuwenhoek and Robert Hooke, used the newly invented microscope to observe for the first time small material particles suspended in water, laying the groundwork for the future understanding of waterborne pathogens and waterborne diseases. In the 19th century, Britain was a center of rapid urbanization, and as a result many health and sanitation problems manifested, for example cholera outbreaks and pandemics. This resulted in Britain playing a large role in the development of public health. Before the link between contaminated drinking water and diseases such as cholera was discovered, the miasma theory was used to explain the outbreaks of these illnesses. Miasma theory is the theory that certain diseases and illnesses are the products of "bad airs". The investigations of the physician John Snow in the United Kingdom during the 1854 Broad Street cholera outbreak clarified the connections between waterborne diseases and polluted drinking water. Although the germ theory of disease had not yet been developed, Snow's observations led him to discount the prevailing miasma theory. His 1855 essay On the Mode of Communication of Cholera conclusively demonstrated the role of the water supply in spreading the cholera epidemic in Soho, with the use of a dot distribution map and statistical proof to illustrate the connection between the quality of the water source and cholera cases. During the 1854 epidemic, he collected and analyzed data establishing that people who drank water from contaminated sources such as the Broad Street pump died of cholera at much higher rates than those who got water elsewhere. His data convinced the local council to disable the water pump, which promptly ended the outbreak. Edwin Chadwick, in particular, played a key role in Britain's sanitation movement, using the miasma theory to back up his plans for improving the sanitation situation in Britain. Although Chadwick contributed to the development of public health in the 19th century, it was John Snow and William Budd who introduced the idea that cholera was the consequence of contaminated water, and that diseases could therefore be transmitted through drinking water. People found that purifying and filtering their water improved its quality and limited the cases of waterborne diseases. This finding was first illustrated in the German town of Altona, which used a sand filtration system for its water supply. A nearby town that did not use any filtering system for its water suffered from a cholera outbreak while Altona remained unaffected by the disease, providing evidence that water quality was connected to the disease. After this discovery, Britain and the rest of Europe began to filter their drinking water, as well as chlorinate it, to fight off waterborne diseases like cholera. 
Related important concepts Laundry Night soil Rainwater harvesting Water industry See also Ancient water conservation techniques Water supply and sanitation in the Indus-Saraswati Valley Civilisation History of stepwells in Gujarat Sanitation in ancient Rome Traditional water sources of Persian antiquity List of water supply and sanitation by country History of water filters References Further reading Juuti, Petri S., Tapio S. Katko, and Heikki S. Vuorinen. Environmental history of water: global views on community water supply and sanitation (IWA Publishing, 2007) Plumbing Bathrooms Human ecology
History of water supply and sanitation
Engineering,Environmental_science
10,455
59,184,147
https://en.wikipedia.org/wiki/Residue-to-product%20ratio
In climate engineering, the residue-to-product ratio (RPR) is used to calculate how much unused crop residue might be left after harvesting a particular crop. Also called the residue yield or straw/grain ratio, it is computed as the mass of residue divided by the mass of crop produced, so the result is dimensionless. The RPR can be used to project the costs and benefits of bio-energy projects, and is crucial in determining their financial sustainability. The RPR is particularly important for estimating the production of biochar, a beneficial farm input obtained from crop residues through pyrolysis. However, RPR values are rough estimates taken from broad production statistics, and can vary greatly depending on crop variety, climate, processing, and residual moisture content. See also Carbon sequestration Biomass Biochar Biofuel Pyrolysis References Climate engineering Crops Biofuels
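A minimal sketch of how the ratio defined above might be applied in practice; the function name and the sample RPR value of 1.5 are illustrative assumptions for this example, not figures from the article:

```python
def estimate_residue_mass(product_mass: float, rpr: float) -> float:
    """Estimate crop residue mass from product mass using a
    residue-to-product ratio; RPR is dimensionless, so the result
    carries the same unit as the input (e.g. tonnes)."""
    return product_mass * rpr

# Hypothetical example: 1,000 t of harvested grain with an assumed
# RPR of 1.5 suggests roughly 1,500 t of residue, before accounting
# for collection losses or competing uses of the straw.
print(estimate_residue_mass(1000.0, 1.5))  # 1500.0
```

Because RPR values vary with crop variety, climate, and moisture content, such estimates are best treated as order-of-magnitude planning figures.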
Residue-to-product ratio
Engineering
185
17,702,796
https://en.wikipedia.org/wiki/Bilevel%20optimization
Bilevel optimization is a special kind of optimization where one problem is embedded (nested) within another. The outer optimization task is commonly referred to as the upper-level optimization task, and the inner optimization task is commonly referred to as the lower-level optimization task. These problems involve two kinds of variables, referred to as the upper-level variables and the lower-level variables. Mathematical formulation of the problem A general formulation of the bilevel optimization problem can be written as follows:

$\min_{x \in X,\, y \in Y} \; F(x, y)$

subject to: $G_i(x, y) \leq 0$, for $i \in \{1, 2, \ldots, I\}$

$y \in \operatorname{arg\,min}_{z \in Y} \{\, f(x, z) : g_j(x, z) \leq 0,\; j \in \{1, 2, \ldots, J\} \,\}$

where $F, f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$, $G_i, g_j : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$, $x \in X \subseteq \mathbb{R}^n$ and $y \in Y \subseteq \mathbb{R}^m$. In the above formulation, $F$ represents the upper-level objective function and $f$ represents the lower-level objective function. Similarly $x$ represents the upper-level decision vector and $y$ represents the lower-level decision vector. $G_i$ and $g_j$ represent the inequality constraint functions at the upper and lower levels respectively. If some objective function is to be maximized, it is equivalent to minimize its negative. The formulation above is also capable of representing equality constraints, as these can be easily rewritten in terms of inequality constraints: for instance, $h(x, y) = 0$ can be translated as $h(x, y) \leq 0$, $-h(x, y) \leq 0$. However, it is usually worthwhile to treat equality constraints separately, to deal with them more efficiently in a dedicated way; in the representation above, they have been omitted for brevity. Stackelberg competition Bilevel optimization was first realized in the field of game theory by the German economist Heinrich Freiherr von Stackelberg, who published Market Structure and Equilibrium (Marktform und Gleichgewicht) in 1934, which described this hierarchical problem. The strategic game described in his book came to be known as the Stackelberg game; it consists of a leader and a follower. The leader is commonly referred to as a Stackelberg leader and the follower is commonly referred to as a Stackelberg follower. In a Stackelberg game, the players of the game compete with each other, such that the leader makes the first move, and then the follower reacts optimally to the leader's action. This kind of hierarchical game is asymmetric in nature, where the leader and the follower cannot be interchanged. The leader knows ex ante that the follower observes its actions before responding in an optimal manner. Therefore, if the leader wants to optimize its objective, then it needs to anticipate the optimal response of the follower. In this setting, the leader's optimization problem contains a nested optimization task that corresponds to the follower's optimization problem. In Stackelberg games, the upper-level optimization problem is commonly referred to as the leader's problem and the lower-level optimization problem is commonly referred to as the follower's problem. If the follower has more than one optimal response to a certain selection of the leader, there are two possible options: either the best or the worst follower's solution with respect to the leader's objective function is assumed, i.e. the follower is assumed to act either in a cooperative way or in an aggressive way. The resulting bilevel problem is called an optimistic bilevel programming problem or a pessimistic bilevel programming problem respectively. Applications Bilevel optimization problems are commonly found in a number of real-world problems. This includes problems in the domains of transportation, economics, decision science, business, engineering, environmental economics etc. Some of the practical bilevel problems studied in the literature are briefly discussed below. 
Toll setting problem In the field of transportation, bilevel optimization commonly appears in the toll-setting problem. Consider a network of highways that is operated by the government. The government wants to maximize its revenues by choosing the optimal toll setting for the highways. However, the government can maximize its revenues only by taking the highway users' problem into account. For any given tax structure the highway users solve their own optimization problem, where they minimize their traveling costs by deciding between utilizing the highways or an alternative route. Under these circumstances, the government's problem needs to be formulated as a bilevel optimization problem. The upper level consists of the government's objectives and constraints, and the lower level consists of the highway users' objectives and constraints for a given tax structure. It is noteworthy that the government will be able to identify the revenue generated by a particular tax structure only by solving the lower-level problem, which determines to what extent the highways are used. Structural optimization Structural optimization problems consist of two levels of optimization task and are commonly referred to as mathematical programming problems with equilibrium constraints (MPEC). The upper-level objective in such problems may involve cost minimization or weight minimization subject to bounds on displacements, stresses and contact forces. The decision variables at the upper level usually are the shape of the structure, choice of materials, amount of material, etc. However, for any given set of upper-level variables, the state variables (displacements, stresses and contact forces) can only be figured out by solving the potential energy minimization problem that appears as an equilibrium satisfaction constraint or lower-level minimization task to the upper-level problem. Defense applications Bilevel optimization has a number of applications in defense, like strategic offensive and defensive force structure design, strategic bomber force structure, and allocation of tactical aircraft to missions. The offensive entity in this case may be considered a leader and the defensive entity may be considered a follower. If the leader wants to maximize the damage caused to the opponent, then it can only be achieved if the leader takes the reactions of the follower into account. A rational follower will always react optimally to the leader's offensive. Therefore, the leader's problem appears as an upper-level optimization task, and the optimal response of the follower to the leader's actions is determined by solving the lower-level optimization task. Workforce and human resources applications Bilevel optimization can serve as a decision support tool for firms in real-life settings to improve workforce and human resources decisions. The first level reflects the company's goal of maximizing profitability. The second level reflects the employees' goal of minimizing the gap between their desired salary and a preferred work plan. Such a bilevel model can provide an exact solution based on a mixed-integer formulation, together with a computational analysis of changing employee behavior in response to the firm's strategy, thus demonstrating how the problem's parameters influence the decision policy. Solution methodologies Bilevel optimization problems are hard to solve. One solution method is to reformulate bilevel optimization problems as optimization problems for which robust solution algorithms are available; a brute-force alternative, shown in the sketch below, is to solve the lower-level problem from scratch inside every evaluation of the upper-level objective. 
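To make the nested leader-follower structure concrete, here is a minimal numerical sketch (not from the literature cited here; the quadratic Stackelberg duopoly model, its parameter values, and all names are assumptions chosen for illustration):

```python
from scipy.optimize import minimize_scalar

# Illustrative Stackelberg duopoly: inverse demand P(Q) = A - Q,
# identical marginal cost C for the leader (q1) and the follower (q2).
A, C = 10.0, 2.0

def follower_best_response(q1):
    """Lower level: the follower maximizes its own profit given q1."""
    neg_profit = lambda q2: -(A - q1 - q2 - C) * q2  # negated for minimization
    return minimize_scalar(neg_profit, bounds=(0.0, A), method="bounded").x

def leader_neg_profit(q1):
    """Upper level: the leader's negated profit, anticipating the
    follower's optimal reaction (the nested lower-level solve)."""
    q2 = follower_best_response(q1)
    return -(A - q1 - q2 - C) * q1

q1_star = minimize_scalar(leader_neg_profit, bounds=(0.0, A), method="bounded").x
q2_star = follower_best_response(q1_star)
# Closed form for this model: q1 = (A - C)/2 = 4.0, q2 = (A - C)/4 = 2.0;
# the numerical result should match closely.
print(round(q1_star, 3), round(q2_star, 3))
```

Such brute-force nesting is transparent but expensive, since each upper-level evaluation triggers a full lower-level solve; this is one motivation for the single-level reformulations discussed next.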
Extended Mathematical Programming (EMP) is an extension to mathematical programming languages that provides several keywords for bilevel optimization problems. These annotations facilitate the automatic reformulation to Mathematical Programs with Equilibrium Constraints (MPECs), for which mature solver technology exists. EMP is available within GAMS. KKT reformulation Certain bilevel programs, notably those having a convex lower level and satisfying a regularity condition (e.g. Slater's condition), can be reformulated to a single level by replacing the lower-level problem with its Karush–Kuhn–Tucker conditions. This yields a single-level mathematical program with complementarity constraints, i.e. an MPEC. If the lower-level problem is not convex, this approach enlarges the feasible set of the bilevel optimization problem by local optimal solutions and stationary points of the lower level, which means that the single-level problem obtained is a relaxation of the original bilevel problem. Optimal value reformulation Denoting by

$\varphi(x) = \min_{z \in Y} \{\, f(x, z) : g_j(x, z) \leq 0,\; j \in \{1, 2, \ldots, J\} \,\}$

the so-called optimal value function, a possible single-level reformulation of the bilevel problem is

$\min_{x \in X,\, y \in Y} \; F(x, y)$

subject to: $G_i(x, y) \leq 0$ for $i \in \{1, 2, \ldots, I\}$, $g_j(x, y) \leq 0$ for $j \in \{1, 2, \ldots, J\}$, and $f(x, y) \leq \varphi(x)$.

This is a nonsmooth optimization problem, since the optimal value function $\varphi$ is in general not differentiable, even if all the constraint functions and the objective function in the lower-level problem are smooth. Heuristic methods For complex bilevel problems, classical methods may fail due to difficulties like non-linearity, discreteness, non-differentiability, non-convexity etc. In such situations, heuristic methods may be used. Among them, evolutionary methods, though computationally demanding, often constitute an alternative tool to offset some of these difficulties encountered by exact methods, albeit without offering any optimality guarantee on the solutions they produce. Multi-objective bilevel optimization A bilevel optimization problem can be generalized to a multi-objective bilevel optimization problem with multiple objectives at one or both levels. A general multi-objective bilevel optimization problem can be formulated as follows:

$\min_{x \in X,\, y \in Y} \; F(x, y) = (F_1(x, y), \ldots, F_p(x, y))$ (in Stackelberg terms, the leader's problem)

subject to: $G_i(x, y) \leq 0$, for $i \in \{1, 2, \ldots, I\}$;

$y \in \operatorname{arg\,min}_{z \in Y} \{\, f(x, z) = (f_1(x, z), \ldots, f_q(x, z)) : g_j(x, z) \leq 0,\; j \in \{1, 2, \ldots, J\} \,\}$ (in Stackelberg terms, the follower's problem)

where $F : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^p$ represents the upper-level objective vector with $p$ objectives and $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^q$ represents the lower-level objective vector with $q$ objectives. Similarly, $x$ represents the upper-level decision vector and $y$ represents the lower-level decision vector. $G_i$ and $g_j$ represent the inequality constraint functions at the upper and lower levels respectively. Equality constraints may also be present in a bilevel program, but they have been omitted for brevity. References External links Mathematical Programming Glossary Mathematical optimization
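As a small worked illustration of the KKT reformulation described above (the toy problem below is an assumption chosen for this example, not taken from the sources cited here):

```latex
% Toy bilevel problem with a convex, unconstrained lower level:
\min_{x,\,y}\; (x-1)^2 + y^2
\quad \text{s.t.} \quad
y \in \operatorname*{arg\,min}_{z} \, (z - x)^2 .
% With no lower-level constraints, the KKT conditions reduce to
% stationarity: 2(y - x) = 0, i.e. y = x. Replacing the lower-level
% problem by this condition gives the single-level program
\min_{x}\; (x-1)^2 + x^2 ,
% whose unique solution is x^* = y^* = 1/2.
```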
Bilevel optimization
Mathematics
1,793
39,484,172
https://en.wikipedia.org/wiki/Ray%20P.%20Dinsmore
Ray Putnam Dinsmore (24 April 1893 – 26 October 1979) was a rubber scientist, known for pioneering the use of rayon as a reinforcing material in auto tires. In 1928, Dinsmore patented the first water-emulsion synthetic rubber in the United States. The material later became a staple of the rubber industry during the World War II shortage of natural rubber. Dinsmore worked for the Goodyear Tire and Rubber Company and developed Chemigum, an early synthetic rubber. Dinsmore hired the noted rubber physicist Samuel D. Gehman, and he collaborated with Lorin Sebrell. He authored a popular review on the topic of rubber chemistry for the American Chemical Society's 75th anniversary. Dinsmore served as chairman of the Rubber Division of the American Chemical Society in 1927. He received the 1947 Colwyn Medal and was named the 1955 Charles Goodyear Medalist. Dinsmore was educated at the Massachusetts Institute of Technology, completing a degree in chemical engineering at the age of 21. He entered the rubber industry in 1914. He was Vice President of Research and Development (1943-1961) and a Member of the Board of Directors (1960-1964) at the Goodyear Tire and Rubber Company. He died on October 26, 1979. References External links Interview with Ray Dinsmore. Polymer scientists and engineers U.S. Synthetic Rubber Program 1979 deaths Massachusetts Institute of Technology alumni 1893 births Presidents of the American Institute of Chemists Tire industry people Goodyear Tire and Rubber Company people 20th-century American chemists
Ray P. Dinsmore
Chemistry,Materials_science
309
37,377,809
https://en.wikipedia.org/wiki/Arthur%20Dendy
Arthur Dendy (20 January 1865, in Manchester – 24 March 1925, in London) was an English zoologist known for his work on marine sponges and the terrestrial invertebrates of Victoria, Australia, notably including the "living fossil" Peripatus. He was in turn professor of zoology in New Zealand, in South Africa and finally at King's College London. He was a Fellow of the Royal Society. Family life Dendy's parents were John Dendy, a silk fabric maker of Manchester, and Sarah Beard, daughter of John Relly Beard. His sisters included Mary Dendy and Helen Bosanquet. He married Ada Margaret Courtauld on 5 December 1888. They had four children: three daughters, including the artist Vera Ellen Poole (1890–1965), and one son. Career He was educated in zoology at Owens College, Manchester, gaining his M.Sc. in 1887 and his D.Sc. in 1891. He worked on part of the report of the Challenger expedition (1872–1876), describing monaxonid sponges. In 1888 he moved to the University of Melbourne as demonstrator and assistant lecturer. There he identified and described almost 2000 specimens of sponges from the sea near Port Phillip Heads. This work led to ten scientific papers on Australian sponges; he described 87 new species of sponge. Eventually Dendy became a leading authority on the sponge phylum (Porifera), which he extensively restructured. Dendy was the first zoologist to study the terrestrial invertebrates of Victoria, Australia. This work led to 16 scientific papers and 79 new species. These included terrestrial flatworms (planarians) and nemerteans, but the most famous of his animals was the so-called "living fossil" Peripatus. In 1893, Dendy became professor of biology at Canterbury College, Christchurch, New Zealand. While in New Zealand, Dendy coined the term "cryptozoic fauna" to refer to animals which live in environments like leaf litter, under rocks, and so on. In 1903, he became professor of biology at the University of Cape Town, South Africa. In 1905, he became professor of zoology at King's College, London. Dendy was an "extreme" Lamarckian, contributing to the eclipse of Darwinism in the late 19th century. Honours and distinctions Dendy contributed articles including "Sponges" to the 1911 Encyclopædia Britannica under the initials "A. DE." He served as president of the Quekett Microscopical Club from 1912 to 1916. His name is honoured in the genus name Arthurdendyus Jones, 1999; Arthurdendyus triangulatus is the New Zealand flatworm, an invasive species in the United Kingdom. His name is also honoured in the genus name Dendya Bidder, 1898, a genus of Calcarea (Porifera). Still within this group, the genus name Arturia Azevedo, Padua, Moraes, Rossi, Muricy & Klautau, 2017 was likewise named in his honour. Works Dendy, Arthur, and Ridley, Stuart O. (1886) On Proteleia sollasi, a new genus and species of monaxonid sponges allied to Polymastia. Annals and Magazine of Natural History (5) 18: 152–159. References Bibliography Australian Dictionary of Biography: Dendy, Arthur (1865–1925) Cyclopedia of New Zealand: Professor Arthur Dendy External links Photo of Dendy in 1895, with signature National Portrait Gallery: 1917 photograph of Dendy by Walter Stoneman 1865 births 1925 deaths English zoologists Lamarckism Scientists from Manchester Fellows of the Royal Society Spongiologists Academics of King's College London
Arthur Dendy
Biology
769
6,077,753
https://en.wikipedia.org/wiki/Clean%20Air%20Society%20of%20Australia%20and%20New%20Zealand
The Clean Air Society of Australia and New Zealand (CASANZ) is a non-governmental, non-profit organization formed in the 1960s to bring together people with an interest in clean air and the study of air pollution. Its focus has since grown to include broader environmental management affairs, but with special emphasis on air quality and related issues. As of October 2005, the society had 836 members (683 in Australia, 140 in New Zealand, and 13 in other countries). In September 2007, CASANZ hosted the 14th IUAPPA World Congress in Brisbane, Australia. Activities promoting environmental protection CASANZ promotes the protection of the environment by a variety of activities, including: Advancement of knowledge and practical experience of environmental and air quality management. Providing an organization which gathers and distributes the experience and knowledge of its members. Providing lectures, exhibitions, public meetings and conferences that expand knowledge of environmental matters and, in particular, air quality, including causes, effects, measurement, legislative aspects and control of air pollution. Liaison with organizations having similar interests in other countries. Providing scholarships, monetary grants, awards and prizes to encourage the study of relevant subjects. Operation and governing CASANZ operates through autonomous branches which determine their own programs of activities, including technical meetings, seminars, workshops, conferences, training courses, etc. Details of these activities are circulated to branch members and posted on the society's website. CASANZ is governed by an Executive Committee consisting of: An elected Executive Director Other officers and representatives nominated by the branches. The Executive Committee manages the day-to-day activities of the society and directs the work of the Executive Director. Special Interest Groups As of September 2013, CASANZ has eight Special Interest Groups (commonly referred to as SIGs). Modelling Special Interest Group The objective of the Modelling Special Interest Group is to bring together CASANZ members who have an interest in the development and/or application of atmospheric dispersion modelling in order to exchange ideas, identify common problems, inform members of new developments, and establish, as appropriate, a 'ModSIG view' on issues of relevance. Dispersion modelling is becoming an increasingly routine part of licensing emissions to the atmosphere throughout Australia, and there is growing awareness of the role that air quality modelling can play in areas such as air resource management, risk assessment and land-use planning. Odour Special Interest Group The Odour Special Interest Group is a forum for the exchange of information and for encouraging improved practices in odour measurement, modelling, assessment, control, management and monitoring. Indoor Air Special Interest Group The Indoor Air Special Interest Group promotes discussion and debate on the state of knowledge and the quality of indoor air, and on related environmental health concerns. The group encourages research, education, community awareness and management of issues related to this topic. Greenhouse Special Interest Group The purpose of the Greenhouse Special Interest Group is threefold: To identify the research and development work being done at national and international levels on the science of greenhouse and global climate modelling as well as the developments being undertaken to assess and quantify the impact of climate change. 
To inform CASANZ members and the broader community on greenhouse gas emissions, emission scenarios, regional contributions and economic and technological developments. To provide responses to government policies and initiatives in the area of energy reform and greenhouse abatement so as to stimulate debate amongst members and the broader community. Measurement Special Interest Group The primary objective of the Measurement Special Interest Group is to ensure that atmospheric pollutants in the ambient air and industrial source emissions are measured utilising methods that are fit for that purpose. This objective is to be achieved by conducting workshops, developing decision trees to assist in selecting appropriate air pollution measurement test methods, and developing source emission and ambient air quality test methods in cooperation with Standards Australia and Standards New Zealand. Transport Special Interest Group The Transport Emissions and Fuel Consumption Modeling Special Interest Group (or simply the Transport Special Interest Group) focuses on the quantification and modelling of air pollutant and greenhouse gas impacts from all forms of transport and their support equipment. The Transport Special Interest Group is intended to be a platform for information sharing, discussion of emerging issues and coordination. Risk Assessment Interest Group The Risk Assessment Interest Group exists to stay informed about and supportive of governmental policy developments involving risk assessment, such as: The National Air Quality Standards (Air NEPM), which were developed with the help of risk assessment. Variations to the State of Victoria's Environment Protection Policy for Air Quality Management to incorporate formalised risk assessments. The National Environmental Health Strategy, which includes risk assessment as a basis for decision making. The Australian Standard for Risk Management (AS4360-1999). The proposal to develop a coordinated national strategy for air toxics. Air Policy Special Interest Group The purpose of the Air Policy Special Interest Group is to bring together CASANZ members interested in advancing air policy development and contributing to responsible air quality related risk communication by: Exchanging ideas, and informing members of new air policy related developments and air quality issues of significant public concern. Coordinating and communicating science-based inputs for air policy, legislative and regulatory development processes. Collating and communicating science-based information to communities and decision makers on pressing air quality related issues, including official inquiries. See also Clean Ocean Foundation of Australia Australian Conservation Foundation References External links Official website Report on the CASANZ ModSIG Workshop, May 2005 21st International Clean Air and Environment Conference, September, 2013, Sydney. IUAPPA website page devoted to CASANZ as a member of IUAPPA Atmospheric dispersion modeling Environmental organisations based in Australia Environmental organisations based in New Zealand Air pollution in New Zealand
Clean Air Society of Australia and New Zealand
Chemistry,Engineering,Environmental_science
1,130
24,732,344
https://en.wikipedia.org/wiki/Motorola%20Droid
The Motorola Droid (GSM/UMTS version: Motorola Milestone) is an Internet and multimedia-enabled smartphone designed by Motorola, which runs Google's Android operating system. The Droid had been publicized under the codenames Sholes and Tao and the model number A855. In Latin America and Europe, the model number is A853 (Milestone), and in Mexico, the model number is A854 (Motoroi). Due to the ambiguity with newer phones with similar names, it is also commonly known as the DROID 1. The brand name Droid is a trademark of Lucasfilm licensed to Verizon Wireless. Features of the phone include Wi-Fi networking, a 5-megapixel low-light-capable digital camera, a standard 3.5 mm headphone jack, an interchangeable battery, and a 3.7-inch 854×480 touchscreen display. It also includes microSDHC support with a bundled 16 GB card, free turn-by-turn navigation from Google Maps, a sliding QWERTY keyboard, and a Texas Instruments OMAP 3430 processor. The Motorola Droid runs Android version 2.2. The phone does not, however, run the re-branded Motoblur interface for Android, instead providing the Google Experience skin and application stack. With a major marketing push by Motorola and Verizon during and after its November 2009 release, the Droid became popular and had strong sales in the United States. It is credited with having popularized Android in the mass market. The Droid has a hearing aid compatibility (HAC) rating of M3/T3. The phone was the first to ship with free Google Maps Navigation (beta) installed. Launch United States Verizon explicitly promoted the Droid as an Apple iPhone alternative. Launched on October 17, 2009, the TV spots and an associated website made "entertainingly combative" claims listing features then lacking on the iPhone, e.g. "iDon't multitask" and "iDon't have a real keyboard", only mentioning the name of the Droid in the final frame, reading "Droid Does". At the official launch event on October 28, 2009, Verizon's Chief Marketing Officer John Stratton described the campaign as a spoof of Apple's iPhone ads, intended to "wake up the market." The marketing for the American launch included "Droid Does Times Square." This was a program (billed as an "interactive experience") in which Verizon connected the Nasdaq and Reuters electronic billboards in Times Square to its systems such that people were able to control the electronic displays by using voice commands (illustrating the voice search function that is a primary Android feature). Control of the billboards was available in Times Square or via the "Droid Does" website. The November 6, 2009, release date of the Droid came just under a month after Verizon and Google announced that they had entered into an agreement to jointly develop wireless devices based on the Android mobile platform. Verizon said at the time that it planned to have two Android-based handsets on the market by year-end, with more to come in 2010. The other handset is the HTC Droid Eris, a modification to the HTC Hero, seen in shots of Google CEO Eric Schmidt holding one at a Verizon/Google press conference. American exclusive software for the Droid includes Google Maps Navigation, an Amazon MP3 Store applet, and Verizon Wireless Visual Voice Mail management. Analytics firm Flurry estimated that 250,000 Motorola Droid phones were sold in the United States during the phone's first week in stores. Flurry also estimated that 1.05 million Motorola Droids were sold in the first 74 days after launch. 
This number is greater than that of the original iPhone, which sold one million units through day 74. Software This section applies to the Verizon-branded Motorola device in the US. Droid versions in foreign markets (Milestone) may be crippled or have some features disabled due to restrictions enforced by retailer agreements, carrier agreements, manufacturer agreements, or local laws, and should be addressed in the appropriate section above. The Linux kernel used in the 2.0.1 OTA release is 2.6.29, Android build. Android 2.1 update On February 8, 2010, Motorola announced via Facebook that the Droid update would begin rolling out later that same week. Details were later released via Motorola's official forums. However, the details about the update were later retracted, and unclear information from Motorola and Verizon Wireless caused confusion among many, and technology news websites speculated the update would happen even later in 2010. As an attempt to clear confusion, Motorola released a chart of dates stating when each of their Android devices would be updated to Android 2.1, but this only added to the issues because they said the update would happen "soon", rather than offering a concrete time frame. However, on March 17, 2010, Verizon Wireless announced that they would issue the Android 2.1 "ESE53" update over-the-air, beginning March 18 with a small test group, with more users seeing the update later, into the following week. But screenshots from Verizon Wireless documents showed that a last-minute software bug was found, and the update was delayed once again, with no new roll-out date determined. The new announcement from Verizon Wireless provided official information about the update and what it entailed, including "Pinch-to-zoom...available when using the browser, Gallery, and Google Maps", a weather and news app/widget including "information you want from the Web...weekly and hourly weather forecasts based on your location, and news headlines", voice-to-text wherever there are text boxes, a "New Gallery application with 3D layout", and Live Wallpapers "offer[ing] richer animated, interactive backgrounds on the home screen", as well as other, more minor upgrades. However, the update did not include the addition of two more home screens then available on the Nexus One, another Android 2.1 device. The 2.1 update rollout began on March 30, 2010. Android 2.2 update After some apparent discussion by Motorola over whether they would provide an Android 2.2 Froyo upgrade for the Droid and Milestone, it was confirmed that the Droid would get the upgrade, and a staggered rollout began on August 3, 2010, updating the phone to Android 2.2 build number FRG01B. Another update for the Droid began on August 24, 2010, and it included some minor bug fixes. This update's build number is FRG22D. A third update was released on December 6, 2010, with a version number of 2.2.1 and a build of FRG83D. A fourth update was released on March 9, 2011, with a version number of 2.2.2 and a build of FRG83G. The Motorola support page reported the Milestone version would get an update to Android 2.2 in the first quarter of 2011, and on March 15 an update was made available. Root access and unsupported Android releases The Motorola Droid was successfully "rooted" (manipulated to provide superuser access) on December 8, 2009. 
This allowed users to remove sponsored or pay-to-use applets (Amazon, Verizon Visual Voice Mail, etc.), to install and launch custom software, and to gain root access on the phone using a terminal emulator. On June 5, 2010, a leaked Android 2.2 ROM was given to the public for those with superuser access. It includes a new home launcher, Flash 10.1 support, new homescreen widgets, an updated Market, a JIT compiler for a faster system, and new aesthetic changes. As of July 13, 2011, the Droid is able to be updated to the Android 2.3.4 ROM, but with many small updates and edits to the base code in order to run properly. On July 26, 2011, an Android 2.3.5 ROM was made available for the DROID that, like the Android 2.3.4 ROM, has been modified to improve performance on the smartphone. Work is currently underway at xda-developers to bring Android 4.0, Ice Cream Sandwich, to the Motorola Droid/Milestone, although CyanogenMod has stated it will drop support for the Droid/Milestone in ICS-based CyanogenMod 9. As of July 17, 2012, the Droid is able to be updated to the Android 4.0.4 ROM. Similar to previous Gingerbread ROMs, this ROM has been tweaked to run properly. As of August 27, 2012, the Droid is able to be updated to the Android 4.1.1 ROM. This ROM is currently being worked on and tweaked to run properly. Motorola Milestone The quad-band GSM/UMTS version of the Droid is the Motorola A853 (Milestone). While the phone's internal hardware (besides cellular) is the same, differences include out-of-the-box multi-touch support enabled, a trial version of Motorola's MOTONAV service (instead of the Droid's US-only Google Maps Navigation) and an 8 GB microSDHC card, instead of the Droid's 16 GB microSDHC card. Geographic launch information The launch countries for the Motorola Milestone included Germany, Italy and Argentina on November 9, 2009. Other European countries soon followed. Joining this trend, a new, North American, version of the Motorola Milestone was released in Canada in February 2010. This new version supports 3G bands II and V instead of bands I and VIII, so it is compatible with Rogers, Bell, and Telus in Canada, and with AT&T and T-Mobile (2G only) in the US. At this time, the Motorola Milestone is only available from Telus. The Milestone is also available in the U.S. from several regional GSM carriers, like Cincinnati Bell. United Kingdom The phone was launched in the United Kingdom on Thursday December 10, 2009, as the Motorola Milestone. The exclusive sales outlet for the phone, eXpansys, reported that all stocks of the phone had completely sold out within 3 hours of its debut. The sell-out was considered a major victory for Motorola, which had had little success with its prior Windows Mobile-based phones, and had been counting on Android-based phones for future growth. Middle East The phone was launched in the Middle East in 2010; however, it included a serious limitation, as Google prohibited access to its Android Market application in most Middle Eastern countries. No reasonable alternative was provided by Google. Asia, India The Milestone was launched in Hong Kong on December 21, 2009, as the sixth Android device in the region for HKD 4,680 with an 8 GB microSDHC card included. This version is in English but supports Chinese handwriting input, which originated from the Motorola MING, and the pinyin input method. While MOTONAV is included, it does not work and is not officially supported. 
The phone has been able to be updated to Android 2.1 since Q1 2010; the update features a Chinese user interface and more Chinese input methods such as Changjie. The Milestone launched in India on 30 March 2010, priced at 32,000 INR, approximately US$680. Locked bootloader Unlike the DROID, the Milestone has a bootloader that only allows signed firmware to load. This creates difficulties for users who wish to boot the custom ROM images, not signed by Motorola, that have become popular in the Droid modding community. This has caused discontent in the Droid community in markets outside the US, with numerous petitions being created. Whether the disabling of custom boot loaders is due to retailer, carrier, manufacturer, or legal restrictions is unknown. Early in 2011 the community around Milestone ROM development and the Polish National BOINC Team initiated AndrOINC to decipher the RSA 1024-bit signature and make customizing possible. The project has been down since April 2011 because the German police confiscated the server on suspicion it was involved with an illegal P2P service. In June 2011, Motorola committed to delivering an unlockable and relockable bootloader solution for phones receiving updates later that year. This will likely not include the Milestone, as Motorola has not indicated it will update the Milestone past Android 2.2/Froyo. Root access Despite the locked bootloader, the Motorola Milestone has been successfully rooted. Several step-by-step guides are available on the web. Software releases 2.0.1 update Two months after the Motorola Droid received the update, the 2.0.1 update was released to fix a bug with the camera autofocus. 2.1 update The Motorola Milestone update rolled out at the beginning of May 2010, five months after the introduction of Android 2.1. After the update several problems were discovered, but since then firmware 2.1 update 1 has been released, which fixes some of the issues. Although Motorola claimed that they had fixed the random MP3 player start problem, it still occurs. These problems affect only some users. 2.2 update The official Android 2.2 Froyo update was released on March 16, 2011, for UK users, with a slow rollout to the rest of Europe. As of March 20, the update was still being rolled out and was not available in all countries. Community-created ROMs of Android 2.2 (Froyo), Android 2.3 (Gingerbread) and Android 4 (Ice Cream Sandwich) have existed for some time, typically based on kernels from leaked updates. The continued postponing of Milestone updates became known under the term "Motofail" in Central and South America. See Android operating system for later software releases. Motorola Motoroi In July 2010, Motorola launched the phone for Mexico as the Motorola A854 (MOTOROI), which became exclusive to CDMA network carrier Iusacell. Its history is similar to that of the Motorola Droid and Motorola Milestone. See also Verizon Droid family References External links Droid from Verizon Wireless Mobile phones with an integrated hardware keyboard Android (operating system) devices Droid Verizon Wireless Mobile phones introduced in 2009 Discontinued flagship smartphones Slider phones Mobile phones with user-replaceable battery
Motorola Droid
Technology
2,981
12,747,184
https://en.wikipedia.org/wiki/Out%28Fn%29
In mathematics, Out(Fn) is the outer automorphism group of a free group on n generators. These groups play a universal role in geometric group theory, as they act on the set of presentations with $n$ generators of any finitely generated group. Despite geometric analogies with general linear groups and mapping class groups, their complexity is generally regarded as more challenging, which has fueled the development of new techniques in the field. Definition Let $F_n$ be the free nonabelian group of rank $n$. The set of inner automorphisms of $F_n$, i.e. automorphisms obtained as conjugations by an element of $F_n$, is a normal subgroup $\mathrm{Inn}(F_n) \trianglelefteq \mathrm{Aut}(F_n)$. The outer automorphism group of $F_n$ is the quotient $\mathrm{Out}(F_n) = \mathrm{Aut}(F_n) / \mathrm{Inn}(F_n)$. An element of $\mathrm{Out}(F_n)$ is called an outer class. Relations to other groups Linear groups The abelianization map $F_n \to \mathbb{Z}^n$ induces a homomorphism from $\mathrm{Out}(F_n)$ to the general linear group $\mathrm{GL}(n, \mathbb{Z})$, the latter being the automorphism group of $\mathbb{Z}^n$. This map is onto, making $\mathrm{Out}(F_n)$ a group extension, $1 \to \mathrm{T}_n \to \mathrm{Out}(F_n) \to \mathrm{GL}(n, \mathbb{Z}) \to 1$. The kernel $\mathrm{T}_n$ is the Torelli group of $F_n$. The map $\mathrm{Out}(F_2) \to \mathrm{GL}(2, \mathbb{Z})$ is an isomorphism. This no longer holds for higher ranks: for $n \geq 3$, the Torelli group of $F_n$ contains the automorphism fixing two basis elements and multiplying the remaining one by the commutator of the two others. Aut(Fn) By definition, $\mathrm{Aut}(F_n)$ is an extension of the inner automorphism group $\mathrm{Inn}(F_n)$ by $\mathrm{Out}(F_n)$. The inner automorphism group itself is the image of the action of $F_n$ by conjugation, which has kernel the center $Z(F_n)$. Since $Z(F_n)$ is trivial for $n \geq 2$, this gives a short exact sequence $1 \to F_n \to \mathrm{Aut}(F_n) \to \mathrm{Out}(F_n) \to 1$. For all $n \geq 2$, there are embeddings $\mathrm{Aut}(F_n) \hookrightarrow \mathrm{Out}(F_{n+1})$ obtained by taking the outer class of the extension of an automorphism of $F_n$ fixing the additional generator. Therefore, when studying properties that are inherited by subgroups and quotients, the theories of $\mathrm{Aut}(F_n)$ and $\mathrm{Out}(F_n)$ are essentially the same. Mapping class groups of surfaces Because $F_n$ is the fundamental group of a bouquet of n circles, $\mathrm{Out}(F_n)$ can be described topologically as the mapping class group of a bouquet of n circles (in the homotopy category), in analogy to the mapping class group of a closed surface, which is isomorphic to the outer automorphism group of the fundamental group of that surface. Given any finite graph with fundamental group $F_n$, the graph can be "thickened" to a surface with one boundary component that retracts onto the graph. The Birman exact sequence yields a map from the mapping class group of this surface to $\mathrm{Out}(F_n)$. The elements of $\mathrm{Out}(F_n)$ that are in the image of such a map are called geometric. Such outer classes must leave invariant the cyclic word corresponding to the boundary, hence there are many non-geometric outer classes. A converse is true under some irreducibility assumptions, providing geometric realization for outer classes fixing a conjugacy class. Known results For $n \geq 4$, $\mathrm{Out}(F_n)$ is not linear, i.e. it has no faithful representation by matrices over a field (Formanek, Procesi, 1992); For $n \geq 3$, the isoperimetric function of $\mathrm{Out}(F_n)$ is exponential (Hatcher, Vogtmann, 1996); The Tits Alternative holds in $\mathrm{Out}(F_n)$: each subgroup is either virtually solvable or else it contains a free group of rank 2 (Bestvina, Feighn, Handel, 2000); For $n \geq 3$, every automorphism of $\mathrm{Aut}(F_n)$ is inner, i.e. $\mathrm{Out}(\mathrm{Aut}(F_n)) = 1$ (Bridson and Vogtmann, 2000); Every solvable subgroup of $\mathrm{Out}(F_n)$ has a finitely generated free abelian subgroup of finite index (Bestvina, Feighn, Handel, 2004); For each $i$, all but finitely many of the $i$th-degree homology morphisms induced by the sequence $\mathrm{Aut}(F_1) \to \mathrm{Aut}(F_2) \to \cdots$ are isomorphisms (Hatcher and Vogtmann, 2004); For $n \geq 3$, the reduced $C^*$-algebra of $\mathrm{Out}(F_n)$ (i.e. the closure of its image under the regular representation) is simple; For $n \geq 4$, if $\Gamma$ is a finite index subgroup of $\mathrm{Out}(F_n)$, then any subgroup of $\mathrm{Out}(F_n)$ isomorphic to $\Gamma$ is a conjugate of $\Gamma$ (Farb and Handel, 2007); For $n \geq 5$, $\mathrm{Out}(F_n)$ has Kazhdan's property (T) (Kaluba, Nowak, Ozawa, 2019 for $n = 5$; Kaluba, Kielak, Nowak, 2021 for $n \geq 6$); Actions on hyperbolic complexes satisfying acylindricity conditions were constructed, in analogy with complexes like the complex of curves for mapping class groups; For $n \geq 3$, $\mathrm{Out}(F_n)$ is rigid with respect to measure equivalence (Guirardel and Horbez, 2021 preprint). Outer space Out(Fn) acts geometrically on a cell complex known as Culler–Vogtmann Outer space, which can be thought of as the Fricke–Teichmüller space for a bouquet of circles. Definition A point of the outer space is essentially an $\mathbb{R}$-graph X homotopy equivalent to a bouquet of n circles together with a certain choice of a free homotopy class of a homotopy equivalence from X to the bouquet of n circles. An $\mathbb{R}$-graph is just a weighted graph with weights in $\mathbb{R}$. The sum of all weights should be 1 and all weights should be positive. To avoid ambiguity (and to get a finite dimensional space) it is furthermore required that the valency of each vertex should be at least 3. A more descriptive view avoiding the homotopy equivalence f is the following. We may fix an identification of the fundamental group of the bouquet of n circles with the free group $F_n$ in n variables. Furthermore, we may choose a maximal tree in X and choose for each remaining edge a direction. We will now assign to each remaining edge e a word in $F_n$ in the following way. Consider the closed path starting with e and then going back to the origin of e in the maximal tree. Composing this path with f we get a closed path in a bouquet of n circles and hence an element in its fundamental group $F_n$. This element is not well defined; if we change f by a free homotopy we obtain another element. It turns out that those two elements are conjugate to each other, and hence we can choose the unique cyclically reduced element in this conjugacy class. It is possible to reconstruct the free homotopy type of f from these data. This view has the advantage that it avoids the extra choice of f, and the disadvantage that additional ambiguity arises, because one has to choose a maximal tree and an orientation of the remaining edges. The operation of Out(Fn) on the outer space is defined as follows. Every automorphism g of $F_n$ induces a self homotopy equivalence g′ of the bouquet of n circles. Composing f with g′ gives the desired action. And in the other model it is just application of g and making the resulting word cyclically reduced. Connection to length functions Every point in the outer space determines a unique length function $\ell_X \colon F_n \to \mathbb{R}$. A word in $F_n$ determines via the chosen homotopy equivalence a closed path in X. The length of the word is then the minimal length of a path in the free homotopy class of that closed path. Such a length function is constant on each conjugacy class. The assignment $X \mapsto \ell_X$ defines an embedding of the outer space into some infinite dimensional projective space. Simplicial structure on the outer space In the second model an open simplex is given by all those $\mathbb{R}$-graphs which have combinatorially the same underlying graph and whose edges are labeled with the same words (only the lengths of the edges may differ). The boundary simplices of such a simplex consist of all graphs that arise from this graph by collapsing an edge. 
Simplicial structure on the outer space
In the second model an open simplex is given by all those ℝ-graphs which have combinatorially the same underlying graph and whose edges are labeled with the same words (only the lengths of the edges may differ). The boundary simplices of such a simplex consist of all graphs that arise from this graph by collapsing an edge. If that edge is a loop it cannot be collapsed without changing the homotopy type of the graph, so there is no corresponding boundary simplex. Thus one can think of the outer space as a simplicial complex with some simplices removed. It is easy to verify that the action of Out(Fn) is simplicial and has finite isotropy groups.

See also
Train track map
Automorphism group of a free group
Outer space

References

Geometric group theory
Out(Fn)
Physics
1,577
17,367,997
https://en.wikipedia.org/wiki/Coolfluid
COOLFluiD is a component-based scientific computing environment that handles high-performance computing problems with a focus on complex computational fluid dynamics (CFD) involving multiphysics phenomena. It features a Collaborative Simulation Environment in which multiple physical models and multiple discretization methods are implemented as components. These components form a component-based architecture in which they serve as building blocks of customized applications.

Capabilities
Kernel
Component-based architecture
Dynamic loading of external plugins
Interpolation and integration on arbitrary elements
Transparent MPI parallelization
Parallel writing and reading from solution files
Support for XML case files
Unstructured 2D/3D hybrid meshes in many formats

Numerical Methods
Cell-centered finite volume solver
Residual distribution solver
High order finite element solver
Spectral Finite Volume solver
Spectral Finite Difference solver
Discontinuous Galerkin method solver
Residual Distribution solver (dedicated to incompressible flow)

Physical Models
Compressible Euler and Navier-Stokes equations
Perfect and real gas (from low Mach to hypersonic)
Chemically reacting mixtures
Thermal and chemical non-equilibrium flows
Incompressible Navier-Stokes
Linearized Euler (for aeroacoustics)
Ideal magnetohydrodynamics
Structural elasticity
Multi-ion electrochemistry
Heat transfer
Multiple scalar advection models

External links
New COOLFluiD website on GitHub
VKI is the research institute responsible for the majority of the developments.

Computational fluid dynamics Fluid dynamics
Coolfluid
Physics,Chemistry,Engineering
285
41,714,108
https://en.wikipedia.org/wiki/CDP-choline%20pathway
The CDP-choline pathway, first identified by Eugene P. Kennedy in 1956, is the predominant mechanism by which mammalian cells synthesize phosphatidylcholine (PC) for incorporation into membranes or lipid-derived signalling molecules. The CDP-choline pathway represents one half of what is known as the Kennedy pathway. The other half is the CDP-ethanolamine pathway, which is responsible for the biosynthesis of the phospholipid phosphatidylethanolamine (PE).

The CDP-choline pathway begins with the uptake of exogenous choline into the cell. The first enzymatic reaction is catalyzed by choline kinase (CK) and involves the phosphorylation of choline to form phosphocholine. Phosphocholine is then activated by the addition of CTP, catalyzed by the rate-limiting enzyme CTP:phosphocholine cytidylyltransferase, to form CDP-choline. The final step of the pathway involves the addition of the choline headgroup onto a diacylglycerol (DAG) backbone to form PC, catalyzed by choline/ethanolamine phosphotransferase (CEPT). Phosphatidylcholine can be acted upon by phospholipases to form different metabolites.

Choline transport
Mammalian cells are unable to synthesize sufficient quantities of choline de novo to meet physiologic requirements, and therefore must rely on exogenous sources from the diet. The uptake of choline is accomplished predominantly by the high-affinity, sodium-dependent choline transporter (CHT) and requires ATP as an energy source. Alternatively, choline may enter the cell through low-affinity, sodium-independent organic cation transport proteins (OCTs) and/or carnitine/organic cation transporters (OCTNs), which do not require ATP. Lastly, choline may enter the cell through intermediate-affinity transporters, which include the choline transporter-like protein 1 (CTL1). The fate of internalized choline depends on the cell type. In pre-synaptic neurons the majority of choline will be acetylated by the enzyme choline acetyltransferase to form the neurotransmitter acetylcholine. Most other cells will phosphorylate choline by the enzyme choline kinase, the first committed step of the CDP-choline pathway.

Choline kinase (CK)
Choline kinase (CK) is a cytosolic protein that catalyzes the following reaction:
choline + ATP ⇌ phosphocholine + ADP
In addition to the phosphorylation of choline, CK has also been shown to phosphorylate ethanolamine, a precursor to another important glycerophospholipid, phosphatidylethanolamine. CK functions as a dimer consisting of either α1, α2 or β subunits. Each CK isoform is ubiquitously expressed throughout tissues; however, CKα is enriched in the testis and liver, whereas CKβ is enriched in the liver and the heart. Homozygous deletion of CKα is embryonic lethal after about 5 days, whereas deletion of CKβ is not. Under normal circumstances, choline kinase does not catalyze the rate-limiting step of the CDP-choline pathway. However, in rapidly dividing cells there is increased CK expression and activity as a result of increased demand for PC synthesis.

CTP:phosphocholine cytidylyltransferase (CCT)
CTP:phosphocholine cytidylyltransferase (CCT), the rate-limiting enzyme of the pathway, is a nuclear/cytosolic enzyme and catalyzes the following reaction:
phosphocholine + CTP ⇌ CDP-choline + PPi
CCT functions as a dimer of either α or β subunits, encoded by Pcyt1a and Pcyt1b, respectively. CCTα has four domains: a nuclear localization signal (NLS), an α-helical membrane binding domain, a catalytic domain, and a phosphorylation domain.
The major difference between the α and β isoforms is that CCTβ lacks the NLS, resulting in a predominantly cytosolic pool of CCTβ. On the other hand, the presence of an NLS results in a predominantly nuclear pool of CCTα. CCTα shuttles between the nucleus (where it is considered inactive) and the cytoplasm, where it associates with membranes and is activated in response to lipid activators or during progression through the cell cycle when PC demand is high. CCTα is an amphitropic enzyme, meaning that it exists as either an inactive soluble form or an active, membrane-bound form. Whether or not CCTα is membrane bound is largely dictated by the relative composition of membranes. If membranes are low in PC and relatively enriched in anionic lipids, diacylglycerol, or phosphatidylethanolamine, CCT inserts into the membrane bilayer via its membrane binding domain. This binding event relieves an autoinhibitory constraint on the catalytic domain, resulting in a decrease in the Km for phosphocholine.

Choline/ethanolamine phosphotransferase (CEPT)
Choline/ethanolamine phosphotransferase (CEPT), or choline phosphotransferase (CPT), catalyzes the last enzymatic reaction in the CDP-choline pathway:
CDP-choline + 1,2-diacylglycerol ⇌ phosphatidylcholine + CMP
The last step in the CDP-choline pathway is catalyzed by either CPT or CEPT, which are localized to the Golgi and endoplasmic reticulum, respectively. CPT and CEPT are encoded by separate genes that share 60% sequence similarity. Both isoforms contain 7 transmembrane segments, and an α-helix near the catalytic domain that is required for CDP-alcohol binding. CPT recognizes only CDP-choline, whereas CEPT recognizes both CDP-choline and CDP-ethanolamine. The reason for this dual specificity is not fully understood. CEPT is largely considered to be the enzyme responsible for the bulk of PC synthesis, with CPT having an exclusive role in the Golgi, where it may control the levels of the precursor DAG, an important second messenger. Neither CPT nor CEPT is considered to be rate-limiting, but they can be if DAG is restricted.

References
Biosynthesis
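The sense in which CCT is "rate-limiting" can be made concrete with a toy kinetic model. The sketch below (all rate constants and concentrations are hypothetical, chosen only for illustration) treats CK, CCT and CEPT as Michaelis–Menten steps in series: raising the CCT Vmax raises steady-state PC output almost proportionally, while raising the CK Vmax barely changes it.

```python
def pc_flux(vmax_ck, vmax_cct, vmax_cept, t_end=50.0, dt=0.001):
    """Steady-state PC synthesis flux for a three-step chain with
    Michaelis-Menten kinetics (all constants are hypothetical)."""
    km = 1.0                       # shared Km, arbitrary units
    choline = 10.0                 # clamped choline supply
    p_cho, cdp_cho = 0.0, 0.0      # intermediate pools
    flux = 0.0
    for _ in range(int(t_end / dt)):
        v1 = vmax_ck * choline / (km + choline)     # CK step
        v2 = vmax_cct * p_cho / (km + p_cho)        # CCT step
        v3 = vmax_cept * cdp_cho / (km + cdp_cho)   # CEPT step
        p_cho += (v1 - v2) * dt
        cdp_cho += (v2 - v3) * dt
        flux = v3
    return flux

print(round(pc_flux(5.0, 1.0, 5.0), 3))   # CCT slowest: flux pinned near 1
print(round(pc_flux(10.0, 1.0, 5.0), 3))  # doubling CK barely helps
print(round(pc_flux(5.0, 2.0, 5.0), 3))   # doubling CCT nearly doubles flux
```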
CDP-choline pathway
Chemistry
1,447
3,452,403
https://en.wikipedia.org/wiki/Sneden%27s%20Star
BPS CS22892-0052 (Sneden's Star) is an old population II star located in the Milky Way's galactic halo. It belongs to a class of ultra-metal-poor stars (metallicity [Fe/H] = −3.1), specifically the very rare subclass of neutron-capture (r-process) enhanced stars. It was discovered by Tim C. Beers and collaborators with the Curtis Schmidt telescope at the Cerro Tololo Inter-American Observatory in Chile. Extended high-resolution spectroscopic observations since around 1995 (with Chris Sneden from the University of Texas at Austin as the leading observer) allowed observers to determine the abundances of 53 chemical elements in this star, as of December 2005 second in number only to the Sun. From barium (Z=56) on, all elements show the pattern of the r-process contribution to the abundances of the elements in the Solar System. Comparing the observed abundances of a stable element such as europium (Z=63) and the radioactive element thorium (Z=90) to calculated abundances of an r-process in a type II supernova explosion (such as those from the groups of Karl-Ludwig Kratz at Mainz and Friedrich-Karl Thielemann at Basel) has allowed observers to determine the age of this star to be about 13 billion years. Similar ages have been derived for other ultra-metal-poor stars (CS31082-001, BD+17°3248 and HE 1523-0901) from thorium-to-uranium ratios.

References

Sources
Beers T.C., Preston G.W., Shectman S.A., A search for stars of very low metal abundance. I., Astron. J., 90, 2089-2102 (1985)
Beers T.C., Preston G.W., Shectman S.A., A search for stars of very low metal abundance. II., Astron. J., 103, 1987-2034 (1992)
Kratz, Karl-Ludwig; Bitouzet, Jean-Philippe; Thielemann, Friedrich-Karl; Moeller, Peter; Pfeiffer, Bernd, Isotopic r-process abundances and nuclear structure far from stability - Implications for the r-process mechanism, Astrophysical Journal, vol. 403, no. 1, p. 216-238 (1993)
Sneden, Christopher; McWilliam, Andrew; Preston, George W.; Cowan, John J.; Burris, Debra L.; Armosky, Bradley J., The Ultra--Metal-poor, Neutron-Capture--rich Giant Star CS 22892-052, Astrophysical Journal v.467, p. 819 (1996)
Cowan, John J.; Pfeiffer, B.; Kratz, K.-L.; Thielemann, F.-K.; Sneden, Christopher; Burles, Scott; Tytler, David; Beers, Timothy C., R-Process Abundances and Chronometers in Metal-poor Stars, The Astrophysical Journal, Volume 521, Issue 1, pp. 194–205 (1999)

External links
R-Process Cosmo-Chronometers image
Image

Sneden's Star K-type bright giants Asymptotic-giant-branch stars Population II stars Aquarius (constellation)
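The age determination described above is, at its core, radioactive-decay arithmetic: thorium-232 decays with a half-life of about 14.05 billion years, so comparing a predicted initial Th/Eu production ratio with the observed ratio yields an age. The ratios below are purely illustrative stand-ins (not the published abundances), chosen so the arithmetic lands near the quoted ~13 billion years.

```python
import math

T_HALF_TH232 = 14.05e9  # years, half-life of thorium-232

def cosmochronometric_age(ratio_initial, ratio_observed):
    """Age from the decay of Th relative to a stable r-process element
    such as Eu, using N(t) = N0 * 2**(-t / T_half)."""
    decay_const = math.log(2) / T_HALF_TH232
    return math.log(ratio_initial / ratio_observed) / decay_const

# Hypothetical production and observed ratios, for illustration only:
print(f"{cosmochronometric_age(0.48, 0.25):.2e} years")  # ~1.3e+10
```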
Sneden's Star
Astronomy
728
311,855
https://en.wikipedia.org/wiki/Autorotation%20%28fixed-wing%20aircraft%29
For fixed-wing aircraft, autorotation is the tendency of an aircraft in or near a stall to roll spontaneously to the right or left, leading to a spin (a state of continuous autorotation).

Details
When the angle of attack is less than the stalling angle, any increase in angle of attack causes an increase in lift coefficient that causes the wing to rise. As the wing rises the angle of attack and lift coefficient decrease, which tends to restore the wing to its original angle of attack. Conversely, any decrease in angle of attack causes a decrease in lift coefficient which causes the wing to descend. As the wing descends, the angle of attack and lift coefficient increase, which tends to restore the wing to its original angle of attack. For this reason the angle of attack is stable when it is less than the stalling angle. The aircraft displays damping in roll.

When the wing is stalled and the angle of attack is greater than the stalling angle, any increase in angle of attack causes a decrease in lift coefficient that causes the wing to descend. As the wing descends the angle of attack increases further, which causes the lift coefficient to decrease further. Conversely, any decrease in angle of attack causes an increase in lift coefficient that causes the wing to rise. As the wing rises the angle of attack decreases, which causes the lift coefficient to increase further towards the maximum lift coefficient. For this reason the angle of attack is unstable when it is greater than the stalling angle. Any disturbance of the angle of attack on one wing will cause the aircraft to roll spontaneously and continuously. When the angle of attack on the wing of an aircraft reaches the stalling angle, the aircraft is at risk of autorotation, which will eventually develop into a spin if the pilot does not take corrective action.

Autorotation in kites and gliders
Magnus-effect rotating kites (wing flipping or wing tumbling) that have the rotation axis roughly normal to the stream direction use autorotation; a net lift is possible that lifts the kite and payload to altitude. The Rotoplane, the UFO rotating kite, and the Skybow rotating ribbon arch kite use the Magnus effect resulting from the autorotating wing with rotation axis normal to the stream. Some kites are equipped with autorotation wings. A third kind of autorotation occurs in self-rotating bols, rotating parachutes, or rotating helical objects sometimes used as kite tails or kite-line laundry. This kind of autorotation drives wind and water propeller-type turbines, sometimes used to generate electricity. Unlocked engine-off aircraft propellers may autorotate; such autorotation is being explored for generating electricity to recharge flight-driving batteries.

See also
Airborne wind turbine
Küssner effect
Autorotation (airborne wind energy)

References
Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London.
Stinton, Darryl (1996), Flying Qualities and Flight Testing of The Aeroplane, Blackwell Science Ltd, Oxford UK.

Notes

Aerodynamics Emergency aircraft operations
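The stability argument in the Details section above comes down to the sign of the lift-curve slope dCL/dα: positive below the stall (roll disturbances are damped), negative beyond it (disturbances feed themselves, i.e. autorotation). A small numerical sketch follows; the stall angle and slopes of the toy lift curve are invented for illustration.

```python
STALL_DEG = 15.0

def lift_coefficient(alpha_deg):
    """Toy lift curve: linear up to the stall, falling off beyond it."""
    if alpha_deg <= STALL_DEG:
        return 0.1 * alpha_deg
    return 0.1 * STALL_DEG - 0.08 * (alpha_deg - STALL_DEG)

def rolling_tendency(alpha_deg, d=0.01):
    """Sign of dCL/dalpha: > 0 means damping in roll, < 0 autorotation."""
    slope = (lift_coefficient(alpha_deg + d)
             - lift_coefficient(alpha_deg - d)) / (2 * d)
    return "damped" if slope > 0 else "autorotative"

for a in (5.0, 10.0, 14.0, 16.0, 20.0):
    print(f"{a:>4} deg: {rolling_tendency(a)}")
```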
Autorotation (fixed-wing aircraft)
Chemistry,Engineering
683
14,429,264
https://en.wikipedia.org/wiki/Cysteinyl%20leukotriene%20receptor%201
Cysteinyl leukotriene receptor 1, also termed CYSLTR1, is a receptor for cysteinyl leukotrienes (LT) (see cysteinyl leukotrienes). CYSLTR1, by binding these cysteinyl LTs (CysLTs; viz. LTC4, LTD4, and to a much lesser extent, LTE4), contributes to mediating various allergic and hypersensitivity reactions in humans as well as models of the reactions in other animals.

Gene
The human gene maps to the X chromosome at position Xq13-Xq21, contains three exons with the entire open reading frame located in exon 3, and codes for a protein composed of 337 amino acids. The CYSLTR1 gene promoter region extends from 665 to 30 bp upstream of its transcription start site.

Expression
CYSLTR1 mRNA is expressed in lung smooth muscle, lung macrophages, monocytes, eosinophils, basophils, neutrophils, platelets, T cells, B lymphocytes, pluripotent hematopoietic stem cells (CD34+), mast cells, pancreas, small intestine, prostate, interstitial cells of the nasal mucosa, airway smooth muscle cells, bronchial fibroblasts and vascular endothelial cells.

Function
CysLTR1 is a G protein–coupled receptor that, when bound to its CysLT ligands, activates the Gq and/or Gi alpha subunit of its coupled G protein, depending on the cell type. Acting through these G proteins and their subunits, ligand-bound CysLTR1 activates a series of pathways that lead to cell function; the order of potency of the CysLTs in stimulating CysLTR1 is LTD4>LTC4>LTE4, with LTE4 probably lacking sufficient potency to have much activity that operates through CysLTR1 in vivo. CysLTR1 activation by LTC4 and/or LTD4 in animal models and humans causes: airway bronchoconstriction and hyper-responsiveness to bronchoconstriction agents such as histamine; increased vascular permeability, edema, influx of eosinophils and neutrophils, smooth muscle proliferation, collagen deposition, and fibrosis in various tissue sites; and mucin secretion by goblet cells, goblet cell metaplasia, and epithelial cell hypertrophy in the membranes of the respiratory system. Animal model and human tissue (preclinical) studies implicate CysLTR1 antagonists as having protective/reparative effects in models of brain injury (trauma-, ischemia-, and cold-induced), multiple sclerosis, auto-immune encephalomyelitis, Alzheimer's disease, and Parkinson's disease. CysLTR1 activation is also associated in animal models with weakening the blood–brain barrier (i.e. increasing the permeability of brain capillaries to the blood's soluble elements) as well as promoting the movement of leukocytes from the blood to brain tissues; these effects may increase the development and frequency of epileptic seizures as well as the entry of leucocyte-borne viruses such as HIV-1 into brain tissue. Increased expression of CysLTR1 has been observed in transitional cell carcinoma of the urinary bladder, neuroblastoma and other brain cancers, prostate cancer, breast cancer, and colorectal cancer (CRC); indeed, CysLTR1 tumor expression is associated with poor survival prognoses in breast cancer and CRC patients, and drug inhibitors of CysLTR1 block the in vitro and in vivo (animal model) growth of CRC cells and tumors, respectively. The pro-cancer effects of CysLTR1 in CRC appear due to its ability to up-regulate pathways that increase CRC cell proliferation and survival.

Other cysLT receptors include cysteinyl leukotriene receptor 2 (i.e. CysLTR2) and GPR99 (also termed the oxoglutarate receptor and, sometimes, CysLTR3).
The order of potency of the CysLTs in stimulating CysLTR2 is LTD4=LTC4>LTE4, with LTE4 probably lacking sufficient potency to have much activity that operates through CysLTR2 in vivo. GPR99 appears to be an important receptor for CysLTs, particularly for LTE4. The CysLTs show relative potencies of LTE4>LTC4>LTD4 in stimulating GPR99-bearing cells, with GPR99-deficient mice exhibiting a dose-dependent loss of vascular permeability responses in skin to LTE4 but not to LTC4 or LTD4. This and other data suggest that GPR99 is an important receptor for the in vivo actions of LTE4 but not LTD4 or LTC4.

The GPR17 receptor, also termed the uracil nucleotide/cysteinyl leukotriene receptor, was initially defined as a receptor for LTC4, LTD4, and uracil nucleotides. However, more recent studies from different laboratories could not confirm these results; they found that GPR17-bearing cells did not respond to these CysLTs or nucleotides, but did find that cells expressing both CysLTR1 and GPR17 receptors exhibited a marked reduction in binding LTC4, and that mice lacking GPR17 were hyper-responsive to IgE-induced passive cutaneous anaphylaxis. GPR17 therefore appears to inhibit CysLTR1, at least in these model systems. In striking contrast to these studies, studies concentrating on neural tissues continue to find that oligodendrocyte progenitor cells express GPR17 and respond through this receptor to LTC4, LTD4, and certain purines (see GPR17#Function).

The purinergic receptor P2Y12, while not directly binding or responding to CysLTs, appears to be activated as a consequence of activating CysLTR1: blockage of P2Y12 activation either by receptor depletion or by pharmacological methods inhibits many of the CysLTR1-dependent actions of CysLTs in various cell types in vitro as well as in an animal model of allergic disease.

Ligands
The major CysLTs, viz. LTC4, LTD4, and LTE4, are metabolites of arachidonic acid made by the 5-lipoxygenase enzyme, ALOX5, mainly by cells involved in regulating inflammation, allergy, and other immune responses such as neutrophils, eosinophils, basophils, monocytes, macrophages, mast cells, dendritic cells, and B-lymphocytes. ALOX5 metabolizes arachidonic acid to the 5,6-epoxide precursor, LTA4, which is then acted on by LTC4 synthase, which attaches the γ-glutamyl-cysteinyl-glycine tripeptide (i.e. glutathione) to carbon 6 of the intermediate, thereby forming LTC4. LTC4 then exits its cells of origin through the MRP1 transporter (ABCC1) and is rapidly converted to LTD4 and then to LTE4 by cell surface-attached gamma-glutamyltransferase and dipeptidase peptidase enzymes through the sequential removal of the γ-glutamyl and then glycine residues.

Gene polymorphism
The 927T/C polymorphism (nucleotide thymine replaces cytosine at position 927 of the CysLTR1 gene) in the coding region of CysLTR1 has been shown to be predictive of the severity of atopy (i.e. a predisposition toward developing certain allergic hypersensitivity reactions), but not associated with asthma, in a population of 341 Caucasians in afflicted sib-pair families from the Southampton area in the United Kingdom. This atopy severity was most apparent in female siblings, but the incidence of this polymorphism is extremely low and the functionality of the 927T/C variant and its product protein are as yet unknown.

The population of the small, remote far South Atlantic Ocean island of Tristan da Cunha (266 permanent, genetically isolated residents) suffers a high prevalence of atopy and asthma.
The CysLTR1 gene product variant 300G/S (i.e. the amino acid serine replaces glycine at position 300 of the CysLTR1 protein) has been shown to be significantly associated with atopy in this population. The CysLTR1 300S variant exhibited significantly increased sensitivity to LTD4 and LTC4, suggesting that this hypersensitivity underlies its association with atopy.

Clinical significance
In spite of the other receptors cited as being responsive to CysLTs, CysLTR1 appears to be critical in mediating many of the pathological responses to CysLTs in humans. Montelukast, zafirlukast, and pranlukast are selective receptor antagonists for CysLTR1 but not CysLTR2. These drugs are in use and/or shown to be effective as prophylaxis and chronic treatments for allergic and non-allergic diseases such as: allergen-induced asthma and rhinitis; aspirin-exacerbated respiratory disease; exercise- and cold-air-induced asthma (see exercise-induced bronchoconstriction); and childhood sleep apnea due to adenotonsillar hypertrophy (see Acquired non-inflammatory myopathy#Diet and Trauma Induced Myopathy). However, responses to these drugs vary greatly, with the drugs showing fairly high rates of poor responses and ~20% of patients reporting no change in symptoms after treatment with these agents. It seems possible that the responses of CysLTR2, GPR99, or other receptors to CysLTs may be contributing to these diseases.

See also
Eicosanoid receptor
Cysteinyl leukotriene receptor 2
GPR99

References

Further reading

External links

G protein-coupled receptors
Cysteinyl leukotriene receptor 1
Chemistry
2,177
9,335,905
https://en.wikipedia.org/wiki/Multidelay%20block%20frequency%20domain%20adaptive%20filter
The multidelay block frequency domain adaptive filter (MDF) algorithm is a block-based frequency domain implementation of the (normalised) least mean squares filter (LMS) algorithm.

Introduction
The MDF algorithm is based on the fact that convolutions may be efficiently computed in the frequency domain (thanks to the fast Fourier transform). However, the algorithm differs from the fast LMS algorithm in that the block size it uses may be smaller than the filter length. If both are equal, then MDF reduces to the FLMS algorithm.

The advantages of MDF over the (N)LMS algorithm are:
Lower algorithmic complexity
Partial de-correlation of the input (which may lead to faster convergence)

Variable definitions
Let N be the length of the processing blocks, K be the number of blocks and F denote the 2N×2N Fourier transform matrix. With ℓ the block index, x the input, d the desired signal, e the error and h_k the k-th length-N filter partition, the frequency-domain variables are defined as:

e(ℓ) = F [ 0_{1×N}, e(ℓN), ..., e(ℓN+N−1) ]^T
X_k(ℓ) = diag{ F [ x((ℓ−k)N−N), ..., x((ℓ−k)N+N−1) ]^T }, k = 0, ..., K−1
d(ℓ) = F [ 0_{1×N}, d(ℓN), ..., d(ℓN+N−1) ]^T

with normalisation matrices G1 and G2:

G1 = F W1 F^H, where W1 = diag( 0_{N×N}, I_{N×N} )
G2 = F W2 F^H, where W2 = diag( I_{N×N}, 0_{N×N} )

In practice, when multiplying a column vector by G1, we take the inverse FFT of the vector, set the first N values in the result to zero and then take the FFT. This is meant to remove the effects of the circular convolution.

Algorithm description
For each block ℓ, the MDF algorithm is computed as:

y(ℓ) = G1 Σ_{k=0}^{K−1} X_k(ℓ) h_k(ℓ)
e(ℓ) = d(ℓ) − y(ℓ)
h_k(ℓ+1) = h_k(ℓ) + μ G2 X_k(ℓ)^H Λ(ℓ)^{−1} e(ℓ)

where Λ(ℓ) is the diagonal matrix of input power estimates providing the NLMS-style normalisation. It is worth noting that, while the algorithm is more easily expressed in matrix form, the actual implementation requires no matrix multiplications. For instance the normalisation matrix computation reduces to an element-wise vector multiplication because each X_k(ℓ) is diagonal. The same goes for other multiplications.

References
J.-S. Soo and K. Pang, "Multidelay block frequency domain adaptive filter," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 2, pp. 373–376, 1990.
H. Buchner, J. Benesty, W. Kellermann, "An Extended Multidelay Filter: Fast Low-Delay Algorithms for Very High-Order Adaptive Systems". Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2003.
A free implementation of the MDF algorithm is available in Speex (main source file)

See also
Adaptive filter
Recursive least squares
For statistical techniques relevant to LMS filter see Least squares.

Digital signal processing Filter theory
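A compact NumPy sketch of the scheme described in the article above follows. It is an illustrative re-implementation of the standard MDF recipe (K frequency-domain partitions of block size N, overlap-save filtering, gradient constraint, NLMS-style power normalisation), not the Speex code; all variable names and the power-smoothing constant are assumptions.

```python
import numpy as np

def mdf_filter(x, d, N=64, K=8, mu=0.5, eps=1e-6):
    """Multidelay block frequency-domain adaptive filter (sketch).

    Adapts a length K*N filter so that filtering x approximates d,
    using K partitions of size N and 2N-point FFTs (overlap-save)."""
    H = np.zeros((K, 2 * N), dtype=complex)   # frequency-domain partitions
    X = np.zeros((K, 2 * N), dtype=complex)   # delay line of input spectra
    x_old = np.zeros(N)                       # previous input block
    power = np.full(2 * N, eps)               # input power estimate
    e_out = np.zeros(len(x))
    for l in range(len(x) // N):
        blk = slice(l * N, (l + 1) * N)
        X[1:] = X[:-1]                        # delay the older spectra
        X[0] = np.fft.fft(np.concatenate([x_old, x[blk]]))
        x_old = x[blk].copy()
        # overlap-save filtering: keep the last N output samples
        y = np.fft.ifft((X * H).sum(axis=0)).real[N:]
        e = d[blk] - y
        e_out[blk] = e
        E = np.fft.fft(np.concatenate([np.zeros(N), e]))
        power = 0.9 * power + 0.1 * np.abs(X[0]) ** 2
        for k in range(K):
            grad = np.conj(X[k]) * E / (power + eps)
            g = np.fft.ifft(grad).real
            g[N:] = 0.0                       # gradient constraint
            H[k] += mu * np.fft.fft(g)
    return e_out

# Toy usage: identify an unknown 256-tap FIR system from white noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(8192)
h_true = rng.standard_normal(256) * np.exp(-np.arange(256) / 40.0)
d = np.convolve(x, h_true)[: len(x)]
e = mdf_filter(x, d)
print(np.mean(e[:512] ** 2), "->", np.mean(e[-512:] ** 2))  # error shrinks
```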
Multidelay block frequency domain adaptive filter
Engineering
466
326,120
https://en.wikipedia.org/wiki/IEC%2060320
IEC 60320 Appliance couplers for household and similar general purposes is a set of standards from the International Electrotechnical Commission (IEC) specifying non-locking connectors for connecting power supply cords to electrical appliances of voltage not exceeding 250 V (a.c.) and rated current not exceeding 16 A. Different types of connector (distinguished by shape and size) are specified for different combinations of current, temperature and earthing requirements. Unlike IEC 60309 connectors, they are not coded for voltage; users must ensure that the voltage rating of the equipment is compatible with the mains supply. The standard uses the term coupler to encompass connectors on power cords and power inlets and outlets built into appliances. The first edition of IEC 320 was published in 1970 and was renumbered to 60320 in 1994. Terminology Appliance couplers enable the use of standard inlets and country-specific cord sets which allow manufacturers to produce the same appliance for many markets, where only the cord set has to be changed for a particular market. Interconnection couplers allow a power supply from a piece of equipment or an appliance to be made available to other equipment or appliances. Couplers described under these standards have standardized current and temperature ratings. The parts of the couplers are defined in the standard as follows. Connector: "part of the appliance coupler integral with, or intended to be attached to, one cord connected to the supply". Appliance inlet: "part of the appliance coupler integrated as a part of an appliance or incorporated as a separate part in the appliance or equipment or intended to be fixed to it". Plug connector: "part of the interconnection coupler integral with or intended to be attached to one cord". Appliance outlet: "part of the interconnection coupler which is the part integrated or incorporated in the appliance or equipment or intended to be fixed to it and from which the supply is obtained". Cord set: "assembly consisting of one cable or cord fitted with one non-rewirable plug and one non-rewirable connector, intended for the connection of an electrical appliance or equipment to the electrical supply". Interconnection cord set: "assembly consisting of one cable or cord fitted with one non-rewirable plug connector and one non-rewirable connector, intended for the interconnection between two electrical appliances". Non-rewirable plugs and connectors are typically permanently molded onto cords and cannot be removed or rewired without cutting the cords. The standard uses the terms "male" and "female" only for individual pins and socket contacts, but in general usage they are also applied to the complete plugs and connectors. "Connectors" and "appliance outlets" are fitted with socket contacts, and "appliance inlets" and "plug connectors" are fitted with pin contacts. Each type of coupler is identified by a standard sheet number. For appliance couplers this consists of the letter "C" followed by a number, where the standard sheet for the appliance inlet is 1 higher than the sheet for the corresponding cable connector. Many types of coupler also have common names. The most common ones are "IEC connector" for the common C13 and C14, the "figure-8 connector" for C7 and C8, and "cloverleaf connector" or "Mickey Mouse connector" for the C5/C6. 
"Kettle plug" (often "jug plug" in Australian or New Zealand English) is a colloquial term used for the high-temperature C16 appliance inlet (and sometimes for C15 connector that the plug goes into). “Kettle/jug plug” is also informally used to refer to regular temperature-rated C13 and C14 connectors. (A high-temperature-rated cord with a C15 connector can be used to power a computer with a C14 plug, but a cord with a low-temperature C13 connector will not fit a high-temperature appliance that has a C16 plug.) Application Detachable appliance couplers are used in office equipment, measuring instruments, IT environments, and medical devices, among many types of equipment for worldwide distribution. Each appliance's power system must be adapted to the different plugs used in different regions. An appliance with a permanently-attached plug for use in one country cannot be readily sold in another which uses an incompatible wall socket; this requires keeping track of variations throughout the product's life cycle from assembly and testing to shipping and repairs. Instead, a country-specific power supply cord can be included in the product packaging, so that model variations are minimized and factory testing is simplified. A cord which is fitted with non-rewireable (usually moulded) connectors at both ends is termed a cord set. Appliance manufacturing may be simplified by mounting an appliance coupler directly on the printed circuit board. Assembly and handling of an appliance is easier if the power cord can be removed without much effort. Appliances can be used in another country easily, with a simple change of the power supply cord (including a connector and a country-specific plug). The power supply cord can be replaced easily if damaged, because it is a standardized part that can be unplugged and re-inserted. Safety hazards, maintenance expenditure and repairs are minimized. Standards Parts of the standard IEC 60320 is divided into several parts: IEC 60320-1: General Requirements specifies two-pole and two-pole with earth couplers intended for the connection of a mains supply cord to electrical appliances. Beginning with the IEC 60320-1:2015 edition, this part also specifies interconnection couplers which enable the connection and disconnection of an appliance to a cord leading to another appliance, incorporating IEC 60320-2-2. At the same time, this part of the standard no longer includes standard sheets which were moved to a new part: IEC 60320-3. IEC 60320-2-1: Sewing machine couplers specifies couplers which are not interchangeable with other couplers from IEC 60320, for use with household sewing machines. They are rated no higher than 2.5 A and 250 V AC. IEC 60320-2-2: Interconnection couplers for household and similar equipment. This section was withdrawn in January 2016. The general requirements for these items were incorporated into IEC 60320-1 and the standard sheets were moved to IEC 60320-3. IEC 60320-2-3: Couplers with a degree of protection higher than IPX0 specifies couplers with some degree of liquid ingress protection (IP). In its second edition published in 2018, the standard sheets were moved to IEC 60320-3. IEC 60320-2-4: Couplers dependent on appliance weight for engagement. IEC 60320-3: Standard sheets and gauges. First published October 31, 2014, this part initially included the standard sheets for both appliance couplers and interconnection couplers. In a 2018 amendment, the standard sheets for the IP couplers defined by 60320-2-3 were added. 
For appliance couplers the various coupler outlines are designated using a combination of letters and numbers, e.g., "C14". The connector supplies power to the appliance inlet. The appliance inlet is designated by the even number one greater than the number assigned to the connector, so a C1 connector mates with a C2 inlet, and a C15A mates with a C16A. Interconnection couplers have single-letter designators, e.g., "F". They consist of a plug connector and an appliance outlet. The plug connector is the part integral with, or intended to be attached to, the cord, and the appliance outlet is the part integrated with or incorporated into the appliance or equipment or intended to be fixed to it, and from which the supply is obtained.

Contents of standards
The standards define the mechanical, electrical and thermal requirements and safety goals of power couplers. The standard's scope is limited to appliance couplers with a rated voltage not exceeding 250 V (a.c.) at 50 Hz or 60 Hz, and a rated current not exceeding 16 A. Further sub-parts of IEC 60320 focus on special topics such as protection ratings and appliance-specific requirements. Selection of a coupler depends in part on the IEC appliance classes. The shape and dimensions of appliance inlets and connectors are coordinated so that a connector with a lower current rating, temperature rating, or polarization cannot be inserted into an appliance inlet that requires higher ratings (e.g., a Protection Class II connector cannot mate with a Class I inlet, which requires an earth), whereas connecting a Class I connector to a Class II appliance inlet is possible because it creates no safety hazard.

Pin temperature is measured where the pin projects from the engagement surface. The maximum permitted pin temperatures for the cold, hot and very hot coupler classes are 70 °C, 120 °C, and 155 °C, respectively (the higher temperatures are not applicable to interconnection couplers). The pin temperature is determined by the design of the appliance, and its interior temperature, rather than by its ambient temperature. Typical applications with increased pin temperatures include appliances with heating coils such as ovens or electric grills. It is generally possible to use a connector with a higher rated temperature with a lower rated appliance inlet, but the keying feature of the inlet prevents use of a connector with a lower temperature rating.

Connectors are also classified according to the method of connecting the cord, either as rewirable connectors or non-rewirable connectors. In addition the standards define further general criteria such as withdrawal forces, testing procedures, the minimum number of insertion cycles, and the number of flexings of cords.

IEC 60320-1 defines a cord set as an "assembly consisting of one cable or cord fitted with one plug and one connector, intended for the connection of an electrical appliance or equipment to the electrical supply". It also defines an interconnection cord set as an "assembly consisting of one cable or cord fitted with one plug connector and one connector, intended for the interconnection between two electrical appliances". In addition to the connections within the standards, as mentioned, there are possible combinations between appliance couplers and IEC interconnection couplers. Fitted with a flexible cord, the components become interconnection cords to be used for connecting appliances or for extending other interconnection cords or power supply cords.
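The keying described above amounts to a simple compatibility rule, which the sketch below encodes for a few coupler families: a cord connector fits an inlet only if its temperature class is at least the inlet's and an earthed inlet is only fed by an earthed cord. The small table of ratings is a simplified illustration of the couplers discussed in this article, not a transcription of the standard sheets.

```python
# family -> (temperature class in Celsius, earthed?)
CONNECTORS = {"C13": (70, True), "C15": (120, True), "C17": (70, False)}
INLETS = {"C14": (70, True), "C16": (120, True), "C18": (70, False)}

def mates(connector, inlet):
    """Simplified keying check for the families listed above."""
    c_temp, c_earth = CONNECTORS[connector]
    i_temp, i_earth = INLETS[inlet]
    if c_temp < i_temp:
        return False  # e.g. a C13 is keyed out of a hot C16 inlet
    if i_earth and not c_earth:
        return False  # a Class II cord cannot serve a Class I inlet
    return True

print(mates("C15", "C14"))  # True: hot-rated cord fits a cold inlet
print(mates("C13", "C16"))  # False: kettle inlet rejects a C13
print(mates("C13", "C18"))  # True: C18 accepts a C13, per the text above
```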
North American ratings
North American rating agencies (CSA, NOM-ANCE, and UL) will certify IEC 60320 connectors for higher currents than are specified in the IEC standard itself. In particular, UL will certify:
C5/C6 connectors for up to 13 A, although 10 A is more commonly seen (IEC maximum is 2.5 A)
C7/C8 connectors for up to 10 A (IEC maximum is 2.5 A)
C13/C14 and C15/C16 connectors for up to 15 A (IEC maximum is 10 A)
C19/C20 and C21/C22 connectors for up to 20 A (IEC maximum is 16 A)
Given the 120 V (±5%) mains supply used in the United States and Canada, these higher ratings permit devices with C6 and C8 inputs to draw more than 114 V × 2.5 A = 285 W from the mains, and devices with C14 inputs to draw more than 1140 W from the mains. This is exploited by high-powered computer power supplies, allowing models of up to 1200 W output, and even some particularly efficient 1500 W output models, to use the more popular C14 input on products sold worldwide. Although less common, power bricks with C6 and C8 inputs and ratings up to 300 W also exist.

Appliance couplers
The dimensions and tolerances for connectors, appliance inlets, appliance outlets and plug connectors are given in standard sheets, which are dimensioned drawings showing the features required for safety and interchangeability.

Mains appliance couplers
The mains appliance couplers are all engineered to connect an appliance via a power cord to mains power.

C1/C2 coupler
The C1 connector and C2 inlet were commonly used for mains-powered electric shavers. These have largely been supplanted by cordless shavers with rechargeable batteries or corded shavers with an AC adapter.

C5/C6 coupler
This coupler is sometimes colloquially called a "cloverleaf" or "Mickey Mouse" coupler (because the cross section resembles the silhouette of the Disney character). The C6 inlet is used on laptop power supplies and portable projectors, as well as on some desktop computers and some LCD monitors.

C7/C8 coupler
Commonly known as a "figure-8", "infinity" or "shotgun" connector due to the shape of its cross-section, or less commonly, a Telefunken connector after its originator. This coupler is often used for small cassette recorders, battery/mains-operated radios, battery chargers, some full-size audio-visual equipment, laptop computer power supplies, video game consoles, and similar double-insulated appliances.

A C8B inlet type is defined by the standard for use by dual-voltage appliances; it has three pins and can hold a C7 connector in either of two positions, allowing the user to select voltage by choosing the position in which the connector is inserted.

A similar but polarized connector has been made, but is not part of the standard. Sometimes called C7P, it is asymmetrical, with one side squared. Unpolarized C7 connectors can be inserted into the polarized inlets; however, doing so might be a safety risk if the device is designed for polarized power. Although not specified by IEC 60320, and it is not clear whether any formal written standard exists, the most common wiring appears to connect the squared side to the hot (live) line, and the rounded to the neutral. Note: Clause 9.5 was added to IEC 60320-1:2015; it requires that "It shall not be possible to engage a part of a non-standard appliance coupler with a complementary part of an appliance coupler complying with the standard sheets in any part of IEC 60320."

Apple uses a modified version of this connector, with the receptacle having a proprietary pin that secures the adapter in place and provides grounding.
Most Apple-supplied cable adapters provide grounding through a slide-in connector, while the angled North American AC adapter ("duckhead") does not provide grounding. These power supplies do accept the standard C7 connector, and are supported by Apple for non-grounded applications.

C13/C14 coupler
The C13 connector and C14 inlet combination is used in a wide variety of electronic equipment, ranging from computer components like the power supply, monitors, printers and other peripherals to video game consoles, instrument amplifiers, professional audio equipment and virtually all professional video equipment. An early example of a product that uses this connector is the Apple II.

A power cord with a suitable power plug (for the locality where the appliance is being used) on one end and a C13 connector (connecting to the appliance) on the other is commonly called an IEC cord. There are also a variety of splitter blocks, splitter cables, and similar devices available. These are usually un-fused (with the exception of C13 cords attached to BS 1363 plugs, which are always fused). These cables are sometimes informally referred to as a "kettle cord" or "kettle lead", but the C13/14 connectors are only rated for 70 °C: a device such as a kettle requires the C15/16 connector, rated for 120 °C. A cable consisting of a C13 and an E interconnection connector is commonly mislabeled as an "extension cord", as, although that is not the intended purpose, it can be used as such. They are also commonly mislabeled as C14 instead of E.

The C13 connector and C14 inlet are also commonly found on servers, routers, and switches. Power cord sets utilizing a C13 connector and an E interconnection plug are commonplace in data centers to provide power from a PDU (power distribution unit) to a server. These data-center power cables are now offered in many colors. Colored power cables are used to color-code installations.

C15/C16 coupler
Some electric kettles and similar hot household appliances, like home stills, use a supply cord with a C15 connector and a matching C16 inlet on the appliance; their temperature rating is 120 °C rather than the 70 °C of the similar C13/C14 combination. The official designation in Europe for the C15/C16 coupler is a "hot-condition" coupler. These are similar in form to the C13/C14 coupler, except with a ridge opposite the earth in the C16 inlet (preventing a C13 fitting), and a corresponding valley in the C15 connector (which doesn't prevent it fitting a C14 inlet). For example, an electric kettle cord can be used to power a computer, but an unmodified computer cord cannot be used to power a kettle. There is some public confusion between C13/C14 and C15/C16 couplers, and it is not uncommon for C13/C14 to be informally referred to as "kettle plug" or "kettle lead" (or some local equivalent). In European countries the C15/C16 coupler has replaced and made obsolete the formerly common types of national appliance coupler in many applications.

C15A/C16A coupler
This modification of the C15/C16 coupler has an even higher temperature rating.

C17/C18 coupler
Similar to the C13/C14 coupler, but unearthed. A C18 inlet accepts a C13 connector, but a C14 inlet does not accept a C17 connector. The IBM Wheelwriter series of electronic typewriters is one common application. Three-wire cords with C13 connectors, which are easier to find, are sometimes used in place of the two-wire cords for replacement. In this case, the ground wire will not be connected.
The C17/C18 coupler is often used in audio applications where a floating ground is maintained to eliminate hum caused by ground loops. Other common applications are the power supplies of Xbox 360 game consoles, replacing the C15/C16 coupler employed initially, and large CRT televisions manufactured by RCA in the early 1990s.

C19/C20 coupler
Earthed, 16 A, polarized. This coupler is used for supplying power in IT applications where higher currents are required, for instance on high-power workstations and servers, power to uninterruptible power supplies, power to some power distribution units, large network routers, switches, blade enclosures, and similar equipment. This connector can also be found on high-current medical equipment. It is rectangular and has pins parallel to the long axis of the coupler face.

Interconnection couplers
Interconnection couplers are similar to appliance couplers, but with the gender roles reversed. Specifically, the female appliance outlet is built into a piece of equipment, while the male plug connector is attached to a cord. They are identified by letters, not numbers, with one letter identifying the plug connector, and the alphabetically next letter identifying the mating appliance outlet. For example, an E plug fits into an F outlet. Beginning with an amendment published in 2022, three of the commonly used high temperature variants have been standardized as types M–R.

Cables with a C13 connector at one end and a type E plug connector at the other are commonly available. They have a variety of common uses, including connecting power between older PCs and their monitors, extending existing power cords, connecting to type F outlet strips (commonly used with rack-mount gear to save space and for international standardization) and connecting computer equipment to the output of an uninterruptible power supply (UPS). Type J outlets are used in a similar way.

Withdrawn and other standard sheets
C3, C4, C11 and C12 standard sheets are no longer listed in the standard. Standard sheet C25 shows retaining device dimensions. Sheet C26 shows detail dimensions for pillar-type terminals, where the end of the screw bears on a wire directly or through a pressure plate. Sheet C27 shows details for screw terminals, where the wire is held by wrapping it around the head of a screw.

See also
AC power plugs and sockets
Power entry module
Power Cannon (XLR-LNE), a compact alternative power entry connector.
IEC 60309 specifies larger couplers used for higher currents, higher voltages, and polyphase systems.
IEC 60906-1, a proposed standard for domestic wall sockets.
NEMA connector, the North American standard for building receptacles and compatible cord connectors.
AC power plugs and sockets#BS 1363 three-pin (rectangular) plugs and sockets, the British standard for building receptacles and compatible cord connectors.
CEE 7 standard AC plugs and sockets

References

External links
IEC 60799 edition 2.0 Electrical accessories — Cord sets and interconnection cord sets
International Standardized Appliance Connectors (IEC-60320) Reference Chart Includes diagrams of all couplers, their rated current, equipment class, and temperature rating.
Previews (table of contents and introduction) of IEC standard 60320: Appliance couplers for household and similar general purposes: IEC 60320-1 General requirements IEC 60320-2-1 Sewing machine couplers IEC 60320-2-3 Appliance couplers with a degree of protection higher than IPX0 IEC 60320-2-4 Couplers dependent on appliance weight for engagement IEC 60320-3 Standard sheets and gauges Indian national standards equivalent to IEC standards: IS/IEC 60320-1:2001 General requirements IS/IEC 60320-2-2:1998 Interconnection couplers for household and similar equipment IS/IEC 60320-2-3:1998 Appliance couplers with a degree of protection higher than IPX0 60320 Mains power connectors
IEC 60320
Technology
4,734
1,204,926
https://en.wikipedia.org/wiki/MiKTeX
MiKTeX is a free and open-source distribution of the TeX/LaTeX typesetting system compatible with Linux, macOS, and Windows. It also contains a set of related programs. MiKTeX provides the tools necessary to prepare documents using the TeX/LaTeX markup language, as well as a simple TeX editor, TeXworks. The name comes from the login credentials of the chief developer Christian Schenk: MiK for Micro-Kid.

MiKTeX can update itself by downloading new versions of previously installed components and packages, and has an easy installation process. By default, MiKTeX installs only a minimal set of packages (according to the philosophy of "just enough TeX"), which is useful when disk space is limited. It will then ask users whether they wish to download any packages that have not yet been installed but are required to render the current document. A portable version of MiKTeX, as well as a command-line installer for it, are also available. The latest version of MiKTeX is available at the MiKTeX homepage. In June 2020, Schenk decided to change the numbering convention; the new one is based on the release date. Thus 20.6 was released in June 2020. Since version 2.7, MiKTeX has had support for XeTeX, MetaPost and pdfTeX, and compatibility with Windows 7. Support for 32-bit computers was dropped in 2022 and for Windows 7 in 2023.

See also
TeX Live – Another cross-platform LaTeX distribution
MacTeX – A LaTeX distribution for macOS
Texmaker – An open-source cross-platform LaTeX editor
TeXstudio – Another open-source cross-platform LaTeX editor
LyX – An open-source cross-platform word processor
TeXnicCenter – An open-source Windows editor
WinShell – A Windows freeware, closed-source multilingual integrated development environment (IDE)

References

External links
MiKTeX project homepage

Free software programmed in C Free software programmed in C++ Free software programmed in Pascal Free TeX software Linux TeX software TeX software for Windows TeX SourceForge projects
MiKTeX
Technology
444
839,812
https://en.wikipedia.org/wiki/ATC%20code%20H04
H04A Glycogenolytic hormones H04AA Glycogenolytic hormones H04AA01 Glucagon H04AA02 Dasiglucagon References H04
ATC code H04
Chemistry
48
44,714,050
https://en.wikipedia.org/wiki/ER%20%3D%20EPR
ER = EPR is a conjecture in physics stating that two entangled particles (a so-called Einstein–Podolsky–Rosen or EPR pair) are connected by a wormhole (or Einstein–Rosen bridge) and is thought by some to be a basis for unifying general relativity and quantum mechanics into a theory of everything.

Overview
The conjecture was proposed by Leonard Susskind and Juan Maldacena in 2013. They proposed that a wormhole (Einstein–Rosen bridge or ER bridge) is equivalent to a pair of maximally entangled black holes. EPR refers to quantum entanglement (EPR paradox). The symbol is derived from the first letters of the surnames of the authors who wrote the first paper on wormholes (Albert Einstein and Nathan Rosen) and the first paper on entanglement (Einstein, Boris Podolsky and Rosen). The two papers were published in 1935, but the authors did not claim any connection between the concepts.

Conjectured resolution
This is a conjectured resolution to the AMPS firewall paradox. Whether or not there is a firewall depends upon what is thrown into the other distant black hole. However, as the firewall lies inside the event horizon, no external superluminal signalling would be possible.

This conjecture is an extrapolation of the observation by Mark Van Raamsdonk that a maximally extended AdS-Schwarzschild black hole, which is a non-traversable wormhole, is dual to a pair of maximally entangled thermal conformal field theories via the AdS/CFT correspondence.

They backed up their conjecture by showing that the pair production of charged black holes in a background magnetic field leads to entangled black holes, but also, after Wick rotation, to a wormhole.

This conjecture sits uncomfortably with the linearity of quantum mechanics. An entangled state is a linear superposition of separable states. Presumably, separable states are not connected by any wormholes, and yet a superposition of such states is connected by a wormhole.

The authors pushed this conjecture even further by claiming that any entangled pair of particles, even particles not ordinarily considered to be black holes, and pairs of particles with different masses or spin, or with charges which aren't opposite, are connected by Planck scale wormholes.

The conjecture leads to a grander conjecture that the geometry of space, time and gravity is determined by entanglement.

References

External links
Susskind, Leonard. "ER = EPR" or "What's Behind the Horizons of Black Holes?". Stanford Institute for Theoretical Physics. November 4, 2014.

Black holes Conjectures Quantum gravity
ER = EPR
Physics,Astronomy,Mathematics
542
61,173,032
https://en.wikipedia.org/wiki/Unistellar
Unistellar is a French manufacturer of computer-connected telescopes that allow non-professional skywatchers to observe astronomical objects at relatively low cost. The first product launched was named the eVscope, and used digital astrophotographic techniques. SETI Institute has partnered with Unistellar and will be able to send requests for information and notifications to users, and receive information about transient astronomical events.

The eVscope is a 114 mm (4.5 in) diameter prime focus reflector with a focal length of 450 mm. It projects its image onto a CMOS color sensor with 1.3 million pixels. The image is transmitted to a small screen in an eyepiece also mounted on the telescope. An electronic connection to a computer (smartphone, pad, or laptop) is required to make astronomical observations from the telescope. The digital technology allows multiple images to be stacked while subtracting the noise component of the observation, producing images of Messier objects and faint stars as dim as an apparent magnitude of 15 with consumer-grade equipment.

History
The company was founded in Marseille, France, in 2015, with incubator investment from Incubateur Impulse and Pépinières d'Entreprises Innovantes, with subsequent VC round capital from private investors and a VC firm named Brighteye Ventures. Unistellar unveiled their electronic telescope technology prototype in 2017 at CES2017 in Las Vegas and at IFA Next in Berlin.

The company experienced difficulties bringing the product to market. The consumer-grade electronic telescope was originally planned to be available in "fall 2018", which subsequently shifted to "early 2019", then later in 2019. By January 2020, the telescope was expected to be shipped worldwide between May and August 2020. As of December 2021, over 5000 telescopes had been delivered to customers. The kit included a custom tripod and mount, a Bahtinov mask and a protective cap. Later, Unistellar introduced two new telescopes: the eVscope 2, with a bigger field of view and a better eyepiece display, which won the T3 Platinum Award, and the eQuinox, with longer battery life and no eyepiece display.

Science
As presented at the AGU 2021 Fall Meeting, the eVscope had observed many astronomical objects up to December 2021, including the detection by 79 observers of 85 transits by Jupiter-sized exoplanets, 281 asteroid occultations (including 45 positive ones), and three shape and spin solutions for near-Earth asteroids. The network also supported NASA's TESS mission by making transatlantic observations of an exoplanet transit, and NASA's Lucy mission by profiling Trojan asteroids this spacecraft will visit. These data are collected by observers in Europe, North America, Japan, Australia, and New Zealand. Unistellar aims to expand the network to the rest of Asia and to South America. The Unistellar Exoplanet (UE) campaign helped to improve the measurement accuracy of the orbital period of TOI 3799.01. The UE campaign also helps to refine the orbits of long-period exoplanets, such as the Jupiter-analog Kepler-167e and the eccentric planet HD 80606b, which have transit durations longer than 10 hours. This refinement will help with follow-up observations, such as JWST observation of HD 80606b.

Competitive offerings
The Stellina astrophotography telescope by Vaonis is a similar technology-facilitated telescope that uses a digital display in lieu of an eyepiece and stacks images to get high-resolution images of deep-sky objects.

See also
Digiscoping

References

Telescope manufacturers French companies established in 2015 Telescopes
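The depth gain from the stacking technique described above can be quantified: averaging N frames with independent noise improves the signal-to-noise ratio by a factor of √N, which corresponds to roughly 2.5·log10(√N) extra magnitudes of limiting depth. A synthetic demonstration follows (the numbers are illustrative, not eVscope data):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
signal, noise_sigma, n_frames = 0.2, 1.0, 400

# n_frames noisy exposures of a constant faint signal
frames = signal + noise_sigma * rng.standard_normal((n_frames, 10_000))
single_snr = signal / frames[0].std()
stacked_snr = signal / frames.mean(axis=0).std()

print(f"single-frame SNR ~ {single_snr:.2f}")
print(f"stacked SNR      ~ {stacked_snr:.2f} (expect x{math.sqrt(n_frames):.0f})")
print(f"extra depth      ~ {2.5 * math.log10(math.sqrt(n_frames)):.1f} magnitudes")
```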
Unistellar
Astronomy
726
2,645,572
https://en.wikipedia.org/wiki/Volvic%20%28mineral%20water%29
Volvic is a brand of mineral water. Its source is in the Chaîne des Puys-Limagne Fault, Auvergne Volcanoes Regional Park, at the Puy de Dôme in France. Over 50% of the production of Volvic water is exported to more than sixty countries throughout the world. Two bottling plants produce over one billion bottles of water annually and are the principal employers of the local Volvic commune.

History
The first of the springs in the area was tapped in 1922, and the first bottles appeared on the market in 1938. In October 1993, the Volvic company was bought by Groupe Danone. Since 1997, Volvic has been using PETE, a recyclable material, to make its bottles. The company became carbon-neutral during 2020. During the same year Volvic and the esports organisation Berlin International Gaming commenced a partnership.

Varieties
Volvic also produces a range of water with natural fruit flavouring named Volvic Touch of Fruit, with sugar-free options. Recent flavours include strawberry, summer fruits, orange & peach, cherry, and lemon & lime. Other ranges available are Volvic Juiced (water with fruit juice from concentrate), and Volvic Sparkling (sparkling flavoured water similar to Touch of Fruit).

Advertising campaigns
The track "Bombay Theme" from the soundtrack of the Kollywood film Bombay (1995) is an instrumental orchestral piece composed and arranged by A. R. Rahman and conducted by K. Srinivas Murthy; since 2000 it has been featured in television commercials for Volvic starring Zinedine Zidane.

In Volvic's "1L = 10L FOR AFRICA" campaign, the company promised that for every one litre of Volvic purchased, it would provide ten litres of drinking water through its "well creation" programme with World Vision in Ghana, Malawi, Mali and Zambia. Another recent campaign is the 14 Day Challenge, in which people are challenged to drink 1.5 litres of Volvic mineral water every day for 14 days, to achieve hydration of the body and mind.

In 2006, Volvic became the first brand in the history of Danone to take out a television sponsorship, when Danone paid an amount reaching seven figures for it. The sponsorship took place on E4 and included the television shows How I Met Your Mother, The Inbetweeners, The Goldbergs and 2 Broke Girls, a series from Guy Martin, a series of The Island and a reality survival series, Eden.

In 2007, a series of four Volvic adverts were released featuring a volcano named George (voiced by Matt Berry) and a t-rex named Tyrannosaurus Alan (voiced by Tom Goodman-Hill).

Alzheimer's study
A 2006 study found that drinking Volvic could reduce the levels of aluminium in the bodies of people with Alzheimer's disease. There is a link between human exposure to aluminium and the incidence of Alzheimer's disease.

References

Sources

External links

Bottled water brands French brands French drinks Groupe Danone brands Massif Central Mineral water 1938 establishments in France Food and drink companies established in 1938 Companies based in Auvergne-Rhône-Alpes
Volvic (mineral water)
Chemistry
644
239,053
https://en.wikipedia.org/wiki/Construction%20delay
Construction delays are situations where project events occur at a later time than expected due to causes related to the client, the consultant, the contractor, etc. In residential and light construction, construction delays are often the result of miscommunication between contractors, subcontractors, and property owners. These types of misunderstandings and unrealistic expectations are usually avoided through the use of detailed critical path schedules, which specify the work and timetable to be used and, most importantly, the logical sequence of events that must occur for a project to be completed. Incidence and impact of delays In more complex projects, problems will arise that are not foreseen in the original contract, and so other legal construction forms are subsequently used, such as change orders, lien waivers, and addenda. In construction projects, as in other projects where a schedule is used to plan work, delays happen frequently. Delays in construction projects are frequently expensive, since there is usually a construction loan involved which charges interest, management staff dedicated to the project whose costs are time dependent, and ongoing inflation in wage and material prices. Analysis of delays What is being delayed determines whether a project, or some other deadline such as a milestone, will be completed late. Before analyzing construction delays, a clear understanding of the general types is necessary. There are four basic ways to categorize delays: Critical or Non-Critical Excusable or Non-Excusable Concurrent or Non-Concurrent Compensable or Non-Compensable Before determining the impact of a delay on the project, one must determine whether the delay is critical or non-critical. Additionally, all delays are either excusable or non-excusable. Both excusable and non-excusable delays can be defined as either concurrent or non-concurrent. Delays can be further broken down into compensable or non-compensable delays. Management solutions to delays Construction supply chain plays a major role in construction market competition. Construction supply chain management assists enterprises by helping to improve competitiveness, increase profits and gain more control over the different factors and variables within the project. A. Cox and P. Ireland illustrated the myriad of supply chains that constitute the main flows within the construction supply chain. In his research, Ghaith Al-Werikat analysed delays in relation to these supply chains (in particular material flow, equipment flow, information flow, labour flow and the client's information flow), offering a quantification of the impact of supply chain delays on construction project performance. On the other hand, economic historian Robert E. Wright argues that construction delays are caused by bid gaming, change order artistry, asymmetric information, and post-contractual market power. Until those fundamental issues are confronted and resolved, many custom construction projects will continue to come in over budget, past due, or below contract specifications, he claims. See also Construction Civil engineering Project planning References Further reading External links Construction Risk.com Resource for information on latest trends in project delays and construction claims Article outlining the different types of weather delays Building engineering
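The role of the critical path in classifying delays (critical versus non-critical) can be sketched with a few lines of code. This is a generic illustration of the critical path method, not a tool discussed in the article; the activities, durations, and dependencies are invented.

```python
# Minimal critical-path calculation: an activity is "critical" when it
# has zero float, so any delay to it delays project completion.
durations = {"A": 3, "B": 2, "C": 4, "D": 1}            # days (hypothetical)
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

early_finish = {}
def ef(task):
    """Earliest finish time of a task (recursive forward pass)."""
    if task not in early_finish:
        start = max((ef(p) for p in preds[task]), default=0)
        early_finish[task] = start + durations[task]
    return early_finish[task]

project_end = max(ef(t) for t in durations)

# Backward pass: latest finish times; float = late finish - early finish.
late_finish = {t: project_end for t in durations}
for t in sorted(durations, key=ef, reverse=True):
    for p in preds[t]:
        late_finish[p] = min(late_finish[p], late_finish[t] - durations[t])

critical = [t for t in durations if late_finish[t] == ef(t)]
print(project_end, critical)    # 8 ['A', 'C', 'D']: delays here are critical
```

A delay to activity B (which has two days of float in this toy schedule) would be non-critical, while the same delay to A, C, or D would push out the completion date.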
Construction delay
Engineering
628
60,777,446
https://en.wikipedia.org/wiki/Barrel%20plating
Barrel plating is a form of electroplating used for plating a large number of smaller metal objects in one sitting. It consists of a non-conductive barrel-shaped cage in which the objects are placed before being subjected to the chemical bath in which they become plated. An important aspect of the barrel plating process is that the individual pieces establish a bipolar contact with one another, which results in high plating efficiency. However, because of the large amount of surface contact that the pieces have with each other, barrel plating is generally not recommended when precisely engineered or ornamental finishes are required. Barrel plating began as a practice in the United States during the US Civil War. The harsh chemicals required, however, meant that it had to await the development of non-conductive and chemically resistant plastics (primarily perspex and polypropylene) before it could receive widespread use. By 2004, however, barrel plating had become widespread: it was estimated that as much as 70% of modern electroplating facilities used barrel plating techniques at that time. References Metal plating
Barrel plating
Chemistry
223
77,096,028
https://en.wikipedia.org/wiki/NGC%205885
NGC 5885 is an intermediate barred spiral galaxy located in the constellation Libra. Its speed relative to the cosmic microwave background is 2,185 ± 13 km/s, which corresponds to a Hubble distance of 32.3 ± 2.3 Mpc (~105 million ly). NGC 5885 was discovered by German-British astronomer William Herschel in 1784. The luminosity class of NGC 5885 is III and it has a broad HI line. It also contains regions of ionized hydrogen. With a surface brightness of 14.39 mag/am2, NGC 5885 can be classified as a low surface brightness (LSB) galaxy. LSB galaxies are diffuse galaxies whose surface brightness is at least one magnitude lower than that of the ambient night sky. To date, 11 non-redshift measurements yield a distance of 22.055 ± 5.687 Mpc (~71.9 million ly), which lies outside the range of the Hubble distance values. Note that the NASA/IPAC database calculates the diameter of a galaxy from the average value of independent measurements, when they exist; consequently, the diameter of NGC 5885 could be approximately 37.5 kpc (~122,000 ly) if the Hubble distance were used to calculate it. See also List of NGC objects (5001–6000) List of spiral galaxies New General Catalogue References External links NGC 5885 at NASA/IPAC NGC 5885 at SIMBAD NGC 5885 at LEDA Spiral galaxies Barred spiral galaxies Libra (constellation) 5885 Discoveries by William Herschel Astronomical objects discovered in 1784 Low surface brightness galaxies
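The quoted distance follows from Hubble's law by simple arithmetic, sketched below. The Hubble constant used here is an assumption chosen to roughly reproduce the quoted figure (the article does not state which value NASA/IPAC used), and the angular size in the diameter calculation is likewise hypothetical.

```python
import math

# Hubble's law: distance = recession velocity / Hubble constant.
v = 2185.0         # km/s, velocity relative to the CMB (from the article)
H0 = 67.8          # km/s/Mpc -- assumed value
d_mpc = v / H0
print(f"Hubble distance: {d_mpc:.1f} Mpc")        # ~32.2 Mpc (quoted: 32.3)

# Small-angle formula: physical diameter = distance * angle in radians.
angle_arcmin = 4.0                                 # hypothetical apparent size
angle_rad = math.radians(angle_arcmin / 60.0)
diameter_kpc = d_mpc * angle_rad * 1000.0
print(f"Diameter: {diameter_kpc:.1f} kpc")         # ~37.5 kpc at this distance
```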
NGC 5885
Astronomy
338
5,719,307
https://en.wikipedia.org/wiki/Paley%20graph
In mathematics, Paley graphs are undirected graphs constructed from the members of a suitable finite field by connecting pairs of elements that differ by a quadratic residue. The Paley graphs form an infinite family of conference graphs, which yield an infinite family of symmetric conference matrices. Paley graphs allow graph-theoretic tools to be applied to the number theory of quadratic residues, and have interesting properties that make them useful in graph theory more generally. Paley graphs are named after Raymond Paley. They are closely related to the Paley construction for constructing Hadamard matrices from quadratic residues. They were introduced as graphs independently by Sachs and by Erdős and Rényi. Sachs was interested in them for their self-complementarity properties, while Erdős and Rényi studied their symmetries. Paley digraphs are directed analogs of Paley graphs that yield antisymmetric conference matrices. They were introduced by Graham and Spencer (independently of Sachs, Erdős, and Rényi) as a way of constructing tournaments with a property previously known to be held only by random tournaments: in a Paley digraph, every small subset of vertices is dominated by some other vertex. Definition Let q be a prime power such that q = 1 (mod 4). That is, q should either be an arbitrary power of a Pythagorean prime (a prime congruent to 1 mod 4) or an even power of an odd non-Pythagorean prime. This choice of q implies that in the unique finite field Fq of order q, the element −1 has a square root. Now let V = Fq and let E = {{a,b} : a − b is a nonzero square in Fq}. If a pair {a,b} is included in E, it is included under either ordering of its two elements. For, a − b = −(b − a), and −1 is a square, from which it follows that a − b is a square if and only if b − a is a square. By definition G = (V, E) is the Paley graph of order q. Example For q = 13, the field Fq is just integer arithmetic modulo 13. The numbers with square roots mod 13 are: ±1 (square roots ±1 for +1, ±5 for −1) ±3 (square roots ±4 for +3, ±6 for −3) ±4 (square roots ±2 for +4, ±3 for −4). Thus, in the Paley graph, we form a vertex for each of the integers in the range [0,12], and connect each such integer x to six neighbors: x ± 1 (mod 13), x ± 3 (mod 13), and x ± 4 (mod 13). Properties The Paley graphs are self-complementary: the complement of any Paley graph is isomorphic to it. One isomorphism is via the mapping that takes a vertex x to gx (mod q), where g is any nonresidue mod q. Paley graphs are strongly regular graphs, with parameters srg(q, (q − 1)/2, (q − 5)/4, (q − 1)/4). This in fact follows from the fact that the graph is arc-transitive and self-complementary. The strongly regular graphs with parameters of this form (for an arbitrary q) are called conference graphs, so the Paley graphs form an infinite family of conference graphs. The adjacency matrix of a conference graph, such as a Paley graph, can be used to construct a conference matrix, and vice versa. These are matrices whose coefficients are ±1, with zero on the diagonal, that give a scalar multiple of the identity matrix when multiplied by their transpose. The eigenvalues of Paley graphs are (q − 1)/2 (with multiplicity 1) and (−1 ± √q)/2 (both with multiplicity (q − 1)/2). They can be calculated using the quadratic Gauss sum or by using the theory of strongly regular graphs. If q is prime, the isoperimetric number of the Paley graph is known to be of the order of q/4, making Paley graphs good expander graphs. When q is prime, the associated Paley graph is a Hamiltonian circulant graph.
Paley graphs are quasi-random: the number of times each possible constant-order graph occurs as a subgraph of a Paley graph is (in the limit for large q) the same as for random graphs, and large sets of vertices have approximately the same number of edges as they would in random graphs. The Paley graph of order 9 is a locally linear graph, a rook's graph, and the graph of the 3-3 duoprism. The Paley graph of order 13 has book thickness 4 and queue number 3. The Paley graph of order 17 is the unique largest graph G such that neither G nor its complement contains a complete 4-vertex subgraph. It follows that the Ramsey number R(4, 4) = 18. The Paley graph of order 101 is currently the largest known graph G such that neither G nor its complement contains a complete 6-vertex subgraph. Sasukara et al. (1993) use Paley graphs to generalize the construction of the Horrocks–Mumford bundle. Paley digraphs Let q be a prime power such that q = 3 (mod 4). Thus, the finite field of order q, Fq, has no square root of −1. Consequently, for each pair (a,b) of distinct elements of Fq, either a − b or b − a, but not both, is a square. The Paley digraph is the directed graph with vertex set V = Fq and arc set A = {(a,b) : b − a is a nonzero square in Fq}. The Paley digraph is a tournament because each pair of distinct vertices is linked by an arc in one and only one direction. The Paley digraph leads to the construction of some antisymmetric conference matrices and biplane geometries. Genus The six neighbors of each vertex in the Paley graph of order 13 are connected in a cycle; that is, the graph is locally cyclic. Therefore, this graph can be embedded as a Whitney triangulation of a torus, in which every face is a triangle and every triangle is a face. More generally, if any Paley graph of order q could be embedded so that all its faces are triangles, we could calculate the genus of the resulting surface via the Euler characteristic as (q^2 − 13q + 24)/24. Bojan Mohar conjectures that the minimum genus of a surface into which a Paley graph can be embedded is near this bound in the case that q is a square, and questions whether such a bound might hold more generally. Specifically, Mohar conjectures that the Paley graphs of square order can be embedded into surfaces with genus (q^2 − 13q + 24)(1 + o(1))/24, where the o(1) term can be any function of q that goes to zero in the limit as q goes to infinity. White finds embeddings of the Paley graphs of order q ≡ 1 (mod 8) that are highly symmetric and self-dual, generalizing a natural embedding of the Paley graph of order 9 as a 3×3 square grid on a torus. However, the genus of White's embeddings is approximately three times higher than Mohar's conjectured bound. References Further reading External links Number theory Parametric families of graphs Regular graphs Strongly regular graphs
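The worked example for q = 13 above is easy to reproduce and check programmatically. A minimal sketch using only modular arithmetic; the function names are ours, not from any standard library.

```python
# Build the Paley graph of order q (q prime, q = 1 mod 4) as a dict
# mapping each vertex to its neighbor set, then verify two properties.
def paley_graph(q):
    squares = {(x * x) % q for x in range(1, q)}    # nonzero squares mod q
    return {v: {(v + s) % q for s in squares} for v in range(q)}

G = paley_graph(13)
print(sorted(G[0]))        # [1, 3, 4, 9, 10, 12], i.e. 0 +/- {1, 3, 4} mod 13

# Self-complementarity: multiplying every vertex by a nonresidue g swaps
# edges and non-edges (g = 2 is a nonresidue mod 13).
g = 2
swaps = all(((g * u) % 13 in G[(g * v) % 13]) != (u in G[v])
            for u in range(13) for v in range(13) if u != v)
print(swaps)               # True
```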
Paley graph
Mathematics
1,457
78,165,197
https://en.wikipedia.org/wiki/Zelandopam
Zelandopam (developmental code names YM-435, MYD-37) is a selective dopamine D1-like receptor agonist related to fenoldopam which was under development in Japan for the treatment of hypertension and heart failure but was never marketed. The drug was being developed for use by intravenous administration. The development of zelandopam appears to have been discontinued by the early 2000s. It was first described in the scientific literature by 1991. References Abandoned drugs Catechols D1-receptor agonists Peripherally selective drugs Tetrahydroisoquinolines Tetrols
Zelandopam
Chemistry
129
53,375,072
https://en.wikipedia.org/wiki/Social%20profiling
Social profiling is the process of constructing a social media user's profile using his or her social data. In general, profiling refers to the data science process of generating a person's profile with computerized algorithms and technology. With the proliferation of popular social networks, there are various platforms for sharing this information, including but not limited to LinkedIn, Google+, Facebook and Twitter. Social profile and social data A person's social data refers to the personal data that they generate either online or offline (for more information, see social data revolution). A large amount of these data, including one's language, location and interests, is shared through social media and social networks. Users join multiple social media platforms and their profiles across these platforms can be linked using different methods to obtain their interests, locations, content, and friend lists. Meeting users' expectations for information collection is becoming more challenging because of the "noise" generated by explosively increasing online data, which affects the process of information collection. Social profiling is an emerging approach to overcome the challenges faced in meeting users' demands by introducing the concept of personalized search while taking into consideration user profiles generated from social network data. A study reviews and classifies research inferring users' social profile attributes from social media data as individual and group profiling. The existing techniques, along with the utilized data sources, limitations, and challenges, were highlighted. The prominent approaches adopted include machine learning, ontology, and fuzzy logic. Social media data from Twitter and Facebook have been used by most of the studies to infer the social attributes of users. The literature showed that user social attributes, including age, gender, home location, wellness, emotion, opinion, relation, and influence, still need to be explored. Personalized meta-search engines Ever-increasing online content has reduced the proficiency of centralized search engines, which can no longer satisfy users' demand for information. A possible solution that would increase the coverage of search results would be meta-search engines, an approach that collects information from numerous centralized search engines. A new problem thus emerges: too much data and too much noise are generated in the collection process. Therefore, a new technique called personalized meta-search engines was developed. It makes use of a user's profile (largely social profile) to filter the search results. A user's profile can be a combination of a number of things, including but not limited to "a user's manual selected interests, user's search history", and personal social network data. Social media profiling According to Samuel D. Warren II and Louis Brandeis (1890), disclosure of private information and the misuse of it can hurt people's feelings and cause considerable damage in people's lives. Social networks provide people access to intimate online interactions; therefore, information access control, information transactions, privacy issues, connections and relationships on social media have become important research fields and are subjects of concern to the public.
Ricard Fogues and other co-authors state that "any privacy mechanism has at its base an access control" that dictates "how permissions are given, what elements can be private, how access rules are defined, and so on". Current access controls for social media accounts tend to be very simplistic: there is very limited diversity in the categories of relationships for social network accounts. Users' relationships to others are, on most platforms, only categorized as "friend" or "non-friend", and people may leak important information to "friends" inside their social circle who are not necessarily the users they consciously want to share the information with. The sections below are concerned with social media profiling and what profiling information on social media accounts can achieve. Privacy leaks A lot of information is voluntarily shared on online social networks, such as photos and updates on life activities (new job, hobbies, etc.). People assume that their social network accounts on different platforms will not be linked as long as they do not grant permission for such links. However, according to Diane Gan, information gathered online enables "target subjects to be identified on other social networking sites such as Foursquare, Instagram, LinkedIn, Facebook and Google+, where more personal information was leaked". The majority of social networking platforms use the "opt out approach" for their features. If users wish to protect their privacy, it is their own responsibility to check and change the privacy settings, as a number of them are set to a default option. Major social network platforms have developed geo-tag functions that are in popular usage. This is concerning because 39% of users have experienced profiling hacking; 78% of burglars have used major social media networks and Google Street-view to select their victims; and an astonishing 54% of burglars attempted to break into empty houses when people posted their status updates and geo-locations. Facebook Formation and maintenance of social media accounts and their relationships with other accounts are associated with various social outcomes. As of 2015, customer relationship management is essential for many firms and is partially done through Facebook. Before the emergence and prevalence of social media, customer identification was primarily based upon information that a firm could directly acquire: for example, through a customer's purchasing process or the voluntary act of completing a survey or loyalty program. However, the rise of social media has greatly changed the approach to building a customer's profile/model based on available data. Marketers now increasingly seek customer information through Facebook; this may include a variety of information users disclose to all users or a subset of users on Facebook: name, gender, date of birth, e-mail address, sexual orientation, marital status, interests, hobbies, favorite sports team(s), favorite athlete(s), or favorite music, and more importantly, Facebook connections. However, due to the privacy policy design, acquiring true information on Facebook is no trivial task. Often, Facebook users either refuse to disclose true information (sometimes using pseudonyms) or set information to be visible only to friends; Facebook users who "LIKE" a brand's page are also hard to identify.
To do online profiling of users and cluster users, marketers and companies can and will access the following kinds of data: gender, the IP address and city of each user through the Facebook Insight page, who "LIKED" a certain user, a list of all the pages that a person "LIKED" (transaction data), other people that a user follows (even beyond the first 500, which ordinary users cannot see) and all the publicly shared data. Twitter First launched on the Internet in March 2006, Twitter is a platform on which users can connect and communicate with any other user in just 280 characters. Like Facebook, Twitter is also a crucial channel through which users leak important information, often unconsciously, that can be accessed and collected by others. According to Rachel Nuwer, in a sample of 10.8 million tweets by more than 5,000 users, the posted and publicly shared information was enough to reveal a user's income range. Daniel Preoţiuc-Pietro, a postdoctoral researcher from the University of Pennsylvania, and his colleagues were able to categorize 90% of users into corresponding income groups. Their collected data, after being fed into a machine-learning model, generated reliable predictions on the characteristics of each income group. The mobile app called Streamd.in displays live tweets on Google Maps by using geo-location details attached to the tweet, and traces the user's movement in the real world. Profiling photos on social network The advent and universality of social media networks have boosted the role of images and visual information dissemination. Many types of visual information on social media transmit messages from the author, location information and other personal information. In a study by Cristina Segalin, Dong Seon Cheng and Marco Cristani, the authors found that profiling the photos users post can reveal personal traits such as personality and mood. In the study, convolutional neural networks (CNNs) are introduced. They build on the main characteristics of computational aesthetics (CA) (emphasizing "computational methods", "human aesthetic point of view", and "the need to focus on objective approaches") defined by Hoenig (Hoenig, 2005). This tool can extract and identify content in photos. Tags In a study called "A Rule-Based Flickr Tag Recommendation System", the author suggests personalized tag recommendations, largely based on user profiles and other web resources. It has proven to be useful in many aspects: "web content indexing", "multimedia data retrieval", and enterprise Web searches. Delicious Flickr Zooomr Marketing In 2011, marketers and retailers were increasing their market presence by creating their own pages on social media, on which they posted information, asked people to like and share to enter contests, and much more. Studies in 2011 showed that, on average, a person spent about 23 minutes per day on a social networking site. Therefore, companies small and large are investing in gathering user behavior information, ratings, reviews, and more. Until 2006, online communications were not content-led in terms of the amount of time people spent online. However, content sharing and creation have since become the primary online activity of general social media users, and that has forever changed online marketing.
In the book Advanced Social Media Marketing, the author gives an example of how a New York wedding planner might identify his audience when marketing on Facebook. Some of these categories may include: (1) people who live in the United States; (2) people who live within 50 miles of New York; (3) people aged 21 and older; (4) engaged females. Whether you choose to pay cost per click or cost per impression/view, "the cost of Facebook Marketplace ads and Sponsored Stories is set by your maximum bid and the competition for the same audiences". The cost of clicks is usually $0.50–1.50 each. Tools Klout Klout is a popular online tool that focuses on assessing a user's social influence by social profiling. It takes several social media platforms (such as Facebook, Twitter, etc.) and numerous aspects into account and generates a user's score from 1 to 100. Regardless of one's number of likes for a post or connections on LinkedIn, social media contains plentiful personal information. Klout generates a single score that indicates a person's influence. In a study called "How Much Klout do You Have...A Test of System Generated Cues on Source Credibility" by Chad Edwards, Klout scores were shown to influence people's perceived credibility. As the Klout Score has become a popular combined-into-one-score method of assessing people's influence, it can be a convenient tool and a biased one at the same time. A study by David Westerman of how social media followers influence people's judgments illustrates the possible bias that Klout may contain. In one study, participants were asked to view six identical mock Twitter pages with only one major independent variable: page followers. Results show that pages with too many or too few followers both decreased in credibility, despite their similar content. The Klout score may be subject to the same bias. While it is sometimes used during the recruitment process, it remains controversial. Kred Kred not only assigns each user an influence score, but also allows each user to claim a Kred profile and Kred account. Through this platform, each user can view how top influencers engage with their online community and how each of their online actions impacts their influence scores. Several suggestions that Kred gives to its audience about increasing influence are: (1) be generous with your audience, feel comfortable sharing content from your friends and tweeting at others; (2) join an online community; (3) create and share meaningful content; (4) track your progress online. Follower Wonk Follower Wonk is specifically targeted towards Twitter analytics; it helps users to understand follower demographics and optimize their activities to find which activity attracts the most positive feedback from followers. Keyhole Keyhole is a hashtag tracking and analytics service that tracks Instagram, Twitter and Facebook hashtag data. It allows you to track which top influencers are using a certain hashtag, along with other demographic information about the hashtag. When you enter a hashtag on its website, it automatically samples users that recently used this tag, which allows users to analyze each hashtag they are interested in. Online activist social profile The prevalence of the Internet and social media has provided online activists with both a new platform for activism and their most popular tool. While online activism might stir up great controversy and trends, few people actually participate in or sacrifice for relevant events.
The profile of online activists has therefore become an interesting topic to analyse. In a study by Harp and co-authors of online activists in China, Latin America and the United States, the majority of online activists in Latin America and China were male, with a median income of $10,000 or less, while the majority of online activists in the United States were female, with a median income of $30,000–$69,999; the education level of online activists in the United States tended to be postgraduate, while activists in other countries had lower education levels. A closer examination of their online shared content shows that the most shared information online includes five types: To fundraise: Out of the three countries, China's activists post the most fundraising content. To post links: Latin American activists do the most link posting. To promote debate or discussion: Both Latin America's and China's activists post more content to promote debate or discussion than American activists do. To post information such as announcements and news: American activists post more such content than the activists from other countries. To communicate with journalists: In this category, China's activists take the lead. Social credit score in China The Chinese government hopes to establish a "social-credit system" that aims to score the "financial creditworthiness of citizens", social behavior and even political behavior. This system will combine big data and social profiling technologies. According to Celia Hatton from BBC News, everyone in China will be expected to enroll in a national database that includes and automatically calculates fiscal information, political behavior, social behavior and daily life, including minor traffic violations, producing a single score that evaluates a citizen's trustworthiness. Credibility scores, social influence scores and other comprehensive evaluations of people are not rare in other countries. However, China's "social-credit system" remains controversial, as this single score can be a reflection of every aspect of a person. Indeed, "much about the social-credit system remains unclear". How would companies be limited by the credit score system in China? Although the implementation of the social credit score remains controversial in China, the Chinese government aims to fully implement this system by 2018. According to Jake Laband (the deputy director of the Beijing office of the US-China Business Council), low credit scores will "limit eligibility for financing, employment, and Party membership, as well restrict real estate transactions and travel." The social credit score will be affected not only by legal criteria but also by social criteria, such as contract breaking. However, this has been a great privacy concern for big companies due to the huge amount of data that will be analyzed by the system. See also Account verification Digital identity Online identity Online identity management Online presence management Online reputation Persona (user experience) Personal information Personal identity Real-name system Reputation management Reputation system Social media optimization User profile References Data mining Identity management Social information processing Social media Social networks
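The kind of income-group classification described in the Twitter section can be sketched with standard machine-learning tooling. This is only a toy illustration, not the model Preoţiuc-Pietro and colleagues actually used; the four-sample dataset is invented and the pipeline choices are assumptions.

```python
# Toy version of profiling users into income groups from their posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: (concatenated user tweets, income bracket).
tweets = [
    "quarterly earnings call board meeting portfolio",
    "shift swap overtime bus pass payday",
    "investment property tax adviser golf",
    "second job rent due coupons",
]
labels = ["high", "low", "high", "low"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)                     # learn word-income associations
print(model.predict(["board meeting then golf"]))   # ['high'] on toy data
```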
Social profiling
Technology
3,280
424,420
https://en.wikipedia.org/wiki/Accelerator%20physics
Accelerator physics is a branch of applied physics, concerned with designing, building and operating particle accelerators. As such, it can be described as the study of motion, manipulation and observation of relativistic charged particle beams and their interaction with accelerator structures by electromagnetic fields. It is also related to other fields: Microwave engineering (for acceleration/deflection structures in the radio frequency range). Optics with an emphasis on geometrical optics (beam focusing and bending) and laser physics (laser-particle interaction). Computer technology with an emphasis on digital signal processing; e.g., for automated manipulation of the particle beam. Plasma physics, for the description of intense beams. The experiments conducted with particle accelerators are not regarded as part of accelerator physics, but belong (according to the objectives of the experiments) to, e.g., particle physics, nuclear physics, condensed matter physics or materials physics. The types of experiments done at a particular accelerator facility are determined by characteristics of the generated particle beam such as average energy, particle type, intensity, and dimensions. Acceleration and interaction of particles with RF structures While it is possible to accelerate charged particles using electrostatic fields, as in a Cockcroft-Walton voltage multiplier, this method has limits given by electrical breakdown at high voltages. Furthermore, because electrostatic fields are conservative, the maximum voltage limits the kinetic energy that can be imparted to the particles. To circumvent this problem, linear particle accelerators operate using time-varying fields. Because these fields are controlled using hollow macroscopic structures through which the particles pass (imposing wavelength restrictions), the frequency of such acceleration fields lies in the radio frequency region of the electromagnetic spectrum. The space around a particle beam is evacuated to prevent scattering with gas atoms, requiring it to be enclosed in a vacuum chamber (or beam pipe). Due to the strong electromagnetic fields that follow the beam, it is possible for it to interact with any electrical impedance in the walls of the beam pipe. This may be in the form of a resistive impedance (i.e., the finite resistivity of the beam pipe material) or an inductive/capacitive impedance (due to the geometric changes in the beam pipe's cross section). These impedances will induce wakefields (a strong warping of the electromagnetic field of the beam) that can interact with later particles. Since this interaction may have negative effects, it is studied to determine its magnitude, and to determine any actions that may be taken to mitigate it.
A particle on the exact design trajectory (or design orbit) of the accelerator only experiences dipole field components, while particles with transverse position deviation are re-focused to the design orbit. For preliminary calculations, neglecting all field components higher than quadrupolar, an inhomogeneous Hill differential equation, x''(s) + k(s) x(s) = (1/R) Δp/p, can be used as an approximation, with a non-constant focusing force k(s), including strong focusing and weak focusing effects, the relative deviation from the design beam impulse Δp/p, the trajectory radius of curvature R, and the design path length s, thus identifying the system as a parametric oscillator. Beam parameters for the accelerator can then be calculated using ray transfer matrix analysis; e.g., a quadrupolar field is analogous to a lens in geometrical optics, having similar properties regarding beam focusing (but obeying Earnshaw's theorem). The general equations of motion originate from relativistic Hamiltonian mechanics, in almost all cases using the paraxial approximation. Even in the cases of strongly nonlinear magnetic fields, and without the paraxial approximation, a Lie transform may be used to construct an integrator with a high degree of accuracy. Modeling Codes There are many different software packages available for modeling the different aspects of accelerator physics. One must model the elements that create the electric and magnetic fields, and then one must model the charged particle evolution within those fields. Beam diagnostics A vital component of any accelerator are the diagnostic devices that allow various properties of the particle bunches to be measured. A typical machine may use many different types of measurement devices in order to measure different properties. These include (but are not limited to) Beam Position Monitors (BPMs) to measure the position of the bunch, screens (fluorescent screens, Optical Transition Radiation (OTR) devices) to image the profile of the bunch, wire-scanners to measure its cross-section, and toroids or ICTs to measure the bunch charge (i.e., the number of particles per bunch). While many of these devices rely on well understood technology, designing a device capable of measuring a beam for a particular machine is a complex task requiring much expertise. Not only is a full understanding of the physics of the operation of the device necessary, but it is also necessary to ensure that the device is capable of measuring the expected parameters of the machine under consideration. Success of the full range of beam diagnostics often underpins the success of the machine as a whole. Machine tolerances Errors in the alignment of components, field strength, etc., are inevitable in machines of this scale, so it is important to consider the tolerances under which a machine may operate. Engineers will provide the physicists with expected tolerances for the alignment and manufacture of each component to allow full physics simulations of the expected behaviour of the machine under these conditions. In many cases it will be found that the performance is degraded to an unacceptable level, requiring either re-engineering of the components, or the invention of algorithms that allow the machine performance to be 'tuned' back to the design level. This may require many simulations of different error conditions in order to determine the relative success of each tuning algorithm, and to allow recommendations for the collection of algorithms to be deployed on the real machine.
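The ray transfer matrix analysis mentioned above is straightforward to demonstrate. Below is a minimal sketch of a FODO-style cell built from thin-lens quadrupole and drift matrices; the element lengths and focal strengths are invented, and real design codes track far more effects.

```python
import numpy as np

def drift(L):
    """Field-free drift of length L (metres) acting on (x, x')."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (metres); f < 0 defocuses."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# One FODO cell: focusing quad, drift, defocusing quad, drift.
f, L = 2.0, 1.0                      # hypothetical values
M = drift(L) @ thin_quad(-f) @ drift(L) @ thin_quad(f)

# Periodic motion is stable (particles oscillate rather than diverge)
# when |trace M| < 2, the standard condition for a periodic lattice.
print(np.trace(M), abs(np.trace(M)) < 2.0)     # 1.75 True

# Track a particle with a 1 mm transverse offset through five cells.
state = np.array([1e-3, 0.0])                  # (offset x, angle x')
for _ in range(5):
    state = M @ state
print(state)
```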
See also Particle accelerator Significant publications for accelerator physics References External links United States Particle Accelerator School UCB/LBL Beam Physics site BNL page on The Alternating Gradient Concept Experimental particle physics
Accelerator physics
Physics
1,353
21,467,370
https://en.wikipedia.org/wiki/Moshe%20Zakai
Moshe Zakai (December 22, 1926 – November 27, 2015) was a Distinguished Professor at the Technion, Israel in electrical engineering, a member of the Israel Academy of Sciences and Humanities and a Rothschild Prize winner. Biography Moshe Zakai was born in Sokółka, Poland, to his parents Rachel and Eliezer Zakheim, with whom he immigrated to Israel in 1936. He received the BSc degree in electrical engineering from the Technion – Israel Institute of Technology in 1951. He joined the scientific department of the Israel Ministry of Defense, where he was assigned to research and development of radar systems. From 1956 to 1958, he did graduate work at the University of Illinois on an Israeli Government Fellowship, and was awarded the PhD in electrical engineering. He then returned to the scientific department as head of the communication research group. In 1965, he joined the faculty of the Technion as an associate professor. In 1969, he was promoted to the rank of professor and in 1970, he was appointed the holder of the Fondiller Chair in Telecommunication. He was appointed distinguished professor in 1985. From 1970 until 1973, he served as the dean of the faculty of Electrical Engineering, and from 1976 to 1978 he served as vice president of academic affairs. He retired in 1998 as distinguished professor emeritus. Moshe Zakai was married to Shulamit (Mita) Briskman; they have 3 children and 12 grandchildren. Major awards 1973 Fellow of the Institute of Electrical and Electronics Engineers (IEEE) 1988 Fellow of the Institute of Mathematical Statistics 1989 Foreign member of the US National Academy of Engineering 1993 Member of the Israel Academy of Sciences and Humanities 1993 The IEEE Control Systems Award 1994 The Rothschild Prize in Engineering Research Background Zakai's main research concentrated on the study of the theory of stochastic processes and its application to information and control problems; namely, problems of noise in communication, radar and control systems. The basic classes of random processes which represent the noise in such systems are known as "white noise" and the "Wiener process", where white noise is "something like a derivative" of the Wiener process. Since these processes vary quickly with time, the classical differential and integral calculus is not applicable to such processes. In the 1940s Kiyoshi Itō developed a stochastic calculus (the Ito calculus) for such random processes. The relation between classical and Ito calculi From the results of Ito it became clear, back in the 1950s, that if a sequence of smooth functions which represent the input to a physical system converges to something like a Brownian motion, then the sequence of outputs of the system does not converge in the classical sense. Several papers written by Eugene Wong and Zakai clarified the relation between the two approaches. This opened up the way to the application of the Ito calculus to problems in physics and engineering. These results are often referred to as Wong-Zakai corrections or theorems. Nonlinear filtering The solution to the problem of the optimal filtering of a wide class of linear dynamical systems is known as the Kalman filter. This led to the same problem for nonlinear dynamical systems.
The results for this case were highly complicated and were initially studied by Stratonovich in 1959–1960 and later by Kushner in 1964, leading to the Kushner-Stratonovich equation, a non-linear stochastic partial differential equation (SPDE) for the conditional probability density representing the optimal filter. Around 1967, Zakai derived a considerably simpler SPDE for an unnormalized version of the optimal filter density. It is known as the Zakai equation, and it has the great advantage of being a linear SPDE. The Zakai equation has been the starting point for further research work in this field. Comparing practical solutions with the optimal solution In many cases the optimal design of communication or radar systems operating under noise is too complicated to be practical, while practical solutions are known. In such cases it is extremely important to know how close the practical solution is to the theoretically optimal one. Extension of the Ito calculus to the two-parameter processes White noise and Brownian motion (the Wiener process) are functions of a single parameter, namely time. For problems such as rough surfaces it is necessary to extend the Ito calculus to two-parameter Brownian sheets. Several papers which he wrote jointly with Wong extend the Ito integral to a "two-parameter" time. They also showed that every functional of the Brownian sheet can be represented as an extended integral. The Malliavin calculus and its application In addition to the Ito calculus, Paul Malliavin developed in the 1970s a "stochastic calculus of variations", now known as the Malliavin calculus. It turned out that in this setup it is possible to define a stochastic integral which will include the Ito integral. The papers of Zakai with David Nualart, Ali Süleyman Üstünel and Zeitouni promoted the understanding and applicability of the Malliavin calculus. The monograph of Üstünel and Zakai deals with the application of the Malliavin calculus to derive relations between the Wiener process and other processes which are in some sense "similar" to the probability law of the Wiener process. In the last decade he studied transformations which are in some sense a "rotation" of the Wiener process, and with Üstünel extended to some general cases results of information theory which were known for simpler spaces. Further information On his life and research, see pages xi–xiv of the volume in honor of Zakai's 65th birthday. For the list of publications until 1990, see pages xv–xx. For publications between 1990 and 2000, see [17]. For later publications search for M Zakai in arXiv. References Israeli scientists Israeli inventors Jewish scientists Academic staff of Technion – Israel Institute of Technology Members of the Israel Academy of Sciences and Humanities Israeli Jews Polish emigrants to Israel 1926 births 2015 deaths Mathematical analysts Foreign associates of the National Academy of Engineering
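For reference, the equation now known as the Zakai equation can be stated explicitly. The following is the standard textbook form for a scalar signal with unit noise intensities; the notation is ours and is not taken from Zakai's original paper.

```latex
% Signal and observation model:
%   dx_t = f(x_t)\,dt + dw_t   (state, driven by a Wiener process w)
%   dy_t = h(x_t)\,dt + dv_t   (observation, with independent noise v)
% The unnormalized conditional density q(x,t) of x_t given the
% observations up to time t satisfies the (linear) Zakai equation
\[
  dq(x,t) = L^{*} q(x,t)\,dt + h(x)\,q(x,t)\,dy_t ,
\]
% where L^{*} is the adjoint of the generator of the signal process:
\[
  L^{*}q = \tfrac{1}{2}\,\frac{\partial^{2} q}{\partial x^{2}}
           - \frac{\partial}{\partial x}\bigl(f(x)\,q\bigr).
\]
% Linearity in q is the key simplification over the nonlinear
% Kushner--Stratonovich equation for the normalized density.
```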
Moshe Zakai
Mathematics
1,202
62,112,877
https://en.wikipedia.org/wiki/Dependent%20random%20choice
In mathematics, dependent random choice is a probabilistic technique that shows how to find a large set of vertices in a dense graph such that every small subset of vertices has many common neighbors. It is a useful tool to embed a graph into another graph with many edges. Thus it has its application in extremal graph theory, additive combinatorics and Ramsey theory. Statement of theorem Let u, n, r, m, t be positive integers, and suppose: n(d/n)^t − C(n,r)(m/n)^t ≥ u. Every graph on n vertices with average degree d (that is, with at least nd/2 edges) contains a subset U of vertices with |U| ≥ u such that for all S ⊆ U with |S| = r, S has at least m common neighbors. Proof The basic idea is to choose the set of vertices randomly. However, instead of choosing each vertex uniformly at random, the procedure randomly chooses a list of t vertices first and then chooses their common neighbors as the set of vertices. The hope is that in this way, the chosen set would be more likely to have more common neighbors. Formally, let T = (v_1, ..., v_t) be a list of t vertices chosen uniformly at random from V(G) with replacement (allowing repetition). Let A be the common neighborhood of T. The expected value of |A| is E[|A|] = Σ_v (deg(v)/n)^t ≥ n(d/n)^t, by convexity. For every r-element subset S of V(G), A contains S if and only if T is contained in the common neighborhood of S, which occurs with probability (|N(S)|/n)^t. An S is bad if it has less than m common neighbors. Then each fixed bad r-element subset S is contained in A with probability less than (m/n)^t. Therefore by linearity of expectation, E[number of bad r-element subsets of A] < C(n,r)(m/n)^t. To eliminate bad subsets, we exclude one element in each bad subset. The number of remaining elements is at least |A| − (number of bad subsets of A), whose expected value is at least n(d/n)^t − C(n,r)(m/n)^t ≥ u. Consequently, there exists a choice of T such that there are at least u elements in A remaining after getting rid of all bad r-element subsets. The set U of the remaining elements has the desired properties. Applications Turán numbers of a bipartite graph Dependent random choice can help find the Turán number. Using appropriate parameters, if H = A ∪ B is a bipartite graph in which all vertices in B have degree at most r, then the extremal number ex(n, H) ≤ cn^(2 − 1/r), where c only depends on H. Formally, with a = |A| and b = |B|, let c be a sufficiently large constant such that (2c)^r − (a + b)^r/r! ≥ a. If a graph G has at least cn^(2 − 1/r) edges then its average degree satisfies d ≥ 2cn^(1 − 1/r), so n(d/n)^r − C(n,r)((a + b)/n)^r ≥ (2c)^r − (a + b)^r/r! ≥ a, and so the assumption of dependent random choice holds with u = a, m = a + b and t = r. Hence, for each graph with at least cn^(2 − 1/r) edges, there exists a vertex subset U of size a satisfying that every r-subset of U has at least a + b common neighbors. H can then be embedded into G by embedding A into U arbitrarily and then embedding the vertices in B one by one: each vertex v in B has at most r neighbors in A, which shows that their images in U have at least a + b common neighbors. Thus v can be embedded into one of the common neighbors while avoiding collisions. This can be generalized to degenerate graphs using a variation of dependent random choice. Embedding a 1-subdivision of a complete graph DRC can be applied directly to show that if G is a graph on n vertices and at least εn^2 edges, then G contains a 1-subdivision of a complete graph with ε^(3/2)n^(1/2) vertices. This can be shown in a similar way to the above proof of the bound on Turán number of a bipartite graph. Indeed, setting a = ε^(3/2)n^(1/2) and applying DRC with u = a, r = 2, m = a^2 and a suitably large t, we have n(d/n)^t − C(n,2)(m/n)^t ≥ a (since the term n(d/n)^t = n(2ε)^t dominates), and so the DRC assumption holds. Since a 1-subdivision of the complete graph on a vertices is a bipartite graph with parts of size a and C(a,2), where every vertex in the second part has degree two, the embedding argument in the proof of the bound on Turán number of a bipartite graph produces the desired result. Variation A stronger version finds two subsets of vertices U_1, U_2 in a dense graph G so that every small subset of vertices in U_1 has a lot of common neighbors in U_2. Formally, let u, n, r, m, t be some positive integers and let ε > 0 be some real number.
Suppose that the following constraints hold: Then every graph on n vertices with at least εn^2 edges contains two subsets of vertices U_1, U_2 so that any r vertices in U_1 have at least m common neighbors in U_2. Extremal number of a degenerate bipartite graph Using this stronger statement, one can upper bound the extremal number of r-degenerate bipartite graphs: for each r-degenerate bipartite graph H with at most n vertices, the extremal number ex(n, H) is at most cn^(2 − 1/(4r)) for a constant c depending on H. Ramsey number of a degenerate bipartite graph This statement can be also applied to obtain an upper bound of the Ramsey number of degenerate bipartite graphs. If r is a fixed integer, then for every r-degenerate bipartite graph G on n vertices, the Ramsey number r(G) is of the order n^(1 + o(1)). References Further reading Dependent Random Choice - MIT Math Extremal graph theory Probabilistic arguments
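The hypothesis of the basic lemma is concrete enough to check numerically. A small sketch with invented parameters, showing for which t the guarantee n(d/n)^t − C(n,r)(m/n)^t is positive:

```python
from math import comb

def drc_guarantee(n, d, r, m, t):
    """Lower bound on surviving vertices in the dependent random choice
    argument: expected |A| minus expected number of bad r-subsets."""
    return n * (d / n) ** t - comb(n, r) * (m / n) ** t

# Invented parameters: n = 10**6 vertices, average degree d ~ n**0.9,
# asking every pair (r = 2) to have m = 1000 common neighbors.
n, d, r, m = 10**6, 10**5.4, 2, 10**3
for t in (2, 4, 6, 8):
    print(t, drc_guarantee(n, d, r, m, t))
# Negative for t = 2 (too many bad pairs), positive for t = 4, and
# shrinking again for larger t as (d/n)**t itself decays.
```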
Dependent random choice
Mathematics
894
29,405,971
https://en.wikipedia.org/wiki/Clathrus%20mauritianus
Clathrus mauritianus is a species of fungus in the stinkhorn family. It is found in Mauritius. References Phallales Fungi of Africa Fungi described in 1910 Fungi of Mauritius Fungus species
Clathrus mauritianus
Biology
42
61,920,919
https://en.wikipedia.org/wiki/Pseudoalgebra
In algebra, given a 2-monad T in a 2-category, a pseudoalgebra for T is a 2-categorical version of an algebra for T, satisfying the algebra laws only up to coherent isomorphisms. See also Operad Notes References Further reading External links https://ncatlab.org/nlab/show/pseudoalgebra+for+a+2-monad https://golem.ph.utexas.edu/category/2014/06/codescent_objects_and_coherenc.html Adjoint functors Abstract algebra Category theory
Pseudoalgebra
Mathematics
132
24,266
https://en.wikipedia.org/wiki/Profinite%20group
In mathematics, a profinite group is a topological group that is in a certain sense assembled from a system of finite groups. The idea of using a profinite group is to provide a "uniform", or "synoptic", view of an entire system of finite groups. Properties of the profinite group are generally speaking uniform properties of the system. For example, the profinite group is finitely generated (as a topological group) if and only if there exists d such that every group in the system can be generated by d elements. Many theorems about finite groups can be readily generalised to profinite groups; examples are Lagrange's theorem and the Sylow theorems. To construct a profinite group one needs a system of finite groups and group homomorphisms between them. Without loss of generality, these homomorphisms can be assumed to be surjective, in which case the finite groups will appear as quotient groups of the resulting profinite group; in a sense, these quotients approximate the profinite group. Important examples of profinite groups are the additive groups of p-adic integers and the Galois groups of infinite-degree field extensions. Every profinite group is compact and totally disconnected. A non-compact generalization of the concept is that of locally profinite groups. Even more general are the totally disconnected groups. Definition Profinite groups can be defined in either of two equivalent ways. First definition (constructive) A profinite group is a topological group that is isomorphic to the inverse limit of an inverse system of discrete finite groups. In this context, an inverse system consists of a directed set (I, ≤), an indexed family of finite groups (G_i), i ∈ I, each having the discrete topology, and a family of homomorphisms f_ij : G_j → G_i, defined for all i ≤ j, such that f_ii is the identity map on G_i and the collection satisfies the composition property f_ik = f_ij ∘ f_jk whenever i ≤ j ≤ k. The inverse limit is the set: lim G_i = {(g_i) ∈ ∏ G_i : f_ij(g_j) = g_i for all i ≤ j}, equipped with the relative product topology. One can also define the inverse limit in terms of a universal property. In categorical terms, this is a special case of a cofiltered limit construction. Second definition (axiomatic) A profinite group is a compact and totally disconnected topological group: that is, a topological group that is also a Stone space. Profinite completion Given an arbitrary group G, there is a related profinite group Ĝ, the profinite completion of G. It is defined as the inverse limit of the groups G/N, where N runs through the normal subgroups in G of finite index (these normal subgroups are partially ordered by inclusion, which translates into an inverse system of natural homomorphisms between the quotients). There is a natural homomorphism η : G → Ĝ, and the image of G under this homomorphism is dense in Ĝ. The homomorphism η is injective if and only if the group G is residually finite (i.e., the intersection ∩N reduces to the identity, where the intersection runs through all normal subgroups N of finite index). The homomorphism η is characterized by the following universal property: given any profinite group H and any continuous group homomorphism f : G → H, where G is given the smallest topology compatible with group operations in which its normal subgroups of finite index are open, there exists a unique continuous group homomorphism g : Ĝ → H with f = g ∘ η. Equivalence Any group constructed by the first definition satisfies the axioms in the second definition. Conversely, any group satisfying the axioms in the second definition can be constructed as an inverse limit according to the first definition using the inverse limit lim G/N, where N ranges through the open normal subgroups of G ordered by (reverse) inclusion.
If G is topologically finitely generated then it is in addition equal to its own profinite completion. Surjective systems In practice, the inverse system of finite groups is almost always surjective, meaning that all its maps are surjective. Without loss of generality, it suffices to consider only surjective systems since given any inverse system, it is possible to first construct its profinite group and then reconstruct it as its own profinite completion. Examples Finite groups are profinite, if given the discrete topology. The group Z_p of p-adic integers under addition is profinite (in fact procyclic). It is the inverse limit of the finite groups Z/p^nZ, where n ranges over all natural numbers and the natural maps Z/p^nZ → Z/p^mZ, for n ≥ m, are used for the limit process. The topology on this profinite group is the same as the topology arising from the p-adic valuation on Z_p. The group Ẑ of profinite integers is the profinite completion of Z. In detail, it is the inverse limit of the finite groups Z/nZ, where n = 1, 2, 3, ..., with the modulo maps Z/nZ → Z/mZ for m | n. This group is the product of all the groups Z_p, and it is the absolute Galois group of any finite field. The Galois theory of field extensions of infinite degree gives rise naturally to Galois groups that are profinite. Specifically, if L/K is a Galois extension, consider the group Gal(L/K) consisting of all field automorphisms of L that keep all elements of K fixed. This group is the inverse limit of the finite groups Gal(F/K), where F ranges over all intermediate fields such that F/K is a finite Galois extension. For the limit process, the restriction homomorphisms Gal(F2/K) → Gal(F1/K), for F1 ⊆ F2, are used. The topology obtained on Gal(L/K) is known as the Krull topology after Wolfgang Krull. It has been shown that every profinite group is isomorphic to one arising from the Galois theory of some field K, but one cannot (yet) control which field K will be in this case. In fact, for many fields K one does not know in general precisely which finite groups occur as Galois groups over K. This is the inverse Galois problem for a field K. (For some fields K the inverse Galois problem is settled, such as the field of rational functions in one variable over the complex numbers.) Not every profinite group occurs as an absolute Galois group of a field. The étale fundamental groups considered in algebraic geometry are also profinite groups, roughly speaking because the algebra can only 'see' finite coverings of an algebraic variety. The fundamental groups of algebraic topology, however, are in general not profinite: for any prescribed group, there is a 2-dimensional CW complex whose fundamental group equals it. The automorphism group of a locally finite rooted tree is profinite. Properties and facts Every product of (arbitrarily many) profinite groups is profinite; the topology arising from the profiniteness agrees with the product topology. The inverse limit of an inverse system of profinite groups with continuous transition maps is profinite and the inverse limit functor is exact on the category of profinite groups. Further, being profinite is an extension property. Every closed subgroup of a profinite group is itself profinite; the topology arising from the profiniteness agrees with the subspace topology. If N is a closed normal subgroup of a profinite group G, then the factor group G/N is profinite; the topology arising from the profiniteness agrees with the quotient topology. Since every profinite group G is compact Hausdorff, there exists a Haar measure on G, which allows us to measure the "size" of subsets of G, compute certain probabilities, and integrate functions on G. A subgroup of a profinite group is open if and only if it is closed and has finite index.
According to a theorem of Nikolay Nikolov and Dan Segal, in any topologically finitely generated profinite group (that is, a profinite group that has a dense finitely generated subgroup) the subgroups of finite index are open. This generalizes an earlier analogous result of Jean-Pierre Serre for topologically finitely generated pro-p groups. The proof uses the classification of finite simple groups. As an easy corollary of the Nikolov–Segal result above, any surjective discrete group homomorphism φ : G → H between profinite groups G and H is continuous as long as G is topologically finitely generated. Indeed, any open subgroup of H is of finite index, so its preimage in G is also of finite index, and hence it must be open. Suppose G and H are topologically finitely generated profinite groups that are isomorphic as discrete groups by an isomorphism φ. Then φ is bijective and continuous by the above result. Furthermore, φ⁻¹ is also continuous, so φ is a homeomorphism. Therefore the topology on a topologically finitely generated profinite group is uniquely determined by its algebraic structure. Ind-finite groups There is a notion of an ind-finite group, which is the conceptual dual to profinite groups; i.e. a group is ind-finite if it is the direct limit of an inductive system of finite groups. (In particular, it is an ind-group.) The usual terminology is different: a group is called locally finite if every finitely generated subgroup is finite. This is equivalent, in fact, to being 'ind-finite'. By applying Pontryagin duality, one can see that abelian profinite groups are in duality with locally finite discrete abelian groups. The latter are just the abelian torsion groups. Projective profinite groups A profinite group is projective if it has the lifting property for every extension. This is equivalent to saying that G is projective if for every surjective morphism from a profinite group H → G there is a section G → H. Projectivity for a profinite group G is equivalent to either of the two properties: the cohomological dimension cd(G) ≤ 1; for every prime p, the Sylow p-subgroups of G are free pro-p groups. Every projective profinite group can be realized as an absolute Galois group of a pseudo algebraically closed field. This result is due to Alexander Lubotzky and Lou van den Dries. Procyclic group A profinite group G is procyclic if it is topologically generated by a single element x; that is, if G is the closure of the subgroup generated by x. A topological group G is procyclic if and only if G is the product over p of groups G_p, where p ranges over some set of prime numbers and each G_p is isomorphic to either Z_p or Z/p^nZ. See also References Infinite group theory Topological groups
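The inverse-limit definition can be made concrete for the 2-adic integers from the examples section. A small sketch representing an element of the inverse limit of the groups Z/2^nZ as a coherent sequence of residues; the function names are ours.

```python
# An element of lim Z/2^n Z is a sequence (g_1, g_2, ...) with
# g_n in Z/2^n Z and g_{n+1} = g_n (mod 2^n): the transition maps
# "drop a binary digit", and coherence is exactly compatibility.
def truncations(x, depth):
    """Coherent sequence of residues representing the integer x in Z_2."""
    return [x % 2**n for n in range(1, depth + 1)]

def is_coherent(seq):
    """Check compatibility under the transition maps."""
    return all(seq[n + 1] % 2**(n + 1) == seq[n] for n in range(len(seq) - 1))

a = truncations(-1, 6)          # -1 = ...111111 in binary, 2-adically
print(a, is_coherent(a))        # [1, 3, 7, 15, 31, 63] True

# The group operation is componentwise, as in the product topology:
b = truncations(5, 6)
s = [(x + y) % 2**(n + 1) for n, (x, y) in enumerate(zip(a, b))]
print(s == truncations(4, 6))   # True: -1 + 5 = 4 in every quotient
```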
Profinite group
Mathematics
2,026
40,176
https://en.wikipedia.org/wiki/Joseph%20Priestley
Joseph Priestley (24 March 1733 – 6 February 1804) was an English chemist, Unitarian, natural philosopher, separatist theologian, grammarian, multi-subject educator and classical liberal political theorist. He published over 150 works and conducted experiments in several areas of science. Priestley is credited with the independent discovery of oxygen by the thermal decomposition of mercuric oxide, having isolated it in 1774. During his lifetime, Priestley's considerable scientific reputation rested on his invention of carbonated water, his writings on electricity, and his discovery of several "airs" (gases), the most famous being what Priestley dubbed "dephlogisticated air" (oxygen). Priestley's determination to defend phlogiston theory and to reject what would become the chemical revolution eventually left him isolated within the scientific community. Priestley's science was integral to his theology, and he consistently tried to fuse Enlightenment rationalism with Christian theism. In his metaphysical texts, Priestley attempted to combine theism, materialism, and determinism, a project that has been called "audacious and original". He believed that a proper understanding of the natural world would promote human progress and eventually bring about the Christian millennium. Priestley, who strongly believed in the free and open exchange of ideas, advocated toleration and equal rights for religious Dissenters, which also led him to help found Unitarianism in England. The controversial nature of Priestley's publications, combined with his outspoken support of the American Revolution and later the French Revolution, aroused public and governmental contempt, eventually forcing him to flee in 1791, first to London and then to the United States, after a mob burned down his Birmingham home and church. He spent his last ten years in Northumberland County, Pennsylvania. A scholar and teacher throughout his life, Priestley made significant contributions to pedagogy, including the publication of a seminal work on English grammar and books on history; he prepared some of the most influential early timelines. The educational writings were among Priestley's most popular works. Arguably his metaphysical works, however, had the most lasting influence, as they are now considered primary sources for utilitarianism by philosophers such as Jeremy Bentham, John Stuart Mill, and Herbert Spencer. Early life and education (1733–1755) Priestley was born in Birstall (near Batley) in the West Riding of Yorkshire, to an established English Dissenting family who did not conform to the Church of England. He was the oldest of six children born to Mary Swift and Jonas Priestley, a finisher of cloth. Priestley was sent to live with his grandfather around the age of one. He returned home five years later, after his mother died. When his father remarried in 1741, Priestley went to live with his aunt and uncle, the wealthy and childless Sarah (d. 1764) and John Keighley, from Fieldhead. Priestley was a precocious child—at the age of four, he could flawlessly recite all 107 questions and answers of the Westminster Shorter Catechism—and his aunt sought the best education for him, intending him to enter the ministry. During his youth, Priestley attended local schools, where he learned Greek, Latin, and Hebrew. Around 1749, Priestley became seriously ill and believed he was dying. Raised as a devout Calvinist, he believed a conversion experience was necessary for salvation, but doubted he had had one.
This emotional distress eventually led him to question his theological upbringing, causing him to reject election and to accept universal salvation. As a result, the elders of his home church, the Independent Upper Chapel of Heckmondwike, near Leeds, refused him admission as a full member. Priestley's illness left him with a permanent stutter and he gave up any thoughts of entering the ministry at that time. In preparation for joining a relative in trade in Lisbon, he studied French, Italian, and German in addition to Aramaic, and Arabic. He was tutored by the Reverend George Haggerstone, who first introduced him to higher mathematics, natural philosophy, logic, and metaphysics through the works of Isaac Watts, Willem 's Gravesande, and John Locke. Daventry Academy Priestley eventually decided to return to his theological studies and, in 1752, matriculated at Daventry, a Dissenting academy. Because he was already widely read, Priestley was allowed to omit the first two years of coursework. He continued his intense study; this, together with the liberal atmosphere of the school, shifted his theology further leftward and he became a Rational Dissenter. Abhorring dogma and religious mysticism, Rational Dissenters emphasised rational analysis of the natural world and the Bible. Priestley later wrote that the book that influenced him the most, save the Bible, was David Hartley's Observations on Man (1749). Hartley's psychological, philosophical, and theological treatise postulated a material theory of mind. Hartley aimed to construct a Christian philosophy in which both religious and moral "facts" could be scientifically proven, a goal that would occupy Priestley for his entire life. In his third year at Daventry, Priestley committed himself to the ministry, which he described as "the noblest of all professions". Needham Market and Nantwich (1755–1761) Robert Schofield, Priestley's major modern biographer, describes his first "call" in 1755 to the Dissenting parish in Needham Market, Suffolk, as a "mistake" for both Priestley and the congregation. Priestley yearned for urban life and theological debate, whereas Needham Market was a small, rural town with a congregation wedded to tradition. Attendance and donations dropped sharply when they discovered the extent of his heterodoxy. Although Priestley's aunt had promised her support if he became a minister, she refused any further assistance when she realised he was no longer a Calvinist. To earn extra money, Priestley proposed opening a school, but local families informed him that they would refuse to send their children. He also presented a series of scientific lectures titled "Use of the Globes" that was more successful. Priestley's Daventry friends helped him obtain another position and in 1758 he moved to Nantwich, Cheshire, living at Sweetbriar Hall in the town's Hospital Street; his time there was happier. The congregation cared less about Priestley's heterodoxy and he successfully established a school. Unlike many schoolmasters of the time, Priestley taught his students natural philosophy and even bought scientific instruments for them. Appalled at the quality of the available English grammar books, Priestley wrote his own: The Rudiments of English Grammar (1761). His innovations in the description of English grammar, particularly his efforts to dissociate it from Latin grammar, led 20th-century scholars to describe him as "one of the great grammarians of his time". 
After the publication of Rudiments and the success of Priestley's school, Warrington Academy offered him a teaching position in 1761. Warrington Academy (1761–1767) In 1761, Priestley moved to Warrington in Cheshire and assumed the post of tutor of modern languages and rhetoric at the town's Dissenting academy, although he would have preferred to teach mathematics and natural philosophy. He fitted in well at Warrington, and made friends quickly. These included the doctor and writer John Aikin, his sister the children's author Anna Laetitia Aikin, and the potter and businessman Josiah Wedgwood. Wedgwood met Priestley in 1762, after a fall from his horse. Wedgwood and Priestley met rarely, but exchanged letters, advice on chemistry, and laboratory equipment. Wedgwood eventually created a medallion of Priestley in cream-on-blue jasperware. On 23 June 1762, Priestley married Mary Wilkinson of Wrexham. Of his marriage, Priestley wrote: This proved a very suitable and happy connexion, my wife being a woman of an excellent understanding, much improved by reading, of great fortitude and strength of mind, and of a temper in the highest degree affectionate and generous; feeling strongly for others, and little for herself. Also, greatly excelling in every thing relating to household affairs, she entirely relieved me of all concern of that kind, which allowed me to give all my time to the prosecution of my studies, and the other duties of my station. On 17 April 1763, they had a daughter, whom they named Sarah after Priestley's aunt. Educator and historian All of the books Priestley published while at Warrington emphasised the study of history; Priestley considered it essential for worldly success as well as religious growth. He wrote histories of science and Christianity in an effort to reveal the progress of humanity and, paradoxically, the loss of a pure, "primitive Christianity". In his Essay on a Course of Liberal Education for Civil and Active Life (1765), Lectures on History and General Policy (1788), and other works, Priestley argued that the education of the young should anticipate their future practical needs. This principle of utility guided his unconventional curricular choices for Warrington's aspiring middle-class students. He recommended modern languages instead of classical languages and modern rather than ancient history. Priestley's lectures on history were particularly revolutionary; he narrated a providentialist and naturalist account of history, arguing that the study of history furthered the comprehension of God's natural laws. Furthermore, his millennial perspective was closely tied to his optimism regarding scientific progress and the improvement of humanity. He believed that each age would improve upon the previous and that the study of history allowed people to perceive and to advance this progress. Since the study of history was a moral imperative for Priestley, he also promoted the education of middle-class women, which was unusual at the time. Some scholars of education have described Priestley as the most important English writer on education between the 17th-century John Locke and the 19th-century Herbert Spencer. Lectures on History was well received and was employed by many educational institutions, such as New College at Hackney, Brown, Princeton, Yale, and Cambridge. Priestley designed two Charts to serve as visual study aids for his Lectures. These charts are in fact timelines; they have been described as the most influential timelines published in the 18th century. 
Both were popular for decades, and the trustees of Warrington were so impressed with Priestley's lectures and charts that they arranged for the University of Edinburgh to grant him a Doctor of Law degree in 1764. During this period Priestley also regularly delivered lectures on rhetoric that were later published in 1777 as A Course of Lectures on Oratory and Criticism. History of electricity The intellectually stimulating atmosphere of Warrington, often called the "Athens of the North" (of England) during the 18th century, encouraged Priestley's growing interest in natural philosophy. He gave lectures on anatomy and performed experiments regarding temperature with another tutor at Warrington, his friend John Seddon. Despite Priestley's busy teaching schedule, he decided to write a history of electricity. Friends introduced him to the major experimenters in the field in Britain—John Canton, William Watson, Timothy Lane, and the visiting Benjamin Franklin—who encouraged Priestley to perform the experiments he wanted to include in his history. Priestley also consulted with Franklin during the latter's kite experiments. In the process of replicating others' experiments, Priestley became intrigued by unanswered questions and was prompted to undertake experiments of his own design. (Impressed with his Charts and the manuscript of his history of electricity, Canton, Franklin, Watson, and Richard Price nominated Priestley for a fellowship in the Royal Society; he was accepted in 1766.) In 1767, the 700-page The History and Present State of Electricity was published to positive reviews. The first half of the text is a history of the study of electricity to 1766; the second and more influential half is a description of contemporary theories about electricity and suggestions for future research. The volume also contains extensive comments on Priestley's view that scientific inquiries should be presented together with all the reasoning in the discoverer's path, including false leads and mistakes. He contrasted this narrative approach with Newton's analytical, proof-like approach, which in his view did little to help future researchers continue the inquiry. Priestley reported some of his own discoveries in the second section, such as the conductivity of charcoal and other substances and the continuum between conductors and non-conductors. This discovery overturned what he described as "one of the earliest and universally received maxims of electricity", that only water and metals could conduct electricity. This and other experiments on the electrical properties of materials and on the electrical effects of chemical transformations demonstrated Priestley's early and ongoing interest in the relationship between chemical substances and electricity. Based on experiments with charged spheres, Priestley was among the first to propose that electrical force followed an inverse-square law, similar to Newton's law of universal gravitation. He did not generalise or elaborate on this, and the general law was enunciated by French physicist Charles-Augustin de Coulomb in the 1780s. Priestley's strength as a natural philosopher was qualitative rather than quantitative, and his observation of "a current of real air" between two electrified points would later interest Michael Faraday and James Clerk Maxwell as they investigated electromagnetism.
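In modern notation (not Priestley's own), the inverse-square dependence he proposed, and that Coulomb later established quantitatively, is written

    \[ F \;=\; k_e \, \frac{q_1 q_2}{r^2} , \]

where F is the force between two charges q_1 and q_2 separated by a distance r, and k_e is the constant of proportionality now called Coulomb's constant.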
Priestley's text became the standard history of electricity for over a century; Alessandro Volta (who later invented the battery), William Herschel (who discovered infrared radiation), and Henry Cavendish (who discovered hydrogen) all relied upon it. Priestley wrote a popular version of the History of Electricity for the general public titled A Familiar Introduction to the Study of Electricity (1768). He marketed the book with his brother Timothy, but unsuccessfully. Leeds (1767–1773) Perhaps prompted by Mary Priestley's ill health, or financial problems, or a desire to prove himself to the community that had rejected him in his childhood, Priestley moved with his family from Warrington to Leeds in 1767, and he became Mill Hill Chapel's minister. Two sons were born to the Priestleys in Leeds: Joseph, Junior, on 24 July 1768 and William three years later. Theophilus Lindsey, a rector at Catterick, Yorkshire, became one of Priestley's few friends in Leeds, of whom he wrote: "I never chose to publish any thing of moment relating to theology, without consulting him." Although Priestley had extended family living around Leeds, they do not appear to have communicated. Schofield conjectures that they considered him a heretic. Each year, Priestley travelled to London to consult with his close friend and publisher, Joseph Johnson, and to attend meetings of the Royal Society. Minister of Mill Hill Chapel When Priestley became its minister, Mill Hill Chapel was one of the oldest and most respected Dissenting congregations in England; however, during the early 18th century the congregation had fractured along doctrinal lines and was losing members to the charismatic Methodist movement. Priestley believed that he could strengthen the bonds of the congregation by educating the young people there. In his three-volume Institutes of Natural and Revealed Religion (1772–74), Priestley outlined his theories of religious instruction. More importantly, he laid out his belief in Socinianism. The doctrines he explicated would become the standards for Unitarians in Britain. This work marked a change in Priestley's theological thinking that is critical to understanding his later writings—it paved the way for his materialism and necessitarianism (the latter being the belief that a divine being acts in accordance with necessary metaphysical laws). Priestley's major argument in the Institutes was that the only revealed religious truths that could be accepted were those that matched one's experience of the natural world. Since his views of religion were tied deeply to his understanding of nature, the text's theism rested on the argument from design. The Institutes shocked and appalled many readers, primarily because it challenged basic Christian orthodoxies, such as the divinity of Christ and the miracle of the Virgin Birth. Methodists in Leeds penned a hymn asking God to "the Unitarian fiend expel / And chase his doctrine back to Hell." Priestley wanted to return Christianity to its "primitive" or "pure" form by eliminating the "corruptions" which had accumulated over the centuries. The fourth part of the Institutes, An History of the Corruptions of Christianity, became so long that he was forced to issue it separately in 1782. Priestley believed that the Corruptions was "the most valuable" work he ever published. 
In demanding that his readers apply the logic of the emerging sciences and comparative history to the Bible and Christianity, he alienated religious and scientific readers alike—scientific readers did not appreciate seeing science used in the defence of religion and religious readers dismissed the application of science to religion. Religious controversialist Priestley engaged in numerous political and religious pamphlet wars. According to Schofield, "he entered each controversy with a cheerful conviction that he was right, while most of his opponents were convinced, from the outset, that he was willfully and maliciously wrong. He was able, then, to contrast his sweet reasonableness to their personal rancor", but as Schofield points out Priestley rarely altered his opinion as a result of these debates. While at Leeds he wrote controversial pamphlets on the Lord's Supper and on Calvinist doctrine; thousands of copies were published, making them some of Priestley's most widely read works. Priestley founded the Theological Repository in 1768, a journal committed to the open and rational inquiry of theological questions. Although he promised to print any contribution, only like-minded authors submitted articles. He was, therefore, obliged to provide much of the journal's content himself. This material also became the basis for many of his later theological and metaphysical works. After only a few years, due to a lack of funds, he was forced to cease publishing the journal. However, he did revive it briefly in 1784 with similar results. Defender of Dissenters and political philosopher Many of Priestley's political writings supported the repeal of the Test and Corporation Acts, which restricted the rights of Dissenters. They could not hold political office, serve in the armed forces, or attend Oxford and Cambridge unless they subscribed to the Thirty-nine Articles of the Church of England. Dissenters repeatedly petitioned Parliament to repeal the Acts, arguing that they were being treated as second-class citizens. Priestley's friends, particularly other Rational Dissenters, urged him to publish a work on the injustices experienced by Dissenters; the result was his Essay on the First Principles of Government (1768). An early work of modern liberal political theory and Priestley's most thorough treatment of the subject, it—unusually for the time—distinguished political rights from civil rights with precision and argued for expansive civil rights. Priestley identified separate private and public spheres, contending that the government should have control only over the public sphere. Education and religion, in particular, he maintained, were matters of private conscience and should not be administered by the state. Priestley's later radicalism emerged from his belief that the British government was infringing upon these individual freedoms. Priestley also defended the rights of Dissenters against the attacks of William Blackstone, an eminent legal theorist, whose Commentaries on the Laws of England (1765–69) had become the standard legal guide. Blackstone's book stated that dissent from the Church of England was a crime and that Dissenters could not be loyal subjects. Furious, Priestley lashed out with his Remarks on Dr. Blackstone's Commentaries (1769), correcting Blackstone's interpretation of the law, his grammar (a highly politicised subject at the time), and history. 
Blackstone, chastened, altered subsequent editions of his Commentaries: he rephrased the offending passages and removed the sections claiming that Dissenters could not be loyal subjects, but he retained his description of Dissent as a crime. Natural philosopher: electricity, Optics, and carbonated water Although Priestley claimed that natural philosophy was only a hobby, he took it seriously. In his History of Electricity, he described the scientist as promoting the "security and happiness of mankind". Priestley's science was eminently practical, and he rarely concerned himself with theoretical questions; his model was his close friend, Benjamin Franklin. When he moved to Leeds, Priestley continued his electrical and chemical experiments (the latter aided by a steady supply of carbon dioxide from a neighbouring brewery). Between 1767 and 1770, he presented five papers to the Royal Society from these initial experiments; the first four papers explored coronal discharges and other phenomena related to electrical discharge, while the fifth reported on the conductivity of charcoals from different sources. His subsequent experimental work focused on chemistry and pneumatics. Priestley published the first volume of his projected history of experimental philosophy, The History and Present State of Discoveries Relating to Vision, Light and Colours (referred to as his Optics), in 1772. He paid careful attention to the history of optics and presented excellent explanations of early optics experiments, but his mathematical deficiencies caused him to dismiss several important contemporary theories. He followed the (corpuscular) particle theory of light, influenced by the works of Reverend John Rowning and others. Furthermore, he did not include any of the practical sections that had made his History of Electricity so useful to practising natural philosophers. Unlike his History of Electricity, it was not popular and had only one edition, although it was the only English book on the topic for 150 years. The hastily written text sold poorly; the cost of researching, writing, and publishing the Optics convinced Priestley to abandon his history of experimental philosophy. Priestley was considered for the position of astronomer on James Cook's second voyage to the South Seas, but was not chosen. Still, he contributed in a small way to the voyage: he provided the crew with a method for making carbonated water, which he erroneously speculated might be a cure for scurvy. He then published a pamphlet, Directions for Impregnating Water with Fixed Air (1772). Priestley did not exploit the commercial potential of carbonated water, but others, such as J. J. Schweppe, made fortunes from it. For his discovery of carbonated water Priestley has been labelled "the father of the soft drink", with the beverage company Schweppes regarding him as "the father of our industry". In 1773, the Royal Society recognised Priestley's achievements in natural philosophy by awarding him the Copley Medal. Priestley's friends wanted to find him a more financially secure position. In 1772, prompted by Richard Price and Benjamin Franklin, Lord Shelburne wrote to Priestley asking him to direct the education of his children and to act as his general assistant. Although Priestley was reluctant to sacrifice his ministry, he accepted the position, resigning from Mill Hill Chapel on 20 December 1772, and preaching his last sermon on 16 May 1773.
Calne (1773–1780) In 1773, the Priestleys moved to Calne in Wiltshire, and a year later Lord Shelburne and Priestley took a tour of Europe. According to Priestley's close friend Theophilus Lindsey, Priestley was "much improved by this view of mankind at large". Upon their return, Priestley easily fulfilled his duties as librarian and tutor. The workload was intentionally light, allowing him time to pursue his scientific investigations and theological interests. Priestley also became a political adviser to Shelburne, gathering information on parliamentary issues and serving as a liaison between Shelburne and the Dissenting and American interests. When the Priestleys' third son was born on 24 May 1777, they named him Henry at the lord's request. Materialist philosopher Priestley wrote his most important philosophical works during his years with Lord Shelburne. In a series of major metaphysical texts published between 1774 and 1780—An Examination of Dr. Reid's Inquiry into the Human Mind (1774), Hartley's Theory of the Human Mind on the Principle of the Association of Ideas (1775), Disquisitions relating to Matter and Spirit (1777), The Doctrine of Philosophical Necessity Illustrated (1777), and Letters to a Philosophical Unbeliever (1780)—he argues for a philosophy that incorporates four concepts: determinism, materialism, causation, and necessitarianism. By studying the natural world, he argued, people would learn how to become more compassionate, happy, and prosperous. Priestley strongly suggested that there is no mind-body duality, and put forth a materialist philosophy in these works; that is, one founded on the principle that everything in the universe is made of matter that we can perceive. He also contended that discussing the soul is impossible because it is made of a divine substance, and humanity cannot perceive the divine. Despite his separation of the divine from the mortal, this position shocked and angered many of his readers, who believed that such a duality was necessary for the soul to exist. Responding to Baron d'Holbach's Système de la Nature (1770) and David Hume's Dialogues Concerning Natural Religion (1779) as well as the works of the French philosophers, Priestley maintained that materialism and determinism could be reconciled with a belief in God. He criticised those whose faith was shaped by books and fashion, drawing an analogy between the scepticism of educated men and the credulity of the masses. Maintaining that humans had no free will, Priestley argued that what he called "philosophical necessity" (akin to absolute determinism) is consonant with Christianity, a position based on his understanding of the natural world. Like the rest of nature, man's mind is subject to the laws of causation, Priestley contended, but because a benevolent God created these laws, the world and the people in it will eventually be perfected. Evil is therefore only an imperfect understanding of the world. Although Priestley's philosophical work has been characterised as "audacious and original", it partakes of older philosophical traditions on the problems of free will, determinism, and materialism. For example, the 17th-century philosopher Baruch Spinoza argued for absolute determinism and absolute materialism. 
Like Spinoza and Priestley, Leibniz argued that human will was completely determined by natural laws; unlike them, Leibniz argued for a "parallel universe" of immaterial objects (such as human souls) so arranged by God that its outcomes agree exactly with those of the material universe. Leibniz and Priestley share an optimism that God has chosen the chain of events benevolently; however, Priestley believed that the events were leading to a glorious millennial conclusion, whereas for Leibniz the entire chain of events was optimal in and of itself, as compared with other conceivable chains of events. Founder of British Unitarianism When Priestley's friend Theophilus Lindsey decided to found a new Christian denomination that would not restrict its members' beliefs, Priestley and others hurried to his aid. On 17 April 1774, Lindsey held the first Unitarian service in Britain, at the newly formed Essex Street Chapel in London; he had even designed his own liturgy, of which many were critical. Priestley defended his friend in the pamphlet Letter to a Layman, on the Subject of the Rev. Mr. Lindsey's Proposal for a Reformed English Church (1774), claiming that only the form of worship had been altered, not its substance, and attacking those who followed religion as a fashion. Priestley attended Lindsey's church regularly in the 1770s and occasionally preached there. He continued to support institutionalised Unitarianism for the rest of his life, writing several Defenses of Unitarianism and encouraging the foundation of new Unitarian chapels throughout Britain and the United States. Experiments and Observations on Different Kinds of Air Priestley's years in Calne were the only ones in his life dominated by scientific investigations; they were also the most scientifically fruitful. His experiments were almost entirely confined to "airs", and out of this work emerged his most important scientific texts: the six volumes of Experiments and Observations on Different Kinds of Air (1774–86). These experiments helped repudiate the last vestiges of the theory of four elements, which Priestley attempted to replace with his own variation of phlogiston theory. According to that 18th-century theory, the combustion or oxidation of a substance corresponded to the release of a material substance, phlogiston. Priestley's work on "airs" is not easily classified. As historian of science Simon Schaffer writes, it "has been seen as a branch of physics, or chemistry, or natural philosophy, or some highly idiosyncratic version of Priestley's own invention". Furthermore, the volumes were both a scientific and a political enterprise for Priestley, in which he argues that science could destroy "undue and usurped authority" and that government has "reason to tremble even at an air pump or an electrical machine". Volume I of Experiments and Observations on Different Kinds of Air outlined several discoveries: "nitrous air" (nitric oxide, NO); "vapor of spirit of salt", later called "acid air" or "marine acid air" (anhydrous hydrochloric acid, HCl); "alkaline air" (ammonia, NH3); "diminished" or "dephlogisticated nitrous air" (nitrous oxide, N2O); and, most famously, "dephlogisticated air" (oxygen, O2) as well as experimental findings that showed plants revitalised enclosed volumes of air, a discovery that would eventually lead to the discovery of photosynthesis by Jan Ingenhousz. Priestley also developed a "nitrous air test" to determine the "goodness of air". 
Using a pneumatic trough, he would mix nitrous air with a test sample, over water or mercury, and measure the decrease in volume—the principle of eudiometry. After a small history of the study of airs, he explained his own experiments in an open and sincere style. As an early biographer writes, "whatever he knows or thinks he tells: doubts, perplexities, blunders are set down with the most refreshing candour." Priestley also described his cheap and easy-to-assemble experimental apparatus; his colleagues therefore believed that they could easily reproduce his experiments. Faced with inconsistent experimental results, Priestley employed phlogiston theory. This led him to conclude that there were only three types of "air": "fixed", "alkaline", and "acid". Priestley dismissed the burgeoning chemistry of his day. Instead, he focused on gases and "changes in their sensible properties", as had natural philosophers before him. He isolated carbon monoxide (CO), but apparently did not realise that it was a separate "air". Discovery of oxygen In August 1774 he isolated an "air" that appeared to be completely new, but he did not have an opportunity to pursue the matter because he was about to tour Europe with Shelburne. While in Paris, Priestley replicated the experiment for others, including French chemist Antoine Lavoisier. After returning to Britain in January 1775, he continued his experiments and discovered "vitriolic acid air" (sulphur dioxide, SO2). In March he wrote to several people regarding the new "air" that he had discovered in August. One of these letters was read aloud to the Royal Society, and a paper outlining the discovery, titled "An Account of further Discoveries in Air", was published in the Society's journal Philosophical Transactions. Priestley called the new substance "dephlogisticated air", which he made in the famous experiment by focusing the sun's rays on a sample of mercuric oxide. He first tested it on mice, who surprised him by surviving quite a while entrapped with the air, and then on himself, writing that it was "five or six times better than common air for the purpose of respiration, inflammation, and, I believe, every other use of common atmospherical air". He had discovered oxygen gas (O2). Priestley assembled his oxygen paper and several others into a second volume of Experiments and Observations on Air, published in 1776. He did not emphasise his discovery of "dephlogisticated air" (leaving it to Part III of the volume) but instead argued in the preface how important such discoveries were to rational religion. His paper narrated the discovery chronologically, relating the long delays between experiments and his initial puzzlements; thus, it is difficult to determine when exactly Priestley "discovered" oxygen. Such dating is significant as both Lavoisier and Swedish pharmacist Carl Wilhelm Scheele have strong claims to the discovery of oxygen as well, Scheele having been the first to isolate the gas (although he published after Priestley) and Lavoisier having been the first to describe it as purified "air itself entire without alteration" (that is, the first to explain oxygen without phlogiston theory). In his paper "Observations on Respiration and the Use of the Blood", Priestley was the first to suggest a connection between blood and air, although he did so using phlogiston theory. In typical Priestley fashion, he prefaced the paper with a history of the study of respiration. 
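In modern chemical notation, which postdates Priestley and is supplied here only for illustration, the mercuric oxide experiment described above is the thermal decomposition

    \[ 2\,\mathrm{HgO} \;\xrightarrow{\ \Delta\ }\; 2\,\mathrm{Hg} \;+\; \mathrm{O_2} , \]

and the "nitrous air test" rests on the reaction 2 NO + O2 → 2 NO2, whose product is absorbed by the water in the trough, so that the volume lost measures the oxygen content of the sample. Worked with modern atomic masses, the decomposition also exhibits the mass balance on which Lavoisier would later build his system: 433.2 g of mercuric oxide yields 401.2 g of mercury and 32.0 g of oxygen.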
A year later, clearly influenced by Priestley, Lavoisier was also discussing respiration at the Académie des sciences. Lavoisier's work began the long train of discovery that produced papers on oxygen respiration and culminated in the overthrow of phlogiston theory and the establishment of modern chemistry. Around 1779 Priestley and Shelburne – soon to be the 1st Marquess of Lansdowne – had a rupture, the precise reasons for which remain unclear. Shelburne blamed Priestley's health, while Priestley claimed Shelburne had no further use for him. Some contemporaries speculated that Priestley's outspokenness had hurt Shelburne's political career. Schofield argues that the most likely reason was Shelburne's recent marriage to Louisa Fitzpatrick—apparently, she did not like the Priestleys. Although Priestley considered moving to America, he eventually accepted Birmingham New Meeting's offer to be their minister. Both Priestley's and Shelburne's families upheld their Unitarian faith for generations. In December 2013, it was reported that Sir Christopher Bullock—a direct descendant of Shelburne's brother, Thomas Fitzmaurice (MP)—had married his wife, Lady Bullock, née Barbara May Lupton, at London's Unitarian Essex Church in 1917. Barbara Lupton was the second cousin of Olive Middleton, née Lupton, the great-grandmother of Catherine, Duchess of Cambridge. In 1914, Olive and Noel Middleton had married at Leeds' Mill Hill Chapel, which Priestley, as its minister, had once guided towards Unitarianism. Birmingham (1780–1791) In 1780 the Priestleys moved to Birmingham and spent a happy decade surrounded by old friends, until they were forced to flee in 1791 by religiously motivated mob violence in what became known as the Priestley Riots. Priestley accepted the ministerial position at New Meeting on the condition that he be required to preach and teach only on Sundays, so that he would have time for his writing and scientific experiments. As in Leeds, Priestley established classes for the youth of his parish, and by 1781 he was teaching 150 students. Because Priestley's New Meeting salary was only 100 guineas, friends and patrons donated money and goods to help continue his investigations. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1782. Chemical Revolution Many of the friends that Priestley made in Birmingham were members of the Lunar Society, a group of manufacturers, inventors, and natural philosophers who assembled monthly to discuss their work. The core of the group included men such as the manufacturer Matthew Boulton, the chemist and geologist James Keir, the inventor and engineer James Watt, and the botanist, chemist, and geologist William Withering. Priestley was asked to join this unique society and contributed much to the work of its members. As a result of this stimulating intellectual environment, he published several important scientific papers, including "Experiments relating to Phlogiston, and the seeming Conversion of Water into Air" (1783). The first part attempts to refute Lavoisier's challenges to his work on oxygen; the second part describes how steam is "converted" into air. After several variations of the experiment, with different substances as fuel and several different collecting apparatuses (which produced different results), he concluded that air could travel through more substances than previously surmised, a conclusion "contrary to all the known principles of hydrostatics".
This discovery, along with his earlier work on what would later be recognised as gaseous diffusion, would eventually lead John Dalton and Thomas Graham to formulate the kinetic theory of gases. In 1777, Antoine Lavoisier had written Mémoire sur la combustion en général, the first of what proved to be a series of attacks on phlogiston theory; it was against these attacks that Priestley responded in 1783. While Priestley accepted parts of Lavoisier's theory, he was unprepared to assent to the major revolutions Lavoisier proposed: the overthrow of phlogiston, a chemistry based conceptually on elements and compounds, and a new chemical nomenclature. Priestley's original experiments on "dephlogisticated air" (oxygen), combustion, and water provided Lavoisier with the data he needed to construct much of his system; yet Priestley never accepted Lavoisier's new theories and continued to defend phlogiston theory for the rest of his life. Lavoisier's system was based largely on the quantitative concept that mass is neither created nor destroyed in chemical reactions (i.e., the conservation of mass). By contrast, Priestley preferred to observe qualitative changes in heat, colour, and particularly volume. His experiments tested "airs" for "their solubility in water, their power of supporting or extinguishing flame, whether they were respirable, how they behaved with acid and alkaline air, and with nitric oxide and inflammable air, and lastly how they were affected by the electric spark." By 1789, when Lavoisier published his Traité Élémentaire de Chimie and founded the Annales de Chimie, the new chemistry had come into its own. Priestley published several more scientific papers in Birmingham, the majority attempting to refute Lavoisier. Priestley and other Lunar Society members argued that the new French system was too expensive, too difficult to test, and unnecessarily complex. Priestley in particular rejected its "establishment" aura. In the end, Lavoisier's view prevailed: his new chemistry introduced many of the principles on which modern chemistry is founded. Priestley's refusal to accept Lavoisier's "new chemistry"—such as the conservation of mass—and his determination to adhere to a less satisfactory theory has perplexed many scholars. Schofield explains it thus: "Priestley was never a chemist; in a modern, and even a Lavoisierian, sense, he was never a scientist. He was a natural philosopher, concerned with the economy of nature and obsessed with an idea of unity, in theology and in nature." Historian of science John McEvoy largely agrees, writing that Priestley's view of nature as coextensive with God and thus infinite, which encouraged him to focus on facts over hypotheses and theories, prompted him to reject Lavoisier's system. McEvoy argues that "Priestley's isolated and lonely opposition to the oxygen theory was a measure of his passionate concern for the principles of intellectual freedom, epistemic equality and critical inquiry." Priestley himself claimed in the last volume of Experiments and Observations that his most valuable works were his theological ones because they were "superior [in] dignity and importance". Defender of English Dissenters and French revolutionaries Although Priestley spent much of this time defending phlogiston theory from the "new chemists", most of what he published in Birmingham was theological. 
For example, in 1782, he published the fourth volume of his Institutes, An History of the Corruptions of Christianity, describing how he thought the teachings of the early Christian church had been "corrupted" or distorted. Schofield describes the work as "derivative, disorganized, wordy, and repetitive, detailed, exhaustive, and devastatingly argued". The text addresses issues ranging from the divinity of Christ to the proper form for the Lord's Supper. In 1786, Priestley published its provocatively titled sequel, An History of Early Opinions concerning Jesus Christ, compiled from Original Writers, proving that the Christian Church was at first Unitarian. Thomas Jefferson later wrote of the profound effect that these two books had on him: "I have read his Corruptions of Christianity, and Early Opinions of Jesus, over and over again; and I rest on them ... as the basis of my own faith. These writings have never been answered." Although a few readers such as Jefferson and other Rational Dissenters approved of the work, many others reviewed it harshly because of its extreme theological positions, particularly its rejection of the Trinity. In 1785, while Priestley was engaged in a pamphlet war over Corruptions, he also published The Importance and Extent of Free Enquiry, claiming that the Reformation had not really reformed the church. In words that would boil over into a national debate, he challenged his readers to enact change: Let us not, therefore, be discouraged, though, for the present, we should see no great number of churches professedly unitarian .... We are, as it were, laying gunpowder, grain by grain, under the old building of error and superstition, which a single spark may hereafter inflame, so as to produce an instantaneous explosion; in consequence of which that edifice, the erection of which has been the work of ages, may be overturned in a moment, and so effectually as that the same foundation can never be built upon again .... Although discouraged by friends from using such inflammatory language, Priestley refused to back down from his opinions in print and he included it, forever branding himself as "Gunpowder Joe". After the publication of this seeming call for revolution in the midst of the French Revolution, pamphleteers stepped up their attacks on Priestley and he and his church were even threatened with legal action. In 1787, 1789, and 1790, Dissenters again tried to repeal the Test and Corporation Acts. Although they might have succeeded initially, by 1790, with the fears of revolution looming in Parliament, few were swayed by appeals to equal rights. Political cartoons, one of the most effective and popular media of the time, skewered the Dissenters and Priestley. In Parliament, William Pitt and Edmund Burke argued against the repeal, a betrayal that angered Priestley and his friends, who had expected the two men's support. Priestley wrote a series of Letters to William Pitt and Letters to Burke. Dissenters such as Priestley who supported the French Revolution came under increasing suspicion as scepticism regarding the revolution grew. In its propaganda against "radicals", Pitt's administration used the "gunpowder" statement to argue that Priestley and other Dissenters wanted to overthrow the government. 
Burke, in his famous Reflections on the Revolution in France (1790), tied natural philosophers, and specifically Priestley, to the French Revolution, writing that radicals who supported science in Britain "considered man in their experiments no more than they do mice in an air pump". Burke also associated republican principles with alchemy and insubstantial air, mocking the scientific work done by both Priestley and French chemists. He made much in his later writings of the connections between "Gunpowder Joe", science, and Lavoisier—who was improving gunpowder for the French in their war against Britain. Paradoxically, a secular statesman, Burke, argued against science and maintained that religion should be the basis of civil society, whereas a Dissenting minister, Priestley, argued that religion could not provide the basis for civil society and should be restricted to one's private life. Priestley also supported the campaign to abolish the British slave trade and published a sermon in 1788 in which he declared that nobody treated enslaved people "with so much cruelty as the English". Birmingham riots of 1791 The animus that had been building against Dissenters and supporters of the American and French Revolutions exploded in July 1791. Priestley and several other Dissenters had arranged to have a celebratory dinner on Bastille Day, the anniversary of the storming of the Bastille, a provocative action in a country where many disapproved of the French Revolution and feared that it might spread to Britain. Amid fears of violence, Priestley was convinced by his friends not to attend. Rioters gathered outside the hotel during the banquet and attacked the attendees as they left. The rioters moved on to the New Meeting and Old Meeting churches, and burned both to the ground. Priestley and his wife fled from their home; although their son William and others stayed behind to protect their property, the mob overcame them and torched Priestley's house "Fairhill" at Sparkbrook, destroying his valuable laboratory and all of the family's belongings. Twenty-six other Dissenters' homes and three more churches were burned in the three-day riot. Priestley spent several days hiding with friends until he was able to travel safely to London. The carefully executed attacks of the "mob" and the farcical trials of only a handful of the "leaders" convinced many at the time—and modern historians later—that the attacks were planned and condoned by local Birmingham magistrates. When George III was eventually forced to send troops to the area, he said: "I cannot but feel better pleased that Priestley is the sufferer for the doctrines he and his party have instilled, and that the people see them in their true light." Hackney (1791–1794) Unable to return to Birmingham, the Priestleys eventually settled in Lower Clapton, a district in Hackney, Middlesex where he gave a series of lectures on history and natural philosophy at the Dissenting academy, the New College at Hackney. Friends helped the couple rebuild their lives, contributing money, books, and laboratory equipment. Priestley tried to obtain restitution from the government for the destruction of his Birmingham property, but he was never fully reimbursed. He also published An Appeal to the Public on the Subject of the Riots in Birmingham (1791), which indicted the people of Birmingham for allowing the riots to occur and for "violating the principles of English government". 
The couple's friends urged them to leave Britain and emigrate to either France or the new United States, even though Priestley had received an appointment to preach for the Gravel Pit Meeting congregation. Priestley was minister between 1793 and 1794, and the sermons he preached there, particularly the two Fast Sermons, reflect his growing millenarianism, his belief that the end of the world was fast approaching. After comparing Biblical prophecies to recent history, Priestley concluded that the French Revolution was a harbinger of the Second Coming of Christ. Priestley's works had always had a millennial cast, but after the beginning of the French Revolution, this strain increased. He wrote to a younger friend that while he himself would not see the Second Coming, his friend "may probably live to see it ... It cannot, I think be more than twenty years [away]." Daily life became more difficult for the family: Priestley was burned in effigy along with Thomas Paine; vicious political cartoons continued to be published about him; letters were sent to him from across the country, comparing him to the devil and Guy Fawkes; tradespeople feared the family's business; and Priestley's Royal Society friends distanced themselves. As the penalties became harsher for those who spoke out against the government, Priestley examined options for removing himself and his family from England. Joseph Priestley's son William was presented to the French Assembly and granted letters of naturalisation on 8 June 1792. Priestley learned about it from the Morning Chronicle. A decree of 26 August 1792 by the French National Assembly conferred French citizenship on Joseph Priestley and others who had "served the cause of liberty" by their writings. Priestley accepted French citizenship, considering it "the greatest of honours". In the French National Convention election on 5 September 1792, Joseph Priestley was elected to the French National Convention by at least two departments (Orne and Rhône-et-Loire). He declined the honour, on the grounds that he was not fluent in French. As relations between England and France worsened, a removal to France became impracticable. Following the declaration of war of February 1793, and the Aliens Bill of March 1793, which forbade correspondence or travel between England and France, William Priestley left France for America. Joseph Priestley's sons Harry and Joseph chose to leave England for America in August 1793. Finally Priestley himself followed with his wife, boarding the Sansom at Gravesend on 7 April 1794. Five weeks after Priestley left, William Pitt's administration began arresting radicals for seditious libel, resulting in the famous 1794 Treason Trials. Pennsylvania (1794–1804) The Priestleys arrived in New York City on 4 June 1794, where they were fêted by various political factions vying for Priestley's endorsement. Priestley declined their entreaties, hoping to avoid political discord in his new country. Before travelling to a new home in the backwoods of Northumberland County, Pennsylvania, at Point township (now the Borough of Northumberland), Priestley and his wife lodged in Philadelphia, where he gave a series of sermons which led to the founding of the First Unitarian Church of Philadelphia. Priestley turned down an opportunity to teach chemistry at the University of Pennsylvania. Priestley's son Joseph Priestley Jr. was a leading member of a consortium that had purchased a tract of virgin woodland between the forks of Loyalsock Creek.
This they intended to lease or sell in plots, with payment deferred over seven annual instalments, with interest. His brothers, William and Henry, bought a plot of woodland which they attempted to transform into a farm, later called "Fairhill", felling and uprooting trees, and making lime to sweeten the soil by building their own lime kilns. Henry Priestley died on 11 December 1795, possibly of malaria which he may have contracted after landing at New York. Mary Priestley's health, already poor, deteriorated further; although William's wife, Margaret Foulke-Priestley, moved in with the couple to nurse Mary 24 hours a day, Mary Priestley died on 17 September 1796. Priestley then moved in with his elder son, Joseph Jr., and his wife Elizabeth Ryland-Priestley. Thomas Cooper, whose son, Thomas Jr., was living with the Priestleys, was a frequent visitor. Since his arrival in America, Priestley had continued to defend his Christian Unitarian beliefs; now, falling increasingly under the influence of Thomas Cooper and Elizabeth Ryland-Priestley, he was unable to avoid becoming embroiled in political controversy. In 1798, when, in response to the Pinckney affair, a belligerent President Adams sought to enlarge the navy and mobilise the militia into what Priestley and Cooper saw as a 'standing army', Priestley published an anonymous newspaper article: Maxims of political arithmetic, which attacked Adams, defended free trade, and advocated a form of Jeffersonian isolationism. In the same year, a small package, addressed vaguely: "Dr Priestley in America," was seized by the Royal Navy on board a neutral Danish boat. It was found to contain three letters, one of which was signed by the radical printer John Hurford Stone. These intercepted letters were published in London, and copied in numerous papers in America. One of the letters was addressed to "MBP", with a note: "I inclose a note for our friend MBP—but, as ignorant of the name he bears at present among you, I must beg you to seal and address it." This gave the intercepted letters a tinge of intrigue. Fearful lest they be taken as evidence of him being a 'spy in the interest of France', Priestley sent a clumsy letter to numerous newspaper editors, in which he naively named "MBP" (Member of the British Parliament) as Mr. Benjamin Vaughan, who "like me, thought it necessary to leave England, and for some time is said to have assumed a feigned name." William Cobbett, in his Porcupine's Gazette, 20 August 1798, added that Priestley "has told us who Mr MBP is, and has confirmed me in the opinion of their both being spies in the interest of France." Joseph Priestley Jr. left on a visit to England at Christmas 1798, not returning until August 1800. In his absence, his wife Elizabeth Ryland-Priestley and Thomas Cooper became increasingly close, collaborating in numerous political essays. Priestley continued to be influenced by Elizabeth and Cooper, even helping hawk a seditious handbill Cooper had printed around Point township, and across the Susquehanna at Sunbury. In September 1799, William Cobbett printed extracts from this handbill, asserting that "Dr Priestley has taken great pains to circulate this address, has travelled through the country for the purpose, and is in fact the patron of it." He challenged Priestley to "clear himself of the accusation" or face prosecution. Barely a month later, in November and December 1799, Priestley stepped forward in his own defence, with his Letters to the inhabitants of Northumberland.
Priestley's son, William, now living in Philadelphia, was increasingly embarrassed by his father's actions. He confronted his father, expressing John and Benjamin Vaughan's unease, his wife's concerns about Elizabeth Ryland-Priestley's dietary care, and his own concerns about the closeness of Elizabeth Ryland-Priestley's relationship with Thomas Cooper and their adverse influence on Dr Priestley; but this only led to a further estrangement between William and his sister-in-law. When, a while later, Priestley's household suffered a bout of food poisoning, perhaps from milk sickness or a bacterial infection, Elizabeth Ryland-Priestley falsely accused William of having poisoned the family's flour. Although this allegation has attracted the attention of some modern historians, it is believed to be without foundation. Priestley continued the educational projects that had always been important to him, helping to establish the "Northumberland Academy" and donating his library to the fledgling institution. He exchanged letters regarding the proper structure of a university with Thomas Jefferson, who used this advice when founding the University of Virginia. Jefferson and Priestley became close, and when the latter had completed his General History of the Christian Church, he dedicated it to President Jefferson, writing that "it is now only that I can say I see nothing to fear from the hand of power, the government under which I live being for the first time truly favourable to me." Priestley tried to continue his scientific investigations in America with the support of the American Philosophical Society, to which he had been elected a member in 1785. He was hampered by lack of news from Europe; unaware of the latest scientific developments, Priestley was no longer on the forefront of discovery. Although the majority of his publications focused on defending phlogiston theory, he also did some original work on spontaneous generation and dreams. Despite Priestley's reduced scientific output, his presence stimulated American interest in chemistry. By 1801, Priestley had become so ill that he could no longer write or experiment. He died on the morning of 6 February 1804, aged seventy, and was buried at Riverview Cemetery in Northumberland, Pennsylvania. Priestley's epitaph reads: Degrees PRIESTLEY, JOSEPH, LL.D.; University of Edinburgh. Preacher; librarian to the Earl of Shelburne, 1773–80; pastor Unitarian congregation, Birmingham, 1780–91; Gravel Pit meeting house, Hackney, London, 1791–94; resident Northumberland, Penn., 1794–1804; discoverer of nitric oxide, 1772; oxygen, hydrochloric acid and ammonia, 1774; sulphur dioxide and silicon tetrafluoride, 1775; nitrous oxide; member Royal Society 1766; American Philosophical Society; b. Fieldhead, near Leeds, Yorkshire, England, March 24, 1733; d. Northumberland, Penn., Feb. 6, 1804. Legacy By the time he died in 1804, Priestley had been made a member of every major scientific society in the Western world, and he had discovered numerous substances. The 19th-century French naturalist Georges Cuvier, in his eulogy of Priestley, praised his discoveries while at the same time lamenting his refusal to abandon phlogiston theory, calling him "the father of modern chemistry [who] never acknowledged his daughter". Priestley published more than 150 works on topics ranging from political philosophy to education to theology to natural philosophy. He led and inspired British radicals during the 1790s, paved the way for utilitarianism, and helped found Unitarianism.
A wide variety of philosophers, scientists, and poets became associationists as a result of his redaction of David Hartley's Observations on Man, including Erasmus Darwin, Coleridge, William Wordsworth, John Stuart Mill, Alexander Bain, and Herbert Spencer. Immanuel Kant praised Priestley in his Critique of Pure Reason (1781), writing that he "knew how to combine his paradoxical teaching with the interests of religion". Indeed, it was Priestley's aim to "put the most 'advanced' Enlightenment ideas into the service of a rationalized though heterodox Christianity, under the guidance of the basic principles of scientific method". Considering the extent of Priestley's influence, relatively little scholarship has been devoted to him. In the early 20th century, Priestley was most often described as a conservative and dogmatic scientist who was nevertheless a political and religious reformer. In a historiographic review essay, historian of science Simon Schaffer describes the two dominant portraits of Priestley: the first depicts him as "a playful innocent" who stumbled across his discoveries; the second portrays him as innocent as well as "warped" for not understanding their implications better. Assessing Priestley's works as a whole has been difficult for scholars because of his wide-ranging interests. His scientific discoveries have usually been divorced from his theological and metaphysical publications to make an analysis of his life and writings easier, but this approach has been challenged recently by scholars such as John McEvoy and Robert Schofield. Although early Priestley scholarship claimed that his theological and metaphysical works were "distractions" and "obstacles" to his scientific work, scholarship published in the 1960s, 1970s, and 1980s maintained that Priestley's works constituted a unified theory. However, as Schaffer explains, no convincing synthesis of his work has yet been expounded. More recently, in 2001, historian of science Dan Eshet has argued that efforts to create a "synoptic view" have resulted only in a rationalisation of the contradictions in Priestley's thought, because they have been "organized around philosophical categories" and have "separate[d] the producers of scientific ideas from any social conflict". Priestley has been remembered by the towns in which he served as a reforming educator and minister and by the scientific organisations he influenced. Two educational institutions have been named in his honour—Priestley College in Warrington and Joseph Priestley College in Leeds (now part of Leeds City College)—as has an asteroid, 5577 Priestley, discovered in 1986 by Duncan Waldron. In Birstall, in Leeds City Square, and in Birmingham, he is memorialised with statues, and plaques commemorating him have been erected in Birmingham, Calne and Warrington. The main undergraduate chemistry laboratories at the University of Leeds were refurbished as part of a £4m refurbishment plan in 2006 and renamed the Priestley Laboratories in honour of the prominent chemist from Leeds. In 2016 the University of Huddersfield renamed the building housing its Applied Sciences department as the Joseph Priestley Building, as part of an effort to rename all campus buildings after prominent local figures. Since 1952 Dickinson College, Pennsylvania, has presented the Priestley Award to a "distinguished scientist whose work has contributed to the welfare of humanity". 
Priestley's work is recognised by a National Historic Chemical Landmark designation for his discovery of oxygen, made on 1 August 1994, at the Priestley House in Northumberland, Penn., by the American Chemical Society. Similar recognition was made on 7 August 2000, at Bowood House in Wiltshire, England. The ACS also awards their highest honour, the Priestley Medal, in his name. Several of his descendants became physicians, including the noted American surgeon James Taggart Priestley II of the Mayo Clinic. Archives Papers of Joseph Priestley are held at the Cadbury Research Library, University of Birmingham. Selected works The Rudiments of English Grammar (1761) A Chart of Biography (1765) Essay on a Course of Liberal Education for Civil and Active Life (1765) The History and Present State of Electricity (1767) Essay on the First Principles of Government (1768) A New Chart of History (1769) Institutes of Natural and Revealed Religion (1772–74) Experiments and Observations on Different Kinds of Air (1774–77) Disquisitions Relating to Matter and Spirit (1777) The Doctrine of Philosophical Necessity Illustrated (1777) Letters to a Philosophical Unbeliever (1780) An History of the Corruptions of Christianity (1782) Lectures on History and General Policy (1788) Theological Repository (1770–73, 1784–88) See also Bibliography of Benjamin FranklinMany works on Franklin make reference to Priestley List of independent discoveries List of liberal theorists Timeline of hydrogen technologies Citations Bibliography The most exhaustive biography of Priestley is Robert Schofield's two-volume work; several older one-volume treatments exist: those of Gibbs, Holt and Thorpe. Graham and Smith focus on Priestley's life in America and Uglow and Jackson both discuss Priestley's life in the context of other developments in science. Gibbs, F. W. Joseph Priestley: Adventurer in Science and Champion of Truth. London: Thomas Nelson and Sons, 1965. Graham, Jenny. Revolutionary in Exile: The Emigration of Joseph Priestley to America, 1794–1804. Transactions of the American Philosophical Society 85 (1995). . Jackson, Joe. A World on Fire: A Heretic, an Aristocrat and the Race to Discover Oxygen. New York: Viking, 2005. . Johnson, Steven. The Invention of Air: A Story of Science, Faith, Revolution, and the Birth of America. New York: Riverhead, 2008. . Smith, Edgar F. Priestley in America, 1794–1804. Philadelphia: P. Blakiston's Son and Co., 1920. Tapper, Alan. "Joseph Priestley". Dictionary of Literary Biography 252: British Philosophers 1500–1799. Eds. Philip B. Dematteis and Peter S. Fosl. Detroit: Gale Group, 2002. Thorpe, T. E. Joseph Priestley. London: J. M. Dent, 1906. Uglow, Jenny. The Lunar Men: Five Friends Whose Curiosity Changed the World. New York: Farrar, Straus and Giroux, 2002. . Secondary materials Anderson, R. G. W. and Christopher Lawrence. Science, Medicine and Dissent: Joseph Priestley (1733–1804). London: Wellcome Trust, 1987. . Bowers, J. D. Joseph Priestley and English Unitarianism in America. University Park: Pennsylvania State University Press, 2007. . Braithwaite, Helen. Romanticism, Publishing and Dissent: Joseph Johnson and the Cause of Liberty. New York: Palgrave Macmillan, 2003. . Conant, J. B., ed. "The Overthrow of the Phlogiston Theory: The Chemical Revolution of 1775–1789". Harvard Case Histories in Experimental Science. Cambridge: Harvard University Press, 1950. Crook, R. E. A Bibliography of Joseph Priestley. London: Library Association, 1966. Crossland, Maurice. 
"The Image of Science as a Threat: Burke versus Priestley and the 'Philosophic Revolution'". British Journal for the History of Science 20 (1987): 277–307. Donovan, Arthur. Antoine Lavoisier: Science, Administration and Revolution. Cambridge: Cambridge University Press, 1996. Eshet, Dan. "Rereading Priestley". History of Science 39.2 (2001): 127–59. Fitzpatrick, Martin. "Joseph Priestley and the Cause of Universal Toleration". The Price-Priestley Newsletter 1 (1977): 3–30. Garrett, Clarke. "Joseph Priestley, the Millennium, and the French Revolution". Journal of the History of Ideas 34.1 (1973): 51–66. Fruton, Joseph S. Methods and Styles in the Development of Chemistry. Philadelphia: American Philosophical Society, 2002. . Kramnick, Isaac. "Eighteenth-Century Science and Radical Social Theory: The Case of Joseph Priestley's Scientific Liberalism". Journal of British Studies 25 (1986): 1–30. Kuhn, Thomas. The Structure of Scientific Revolutions. 3rd ed. Chicago: University of Chicago Press, 1996. . Haakonssen, Knud, ed. Enlightenment and Religion: Rational Dissent in Eighteenth-Century Britain. Cambridge: Cambridge University Press, 1996. . McCann, H. Chemistry Transformed: The Paradigmatic Shift from Phlogiston to Oxygen. Norwood: Alex Publishing, 1978. . McEvoy, John G. "Joseph Priestley, 'Aerial Philosopher': Metaphysics and Methodology in Priestley's Chemical Thought, from 1762 to 1781". Ambix 25 (1978): 1–55, 93–116, 153–75; 26 (1979): 16–30. McEvoy, John G. "Enlightenment and Dissent in Science: Joseph Priestley and the Limits of Theoretical Reasoning". Enlightenment and Dissent 2 (1983): 47–68. McEvoy, John G. "Priestley Responds to Lavoisier's Nomenclature: Language, Liberty, and Chemistry in the English Enlightenment". Lavoisier in European Context: Negotiating a New Language for Chemistry. Eds. Bernadette Bensaude-Vincent and Ferdinando Abbri. Canton, MA: Science History Publications, 1995. . McEvoy, John G. and J.E. McGuire. "God and Nature: Priestley's Way of Rational Dissent". Historical Studies in the Physical Sciences 6 (1975): 325–404. McLachlan, John. Joseph Priestley Man of Science 1733–1804: An Iconography of a Great Yorkshireman. Braunton and Devon: Merlin Books, 1983. . McLachlan, John. "Joseph Priestley and the Study of History". Transactions of the Unitarian Historical Society 19 (1987–90): 252–63. Philip, Mark. "Rational Religion and Political Radicalism". Enlightenment and Dissent 4 (1985): 35–46. Rose, R. B. "The Priestley Riots of 1791". Past and Present 18 (1960): 68–88. Rosenberg, Daniel. Joseph Priestley and the Graphic Invention of Modern Time. Studies in Eighteenth Century Culture 36(1) (2007): pp. 55–103. Rutherford, Donald. Leibniz and the Rational Order of Nature. Cambridge: Cambridge University Press, 1995. . Schaffer, Simon. "Priestley Questions: An Historiographic Survey". History of Science 22.2 (1984): 151–83. Sheps, Arthur. "Joseph Priestley's Time Charts: The Use and Teaching of History by Rational Dissent in late Eighteenth-Century England". Lumen 18 (1999): 135–54. Watts, R. "Joseph Priestley and Education". Enlightenment and Dissent 2 (1983): 83–100. Primary materials Lindsay, Jack, ed. Autobiography of Joseph Priestley. Teaneck: Fairleigh Dickinson University Press, 1970. . Miller, Peter N., ed. Priestley: Political Writings. Cambridge: Cambridge University Press, 1993. . Passmore, John A., ed. Priestley's Writings on Philosophy, Science and Politics. New York: Collier Books, 1964. Rutt, John T., ed. 
Collected Theological and Miscellaneous Works of Joseph Priestley. Two vols. London: George Smallfield, 1832. Rutt, John T., ed. Life and Correspondence of Joseph Priestley. Two vols. London: George Smallfield, 1831. Schofield, Robert E., ed. A Scientific Autobiography of Joseph Priestley (1733–1804): Selected Scientific Correspondence. Cambridge: MIT Press, 1966. External links Links to Priestley's works online The Joseph Priestley Society Joseph Priestley Online : Comprehensive site with bibliography, links to related sites, images, information on manuscript collections, and other helpful information. Radio 4 program on the discovery of oxygen by the BBC Collection of Priestley images at the Schoenberg Center for Electronic Text and Image Short online biographies "Joseph Priestley: Discoverer of Oxygen" at the American Chemical Society Joseph Priestley at the Woodrow Wilson National Fellowship Foundation Joseph Priestley from the Encyclopædia Britannica ExplorePAHistory.com 1733 births 1804 deaths 18th-century American male writers 18th-century American theologians 18th-century English philosophers 18th-century English chemists 18th-century English Christian theologians 18th-century English non-fiction writers 18th-century English male writers 18th-century Unitarian clergy 19th-century American male writers 19th-century English philosophers 19th-century English writers American abolitionists American male non-fiction writers American Unitarians British emigrants to the Thirteen Colonies Christian universalist theologians Denial of the virgin birth of Jesus Discoverers of chemical elements Educators from Pennsylvania English abolitionists English Christian universalists English Dissenters English pamphleteers English philosophers English political philosophers English Unitarians Enlightenment philosophers Enlightenment scientists Fellows of the American Academy of Arts and Sciences Fellows of the Royal Society Industrial gases Leeds Blue Plaques Linguists of English Members of the Lunar Society of Birmingham Members of the Royal Swedish Academy of Sciences People associated with the University of Edinburgh People from Birstall, West Yorkshire People from Hackney Central People from Northumberland, Pennsylvania People of the American Industrial Revolution Priestley family Protestant philosophers Recipients of the Copley Medal Religious leaders from Pennsylvania
Joseph Priestley
Chemistry
14,614
14,669,989
https://en.wikipedia.org/wiki/Viola%E2%80%93Jones%20object%20detection%20framework
The Viola–Jones object detection framework is a machine learning object detection framework proposed in 2001 by Paul Viola and Michael Jones. It was motivated primarily by the problem of face detection, although it can be adapted to the detection of other object classes. In short, it consists of a sequence of classifiers. Each classifier is a single perceptron with several binary masks (Haar features). To detect faces in an image, a sliding window is moved across the image, and the classifiers are applied to each window in turn. If at any point a classifier outputs "no face detected", the window is considered to contain no face. Otherwise, if all classifiers output "face detected", the window is considered to contain a face. The algorithm was efficient for its time, able to detect faces in 384 by 288 pixel images at 15 frames per second on a conventional 700 MHz Intel Pentium III. It is also robust, achieving high precision and recall. While it has lower accuracy than more modern methods such as convolutional neural networks, its efficiency and compact size (only around 50k parameters, compared to millions of parameters for a typical CNN such as DeepFace) mean it is still used in cases with limited computational power. For example, in the original paper, the authors reported that this face detector could run on the Compaq iPAQ at 2 fps (a device with a low-power StrongARM processor and no floating-point hardware). Problem description Face detection is a binary classification problem combined with a localization problem: given a picture, decide whether it contains faces, and construct bounding boxes for the faces. To make the task more manageable, the Viola–Jones algorithm only detects full view (no occlusion), frontal (no head-turning), upright (no rotation), well-lit, full-sized (occupying most of the frame) faces in fixed-resolution images. The restrictions are not as severe as they appear, as one can normalize the picture to bring it closer to the requirements for Viola–Jones:
- any image can be scaled to a fixed resolution;
- for a general picture with a face of unknown size and orientation, one can perform blob detection to discover potential faces, then scale and rotate them into the upright, full-sized position;
- the brightness of the image can be corrected by white balancing;
- the bounding boxes can be found by sliding a window across the entire picture and marking down every window that contains a face (see the sketch after this section). This would generally detect the same face multiple times, for which duplication removal methods, such as non-maximal suppression, can be used.
The "frontal" requirement is non-negotiable, as there is no simple transformation on the image that can turn a face from a side view to a frontal view. However, one can train multiple Viola–Jones classifiers, one for each angle: one for frontal view, one for 3/4 view, one for profile view, and a few more for the angles in between. At run time, one can then execute all these classifiers in parallel to detect faces at different view angles. The "full-view" requirement is also non-negotiable, and cannot simply be dealt with by training more Viola–Jones classifiers, since there are too many possible ways to occlude a face. Components of the framework A full presentation of the algorithm is given in the references. Consider an image of fixed resolution. Our task is to make a binary decision: whether or not it is a photo of a standardized face (frontal, well-lit, etc.). 
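The scanning procedure described above can be made concrete with a short sketch. The following Python fragment is illustrative only: cascade stands in for the trained chain of classifiers (any callable returning True for a face-like window), and the window size, step, and scale values are arbitrary choices rather than constants from the original paper.

```python
import numpy as np

def scan(image, cascade, base=24, step=2, scales=(1.0, 1.5, 2.25)):
    """Slide a square window over a grayscale image at several scales;
    keep every window the cascade accepts."""
    detections = []
    for s in scales:
        w = int(round(base * s))
        for y in range(0, image.shape[0] - w + 1, step):
            for x in range(0, image.shape[1] - w + 1, step):
                patch = image[y:y + w, x:x + w]
                if cascade(patch):          # every stage said "face detected"
                    detections.append((x, y, w, w))
    # Overlapping hits on the same face remain; prune them with a
    # duplication-removal step such as non-maximal suppression.
    return detections
```

Because most windows are rejected by the earliest classifiers in the sequence, the inner call is cheap on average, which is what makes this exhaustive scan feasible.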
Viola–Jones is essentially a boosted feature learning algorithm, trained by running a modified AdaBoost algorithm on Haar feature classifiers to find a sequence of classifiers f_1, f_2, ..., f_k. Haar feature classifiers are crude, but they allow very fast computation, and the modified AdaBoost constructs a strong classifier out of many weak ones. At run time, a given image is tested on f_1, f_2, ... sequentially. If at any point a classifier f_i returns "no face", the algorithm immediately returns "no face detected". If all classifiers return "face", then the algorithm returns "face detected". For this reason, the Viola–Jones classifier is also called a "Haar cascade classifier". Haar feature classifiers Consider a perceptron defined by a weight matrix w and a threshold b. It takes in an image x of fixed resolution and returns 1 if the weighted pixel sum w·x exceeds the threshold, and 0 otherwise. A Haar feature classifier is a perceptron with a very special kind of w that makes it extremely cheap to calculate. Namely, if we write out the matrix w, we find that its entries take only three possible values (+1, −1 and 0), and if we colour the matrix with white on the +1 entries, black on the −1 entries, and transparent on the 0 entries, the matrix forms one of 5 possible rectangular patterns: a horizontal or vertical white-black pair, a horizontal or vertical white-black-white triplet, or a four-rectangle checkerboard. Each pattern must also be symmetric to x-reflection and y-reflection (ignoring the colour change), so, for example, for the horizontal white-black feature, the two rectangles must be of the same width. For the vertical white-black-white feature, the white rectangles must be of the same height, but there is no restriction on the black rectangle's height. Rationale for Haar features The Haar features used in the Viola–Jones algorithm are a subset of the more general Haar basis functions, which have been used previously in the realm of image-based object detection. While crude compared to alternatives such as steerable filters, Haar features are sufficiently complex to match features of typical human faces. For example: the eye region is darker than the upper cheeks, and the nose bridge region is brighter than the eyes. Matchable facial features thus combine location and size (eyes, mouth, bridge of nose) with value (oriented gradients of pixel intensities). Further, the design of Haar features allows the weighted pixel sum to be computed with only a constant number of additions and subtractions, regardless of the size of the rectangular features, using the summed-area table. Learning and using a Viola–Jones classifier First, choose a resolution for the images to be classified; the original paper recommended 24×24 pixels. Learning: collect a training set, with some images containing faces and others not containing faces. Perform a certain modified AdaBoost training on the set of all Haar feature classifiers of that resolution, until a desired level of precision and recall is reached. The modified AdaBoost algorithm outputs a sequence of Haar feature classifiers f_1, f_2, ..., f_k; its details are given below. Using: to use a Viola–Jones classifier on an image, compute f_1, f_2, ... on it sequentially. If at any point a classifier returns "no face", the algorithm immediately returns "no face detected"; if all classifiers return "face", the algorithm returns "face detected". Learning algorithm The speed with which features may be evaluated does not adequately compensate for their number, however. For example, in a standard 24x24 pixel sub-window, there are more than 160,000 possible features, and it would be prohibitively expensive to evaluate them all when testing an image. Thus, the object detection framework employs a variant of the learning algorithm AdaBoost to both select the best features and to train classifiers that use them. 
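The constant-time evaluation claim rests on the summed-area table (integral image). Below is a minimal sketch of that idea in Python; the function names and the particular two-rectangle feature are illustrative choices, not an API from the original paper. AdaBoost, described next, selects among many such weak features.

```python
import numpy as np

def integral_image(img):
    """Summed-area table ii with one row/column of zero padding, so that
    ii[y, x] equals the sum of img[0:y, 0:x]."""
    return np.pad(img.astype(np.int64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle whose top-left corner is (x, y):
    four table lookups, independent of the rectangle's size."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def horizontal_pair_feature(ii, x, y, w, h):
    """One two-rectangle Haar feature: white left half minus black right
    half; both halves have the same width, as the pattern requires."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

A weak classifier then simply compares such a feature value against a learned threshold with a learned polarity.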
This algorithm constructs a "strong" classifier as a linear combination of weighted simple "weak" classifiers. Each weak classifier h_j is a threshold function based on the feature f_j: it outputs +1 if the feature value lies on one side of a threshold θ_j and −1 otherwise, with a polarity s_j determining which side counts as positive. The threshold value θ_j and the polarity s_j are determined in the training, as are the coefficients α_j. Here a simplified version of the learning algorithm is reported:
Input: a set of N positive and negative training images with their labels (x_i, y_i). If image i is a face, y_i = 1; if not, y_i = −1.
Initialization: assign a weight w_i = 1/N to each image i.
For each feature f_j, with j = 1, ..., M:
1. Renormalize the weights such that they sum to one.
2. Apply the feature to each image in the training set, then find the optimal threshold and polarity that minimize the weighted classification error: choose (θ_j, s_j) minimizing Σ_i w_i ε_i, where ε_i = 0 if image i is classified correctly and ε_i = 1 otherwise.
3. Assign a weight α_j to h_j that is inversely proportional to its error rate. In this way the best classifiers are considered more.
4. Reduce the weights, for the next iteration, of the images that were correctly classified.
Set the final classifier to the sign of Σ_j α_j h_j(x).
Cascade architecture On average only 0.01% of all sub-windows are positive (faces), yet equal computation time is spent on all sub-windows, so most time should be spent only on potentially positive sub-windows. A simple 2-feature classifier can achieve an almost 100% detection rate with a 50% false positive rate, so it can act as the first layer of a series to filter out most negative windows. A second layer with 10 features can tackle the "harder" negative windows which survived the first layer, and so on. A cascade of gradually more complex classifiers achieves even better detection rates. The evaluation of the strong classifiers generated by the learning process can be done quickly, but it isn't fast enough to run in real time. For this reason, the strong classifiers are arranged in a cascade in order of complexity, where each successive classifier is trained only on those selected samples which pass through the preceding classifiers. If at any stage in the cascade a classifier rejects the sub-window under inspection, no further processing is performed, and the search continues with the next sub-window. The cascade therefore has the form of a degenerate tree. In the case of faces, the first classifier in the cascade – called the attentional operator – uses only two features to achieve a false negative rate of approximately 0% and a false positive rate of 40%. The effect of this single classifier is to reduce by roughly half the number of times the entire cascade is evaluated. In cascading, each stage consists of a strong classifier, so all the features are grouped into several stages, where each stage has a certain number of features. The job of each stage is to determine whether a given sub-window is definitely not a face or may be a face. A given sub-window is immediately discarded as not a face if it fails any of the stages. A simple framework for cascade training is given below:
f = the maximum acceptable false positive rate per layer
d = the minimum acceptable detection rate per layer
Ftarget = target overall false positive rate
P = set of positive examples; N = set of negative examples
F(0) = 1.0; D(0) = 1.0; i = 0
while F(i) > Ftarget:
    increase i
    n(i) = 0; F(i) = F(i-1)
    while F(i) > f × F(i-1):
        increase n(i)
        use P and N to train a classifier with n(i) features using AdaBoost
        evaluate the current cascaded classifier on the validation set to determine F(i) and D(i)
        decrease the threshold for the ith classifier (i.e. how many weak classifiers need to accept for the strong classifier to accept) until the current cascaded classifier has a detection rate of at least d × D(i-1) (this also affects F(i))
    N = ∅
    if F(i) > Ftarget then evaluate the current cascaded detector on the set of non-face images and put any false detections into the set N
The cascade architecture has interesting implications for the performance of the individual classifiers. Because the activation of each classifier depends entirely on the behavior of its predecessor, the false positive rate for an entire K-stage cascade is the product of the per-stage rates: F = f_1 × f_2 × ⋯ × f_K. Similarly, the detection rate is D = d_1 × d_2 × ⋯ × d_K. Thus, to match the false positive rates typically achieved by other detectors, each classifier can get away with having surprisingly poor performance. For example, for a 32-stage cascade to achieve a false positive rate of 10^−6, each classifier need only achieve a false positive rate of about 65%, since 0.65^32 ≈ 10^−6. At the same time, however, each classifier needs to be exceptionally capable if it is to achieve adequate detection rates. For example, to achieve a detection rate of about 90%, each classifier in the aforementioned cascade needs to achieve a detection rate of approximately 99.7%, since 0.997^32 ≈ 0.9. Using Viola–Jones for object tracking In videos of moving objects, one need not apply object detection to each frame. Instead, one can use tracking algorithms like the KLT algorithm to detect salient features within the detection bounding boxes and track their movement between frames. Not only does this improve tracking speed by removing the need to re-detect objects in each frame, but it improves the robustness as well, as the salient features are more resilient than the Viola–Jones detection framework to rotation and photometric changes. References External links Slides Presenting the Framework Information Regarding Haar Basis Functions An improved algorithm on Viola-Jones object detector Citations of the Viola–Jones algorithm in Google Scholar Adaboost Explanation from ppt by Qing Chen, Discovery Labs, University of Ottawa, and a video lecture by Ramsri Goutham. Implementations OpenCV: implemented as cvHaarDetectObjects(). Haar Cascade Detection in OpenCV Cascade Classifier Training in OpenCV Object recognition and categorization Facial recognition Articles with example pseudocode Gesture recognition Computer vision
Viola–Jones object detection framework
Engineering
2,688
65,818,809
https://en.wikipedia.org/wiki/Gregory%20Butte
Gregory Butte is a 4,651-foot (1,418 meter) elevation sandstone summit located in Glen Canyon National Recreation Area, in San Juan County of southern Utah. It is situated northeast of Tower Butte, and northeast of the town of Page. This iconic landmark of the Lake Powell area towers nearly 1,000 feet above the lake. Before Lake Powell was formed in the 1970s, the butte stood within a meander of the Colorado River. Gregory Butte is composed of Entrada Sandstone, believed to have formed about 160 million years ago during the Jurassic period, in settings that ranged from sandy mud on tidal flats to a giant sand sea, the largest in Earth's history. The feature's name, officially adopted in 1977 by the U.S. Board on Geographic Names, honours geologist Herbert E. Gregory (1869–1952), who mapped much of the bedrock geology of the Colorado Plateau, particularly in geologic monographs concentrating on what is now Navajo Nation land in northern Arizona and southern Utah, where this butte is located. According to the Köppen climate classification system, Gregory Butte is located in an arid climate zone with hot, very dry summers, and chilly winters with very little snow. See also Colorado Plateau List of rock formations in the United States Gallery References External links Weather forecast: Gregory Butte 1958 aerial photo of Gregory Butte before Lake Powell Colorado Plateau Landforms of San Juan County, Utah Glen Canyon National Recreation Area Lake Powell Buttes of Utah One-thousanders of the United States Sandstone formations of the United States
Gregory Butte
Engineering
315
24,899,234
https://en.wikipedia.org/wiki/PPC%20Journal
PPC Journal was an early hobbyist computer magazine, originally targeted at users of HP's first programmable calculator, the HP-65. It originated as 65 Notes, whose first issue was published in 1974. The magazine was renamed PPC Journal in 1978 and PPC Calculator Journal in 1980; with Volume 12, published in 1984, it returned to the name PPC Journal. The magazine ended publication in July 1987 (Volume 14). The founder of the PPC (Personal Programming Center) and editor of the journal was Richard J. Nelson. The hobbyist group formed around the journal and first became known when Nelson discovered hidden instructions on the HP-65 calculator. The club and the journal later gained their greatest renown when several club members discovered the "synthetic instructions" of the HP-41C. Competition A similar journal, published from 1976, was 52-Notes, serving the Texas Instruments SR-52 user community. It was edited by Richard C. Vanderburgh. Both journals deliberately established a mode of "friendly competition", often exchanging information and comparing solutions among user groups. 52-Notes was later renamed TI PPC Notes and edited by Maurice E. T. Swinnen (from January 1980 to December 1982) and Palmer O. Hanson, Jr. (from January 1983). References Defunct hobby magazines published in the United States Defunct computer magazines published in the United States Magazines established in 1974 Magazines disestablished in 1987
PPC Journal
Technology
290
60,474,207
https://en.wikipedia.org/wiki/Plastid%20evolution
A plastid is a membrane-bound organelle, found in plants, algae and other eukaryotic organisms, that contributes to the production of pigment molecules. Most plastids are photosynthetic, thus leading to color production and energy storage or production. There are many types of plastids in plants alone, but all plastids can be separated based on the number of times they have undergone endosymbiotic events. Currently there are three types of plastids: primary, secondary and tertiary. Endosymbiosis is widely thought to have led to the evolution of the eukaryotic organisms of today, although the timeline is highly debated. Primary endosymbiosis It is widely accepted within the scientific community that the first plastid derived from the engulfment of a cyanobacterial ancestor by a eukaryotic organism. Evidence supporting this view includes many morphological similarities, such as the presence of two plasma membranes. The first membrane is thought to have belonged to the cyanobacterial ancestor. During phagocytosis, a vesicle engulfs a molecule with its plasma membrane to allow safe import. When the cyanobacterium was engulfed, it avoided digestion, which led to the double membrane found in primary plastids. However, in order to live in symbiosis, the eukaryotic cell that engulfed the cyanobacterium must provide proteins and metabolites to maintain the functions of the bacterium in exchange for energy. Thus, an engulfed cyanobacterium must give up some of its genetic material through endosymbiotic gene transfer to the eukaryote, a phenomenon thought to be extremely rare due to the "learned nature" of the interactions that must occur between the cells to allow for processes such as gene transfer, protein localization, excretion of highly reactive metabolites, and DNA repair. This means a reduction in genome size for the cyanobacterium, but also an increase in cyanobacterial genes within the eukaryotic genome. Synechocystis sp. strain PCC6803 is a unicellular freshwater cyanobacterium that encodes 3,725 genes in a 3.9 Mb genome; by contrast, plastid genomes rarely exceed 200 protein-coding genes. It has been proposed that the closest living relative of the ancestral engulfed cyanobacterium is Gloeomargarita lithophora. Separately, about 90–140 million years ago, primary endosymbiosis happened again in the amoeboid Paulinella with a cyanobacterium in the genus Prochlorococcus. This independently evolved chloroplast is often called a chromatophore instead of a chloroplast. A 2010 study sequenced the genome of a cyanobacterium living extracellularly in endosymbiosis with the water-fern Azolla filiculoides. Endosymbiosis was supported by the fact that the cyanobacterium was unable to grow autonomously, and by the observation that the cyanobacterium is vertically transferred between succeeding generations. After analysis of the cyanobacterial genome, the researchers found that over 30% of the genome was made up of pseudogenes. In addition, roughly 600 transposable elements were found within the genome. The pseudogenes were found in genes such as dnaA, DNA repair genes, and glycolysis and nutrient-uptake genes. dnaA is essential to the initiation of DNA replication in prokaryotic organisms; thus Azolla filiculoides is thought to provide nutrients and transcription factors for DNA replication in exchange for fixed nitrogen that is not readily available in water. 
Although the cyanobacterium had not been completely engulfed by the eukaryotic organism, the relationship is thought to demonstrate a precursor to endosymbiotic primary plastids. Secondary endosymbiosis Secondary endosymbiosis results from the engulfment of an organism that has already undergone primary endosymbiosis. Thus, four plasma membranes are formed: the first two originate from the cyanobacterium (the double membrane of the primary plastid), the third from the plasma membrane of the engulfed primary endosymbiotic eukaryote, and the fourth from the eukaryote that engulfed it. Chloroplasts contain 16S rRNA and 23S rRNA, which are found only in prokaryotes by definition. Chloroplasts and mitochondria also replicate semi-autonomously, outside of the cell-cycle replication system, via binary fission. Consistent with the theory, decreased genome size within the organelle and gene integration into the nucleus occurred. Chloroplast genomes encode 50-200 proteins, compared to the thousands encoded by cyanobacteria. Furthermore, in Arabidopsis, nearly 20% of the nuclear genome originates from cyanobacteria, the widely recognized origin of chloroplasts. Recent studies have been able to measure the speed and scale at which chloroplast genes incorporate themselves into the host genome. Using chloroplast transformation, genes encoding spectinomycin and kanamycin resistance were inserted into the DNA of chloroplasts found in tobacco plants. After subjecting the plants to spectinomycin and kanamycin selection, some plants began to tolerate both antibiotics. Roughly 1 in every 5 million cells on the tobacco leaves highly expressed the spectinomycin and kanamycin resistance genes. Using the cells expressing resistance, the researchers were able to grow tobacco from these cells to maturity. Once mature, the plants were mated with wild-type plants, and 50% of the progeny expressed the spectinomycin and kanamycin resistance genes. Pollen was thought not to be able to transfer chloroplast DNA in tobacco (which later turned out not to be as true as was thought at the time), leading to the conclusion that the genes had been incorporated into the tobacco's nuclear genome. Furthermore, 11 kb of chloroplast DNA was integrated into the host genome, transferring more DNA than previously predicted, and at a faster rate than previously predicted. Tertiary endosymbiosis Although previous endosymbiotic events resulted in an increase in the number of membranes, tertiary plastids can have 3-4 membranes. The most extensively studied tertiary plastids are found in dinoflagellates, where several independent tertiary endosymbiosis events have occurred. In the groups that contain a haptophyte plastid, these tertiary plastids are believed to derive from a haptophyte (itself carrying a red-algal secondary plastid) that replaced the dinoflagellate's original secondary plastid. Consistent with the previous pattern of genome reduction and gene incorporation into the host genome, the tertiary plastid genome consists of about 14 genes. These genes are broken down further into small minicircles that contain 1-3 genes. These genomes are circular, like prokaryotic genomes. Further, they encode only atpA, atpB, petB, petD, psaA, psaB, psbA-E, psbI, and 16S and 23S rRNA. These genes encode vital proteins used in photosystems I and II, further indicating their cyanobacterial origin. Unusually, the three lineages that contain a haptophyte plastid each acquired their plastid independently. "Dinotoms" (Durinskia and Kryptoperidinium) have plastids derived from diatoms. 
These are highly unusual among tertiary endosymbionts, as the symbiont is not reduced to a mere plastid: instead, it still has a DNA-containing nucleus, a large volume of cytoplasm, and even its own DNA-containing mitochondria. Two previously undescribed dinoflagellates ("MGD" and "TGD") contain a green algal endosymbiont that retains a nucleus and is most closely related to Pedinomonas. References Endosymbiotic events Photosynthesis Plastids
Plastid evolution
Chemistry,Biology
1,704
31,980,740
https://en.wikipedia.org/wiki/Stanley%20symmetric%20function
In mathematics, and especially in algebraic combinatorics, the Stanley symmetric functions are a family of symmetric functions introduced by Richard Stanley in his study of the symmetric group of permutations. Formally, the Stanley symmetric function F_w(x_1, x_2, ...) indexed by a permutation w is defined as a sum of certain fundamental quasisymmetric functions. Each summand corresponds to a reduced decomposition of w, that is, to a way of writing w as a product of a minimal possible number of adjacent transpositions. They were introduced in the course of Stanley's enumeration of the reduced decompositions of permutations, and in particular his proof that the permutation w_0 = n(n − 1)...21 (written here in one-line notation) has exactly
(n(n − 1)/2)! / (1^(n−1) · 3^(n−2) · 5^(n−3) ⋯ (2n − 3)^1)
reduced decompositions. (Here n(n − 1)/2 is the binomial coefficient "n choose 2" and ! denotes the factorial.) For example, for n = 3 the formula gives 3!/(1^2 · 3^1) = 2, matching the two reduced decompositions s_1 s_2 s_1 and s_2 s_1 s_2 of w_0 = 321. Properties The Stanley symmetric function F_w is homogeneous, with degree equal to the number of inversions of w. Unlike other nice families of symmetric functions, the Stanley symmetric functions have many linear dependencies and so do not form a basis of the ring of symmetric functions. When a Stanley symmetric function is expanded in the basis of Schur functions, the coefficients are all non-negative integers. The Stanley symmetric functions have the property that they are the stable limit of Schubert polynomials: F_w = lim_{m→∞} 𝔖_{1^m × w}, where 1^m × w denotes the permutation fixing 1, ..., m and sending m + i to m + w(i), and where we treat both sides as formal power series and take the limit coefficientwise. References Polynomials Symmetric functions
Stanley symmetric function
Physics,Mathematics
309
6,254,418
https://en.wikipedia.org/wiki/RF%20power%20amplifier
A radio-frequency power amplifier (RF power amplifier) is a type of electronic amplifier that converts a low-power radio-frequency (RF) signal into a higher-power signal. Typically, RF power amplifiers are used in the final stage of a radio transmitter, their output driving the antenna. Design goals often include gain, power output, bandwidth, power efficiency, linearity (low signal compression at rated output), input and output impedance matching, and heat dissipation. Amplifier classes RF amplifier circuits operate in different modes, called "classes", based on how much of the cycle of the sinusoidal radio signal the amplifier (transistor or vacuum tube) is conducting current. Class A, class AB and class B are considered the linear amplifier classes, in which the active device is used as a controlled current source, while class C is a nonlinear class in which the active device is used as a switch. The bias at the input of the active device determines the class of the amplifier. A common trade-off in power amplifier design is the trade-off between efficiency and linearity: the classes just named become more efficient, but less linear, in the order they are listed. Operating the active device as a switch results in higher efficiency, theoretically up to 100%, but lower linearity. Among the switch-mode classes are class D, class E and class F. The class D amplifier is not often used in RF applications, because the finite switching speed of the active devices and possible charge storage in saturation could lead to a large I-V product, which degrades efficiency. Solid state vs. vacuum tube amplifiers Modern RF power amplifiers use solid-state devices, predominantly MOSFETs (metal–oxide–semiconductor field-effect transistors). The earliest MOSFET-based RF amplifiers date back to the mid-1960s. Bipolar junction transistors were also commonly used in the past, until they were replaced by power MOSFETs, particularly LDMOS transistors, as the standard technology for RF power amplifiers by the 1990s, due to the superior RF performance of LDMOS transistors. Generally speaking, solid-state power amplifiers contain four main components: input, output, amplification stage and power supply. MOSFET transistors and other modern solid-state devices have replaced vacuum tubes in most electronic devices, but tubes are still used in some high-power transmitters (see Valve RF amplifier). Although mechanically robust, transistors are electrically fragile: they are easily damaged by excess voltage or current. Tubes are mechanically fragile but electrically robust: they can handle remarkably high electrical overloads without appreciable damage. Applications The basic applications of the RF power amplifier include driving another high-power source, driving a transmitting antenna and exciting microwave cavity resonators. Among these applications, driving transmitter antennas is the best known. The transmitter–receivers are used not only for voice and data communication but also for weather sensing (in the form of radar). RF power amplifiers using LDMOS (laterally diffused MOSFET) are the most widely used power semiconductor devices in wireless telecommunication networks, particularly mobile networks. LDMOS-based RF power amplifiers are widely used in digital mobile networks such as 2G, 3G, and 4G, and their good cost/performance ratio makes them the preferred option for amateur radio as well. 
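The efficiency/linearity trade-off between classes can be made quantitative with the standard conduction-angle analysis found in RF textbooks; it is not derived in this article, so the sketch below is an idealized illustration rather than a design formula. For an ideal amplifier driven to full output voltage swing, peak drain efficiency depends only on the conduction angle alpha, the fraction of the RF cycle during which the device conducts:

```python
import numpy as np

def ideal_efficiency(alpha):
    """Peak drain efficiency of an idealized reduced-conduction-angle
    amplifier at full output voltage swing; alpha is the conduction
    angle in radians (2*pi = class A, pi = class B)."""
    num = alpha - np.sin(alpha)                                    # fundamental output term
    den = 2 * (2 * np.sin(alpha / 2) - alpha * np.cos(alpha / 2))  # DC supply term
    return num / den

for name, alpha in [("class A", 2.0 * np.pi), ("class AB", 1.5 * np.pi),
                    ("class B", 1.0 * np.pi), ("class C", 0.5 * np.pi)]:
    print(f"{name}: {ideal_efficiency(alpha):.1%}")
# Prints roughly 50.0%, 60.2%, 78.5%, 94.0%: efficiency rises as the
# conduction angle shrinks, matching the class ordering described above.
```

Real amplifiers fall short of these figures because of knee voltage, losses, and incomplete voltage swing, but the ordering of the classes is the same.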
Wideband amplifier design Impedance transformations over large bandwidths are difficult to realize, so conventionally, most wideband amplifiers are designed to feed a 50 Ω output load. Transistor output power is then limited by the loadline to P = (V_br − V_k)^2 / (8 R_L), where V_br is defined as the breakdown voltage, V_k is defined as the knee voltage, and R_L, the load resistance presented to the transistor, is chosen so that the rated power can be met. The external load is, by convention, 50 Ω. Therefore, there must be some sort of impedance matching network that transforms the 50 Ω external load into the required R_L. The loadline method is often used in RF power amplifier design. See also FET amplifier Power electronics References External links Electronic amplifiers Radio electronics MOSFETs
RF power amplifier
Technology,Engineering
809
11,798,290
https://en.wikipedia.org/wiki/Pestalotia%20longisetula
Pestalotia longisetula is a plant pathogen causing strawberry fruit rot. Hosts and symptoms While P. longisetula is best known for infecting strawberry crops, it can also infect other plants, including apricots, peaches, guava, and tomato fruits. Some plants, such as beans, are immune to the disease. On average it takes about two weeks for mature plants to be fully infected, while plants at an earlier stage of growth spread the infection more slowly. Infected areas become covered with white mycelial growth, and the host plant starts to rot from the skin to the core. The plant as a whole suffers as the leaves develop spore-bearing lesions that spread the disease. Infection P. longisetula infects plants through the leaves. Spores grow on the leaves and spread through the wind. The disease thrives in areas with high humidity and high wind. Once a plant has been infected, the disease spreads throughout the leaves and then attacks the fruit, causing it to rot, first at the skin and then at the core. After eight days, most mature plants will be completely infected and a new phase of the infection begins, spreading to the next plant. In most circumstances the host plant dies. Using pesticides and growing strawberries in areas with low wind and low humidity can slow the progression of the infection. Importance Many countries depend on strawberry production for income. If the disease appears in an area, a percentage of the plants may already be wiped out by the time farmers locate it, and profits are lost. The countries most affected are those that do not have access to pesticides or greenhouses to protect the plants. References External links USDA ARS Fungal Database Fungal strawberry diseases Fungi described in 1961 Xylariales Fungus species
Pestalotia longisetula
Biology
353
24,209
https://en.wikipedia.org/wiki/Profanity
Profanity, also known as swearing, cursing, or cussing, involves the use of notionally offensive words for a variety of purposes, including to demonstrate disrespect or negativity, to relieve pain, to express a strong emotion, as a grammatical intensifier or emphasis, or to express informality or conversational intimacy. In many formal or polite social situations, it is considered impolite (a violation of social norms), and in some religious groups it is considered a sin. Profanity includes slurs, but most profanities are not slurs, and there are many insults that do not use swear words. Swear words can be discussed, or even sometimes used for the same purpose, without causing offense or being considered impolite if they are obscured (e.g. "fuck" becomes "f***" or "the f-word") or substituted with a minced oath like "flip". Etymology and definitions Profanity may be described as offensive language, dirty words, or taboo words, among other descriptors. The term originates from classical Latin profanus, literally "before (outside) the temple", with pro meaning "before" and fanum meaning "temple". This further developed in Middle English with the meaning "to desecrate a temple". In English, swearing is a catch-all linguistic term for the use of profanities, even if it does not involve taking an oath. The only other languages that use the same term for both profanities and oaths are French (jurer), Canadian French (sacrer), and Swedish (svära). English uses cursing in a similar manner to swearing, especially in the United States. Cursing originally referred specifically to the use of language to cast a curse on someone, and in American English it is still commonly associated with wishing harm on another. Equivalents to cursing are used similarly in Danish (bande), Italian, and Norwegian (banne). The terms swearing and cursing have strong associations with the use of profanity in anger. Various efforts have been made to classify different types of profanity, but there is no widely accepted typology and terms are used interchangeably. Blasphemy and obscenity are used similarly to profanity, though blasphemy has retained its religious connotation. Expletive is another English term for the use of profanity, derived from its original meaning of adding words to change a sentence's length without changing its meaning. The use of expletive sometimes refers specifically to profanity as an interjection. Epithet is used to describe profanities directed at a specific person. Some languages do not have a general term for the use of profanities, instead describing it with a phrase meaning "using bad language"; these include Mandarin, Portuguese, Spanish, and Turkish. History and study Historical profanity is difficult to reconstruct, as written records may not reflect spoken language. Despite being relatively well known compared to other linguistic mechanisms, profanity has historically been understudied because of its taboo nature. Profanity may be studied as an aspect of linguistics and sociology, or it can be a psychological and neurological subject. Besides interpersonal communication, understanding of profanity has legal implications and relates to theories of language learning. In modern European languages, swearing developed from early Christianity, primarily through restrictions on taking God's name in vain in the Old Testament. Invocations of God were seen as attempts to call upon his power, willing something to be true or leveling a curse. Other mentions of God were seen as placing oneself over him, with the person uttering a name implying power over the name's owner. 
Modern study of profanity as its own subject of inquiry had started by 1901. Sigmund Freud influenced study of the topic by positing that swearing reflects the subconscious, including feelings of aggression, antisocial inclinations, and the broaching of taboos. Significant activity began in the 1960s with writings on the subject by Ashley Montagu and Edward Sagarin, followed by increased study the following decade. Specific types of discriminatory profanity, such as ethnophaulism and homophobia, came to be described as part of a broader type of profanity, hate speech, toward the end of the 20th century. Another increase in the study of profanity took place with the onset of the 21st century. Subjects Profanities have literal meanings, but they are invoked to indicate a state of mind, making them dependent almost entirely on connotation and emotional associations with the word, as opposed to literal denotation. The connotative function of profanity allows the denotative meaning to shift more easily, causing the word to drift until its meaning is unrelated to its origin or until it loses meaning and impact altogether. Literal meanings in modern profanity typically relate to religion, sex, or the human body, which creates a dichotomy between the use of highbrow religious swears and lowbrow anatomical swears. Languages and cultures place different emphasis on the subjects of profanity. Anatomical profanity is common in Polish, for example, while swearing in Dutch more commonly refers to disease. Words for excrement and for the buttocks have profane variants across most cultures. Though religious swears were historically more severe, modern society across much of the world has come to see sexual and anatomical swears as more vulgar. Common profane phrases sometimes incorporate more than one category of profanity for increased effect; some Spanish phrases invoke scatological, religious, and sexual profanity at once. Other swear words do not refer to any subject, such as the English word bloody when used in its profane sense. Not all taboo words are used in swearing, with many only being used in a literal sense. Clinical or academic terminology for bodily functions and sexual activity is distinct from profanity. This includes words such as excrement and copulate in English, which are not typically invoked as swears. Academics who study profanity disagree on whether literal use of a vulgar word can constitute a swear word. Conversely, words with greater connotative senses are not always used profanely. Bastard and son of a bitch are more readily used as general terms of abuse in English compared to terrorist and rapist, despite the latter two terms being associated with strongly immoral behavior. Some profane phrases are used metaphorically in a way that still retains elements of the original meaning, such as the English phrases all hell broke loose or shit happens, which carry the negative associations of hell and shit as undesirable places and things. Others are nonsensical when interpreted literally, like take a flying fuck in English, putain de merde ("whore of shit") in French, and porca Madonna ("the sow of Madonna") in Italian. Religion A distinction is sometimes made between religious profanity, which is casual, versus blasphemy, which is intentionally leveled against a religious concept. It was commonly believed among early civilizations that speaking about certain things can invoke them or bring about curses. 
Many cultures have taboos about speaking the names of evil creatures such as Satan because of these historical fears. Religions commonly develop derogatory words for those who are not among their members. Medieval Christianity developed terms like heathen and infidel to describe outsiders. Secularization in the Western world has seen exclamations such as God! divorced from their religious connotations. Religious profanity is not inherent to all languages, being absent from Japanese, indigenous languages of the Americas, and most Polynesian languages. European languages historically used the crucifixion of Jesus as a focal point for profane interjections. Phrases meaning "death of God" were used in languages such as English (Sdeath), French, and Swedish. Christian profanity encompasses both appeals to the divine, such as God or heaven, and to the diabolic, such as the Devil or hell. While the impact of religious swearing has declined in the Christian world, diabolic swearing remains profane in Germany and the Nordic countries. Islamic profanity lacks a diabolic element, referring only to divine concepts like Muhammad or holy places. Words related to Catholicism, known as sacres, are used in Quebec French profanity, and are considered to be stronger than other profane words in French. Examples of sacres considered profane in Quebec are tabarnak (tabernacle), ostie (host), and sacrament (sacrament). When used as profanities, sacres are often interchangeable. The Book of Leviticus indicates that blasphemous language warrants death, while the Gospel of Matthew implies condemnation of all swearing, though only the Quakers have imposed such a ban. Islam, Judaism, and Brahmanism forbid mention of God's name entirely. In some countries, profane words often have pagan roots that, after Christian influence, were turned from names of deities and spirits into profanity and used as such, like perkele in Finnish, which was believed to be an original name of the thunder god Ukko, the chief god of the Finnish pagan pantheon. Anatomy and sexuality Profanity related to sexual activity, including insults related to genitals, exists across cultures. The specific aspects invoked are sensitive to a given culture, with differences in how much they emphasize ideas like incest or adultery. Certain types of sex acts, such as oral sex, anal sex, or masturbation, may receive particular attention. Verbs describing sexual activity are frequently profane, like fuck in English, foutre in French, chiavare in Italian, follar in Spanish, and yebatˈ in Russian. Words describing a person as one who masturbates are often used as terms of abuse, such as the English use of jerk-off and wanker. Terms for sexually promiscuous women can be used as profanity, such as the English terms hussy and slut. Reference to prostitution brings its own set of profanities. Many profane words exist to refer to a prostitute, such as whore in English, putain in French, puttana in Italian, kurwa in Polish, blyat in Russian, and puta in Spanish. Some languages, including German and Swedish, do not see significant use of sexual terms as profanity. Profanities for the penis and vulva are often used as interjections. Penile interjections are often used in Italian (cazzo), Russian (khuy), and Spanish (carajo). Vulvar interjections are often used in Dutch (kut), Hungarian (picsa), Russian (pizda), Spanish (coño), and Swedish (fitta). Such terms, especially those relating to the vulva, may also be used as terms of abuse. Profanities related to testicles are less common and their function varies across languages. 
They may be used as interjections, such as in English (balls or bollocks), Italian, and Spanish (cojones). Danish uses a word for testicles as a term of abuse. Words for the buttocks are used as a term of disapproval in many languages, including English (ass or arse), French (cul), Polish (dupa), Russian (zhopa), and Spanish (culo). Similar words for the anus appear in languages like Danish (røvhul), English (asshole or arsehole), German (Arschloch), Icelandic (rassgat), Norwegian (rasshøl), and Polish (dupek). Excrement and related concepts are commonly invoked in profanity. European examples include shit in English, merde in French, Scheiße in German, and merda in Italian. An example in an East Asian language would be kuso in Japanese. Other subjects Illness has historically been used to swear by wishing a plague on others. The names of various diseases are used as profane words in some languages; pokkers ("the pox") appears in both Danish and Norwegian as an exclamation and an intensifier. Death is another common theme in Asian languages such as Cantonese. Terminology of mental illness has become more prominent as profanity in the Western world, with terms such as idiot and retard challenging one's mental competency. Profane phrases directed at the listener's mother exist across numerous major languages, though they are absent from Germanic languages with the exception of English. These phrases often include terms of abuse that implicate the subject's mother, such as son of a bitch in English or tā mā de in Mandarin. Russian profanity places heavy emphasis on the sexual conduct of the listener's female relatives, either by describing sexual activity involving them or suggesting that the listener engage in activities with them. Aboriginal Australian languages sometimes invoke one's deceased ancestors in profanity. The names of political ideologies are sometimes invoked as swear words by their opponents. Fascist is commonly used as an epithet in the modern era, replacing historical use of radical. Far-left groups have historically used words like capitalist and imperialist as terms of abuse, while anti-communist speakers use communist in the same manner. The use of political terms in a profane sense often leads to the term becoming less impactful or losing relevance as a political descriptor entirely. Words for animals can be used as terms of abuse despite not being inherently profane, commonly referencing some attribute of the animal. Examples in English include bitch to demean a woman or louse to describe someone unwanted. They may also be used in interjections, like the Italian porca miseria. Animal-related profanity is distinct from other forms in that it is used similarly across different languages. Terms for dogs are among the most common animal swears across languages, alongside those for cows, donkeys, and pigs. Swear words related to monkeys are common in Arabic and East Asian cultures. Slurs are words that target a specific demographic. These are used to project xenophobia and prejudice, often through the use of stereotypes. They typically develop in times of increased contact or conflict between different races or ethnic groups, including times of war between two or more nations. Terms for minority groups are sometimes used as swears. This can apply both to profane terms such as kike and to non-profane terms such as gay. Many of these are culture-specific. In a case of using the name of one group to demean another, Hun came to be associated with a brutish caricature of Germans, first during the Renaissance and again during World War I. 
Some terms for people of low class or status can become generically profane or derogatory; English examples include villain, lewd, and scum.
Grammar and function
Profanity is used to indicate the speaker's emotional state, and the negative associations of swear words mean they are often emotionally charged. Expressions of anger and frustration are the most common reason for swearing. Such expressions are associated with abusive profanity, which is the most negatively charged and is specifically chosen to insult or offend the subject. This may take the form of a direct insult, such as calling the subject an asshole, or of addressing the subject profanely, such as telling someone to fuck off; it can also be used to indicate contempt. Cathartic profanity is used as an expression of annoyance, and it is often considered less rude than profanity directed at a specific subject. Profanity can be used as a statement of agreement or disagreement, though disagreement is more common; the hell it is and my ass are examples of English profanities that indicate disagreement. The potent nature of swearing means that it can be used to gain attention, including the use of profanity to cause shock. In some circumstances, swearing can serve as a form of politeness, such as when a speaker gives positive reinforcement by describing something as pretty fucking good.
Propositional or controlled swearing is done consciously: speakers choose their wording and how to express it. This is more common with descriptive swearing. Non-propositional or reflexive swearing is done involuntarily as an emotional response to excitement or displeasure. Frequent swearing can become a habit, even when the speaker has no specific intention of being profane.
Profanity is often used as a slot filler that functions as a modifier, and modifying a noun with a swear commonly indicates dislike. A profane word can modify other words as an adjective, as in it's a bloody miracle, or as an adverb, as in they drove damn fast. One type of adverbial profanity is the modal adverb, as in no you fucking can't. Compound words can be formed to create new modifiers, such as pisspoor. Many European languages use profanity to add emphasis to question words, in the form of who the hell are you?, or with a preposition, in the form of what in God's name is that?.
Modifier profanities are frequently used as expletive attributives, intensifiers that put emphasis on specific ideas. These commonly take the form of interjections expressing strong emotion, such as the English bloody hell and for fuck's sake; such stand-alone profanities are among the most common in natural speech. Expletive infixation is the use of a profane word as an intensifier inside another word, such as modifying absolutely to become abso-fucking-lutely. Some languages use swear words that can generically replace nouns and verbs; this is most common in Russian.
Though profanity exists in nearly all cultures, there is variation in when it is used and how it affects the meaning of speech, and each language has unique profane phrases influenced by culture. Japanese is sometimes described as having no swear words, though it has a class of offensive expressions that are not based on taboos but are otherwise functionally equivalent to swears. One linguistic theory proposes that sound symbolism influences the pronunciation of profanities.
This includes the suggestion that profanities are more likely to include plosives, but the idea remains unstudied, especially outside of Indo-European languages.
The use of profanity is the most common way to express taboo ideas, and the dichotomy between its taboo nature and its prevalence in day-to-day life is studied as the "swearing paradox". Profanity is used casually in some social settings, where it can facilitate bonding and camaraderie, mark a social environment as informal, and mark the speaker as part of an in-group. The way speakers use profanity in social settings allows them to project their identity and personality through communication style, and in some circumstances it can be used to impress one's peers. Stylistic swearing is used to add emphasis or intensity to speech, which can emphasize an idea in an aggressive or authoritative fashion, make an idea memorable, or produce a comedic effect.
Profanity often presents as formulaic language, in which specific words can only be used in specific phrases, often developed through grammaticalization. Many of these phrases allow words to be swapped, producing variations such as what in the bloody heck, why in the flamin' hell, and how in the fuckin' hell. Profane phrases can be used as anaphoric pronouns, such as replacing him with the bastard in tell the bastard to mind his own business. They can similarly be used to support a noun instead of replacing it, as in John is a boring son of a bitch.
Though profanity is usually associated with taboo words, obscene non-verbal acts such as hand gestures may also be considered profane. Spitting in someone's direction has historically been seen as a strong insult, and exposure of certain body parts, often the genitals or buttocks, is also seen as profane in many parts of the world.
Though cursing often refers to the use of profanity in general, it can refer to more specific phrases of harm such as damn you or a pox on you. Historically, people swore by or to the ideas that they were invoking, instead of swearing at something. Oaths in which the speaker swears by something, such as by God, can be used as interjections or intensifiers, typically without religious connotation; this is especially common in Arabic. Self-immolating oaths, such as I'll be damned, involve speakers casting harm upon themselves, often as conditional statements based on whether something is true: I'll be damned if... Profanity directed at an individual can take the form of an unfriendly suggestion, as in the English go to hell and kiss my ass. Some profanities, such as your mother!, imply taboos or swear words without using them explicitly.
Social perception
Whether speech is profane depends on context, because what is taboo or impolite in one environment might not be in another. Swear words vary in their intensity, and speakers of a language may disagree over whether weaker swear words are actually profane. Isolated profanities are often seen as more profane than those used in context. The identity of the speaker affects how profanity is perceived, as different cultures may hold classes, sexes, age groups, and other identities to different standards. Profanity is often seen as more socially acceptable when coming from men, and it is commonly associated with machismo. Profanity varies in how it affects a speaker's credibility: it can seem unprofessional in some circumstances, but it can make an argument more persuasive in others.
Milder words can become more impactful in particular circumstances; cheat may be more provocative in schools or gambling clubs, and informer replaces crook as a term of abuse for a dishonest person in a criminal setting. Profanity is often associated with lower-class occupations, such as those of soldiers and carters.
Expectancy violations theory holds that expectations about a speaker's behavior come from impressions based not only on the speaker's identity but also on how that speaker typically communicates and on the socially expected way to speak to a given listener. Swearing in formal contexts is a greater violation of expectations than swearing in informal conversation, and whether the profanity is spoken in public or in private is also a factor in social acceptability. Conversations that involve profanity are correlated with other informal manners of speech, such as slang, humor, and discussion of sexuality.
Native speakers of a language can intuitively decide what language is appropriate for a given context. Those still learning a language, such as children and non-native speakers, are more likely to use profane language without realizing that it is profane. Acceptable environments for profanity are learned in childhood, as children find themselves chastised for swearing in some places more than others. Swearing is often milder among young children, who place more stigma on terms that adults do not consider profane, like fart or dork; they are also more likely to use the mildest terms as swear words, such as pooh-pooh. Adolescents develop an understanding of double meanings in terms like balls.
The severity of a swear word may decline over time as it is repeated. In some cases, slurs can be reclaimed by the targeted group when they are used ironically or in a positive context, such as queer to refer to the LGBTQ community. People who speak multiple languages often have stronger emotional associations with profanity in their native languages than in languages acquired later, and the severity of a profane term can vary between dialects of the same language.
Publishers of dictionaries must take profanity into consideration when deciding which words to include, especially when they are subject to obscenity laws, and they may be wary of appearing to endorse the use of profane language through its inclusion. Slang dictionaries have historically covered profanity in lieu of more formal dictionaries.
In some cultures, there are situations where profanity is good etiquette. A tradition in some parts of China holds that a bride is expected to speak profanely to her groom's family in the days before the wedding, and one Aboriginal Australian culture uses profanity to denote class.
Censorship and avoidance
The idea of censoring taboo ideas exists in all cultures. Swearing inappropriately can be punished socially, and public swearing can bring about legal consequences. There is disagreement as to whether freedom of speech should permit all forms of profane speech, including hate speech, or whether such speech can be justifiably restricted. Censorship is used to restrict or penalize profanity, and governments may implement laws that disallow certain acts of profanity, including legal limits on the broadcast of profanity over radio or television. Broadcasting has unique considerations as to what is acceptable, including its presence in the home and children's access to broadcasts. Profanity may be avoided when discussing taboo subjects through euphemisms.
Euphemisms were historically used to avoid invoking the names of malevolent beings. They are commonly expressed as metaphors, such as make love or sleep with as descriptors of sexual intercourse. Euphemisms can be alternate descriptors, such as white meat instead of breast meat, or generic terms, such as unmentionables. Minced oaths are euphemisms that modify swear words until they are no longer profane, such as darn instead of damn in English. Substitution is another form of euphemism, with English examples including the replacement of fuck with the f-word or effing and the use of "four-letter words" to refer to profanity in general. Chinese and some Southeast Asian languages use puns and sound-alikes to create alternate swear words; the Chinese word for bird, 鸟 (niǎo), rhymes with the Chinese word for penis and is frequently invoked as a swear. The Cockney dialect of English uses rhyming slang to alter terms, including profanity: titty is rhymed as Bristol city, which is then abbreviated as bristols.
Speakers and authors may engage in self-censorship under legal or social pressure. In the 21st century, censorship through social pressure is associated with political correctness in Western society. This has led to the intentional creation of new euphemisms to avoid terms that may be stigmatizing; some become widely accepted, such as substance abuse for drug addiction, while others are ignored or derided, such as differently abled for disabled.
Physiology and neurology
The brain processes profanity differently than it processes other forms of language. Intentional, controlled swearing is associated with the brain's left hemisphere, while reflexive swearing is associated with the right hemisphere. Swearing engages both the language-processing parts of the brain (the left frontal and temporal lobes) and the emotion-processing parts (the right cerebrum and the amygdala). The association of emotional swearing with the amygdala and other parts of the limbic system suggests that some uses of profanity are related to the fight-or-flight response. Profanity requires more mental processing than other forms of language, and it is easier to remember when recalling a conversation or other speech.
Exposure to profanity leads to higher levels of arousal and can cause increases in heart rate and electrodermal activity as part of a fight-or-flight response. Swearing has also been shown to increase pain tolerance, especially among people who do not regularly swear.
Compulsive swearing is called coprolalia, and it is associated with neurological conditions such as Tourette syndrome, dementia, and epilepsy. The ability to use profanity can remain intact even when neurological trauma causes aphasia, and frequent swearing is more common among people with damage to the brain or other parts of the nervous system. Damage to the ventromedial prefrontal cortex can impair the ability to control one's use of profanity and other socially inappropriate behaviors; damage to Broca's area and other language-processing regions of the brain can similarly make people prone to outbursts. Damage to the right hemisphere limits the ability to understand and regulate the emotional content of one's speech.
Legality
Australia
In every Australian state and territory, it is a crime to use offensive, indecent, or insulting language in or near a public place. These offences are classed as summary offences, meaning that they are usually tried before a local or magistrates' court.
Police also have the power to issue fixed penalty notices to alleged offenders. In some Australian jurisdictions, it is a defence to have had "a reasonable excuse" to conduct oneself in the manner alleged.
Brazil
In Brazil, the Penal Code does not penalize public profanity in itself. However, direct offenses against a person can be considered a crime against honor, with a penalty of imprisonment of one to three months or a fine. The analysis of the offence is considered subjective, depending on the context of the discussion and the relationship between the parties.
Canada
Section 175 of Canada's Criminal Code makes it a criminal offence to "cause a disturbance in or near a public place" by "swearing […] or using insulting or obscene language". Provinces and municipalities may also have their own laws against swearing in public; for instance, the Municipal Code of Toronto bars "profane or abusive language" in public parks. In June 2016, a man in Halifax, Nova Scotia, was arrested for using profane language at a protest against Bill C-51.
India
Sections 294A and 294B of the Indian Penal Code provide for punishing individuals who use inappropriate or obscene words (spoken or written) in public with the malicious and deliberate intent to outrage religious feelings or beliefs. In February 2015, a local court in Mumbai asked police to file a first information report against 14 Bollywood celebrities who took part in the stage show of All India Bakchod, a controversial comedy show known for vulgar, profanity-laden content. In May 2019, during the election campaign, Indian Prime Minister Narendra Modi listed the abusive words that the opposition Congress party had used against him and his mother during their campaign.
In January 2016, a Mumbai-based communications agency launched a campaign against profanity and abusive language called "Gaali free India" (gaali is the Hindi word for profanity). Using creative advertisements, it called upon people to use swachh (clean) language along the lines of the Swachh Bharat Mission for nationwide cleanliness. The campaign influenced other news media outlets, which further raised the issue of abusive language in society, especially incest-based abuse directed at women such as "motherfucker". In the growing market for OTT content, several Indian web series have used profanity and expletives to gain the attention of audiences.
New Zealand
In New Zealand, the Summary Offences Act 1981 makes it illegal to use "indecent or obscene words in or within hearing of any public place". However, if the defendant has "reasonable grounds for believing that his words would not be overheard", then no offence is committed. Also, "the court shall have regard to all the circumstances pertaining at the material time, including whether the defendant had reasonable grounds for believing that the person to whom the words were addressed, or any person by whom they might be overheard, would not be offended".
Pakistan
Political leaders in Pakistan have repeatedly been criticized for using profane, abusive language. There is no legislation to punish abusers, and the problem has worsened, with abusive language used in parliament and even against women.
Philippines
The Department of Education in the Philippine city of Baguio observed that while cursing was prohibited in schools, children were not following this prohibition at home.
Thus, as part of its anti-profanity initiative, the Baguio city government passed an anti-profanity ordinance in November 2018 that prohibits cursing and profanity in areas of the city frequented by children. The move was welcomed by educators and the Department of Education in Cordillera.
Russia
Swearing in public is an administrative offence in Russia, although law enforcement rarely targets people for swearing. The punishment is a fine of 500–1,000 roubles or up to 15 days' imprisonment.
United Kingdom
In public
Swearing, in and of itself, is not usually a criminal offence in the United Kingdom, although in context it may form a component of a crime. It may, however, be a criminal offence in Salford Quays under a public spaces protection order that outlaws the use of "foul and abusive language" without specifying any further component to the offence, although it is unclear whether every instance of swearing is covered; Salford City Council claims that the defence of "reasonable excuse" allows all the circumstances to be taken into account. In England and Wales, swearing in public where it is seen to cause harassment, alarm, or distress may constitute an offence under section 5(1) and (6) of the Public Order Act 1986. In Scotland, the similar common law offence of breach of the peace covers issues causing public alarm and distress.
In the workplace
In the United Kingdom, swearing in the workplace can be an act of gross misconduct under certain circumstances, in particular when it accompanies insubordination against a superior or humiliation of a subordinate employee. In other cases, however, it may not be grounds for instant dismissal. According to a UK site on work etiquette, the "fact that swearing is a part of everyday life means that we need to navigate away through a day in the office without offending anyone, while still appreciating that people do swear. Of course, there are different types of swearing and, without spelling it out, you really ought to avoid the 'worst words' regardless of who you're talking to". Within the UK, the appropriateness of swearing varies largely by a person's industry of employment, though it is still not typically used in situations where employees of a higher position than oneself are present.
In 2006, The Guardian reported that "36% of the 308 UK senior managers and directors having responded to a survey accepted swearing as part of workplace culture", but warned about specific inappropriate uses of swearing, such as when it is discriminatory or part of bullying behaviour. The article ended with a quotation from Ben Wilmott of the Chartered Institute of Personnel and Development: "Employers can ensure professional language in the workplace by having a well-drafted policy on bullying and harassment that emphasises how bad language has potential to amount to harassment or bullying."
United States
In the United States, courts have generally ruled that the government does not have the right to prosecute someone solely for the use of an expletive, which would violate the right to free speech enshrined in the First Amendment. On the other hand, courts have upheld convictions of people who used profanity to incite riots, harass people, or disturb the peace. In 2011, a North Carolina statute that made it illegal to use "indecent or profane language" in a "loud and boisterous manner" within earshot of two or more people on any public road or highway was struck down as unconstitutional.
In 2015, the city of Myrtle Beach, South Carolina, passed an ordinance making profane language punishable with fines of up to $500 and/or 30 days in jail; $22,000 was collected from these fines in 2017 alone.
Religious views
Judaism
Rabbi Yisroel Cotlar wrote on Chabad.org that Judaism forbids the use of profanity as contradicting the Torah's command to "Be holy", which revolves around the concept of separating oneself from worldly practices (including the use of vulgar language). The Talmud teaches that the words that leave the mouth make an impact on the heart and mind; Cotlar stated that the use of profanity thus causes the regression of the soul. Judaism therefore teaches that shemirat halashon (guarding one's tongue) is one of the first steps to spiritual improvement.
Christianity
Various Christian writers have condemned the use of "foul language" as sinful, a position held since the time of the early Church. To this end, the Bible includes commands such as "Don't use foul or abusive language. Let everything you say be good and helpful, so that your words will be an encouragement to those who hear them" (Ephesians 4:29) and "Let there be no filthiness nor foolish talk nor crude joking, which are out of place, but instead let there be thanksgiving" (Ephesians 5:4). These teachings are echoed in Ecclesiasticus 20:19, Ecclesiasticus 23:8-15, and Ecclesiasticus 17:13-15, all of which are found in the Deuterocanon/Apocrypha. Jesus taught that "by your words you will be justified, and by your words you will be condemned" (cf. Matthew 12:36-37), with revilers listed among the damned in 1 Corinthians 6:9-10. Profanity violating the dictum "Thou shalt not take the name of the Lord thy God in vain", one of the Ten Commandments, is regarded as blasphemy, which Christians consider "an affront to God's holiness". Paul the Apostle defines ridding one's lips of filthy language as evidence of living in a relationship with Jesus (cf. Colossians 3:1-10). The Epistle to the Colossians teaches that controlling the tongue "is the key to gaining mastery over the whole body", and the Didache 3:3 includes the use of "foul language" as part of the lifestyle that puts one on the way to eternal death; the same document commands believers not to use profanity as it "breeds adultery". John Chrysostom, an early Church Father, taught that those who use profanity should repent of the sin. The Epistle of James holds that "blessing God" is the primary function of the Christian's tongue, not speaking foul language. Saint Tikhon of Zadonsk, a bishop of the Eastern Orthodox Church, lambasted profanity and blasphemy, teaching that it is "extremely unbefitting [for] Christians" and that believers should guard themselves from ever using it.
Islam
According to Ayatullah Ibrahim Amini, the use of "bad words" is haram in Islam. Additionally, impertinence and slander are considered immoral acts.
See also
Notes
References
Further reading
Bryson, Bill (1990) The Mother Tongue
Johnson, Sterling (2004) Watch Your F*cking Language
McEnery, Tony (2006) Swearing in English: Bad Language, Purity and Power from 1586 to the Present, Routledge
O'Connor, Jim (2000) Cuss Control
Sagarin, Edward (1962) The Anatomy of Dirty Words
Sheidlower, Jesse (2009) The F-Word (3rd ed.)
Spears, Richard A. (1990) Forbidden American English
Wajnryb, Ruth (2005) Expletive Deleted: A Good Look at Bad Language
External links
Most vulgar words in The Online Slang Dictionary (as voted by visitors)
Profanity
Blasphemy
Obscenity
Censorship
Connotation
Harassment and bullying
Profanity
Biology
8,103
70,720,694
https://en.wikipedia.org/wiki/Sebacina%20sparassoidea
Sebacina sparassoidea, the white coral jelly fungus, is a species of fungus in the family Sebacinaceae. Its coral-like basidiocarps (fruit bodies) are typically a yellowish off-white and have a gelatinous, elastic texture. Found in eastern North America in humid environments among rotting logs of deciduous trees, particularly oaks, it is most often observed growing from August to September.
Taxonomy
The white coral jelly fungus was first described in 1873 by British mycologist Miles Joseph Berkeley as a variety, var. reticulatum, of Corticium tremellinum. In 1908 it was raised to species level and placed in the genus Tremella, as Tremella reticulata, by American mycologist William Gilson Farlow. In 2003 British mycologist Peter Roberts re-examined the species and transferred it to the genus Sebacina. Because a different species (Sebacina reticulata Pat.) already bore the epithet reticulata, the new combination in Sebacina was based on the earliest available synonym, giving the name Sebacina sparassoidea.
Description
Fruit bodies of the white coral jelly fungus are composed of multiple erect, coalescing, hollow lobes or branches arising from a central point. They are roughly 3 to 20 cm in diameter and 3 to 12 cm tall. The spore print is white. Microscopically, the hyphae lack clamp connections, the basidia are septate, and the basidiospores are ellipsoid, measuring 9–13 × 6–7 μm.
Edibility
Sources disagree about the species' edibility; however, it is not considered dangerous, nor is it of exceptional culinary value.
References
Sebacinales
Fungus species
Sebacina sparassoidea
Biology
364