Dataset columns:
id — int64 (values range from 580 to 79M)
url — string (lengths 31 to 175)
text — string (lengths 9 to 245k)
source — string (lengths 1 to 109)
categories — string (160 distinct classes)
token_count — int64 (values range from 3 to 51.8k)
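A minimal Python sketch for working with rows of this schema. The dataset identifier is not given in this excerpt, so loading is left abstract: the `rows` object is assumed (hypothetically) to be an already-loaded list of dicts with the fields above.

```python
# Sketch only: `rows` is assumed to be a pre-loaded list of dicts with the
# fields id, url, text, source, categories, token_count (the dataset
# identifier is not specified in this excerpt, so loading is omitted).

def rows_in_category(rows: list[dict], category: str) -> list[dict]:
    """Select rows whose comma-separated categories field contains `category`."""
    return [r for r in rows if category in r["categories"].split(",")]

def rows_by_length(rows: list[dict], min_tokens: int, max_tokens: int) -> list[dict]:
    """Select rows whose token_count falls within [min_tokens, max_tokens]."""
    return [r for r in rows if min_tokens <= r["token_count"] <= max_tokens]

# Example (hypothetical): astronomy articles between 300 and 1,000 tokens.
# astro = rows_by_length(rows_in_category(rows, "Astronomy"), 300, 1000)
```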
4,058,157
https://en.wikipedia.org/wiki/Roughing%20filter
Roughing filters provide pretreatment for turbid water or simple, low-maintenance treatment when high water quality is not needed. External links SANDEC page Blue filter Inc. (commercial site) Rejuvenation of SSF using HRF technique Appropriate technology Water filters
Roughing filter
Chemistry
57
57,061,908
https://en.wikipedia.org/wiki/Mary%20Ryan%20%28materials%20scientist%29
Mary Patricia Ryan is a Professor of Materials Science at Imperial College London and a Fellow of the Royal Academy of Engineering. Education Ryan completed her undergraduate and postgraduate studies at the University of Manchester. Her PhD was on using "in-situ ECSTM to study the formation of ultra-thin surface oxides on base metals", and she managed to show for the first time that these surface oxides have crystalline phases. She spent three years at Brookhaven National Laboratory, New York, where she developed in situ electrochemical systems using synchrotron radiation-based techniques. Career and research Ryan is an expert in electrochemistry and interfacial material science. Ryan joined Imperial College London as a lecturer in 1998. Her research group explores the mechanisms of corrosion, new protective materials and materials with thermal management capabilities. She studies the process of electrochemical deposition, the stabilities of metals and the formation processes of metal and oxide nanostructures. She pioneered the use of synchrotron X-rays to study reactive electrochemical systems, including the stability of nanostructures. In 2002, she published the seminal paper "Why stainless steel corrodes" in Nature. In 2012, she joined Amy Cruickshank to advise on how to preserve the Dornier Do 17 ('The Flying Pencil'), which was discovered in Goodwin Sands. She also contributed to the 2016 World Economic Forum, where she discussed how nano-composite materials could use heat from a vehicle's engine to power air conditioning. Her recent work focusses on how nanomaterials interact with biological systems, including the toxicity of nanoparticles and the development of plasmonic materials for biosensing. She works with the heritage sector to develop new materials and conservation techniques. She has worked with the Science Museum, the Royal Air Force Museum London and the Victoria and Albert Museum. She collaborates extensively with Dr Eleanor Schofield, Head of Conservation and Collections Care at the Mary Rose Trust. In 2017, she was appointed Vice Dean of Research for the Faculty of Engineering at Imperial College London. She is the Director of the Imperial-Shell University Technology Centre in Advanced Interfacial Materials Science. Ryan is a member of the London Centre for Nanotechnology. She is an editor for Nature's Materials Degradation journal. She was elected a Fellow of the Royal Academy of Engineering in 2015. She is a Fellow of the Institute of Materials, Minerals and Mining. She is a member of the Strategic Advisory Network of the Engineering and Physical Sciences Research Council. She is a Trustee of the Heritage Science Forum. Ryan was appointed Commander of the Order of the British Empire (CBE) in the 2022 Birthday Honours for services to education and materials science and engineering. References Living people Year of birth missing (living people) British materials scientists Alumni of the University of Manchester Academics of Imperial College London Fellows of the Royal Academy of Engineering Female fellows of the Royal Academy of Engineering 21st-century British women engineers Women materials scientists and engineers Commanders of the Order of the British Empire Electrochemists Fellows of the Institute of Materials, Minerals and Mining
Mary Ryan (materials scientist)
Chemistry,Materials_science,Technology
628
3,800,537
https://en.wikipedia.org/wiki/%C3%89ric%20Fombonne
Éric Fombonne is a French psychiatrist and epidemiologist based in Montreal. Career Fombonne trained in Paris and was subsequently appointed as a career research scientist in Paris, at the Institut National de la Santé et de la Recherche Médicale (INSERM). In the early 1990s, he joined Professor Rutter's MRC Child Psychiatry Unit at the Institute of Psychiatry at King's College London, where he held a Senior Lecturer post and an Honorary Consultant position at the Maudsley Hospital. In 1997, he was promoted to Reader in Epidemiological Psychiatry at the Institute of Psychiatry. In 2001, he was appointed tenured full professor of Psychiatry at McGill University in Canada. From 2001 to 2009, he directed the child psychiatry division at McGill University and the psychiatry department at the Montreal Children's Hospital, where he played a key role in the launch of its autism clinical and research program. Fombonne has also held the Canada Research Chair in child psychiatry since 2001. He is a practicing child psychiatrist and has clinics at the Montreal Children's Hospital for subjects with autism and other developmental disorders. Fombonne was president of the Association of Professors of Child and Adolescent Psychiatry of Canada (APCAPC). He was Associate Editor of the Journal of Autism and Developmental Disorders from 1994 to 2003, and is a member of several editorial boards and a consultant for several scientific organizations, such as the NIH/NIMH, and foundations. Over the years, he has been a supportive member of several family associations, including Autism France and Autism Europe. Research His research focuses on epidemiological investigations of childhood mental illness and related risk factors, with a particular focus on the epidemiology of autism. He conducted the first population-based survey of child psychiatric disorders among school-aged children in France. In the UK, he conducted research on childhood depression and its long-term outcomes. In France and in the UK, and later in Canada, he performed several surveys of autism in childhood populations. He and his colleagues were credited with demonstrating that there is no epidemiological evidence of a link between the MMR vaccine or mercury-containing vaccines and autism, as postulated by other researchers including Andrew Wakefield. Fombonne was subsequently involved as a key scientific expert witness in several trials in US courts and class actions. He was influential in explicating the lack of evidence linking thimerosal in vaccines, or specific vaccine types such as MMR, to autism in children. He testified on behalf of the US DHHS and the US Department of Justice in well-publicized trials in front of the Vaccine Injury Compensation Court in Washington, DC between 2006 and 2008. He has conducted several epidemiological studies of child and adolescent psychopathology, looking at overall mental health, depression, eating and substance use disorders. An early focus of his work was on secular trends in the incidence of youth mental health disorders, and factors that might cause these changes over time. One of the major studies conducted by Fombonne examined depression and suicidal behaviors, which linked alcohol abuse to increased suicidal tendencies in boys, using data on 6,000 subjects. He has also been involved in long-term outcome studies of child and adolescent depression. 
At McGill University, Fombonne has consolidated the Autism Spectrum Disorder program at the Montreal Children's Hospital since his appointment there in 2001. He currently heads an autism research program directed at evaluating environmental risk factors, such as vaccines and environmental neurotoxicants, and investigating genetic risks associated with the heritability of autism. He has also been involved in several molecular genetic studies of autism, and in outcome studies of autism spectrum disorders. Fombonne recently conducted a meta-analysis of available epidemiological evidence of the prevalence of autism. His review concluded that the prevalence rate for autism is 25/10,000 and the rate of all pervasive developmental disorders around 90/10,000. However, he also noted several more recent studies indicating a much higher prevalence rate than this with a broader inclusion basis. He attributes the apparent rise in autism cases to wider recognition of the condition, and argues that claims of an 'autism epidemic' are unfounded unless proven otherwise. In 2001, he told the BBC "That rates in recent surveys are substantially higher than 30 years ago merely reflects the adoption of a much broader concept of autism, a recognition of autism among normally intelligent subjects and an improved identification of persons with autism." However, he also states that a real change in the incidence of autism in human populations may also have contributed to the upward trends and that environmental risk factors that may influence these changes should be examined. Publications Fombonne has written over 260 scientific reports in peer reviewed journals and 40 book chapters. He was associate editor of the Journal of Autism and Developmental Disorders from 1994 to 2003. Selected works Fombonne E. (1994) The Chartres study. I. Prevalence of psychiatric disorders among French school-aged children. British Journal of Psychiatry, 164, 69–79. Fombonne E, Wostear G, Cooper V, Harrington R, Rutter M. (2001). The Maudsley long-term follow-up study of adolescent depression. I. Adult rates of psychiatric disorders. British Journal of Psychiatry, 179: 210–217. Fombonne E, Wostear G, Cooper V, Harrington R, Rutter M. (2001). The Maudsley long-term follow-up study of adolescent depression. II. Suicidality, criminality and social dysfunction in adulthood. British Journal of Psychiatry, 179: 218–223. Fombonne E, du Mazaubrun C, Cans H, Grandjean H. (1997). Autism and associated medical disorders in a large French epidemiological sample. Journal of the American Academy of Child and Adolescent Psychiatry, 36: 1561–1569. Lazoff T, Zhong LH, Piperni T, Fombonne E. (2010) Prevalence of Pervasive Developmental Disorders among children at the English Montreal School Board. Canadian Journal of Psychiatry, 55, 11: 715–20. Fombonne E, Chakrabarti S. (2001). No evidence for a new variant of Measles-Mumps-Rubella-induced autism. Pediatrics, 108 (4) e58. Chakrabarti S, Fombonne E. (2001). Pervasive developmental disorders in preschool children. Journal of the American Medical Association (JAMA), 285: 3093–3099. Smeeth L, Cook C, Fombonne E, Heavey L, Rodrigues LC, Smith PG, Hall AJ. (2004) MMR vaccination and pervasive developmental disorders: a case-control study. Lancet, 364, 963–969. Fombonne E, Zakarian R, Bennett A, Meng L, McLean-Heywood D. (2006) Pervasive developmental disorders in Montréal, Québec: prevalence and links with immunizations. Pediatrics, 118: 139–150. D'Souza Y, Fombonne E, Ward B. 
(2006) No evidence of persisting measles virus in peripheral blood mononuclear cells from children with autism spectrum disorder. Pediatrics, 118, 1664–1675. Fombonne E. (1994). Increased rates of depression: update of epidemiological findings and analytical problems. Acta Psychiatrica Scandinavica, 90: 145–156. Fombonne E. (1995). Anorexia nervosa: no evidence of an increase. British Journal of Psychiatry, 166, 462–471. Fombonne E. (1998). Suicidal behaviours in vulnerable adolescents: time trends and their correlates. British Journal of Psychiatry, 173, 154–159. Fombonne E., Quirke S., Hagen A. (2011): Epidemiology of pervasive developmental disorders. In: Autism Spectrum Disorders. Amaral DG, Dawson G, and Geschwind DH (Eds). Oxford University Press. pp. 90–111. Personal life As of 2001, Fombonne was married to Rebecca Fuhrer. They have three children. References External links BBC.co.uk - 'Autism rates "not rising"', BBC (February 15, 2001) CAIRNE-Sitr.com - 'One in 165 children now estimated to have pervasive developmental disorder, three times greater than previously thought', Eric Fombonne, MD, FRCPsych, Canadian Autism Intervention and Research Network Chairs.gc.ca - 'Eric Fombonne', Canada Research Chair in Child and Adolescent Psychiatry, Canada Research Chairs CPA-APC.org - 'Modern Views of Autism' (opinion), Eric Fombonne, MD, FRCPsych, Canadian Journal of Psychiatry (September 2003) MUHC.ca - 'Dr. Eric Fombonne elected to head two key associations', McGill University Research Center (September 24, 2002) UCDavis.edu - 'Eric Fombonne, M.D.: M.I.N.D. Institute Distinguished Lecturer Series' (December 14, 2005) UoGuelph.ca - 'The Prevalence of Autism' (opinion), Eric Fombonne, MD, Journal of the American Medical Association (JAMA), vol 289, no 1, p 49 (January 1, 2003) Living people Year of birth missing (living people) Academics of King's College London Autism researchers Canada Research Chairs Canadian psychiatrists French public health doctors French psychiatrists Academic staff of McGill University Physicians from Paris Researchers in alcohol abuse Vaccinologists
Éric Fombonne
Biology
2,020
31,225,305
https://en.wikipedia.org/wiki/Determinantal%20point%20process
In mathematics, a determinantal point process is a stochastic point process, the probability distribution of which is characterized as a determinant of some function. They are suited for modelling global negative correlations, and for efficient algorithms of sampling, marginalization, conditioning, and other inference tasks. Such processes arise as important tools in random matrix theory, combinatorics, physics, machine learning, and wireless network modeling. Introduction Intuition Consider some positively charged particles confined in a one-dimensional box. Due to electrostatic repulsion, the locations of the charged particles are negatively correlated: if one particle lies in a small segment, the other particles are less likely to lie in the same segment. The strength of repulsion between two particles at locations x and y can be characterized by a kernel function K(x, y). Formal definition Let Λ be a locally compact Polish space and μ be a Radon measure on Λ. In most concrete applications, these are Euclidean space with its Lebesgue measure. A kernel function is a measurable function K: Λ² → ℂ. We say that X is a determinantal point process on Λ with kernel K if it is a simple point process on Λ with a joint intensity or correlation function (which is the density of its factorial moment measure) given by ρ_n(x_1, …, x_n) = det[K(x_i, x_j)]_{1 ≤ i, j ≤ n} for every n ≥ 1 and x_1, …, x_n ∈ Λ. Properties Existence The following two conditions are necessary and sufficient for the existence of a determinantal random point process with intensities ρ_k. Symmetry: ρ_k is invariant under the action of the symmetric group S_k; thus ρ_k(x_σ(1), …, x_σ(k)) = ρ_k(x_1, …, x_k) for every permutation σ ∈ S_k. Positivity: For any N, and any collection of measurable, bounded functions φ_k: Λ^k → ℝ, k = 1, …, N, with compact support: if φ_0 + Σ_{k=1}^{N} Σ_{i_1 ≠ ⋯ ≠ i_k} φ_k(x_{i_1}, …, x_{i_k}) ≥ 0 for all k and all (x_1, …, x_k), then φ_0 + Σ_{k=1}^{N} ∫_{Λ^k} φ_k(x_1, …, x_k) ρ_k(x_1, …, x_k) dμ(x_1) ⋯ dμ(x_k) ≥ 0. Uniqueness A sufficient condition for the uniqueness of a determinantal random process with joint intensities ρ_k is that, for every bounded Borel set A ⊆ Λ, Σ_{k=0}^{∞} ((1/k!) ∫_{A^k} ρ_k(x_1, …, x_k) dμ(x_1) ⋯ dμ(x_k))^{−1/k} = ∞. Examples Gaussian unitary ensemble The eigenvalues of a random m × m Hermitian matrix drawn from the Gaussian unitary ensemble (GUE) form a determinantal point process on ℝ with kernel K_m(x, y) = Σ_{k=0}^{m−1} ψ_k(x) ψ_k(y), where ψ_k is the k-th oscillator wave function, defined by ψ_k(x) = (2^k k! √π)^{−1/2} e^{−x²/2} H_k(x), and H_k is the k-th Hermite polynomial. Airy process The Airy process has kernel function K(x, y) = (Ai(x) Ai′(y) − Ai′(x) Ai(y)) / (x − y), where Ai is the Airy function. This process arises from rescaled eigenvalues near the spectral edge of the Gaussian unitary ensemble. It was introduced in 1992. Poissonized Plancherel measure The poissonized Plancherel measure on integer partitions (and therefore on Young diagrams) plays an important role in the study of the longest increasing subsequence of a random permutation. The point process corresponding to a random Young diagram, expressed in modified Frobenius coordinates, is a determinantal point process on the half-integer lattice ℤ + ½ with the discrete Bessel kernel, which is expressed in terms of J, the Bessel function of the first kind, and θ, the mean used in poissonization. This serves as an example of a well-defined determinantal point process with non-Hermitian kernel (although its restriction to the positive and negative semi-axis is Hermitian). Uniform spanning trees Let G be a finite, undirected, connected graph, with edge set E. Choose some arbitrary set of orientations for the edges E, and for each resulting oriented edge e, define I_e to be the projection of a unit flow along e onto the subspace of ℓ²(E) spanned by star flows. Then the uniformly random spanning tree of G is a determinantal point process on E, with kernel K(e, f) = ⟨I_e, I_f⟩ for e, f ∈ E. References Point processes
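The defining determinant formula is easy to check numerically on a finite ground set, where a DPP with marginal kernel K satisfies P(A ⊆ X) = det(K_A), the determinant of the submatrix of K indexed by A. A minimal Python sketch (toy kernel chosen by hand for illustration; numpy assumed):

```python
import numpy as np

# A discrete DPP on ground set {0, ..., n-1} is specified by a marginal
# kernel K (n x n, symmetric, eigenvalues in [0, 1]).  Defining property:
# P(A is a subset of the random set X) = det(K_A), where K_A is the
# submatrix of K with rows and columns indexed by A.

def inclusion_probability(K: np.ndarray, A: list[int]) -> float:
    """Probability that every point of A appears in a sample of the DPP."""
    K_A = K[np.ix_(A, A)]
    return float(np.linalg.det(K_A))

# Toy 3-point kernel (invented for the example; eigenvalues lie in [0, 1]).
K = np.array([[0.5, 0.3, 0.0],
              [0.3, 0.5, 0.3],
              [0.0, 0.3, 0.5]])

p0 = inclusion_probability(K, [0])       # marginal: K[0, 0] = 0.5
p01 = inclusion_probability(K, [0, 1])   # joint: det = 0.25 - 0.09 = 0.16
print(p0, p01)
# Because K[0, 1] != 0, the joint probability (0.16) is smaller than the
# product of marginals (0.25): the negative correlation the article describes.
```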
Determinantal point process
Mathematics
732
66,675
https://en.wikipedia.org/wiki/Light%20curve
In astronomy, a light curve is a graph of the light intensity of a celestial object or region as a function of time, typically with the magnitude of light received on the y-axis and with time on the x-axis. The light is usually in a particular frequency interval or band. Light curves can be periodic, as in the case of eclipsing binaries, Cepheid variables, other periodic variables, and transiting extrasolar planets; or aperiodic, like the light curve of a nova, cataclysmic variable star, supernova, microlensing event, or binary as observed during occultation events. The study of the light curve, together with other observations, can yield considerable information about the physical process that produces it or constrain the physical theories about it. Variable stars Graphs of the apparent magnitude of a variable star over time are commonly used to visualise and analyse their behaviour. Although the categorisation of variable star types is increasingly done from their spectral properties, the amplitudes, periods, and regularity of their brightness changes are still important factors. Some types such as Cepheids have extremely regular light curves with exactly the same period, amplitude, and shape in each cycle. Others such as Mira variables have somewhat less regular light curves with large amplitudes of several magnitudes, while the semiregular variables are less regular still and have smaller amplitudes. The shapes of variable star light curves give valuable information about the underlying physical processes producing the brightness changes. For eclipsing variables, the shape of the light curve indicates the degree of totality, the relative sizes of the stars, and their relative surface brightnesses. It may also show the eccentricity of the orbit and distortions in the shape of the two stars. For pulsating stars, the amplitude or period of the pulsations can be related to the luminosity of the star, and the light curve shape can be an indicator of the pulsation mode. Supernovae Light curves from supernovae can be indicative of the type of supernova. Although supernova types are defined on the basis of their spectra, each has typical light curve shapes. Type I supernovae have light curves with a sharp maximum and gradually decline, while Type II supernovae have less sharp maxima. Light curves are helpful for classification of faint supernovae and for the determination of sub-types. For example, the type II-P (for plateau) have similar spectra to the type II-L (linear) but are distinguished by a light curve where the decline flattens out for several weeks or months before resuming its fade. Planetary astronomy In planetary science, a light curve can be used to derive the rotation period of a minor planet, moon, or comet nucleus. From the Earth there is often no way to resolve a small object in the Solar System, even in the most powerful of telescopes, since the apparent angular size of the object is smaller than one pixel in the detector. Thus, astronomers measure the amount of light produced by an object as a function of time (the light curve). The time separation of peaks in the light curve gives an estimate of the rotational period of the object. The difference between the maximum and minimum brightnesses (the amplitude of the light curve) can be due to the shape of the object, or to bright and dark areas on its surface. For example, an asymmetrical asteroid's light curve generally has more pronounced peaks, while a more spherical object's light curve will be flatter. 
This allows astronomers to infer information about the shape and spin (but not size) of asteroids. Asteroid lightcurve database Light curve quality code The Asteroid Lightcurve Database (LCDB) of the Collaborative Asteroid Lightcurve Link (CALL) uses a numeric code to assess the quality of a period solution for minor planet light curves (it does not necessarily assess the actual underlying data). Its quality code parameter U ranges from 0 (incorrect) to 3 (well-defined): U = 0 → Result later proven incorrect. U = 1 → Result based on fragmentary light curve(s), may be completely wrong. U = 2 → Result based on less than full coverage. Period may be wrong by 30 percent or ambiguous. U = 3 → Secure result within the precision given. No ambiguity. U = n.a. → Not available. Incomplete or inconclusive result. A trailing plus sign (+) or minus sign (−) is also used to indicate a slightly better or worse quality than the unsigned value. Occultation light curves The occultation light curve is often characterised as binary, where the light from the star is terminated instantaneously, remains constant for the duration, and is reinstated instantaneously. The duration is equivalent to the length of a chord across the occulting body. Circumstances where the transitions are not instantaneous are: when either the occulting or occulted body is double, e.g. a double star or double asteroid, then a step light curve is observed; when the occulted body is large, e.g. a star like Antares, then the transitions are gradual; when the occulting body has an atmosphere, e.g. the moon Titan. The observations are typically recorded using video equipment and the disappearance and reappearance timed using a GPS disciplined Video Time Inserter (VTI). Occultation light curves are archived at the VizieR service. Exoplanet discovery Periodic dips in a star's light curve graph could be due to an exoplanet passing in front of the star that it is orbiting. When an exoplanet passes in front of its star, light from that star is temporarily blocked, resulting in a dip in the star's light curve. These dips are periodic, as planets periodically orbit a star. Many exoplanets have been discovered via this method, which is known as the astronomical transit method. Light curve inversion Light curve inversion is a mathematical technique used to model the surfaces of rotating objects from their brightness variations. This can be used to effectively image starspots or asteroid surface albedos. Microlensing Microlensing is a process where relatively small and low-mass astronomical objects cause a brief small increase in the brightness of a more distant object. This is caused by the same relativistic effect as larger gravitational lenses, but allows the detection and analysis of otherwise-invisible stellar and planetary mass objects. The properties of these objects can be inferred from the shape of the lensing light curve. For example, PA-99-N2 is a microlensing event that may have been due to a star in the Andromeda Galaxy that has an exoplanet. References External links The AAVSO online light curve generator can plot light curves for thousands of variable stars The Open Astronomy Catalogs have light curves for several transient types, including supernovae Lightcurves: An Introduction by NASA's Imagine the Universe DAMIT Database of Asteroid Models from Inversion Techniques Variable stars Concepts in stellar astronomy Planetary science
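To make the period-from-peaks idea concrete, here is a minimal Python sketch that recovers a rotation period from an unevenly sampled light curve using astropy's Lomb-Scargle periodogram. All numbers are invented for the example; for a shape-driven asteroid light curve there are two brightness maxima per rotation, so the rotation period is taken as twice the best-fit photometric period.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic light curve: double-peaked variation, as for an elongated
# asteroid rotating with a 6-hour period (all values invented).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 3.0, 200))   # observation times, days
rotation_period = 0.25                     # days (6 hours)
mag = (0.15 * np.sin(2 * np.pi * 2 * t / rotation_period)
       + 0.01 * rng.normal(size=t.size))  # two cycles per rotation + noise

# The periodogram peak gives the dominant photometric frequency.
frequency, power = LombScargle(t, mag).autopower()
photometric_period = 1.0 / frequency[np.argmax(power)]
print(f"rotation period ~ {2 * photometric_period:.3f} days")  # ~0.25
```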
Light curve
Physics,Astronomy
1,449
51,805,327
https://en.wikipedia.org/wiki/Outline%20of%20galaxies
The following outline is provided as an overview of and topical guide to galaxies: Galaxies – gravitationally bound systems of stars, stellar remnants, interstellar gas, dust, and dark matter. The word galaxy is derived from the Greek galaxias (γαλαξίας), literally "milky", a reference to the Milky Way. Galaxies range in size from dwarfs with just a few billion (10^9) stars to giants with one hundred trillion (10^14) stars, each orbiting its galaxy's center of mass. Galaxies are categorized according to their visual morphology as elliptical, spiral and irregular. Many galaxies are thought to have black holes at their active centers. The Milky Way's central black hole, known as Sagittarius A*, has a mass four million times that of the Sun. As of March 2016, GN-z11 is the oldest and most distant observed galaxy with a comoving distance of 32 billion light-years from Earth, and observed as it existed just 400 million years after the Big Bang. Previously, as of July 2015, EGSY8p7 was the most distant known galaxy, estimated to have a light travel distance of 13.2 billion light-years away. Types of galaxies List of galaxies Lists of galaxies By morphological classification Galaxy morphological classification Disc galaxy Lenticular galaxy Barred lenticular galaxy Unbarred lenticular galaxy Spiral galaxy   (list) Anemic galaxy Barred spiral galaxy Flocculent spiral galaxy Grand design spiral galaxy Intermediate spiral galaxy Magellanic spiral Unbarred spiral galaxy Dwarf galaxy Dwarf elliptical galaxy Dwarf spheroidal galaxy Dwarf spiral galaxy Elliptical galaxy Type-cD galaxy Irregular galaxy Barred irregular galaxy Peculiar galaxy Ring galaxy   (list) Polar-ring galaxy   (list) By nucleus Active galactic nucleus Blazar Low-ionization nuclear emission-line region Markarian galaxies Quasar Radio galaxy X-shaped radio galaxy Relativistic jet Seyfert galaxy By emissions Energetic galaxies Lyman-alpha emitter Luminous infrared galaxy Starburst galaxy Pea galaxy Hot, dust-obscured galaxies (Hot DOGs) Low activity Low-surface-brightness galaxy Ultra diffuse galaxy By interaction Field galaxy Galactic tide Galaxy cloud Interacting galaxy Galaxy merger Jellyfish galaxy Satellite galaxy Superclusters Galaxy filament Void galaxy By other aspect Galaxies named after people Largest galaxies Nearest galaxies Nature of galaxies Galactic phenomena Galactic year – duration of time required for the Sun to orbit once around the center of the Milky Way Galaxy. 
Galaxy formation and evolution Galaxy merger Hubble's law Galaxy components Components of galaxies in general Active galactic nucleus Galactic bulge Galactic disc Galactic habitable zone Galactic halo Dark matter halo Galactic corona Galactic magnetic fields Galactic plane Galactic spheroid Interstellar medium Spiral arms Supermassive black hole Structure of specific galaxies Milky Way components Galactic Center Galactic quadrant Spiral arms of the Milky Way Carina–Sagittarius Arm Norma Arm Orion Arm Perseus Arm Scutum–Centaurus Arm Galactic ridge Galactic cartography Galactic coordinate system Galactic longitude Galactic latitude Galaxy rotation curve Larger constructs composed of galaxies Galaxy groups and clusters   (list) Local Group – galaxy group that includes the Milky Way Galaxy group Galaxy cluster Supercluster   (list) Brightest cluster galaxy Fossil galaxy group Galaxy filament Intergalactic phenomena Galactic orientation Galaxy merger Andromeda–Milky Way collision Hypothetical intergalactic phenomena Intergalactic travel Intergalactic dust Intergalactic stars Void   (list) Fields that study galaxies Astronomy Galactic astronomy – studies the Milky Way galaxy. Extragalactic astronomy – studies everything outside the Milky Way galaxy, including other galaxies. Astrophysics Cosmology Physical cosmology Galaxy-related publications Galaxy catalogs Atlas of Peculiar Galaxies Catalogue of Galaxies and Clusters of Galaxies David Dunlap Observatory Catalogue Lyon-Meudon Extragalactic Database Morphological Catalogue of Galaxies Multiwavelength Atlas of Galaxies Principal Galaxies Catalogue Shapley-Ames Catalog Uppsala General Catalogue Vorontsov-Vel'yaminov Interacting Galaxies Persons influential in the study of galaxies Galileo Galilei – discovered that the Milky Way is composed of a huge number of faint stars. Edwin Hubble See also Barred spiral galaxy Galaxy color–magnitude diagram Dark galaxy Faint blue galaxy Galaxy color–magnitude diagram Illustris project Protogalaxy Cosmos Redshift 7 List of quasars References External links Galaxies, SEDS Messier pages An Atlas of The Universe Galaxies — Information and amateur observations The Oldest Galaxy Yet Found Galaxy classification project, harnessing the power of the internet and the human brain How many galaxies are in our Universe? 3-D Video (01:46) – Over a Million Galaxies of Billions of Stars each – BerkeleyLab/animated. Galaxies
Outline of galaxies
Astronomy
939
67,646,766
https://en.wikipedia.org/wiki/Evercast
Evercast is a privately held software-as-a-service company that makes collaborative software primarily for the film, television, and other creative industry sectors. Its platform allows remotely located creative teams to collaborate in real time on video production tasks such as reviewing dailies, editing footage, sound mixing, animation, and visual effects simultaneously. Its primary users are directors, editors, VFX artists, animators, and sound teams in the film, television, advertising, and video gaming industries. History The company was founded in 2015 by Alex Cyrell, Brad Thomas, and Blake Brinker, and is based in Scottsdale, Arizona. After using the software, film editor Roger Barton joined the company and became a co-founder and investor. In 2020, Evercast won an Engineering Emmy award. Funding In 2020, an unnamed angel investor provided just over $3 million of funding. References Software companies based in Arizona Collaborative software Film editing Web conferencing Impact of the COVID-19 pandemic on science and technology Software associated with the COVID-19 pandemic
Evercast
Technology
220
6,946,381
https://en.wikipedia.org/wiki/Exploding%20trousers
In New Zealand in the 1930s, farmers reportedly had trouble with exploding trousers as a result of attempts to control ragwort, an agricultural weed. Farmers had been spraying sodium chlorate, a government recommended weedkiller, onto the ragwort, and some of the spray had ended up on their clothes. Sodium chlorate is a strong oxidizing agent, and reacted with the organic fibres (i.e., the wool and the cotton) of the clothes. Reports had farmers' trousers variously smoldering and bursting into flame, particularly when exposed to heat or naked flames. One report had trousers that were hanging on a washing line starting to smoke. There were also several reports of trousers exploding while farmers were wearing them, causing severe burns. The history was written up by James Watson of Massey University in a widely reported article, "The Significance of Mr. Richard Buckley's Exploding Trousers" − which later won him an Ig Nobel Prize. On television In their May 2006 "Exploding Pants" episode, the popular U.S. television show MythBusters investigated the idea that trousers could explode based on the events of New Zealand in the 1930s. Experimenters tested four substances on 100% cotton overalls: a paste comprising a mixture of gunpowder and water; a "herbicide from the 1930s" which was sodium chlorate, a potentially explosive herbicide used at the time of the events; a "fertilizer from the 1930s" which was ammonium nitrate mixed with a liquid fuel (most likely diesel, as an ammonium nitrate bottle, with the label facing the camera, was in the foreground of the shot, in the presence of a red plastic fuel can on the table); gun cotton, the common name for nitrocellulose. Each of these was put to four different ignition methods: flame, radiant heat, friction, and impact. Although not naming "the herbicide" as sodium chlorate, they confirmed that trousers impregnated therewith would indeed vigorously combust upon exposure to flame, radiant heat, and impact, though their friction tests did not cause ignition. However, combustion (i.e. an exothermic chemical reaction between a fuel and an oxidant) is not the same as an explosion, which involves a rapid increase in volume accompanied by the release of energy in an extreme manner (i.e. a shock wave). Even so, a person witnessing such an event (especially if they were wearing the trousers) would likely describe such a sudden event as an explosion. The tests also revealed that none of the other three substances caused combustion of the trousers, thus indicating that sodium chlorate was probably the cause of the events that occurred. ABC's The Science Show described exploding trousers as "the scenario for a Goon Show", and, in an example of art imitating life, it actually was. The Goons wrote a script about a chemical which "when applied to the tail of a military soldier shirt, is tasteless, colourless, and odourless" but that "The moment the wearer sits down, the heat from his body causes the chemical to explode". In the final episode of Blackadder Goes Forth, Captain Edmund Blackadder says that he is "Off to Hartlepool to buy some exploding trousers" when feigning madness to avoid going over the top. See also Agriculture in New Zealand References Further reading (The author won an Ig Nobel Prize in 2005 for this paper) Trousers Agriculture in New Zealand Pesticides in New Zealand Trousers and shorts Occupational safety and health Forteana
Exploding trousers
Chemistry
736
52,890,895
https://en.wikipedia.org/wiki/Expanded%20crater
An expanded crater is a type of secondary impact crater. Large impacts often create swarms of small secondary craters from the debris that is blasted out as a consequence of the impact. Studies of a type of secondary craters, called expanded craters, have given insights into places where abundant ice may be present in the ground. Expanded craters have lost their rims; this may be because any rim that was once present has collapsed into the crater during expansion or, if it was composed of ice, lost that ice. Excess ice (ice in addition to what is in the pores of the ground) is widespread throughout the Martian mid-latitudes, especially in Arcadia Planitia. In this region are many expanded secondary craters that probably form from impacts that destabilize a subsurface layer of excess ice, which subsequently sublimates. With sublimation the ice changes directly from a solid to gaseous form. In the impact, the excess ice is broken up, resulting in an increase in surface area. Ice will sublimate much more if there is more surface area. After the ice disappears into the atmosphere, dry soil material will collapse and cause the crater diameter to become larger. Since this region still has abundant expanded craters, the area between the expanded craters would have abundant ice under the surface. If all the ice was gone, all the expanded craters would also be gone. Expanded craters are more frequent in the inner layer of a type of crater called double-layer ejecta craters (formerly called rampart craters). Double layer craters are believed to form in ice-rich ground. Research, published in 2015, mapped expanded craters in Arcadia Planitia, found in the northern mid latitudes, and the research team concluded that the ice may be tens of millions of years old. The age was determined from the age of four primary craters that produced the secondary craters that later expanded when ice sublimated. The craters were Steinheim, Gan, Domoni, and an unnamed crater with a diameter of 6 km. Based on measurements and models, the researchers calculated that at least 6,000 km³ of ice is still preserved in non-cratered portions of Arcadia Planitia. Places on Mars that display expanded craters may indicate where future colonists can find water ice. See also Diacria quadrangle References Impact craters
Expanded crater
Astronomy
464
2,903,021
https://en.wikipedia.org/wiki/15%20Aquilae
15 Aquilae (abbreviated 15 Aql) is a star in the equatorial constellation of Aquila. 15 Aquilae is the Flamsteed designation; it also bears the Bayer designation h Aquilae. The apparent visual magnitude is 5.41, so it is faintly visible to the naked eye. An optical companion, HD 177442, is 39 arc seconds away from it. The distance to 15 Aquilae can be estimated from its annual parallax shift of 11.27 mas, yielding a range of approximately 289 light-years from Earth with a 9 light-year margin of error. With a stellar classification of K1 III, the spectrum of 15 Aquilae matches a giant star with an age of roughly four billion years. At this stage of its evolution, the outer atmosphere of the star has expanded to 14 times the radius of the Sun. It is radiating 83 times the Sun's luminosity into space at an effective temperature of 4,560 K. This heat gives it the orange-hued glow of a K-type star. This star is most likely a member of the thin disk population of the Milky Way. It is orbiting through the galaxy with an eccentricity of 0.06, which carries it as close as to the Galactic Center, and as far away as . The orbital inclination carries it no more than from the galactic plane. References External links Image 15 Aquilae Aquilae, h 177442 and 177463 Aquila (constellation) 093717 K-type giants Aquilae, 15 7225 BD-04 4684
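The parallax-to-distance conversion used above is simple reciprocal arithmetic: d [pc] = 1 / p [arcsec]. A minimal Python sketch using the parallax quoted in the article (conversion constant rounded):

```python
# Distance from annual parallax: d [parsec] = 1 / p [arcsec].
LY_PER_PC = 3.26156                  # light-years per parsec

parallax_mas = 11.27                 # annual parallax of 15 Aquilae (from the article)
distance_pc = 1000.0 / parallax_mas  # 1 / (parallax in mas) gives parsecs
distance_ly = distance_pc * LY_PER_PC
print(f"{distance_pc:.1f} pc ~ {distance_ly:.0f} light-years")  # ~88.7 pc, ~289 ly
```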
15 Aquilae
Astronomy
324
41,394,765
https://en.wikipedia.org/wiki/York%20Leeman%20Road%20depot
The York Leeman Road railway depot, located in York, England, is a passenger multiple unit depot opened in May 2007 by Siemens. It services TransPennine Express Class 185s and locomotives. The facility's shed code is YK. History Before the 1870s, the site area was known as Bishop Fields; it was undeveloped and in agricultural use. In 1877, the new Holgate railway station (see York railway station) and its associated loop line opened. The loop line passed through Bishop Fields, and through the 20th century the surrounding land north of Leeman Road was extensively developed, much of it for railway use, including a large engine shed to the east, with sidings, and a large carriage shed to the west. In the latter part of the 20th century, there was some contraction; the carriage shed was removed and the engine shed ceased to have an operational role, becoming part of the National Railway Museum in 1975. In 2004, the site of the depot was occupied by a mixture of disused and in-use railway sidings. TransPennine Express depot In 2003, the new TransPennine Express franchise was awarded to First TransPennine Express (FTPE). As part of the franchise agreement, FTPE was to introduce a new fleet of 100 mph trains, together with new maintenance facilities for the fleet; the main depot was to be in Manchester, with a secondary depot in York. An order for 51 Class 185 diesel multiple units, and the associated maintenance facilities, was placed with Siemens in 2003. The facilities required by the franchise agreement included: a one-road, three-car-length shed, with sidings for 8 three-car trains; siding facilities for controlled emission toilet servicing and fuelling; train electric supply (125 A three-phase); offices and stores; and a capacity overhead hoist. In 2004, Siemens submitted a planning application for a depot on Leeman Road to York City Council; the plans were approved in May 2005. A ground breaking ceremony took place in December 2005. The main contractor for the £10m works was the Spencer Group. The depot was opened in May 2007 by Prince Andrew, Duke of York. Since April 2016, the Class 185s have been operated by TransPennine Express (TPE). In 2018, work began on an £11 million upgrade to the depot, which included signalling upgrades and road lengthening. The project was undertaken to allow the depot to accommodate Class 68 locomotives with Mark 5A carriages and Class 802 units. Allocation and stabling As of 2018, the depot's allocation consists of TransPennine Express locomotives and Desiros. Notes References Sources External links Railway depots in Yorkshire Rail transport in York Siemens Mobility projects
York Leeman Road depot
Technology,Engineering
544
72,819,916
https://en.wikipedia.org/wiki/Omar%20Khayyam%20%281923%20film%29
Omar Khayam is an American silent movie. It was widely distributed in Australia in 1923, where it was praised for its imaginative technical effects. It bears many similarities to the lost film A Lover's Oath, which was made in 1921 but not released until 1925. Plot The story, through which many images of Persian life and thought are interspersed, concerns three boyhood friends, Omar, Nizam and Hassan, who made a solemn pact that whichever became successful would share his good fortune with the others. Nizam grew to become ruler of the country, and assisted Omar in his studies. Hassan became a scheming scoundrel whose lusting after the beautiful Shirin was cut short by his clever wife. The film concludes with the lovers "beneath the boughs". The film includes many scenes relating to verses from the Rubaiyat of Omar Khayyam, including the market places, the Sultan and his courtiers, the muezzin calling the faithful to prayer, the crowd clamoring at the tavern door, the potter molding his wet clay, gardens ablaze with roses, ruined temples and palaces, Heaven and Hell. Images of astronomical phenomena were especially praised. The film opened at Hoyts' cinemas in Sydney (the "Australia" and "Apollo" theatres) on 17 February and in Melbourne (the "De Luxe") on 24 February 1923. Cast Frederick Warde as Omar Edwin Stevens as Hassan Paul Weigel as Sheik Rustan B. Post (perhaps Guy Bates Post) as the Vizier Kathleen Key as Shirin, daughter of the Sheik Ramon Novarro (often cited as Raymond Navarro) as her lover "and a cast of over 7,000 people" The director was Ferdinand P. Earle, jun. The film was presented throughout Australia by E. R. Chambers and E. O. Gurney of "Selected Super Films (Australasia Ltd)". Gurney has elsewhere been mentioned as an Australian film director who, like John Farrow, left for Hollywood. The film was then taken to Hobart's Strand theatre and to Adelaide, where it was shown at the Town Hall and hailed as a "masterpiece" and the "last word in artistic achievement of motion picture history". It went on to regional Victoria and New South Wales, where it was universally well received — "the picture magnificent, which no-one should miss". It reached Perth in September. Omar the Tentmaker Richard Walton Tully's play Omar the Tentmaker, depicting illicit love in the harems, was made into a film in 1921, directed by its author and starring Virginia Brown Faire and Guy Bates Post. It was shown in Australia around the same time as Omar Khayyam. Australian publicity for the film referenced FitzGerald's Rubaiyat, though descriptions of the film seem remote from the poetry, or the life, of the historic Omar Khayyam, who, despite his surname (which means "tentmaker"), was a renowned astronomer and mathematician. Notes References 1923 films 1923 lost films 1920s American films American silent feature films Lost American films Cultural depictions of Omar Khayyam
Omar Khayyam (1923 film)
Astronomy
628
65,240,283
https://en.wikipedia.org/wiki/Fault%20zone%20hydrogeology
Fault zone hydrogeology is the study of how brittlely deformed rocks alter fluid flows in different lithological settings, such as clastic, igneous and carbonate rocks. Fluid movement, which can be quantified as permeability, can be facilitated or impeded by the existence of a fault zone, because the different mechanisms that deform rocks can alter porosity and permeability within a fault zone. The fluids involved in a fault system are generally groundwater (fresh and marine waters) and hydrocarbons (oil and gas). Note that permeability (k) and hydraulic conductivity (K) are used interchangeably in this article for simplicity. Architecture A fault zone can generally be subdivided into two major sections: a fault core (FC) and a damage zone (DZ) (Figure 1). The fault core is surrounded by the damage zone. It has a measurable thickness which increases with fault throw and displacement, i.e. increasing deformation. The damage zone envelops the fault core irregularly in three dimensions and can be meters to a few hundred meters wide (perpendicular to the fault zone). Within a large fault system, multiple fault cores and damage zones can be found, and younger fault cores and damage zones can overlap older ones. The different processes that alter the permeability of the fault core and the damage zone are discussed respectively in the next section. In general, the permeability of a damage zone is several orders of magnitude higher than that of a fault core, as damage zones typically act as conduits (discussed in section 3). Within a damage zone, permeability decreases further away from the fault core. Permeability classification There are many classifications that group fault zones based on their permeability patterns. Some terms are interchangeable, while some have different subgroups. Most of the expressions are listed in the following table for comparison. Dickerson's categorisation is commonly used and easier to understand in a broad range of studies. The classification of a fault zone can change spatially and temporally: the fault core and damage zone can behave differently to accommodate deformation, and the fault zone can be dynamic through time, so the permeability patterns can change over both the short term and the long term. *K = permeability/hydraulic conductivity *fz = fault zone *hr = host rock (undeformed rock surrounding the fault zone) Mechanisms (permeability) A fault zone results from brittle deformation. Numerous mechanisms can vary the permeability of a fault zone. Some processes affect the permeability only temporarily: they enhance the permeability for a certain period and then reduce it later on, so that, as with seismic events, the permeability is not constant through time. Physical and chemical reactions are the major types of mechanisms. Different mechanisms can occur in the fault core and in the damage zone, since the intensities of deformation they experience are different (Table 3). *+ = more likely to occur at Enhancing fault zone permeability Deformation bands The formation of a dilation band, in unconsolidated material, is the early result of applying extensional forces. Disaggregation of the mineral fabric occurs along the band, yet no offset is accommodated by the movement of grains (Figure 3). Further deformation causes offsets of mineral grains by rotation and sliding. This is called a shear band.
The pore network is rearranged by granular movements (also called particulate flow), which moderately enhances permeability. However, continuing deformation leads to cataclasis of the mineral grains, which will further reduce permeability later on (section 3.2.3) (Figure 4). Brecciation Brecciation refers to the formation of angular, coarse-grained fragments embedded in a fine-grained matrix. As breccia (the rock that has experienced brecciation) is often non-cohesive, permeability can be increased by up to four or five orders of magnitude. However, the void space enlarged by brecciation can later be destroyed by further displacement along the fault zone and by cementation, resulting in a strong permeability reduction (Figure 5). Fracturing Fractures propagate along a fault zone in directions responding to the applied stress. The enhancement of permeability is controlled by the density, orientation, length distribution, aperture, and connectivity of the fractures. Even fractures with apertures of 100–250 μm can greatly influence fluid movement (Figure 6). Reducing fault zone permeability Sediment mixing Sediments with different grain sizes, typically from distinct formations, are mixed physically by deformation, resulting in a more poorly sorted mixture. The pore space is filled by smaller grains, increasing the tortuosity (at the mineral scale in this case) of fluid flow across the fault system. Clay smears Clay minerals are phyllosilicates, in other words minerals with a sheet-like structure. They are effective agents that block fluid flows across a fault zone. Clay smears, deformed layers of clay developed along a fault zone, can act as a hydrocarbon reservoir seal, i.e. a layer of extremely low permeability that prohibits nearly all fluid flow (Figure 7). Cataclasis Cataclasis refers to pervasive brittle fracturing and comminution of grains. This mechanism becomes dominant at depths greater than 1 km and with larger grains. With increasing intensity of cataclasis, fault gouge, often with the presence of clay, is formed. The largest reduction occurs for flows perpendicular to the band. Enhancing and reducing fault zone permeability successively Compaction and cementation Compaction and cementation generally lead to permeability reduction through porosity loss. When a large region containing a fault zone experiences compaction and cementation, the porosity loss in the host rock (the undeformed rock surrounding the fault zone) can be greater than that in the fault zone rock. Hence, fluids are forced to flow through the fault zone. Dissolution and precipitation Solutes carried by fluids can either enhance or reduce permeability by dissolution or precipitation (cementation). Which process takes place depends on geochemical conditions such as rock composition, solute concentration, and temperature. The changes in porosity dominantly control whether the fluid-rock interaction continues or slows down, as a strong feedback reaction. For example, minerals like carbonates, quartz, and feldspars are dissolved by fluid-rock interactions where permeability is enhanced. Further introduction of fluids can either continue to dissolve or instead re-precipitate minerals in the fault core, and thus alters the permeability. Therefore, whether the feedback is positive or negative depends heavily on the geochemical conditions. Seismic event Earthquakes can either increase or decrease permeability along fault zones, depending on the hydrogeological settings.
Recorded hot spring discharges show that seismic waves dominantly enhance permeability, but reductions in discharge may also result occasionally. The timescale of the changes can be up to thousands of years. Hydraulic fracturing (fracking) requires increasing the interconnectedness of the pore space (in other words, the permeability) of shale to allow the gas to flow through the rock, and very small, deliberately induced seismic events of magnitude smaller than 1 are applied to enhance rock permeability. Taking the Chile earthquake in 2017 as an example, the discharge of streamflow temporarily increased sixfold, indicating a sixfold enhancement in permeability along the fault zone. Yet seismically induced effects are temporary, normally lasting for months; in Chile's case, the effect lasted for one and a half months as the streamflow gradually decreased back to its original discharge. Mechanisms (porosity) Porosity (φ) directly reflects the specific storage of rock, and brittle deformation alters the pores by different mechanisms. If the pores are deformed and connected together, the permeability of the rock increases. On the other hand, if the deformed pores are disconnected from each other, the permeability decreases. Pore types Enhancing porosity Dissolution Mineral grains can be dissolved when there is fluid flow. The spaces originally occupied by the minerals will be left as voids, increasing the porosity of the rock. The minerals that are usually dissolved are feldspar, calcite and quartz. Grain dissolution pores resulting from this process enhance porosity. Reducing porosity Cataclasis, fracturing and brecciation Mineral grains are broken up into smaller pieces by faulting events. Those smaller fragments will re-organise and be further compacted to form smaller pore spaces. These processes create intragranular fracture pores and transgranular fracture pores. It is important to note that a reduction in porosity does not necessarily mean a reduction in permeability: fracturing, brecciation and the initial stage of cataclasis can connect pore spaces by cracks and dilation bands, increasing permeability. Precipitation Mineral grains can be precipitated when there is fluid flow. The voids in the rocks can be occupied by precipitation of mineral grains; the minerals fill the voids, reducing the porosity. Overgrowth, precipitation around an existing mineral grain, is common for quartz, and overgrown minerals infill pre-existing pores, reducing porosity. Clay deposition Clay minerals are phyllosilicates, in other words minerals with a sheet-like structure. They are effective agents that block fluid flows. Kaolinite, which is altered from potassium feldspar in the presence of water, is a common mineral that fills pore spaces. Precipitation and infiltration only affect materials at shallow depth, so more clay infills pore spaces closer to the surface. Yet the development of a fault zone allows fluid to flow deeper, facilitating clay deposition at depth and reducing porosity. Lithological effects Lithology has a dominant effect on controlling which mechanisms take place along a fault zone, and hence on the changes in porosity and permeability. *↑ = mechanism that enhances permeability *↓ = mechanism that reduces permeability Fault type effects All faults can be classified into three types: normal faults, reverse faults (thrust faults) and strike-slip faults. These different faulting behaviours accommodate the displacement in distinct structural ways.
The differences in faulting motions might favour or disfavour certain permeability-altering mechanisms. However, the main controlling factor of permeability is the rock type, since the characteristics of the rock control how a fault zone develops and how fluids move. For instance, sandstone generally has a higher porosity than shale, so a deformed sandstone in any of the three faulting systems should have a higher specific storage, and hence permeability, than deformed shale. Similarly, strength (resistance to deformation) also depends significantly on rock type rather than fault type. Thus, the geological character of the rock involved in a fault zone is the more dominant factor. On the other hand, while the type of fault might not be a dominant factor, the intensity of deformation is: the greater the stress applied to the rock, the more intensely it is deformed and the greater the change in permeability. Thus, the amount of stress applied matters. Equally important, identifying the permeability category of a fault zone (barrier, barrier-conduit or conduit), in other words how the fault zone behaves when fluids pass through it, is the main scope of study. Studying approaches and methods Surface and subsurface tests The study of fault zones is recognised within the discipline of Structural Geology, as it involves how rocks were deformed, while investigations of fluid movement belong to the field of Hydrology. There are mainly two types of methods used by structural geologists and hydrologists to examine a fault zone (Figure 7). In situ testing includes obtaining data from boreholes, drill cores, and tunnel projects. Normally the existence of a fault zone is inferred when different hydraulic properties are measured across it, as fault zones are rarely drilled through (except in tunnel projects) (Figure 8). Alternatively, the hydraulic properties of rocks are obtained directly from outcrop samples or shallow probe holes/trial pits, and predictions of the fault structure are then made for the rocks at depth (Figure 8). Example of a subsurface test In a large-scale aquifer test conducted by Hadley (2020), the author used 5 wells aligned perpendicular to the Sandwich Fault Zone in the US, and the drawdowns as well as the recovery rates of water levels in every well were observed. From the evidence that recovery rates were slower for the wells closer to the fault zone, it was suggested that the fault zone acts as a barrier to northward groundwater movement, affecting freshwater supply in the north. Example of a surface test In an outcrop study of the Zuccale Fault in Italy by Musumeci (2015), the surface outcrop findings and cross-cutting relationships were used to determine the number and mechanisms of the deformational events that happened in the region. Moreover, the presence of breccias and cataclasites, which formed under brittle deformation, suggests that there was an initial stage of permeability increase, promoting an influx of -rich hydrous fluids. The fluids triggered low-grade metamorphism and dissolution-and-precipitation (i.e. pressure solution) at the mineral scale that shaped a foliated fault core, significantly enhancing the sealing effect. Other methods Geophysics Underground fluids, particularly groundwater, create anomalies in superconducting gravity data which help study the fault zone at depth.
The method combines gravitational data and groundwater conditions to determine not only the permeability of a fault zone but also whether the fault zone is active.

Geochemistry
The geochemical conditions of mineral fluids, water or gases can be used to determine the existence of a fault zone by comparison with the geochemistry of the fluids' source, given that the conditions of the aquifers are known. The fluids can be categorized by the concentrations of common solutes such as total dissolved solids (TDS), the Mg-Ca-Na/K phase, the SO4-HCO3-Cl phase, and other dissolved trace elements.

Existing biases
The selection of an appropriate studying approach (or approaches) is essential, as biases exist when determining the fault zone permeability structure. In crystalline rocks, subsurface-focused investigations favor the discovery of a conduit fault zone pattern, while surface methods favor a combined barrier-conduit fault zone structure. The same biases exist, to a lesser extent, in sedimentary rocks as well. The biases can be related to differences in study scale: for structural geologists it is very difficult to conduct outcrop studies over a vast region, while for hydrologists it is expensive and ineffective to shorten borehole intervals for testing.

Economic geology
It is economically worthwhile to study this complex system, especially in arid and semi-arid regions where freshwater resources are limited, and in potential areas of hydrocarbon storage. Further research on fault zones, the result of deformation, has given insights into the interactions between earthquakes and hydrothermal fluids along fault zones. Moreover, hydrothermal fluids associated with fault zones also provide information on how ore deposits accumulated.

Artificial hydrocarbon reservoirs
Carbon sequestration is a modern method of dealing with atmospheric carbon. One of the methods is pumping atmospheric carbon, as CO2, into specific depleted oil and gas reservoirs at depth. However, a fault zone, if present, acts as either a seal or a conduit, affecting the efficiency of such storage. Micro-fractures that cut along the sealing unit and the reservoir rock can greatly affect hydrocarbon migration. A deformation band blocks the lateral (horizontal) flow of CO2, while the sealing unit keeps the CO2 from vertical migration (Gif 1). The propagation of a micro-fracture that cuts through the sealing unit, instead of a deformation band confined within the sealing unit, facilitates upward migration of CO2 (Gif 2). This allows fluid migration from one reservoir to another; in this case, the deformation band still does not facilitate lateral (horizontal) fluid flow. This might lead to the loss of injected atmospheric carbon, lowering the efficiency of carbon sequestration. A fault zone that displaces sealing units and reservoir rocks can act as a conduit for hydrocarbon migration. The fault zone itself has a higher storage capacity (specific capacity) than the reservoir rocks; therefore, before migration to other units can occur, the fault zone has to be filled (Gif 3). This can slow and concentrate the fluid migration. The fault zone facilitates vertical downward movement of CO2 owing to its buoyancy and to piezometric head differences, i.e. the pressure/hydraulic head is greater at a higher elevation, which helps store CO2 at depth.

Seismic-induced ore deposits
Regions that are or were seismically active and that contain fault zones might indicate the presence of ore deposits.
A case study in Nevada, US, by Howald (2015) examined how seismically induced fluids accumulate mineral deposits, namely sinter and gold, in spaces provided by a fault zone. Two separate seismic events were identified and dated from oxygen isotopic concentrations, each followed by an episode of upward hydrothermal fluid migration through the permeable normal fault zone. Mineralization took place where these hot silica-rich hydrothermal fluids met the cool meteoric water that had infiltrated along the fault zone, until the convective flow system was shut down. For minerals to be deposited, seismic events that bring hydrothermal fluids are not the only dominant factor; the permeability of the fault zone also has to be sufficient to permit fluid flow.

Another example, taken from Sheldon (2005), also shows that the development of a fault zone, in this case by strike-slip faulting, facilitates mineralization. Sudden dilation accompanying strike-slip events increases the porosity and permeability along the fault zone, and a larger displacement leads to a greater increase in porosity. If the faulting event cuts through a sealing unit that confines an aquifer of over-pressured fluids, the fluids can rise through the fault zone. Mineralization then takes place along the fault zone by pressure solution, reducing the porosity of the fault zone. The fluid flow channel along the fault zone shuts down when the pores are almost completely occupied by newly precipitated ore minerals. Multiple seismic events have to occur to form such economic ore deposits with vein structure.

See also Carbon sequestration Fault (geology) Fault breccia Fault gouge Hydraulic conductivity Hydrocarbons Hydrology Hydrogeology Petroleum Permeability Structural geology References Hydrogeology Faults (geology) Porous media Soil mechanics
Fault zone hydrogeology
Physics,Materials_science,Engineering,Environmental_science
3,880
33,062,822
https://en.wikipedia.org/wiki/Holdover%20in%20synchronization%20applications
Two independent clocks, once synchronized, will walk away from one another without limit. To have them display the same time it would be necessary to re-synchronize them at regular intervals. The period between synchronizations is referred to as holdover, and performance under holdover relies on the quality of the reference oscillator, the PLL design, and the correction mechanisms employed.

Importance
The quote above suggests that one can think of holdover in synchronization applications as analogous to running on backup power. Modern wireless communication systems require at least knowledge of frequency, and often knowledge of phase as well, in order to work correctly. Base stations need to know what time it is, and they usually get this knowledge from the outside world (from a GPS time and frequency receiver, or from a synchronization source somewhere in the network they are connected to). But if the connection to the reference is lost, then the base station is on its own to establish what time it is. The base station needs a way to establish accurate frequency and phase (to know what time it is) using internal (or local) resources, and that is where the function of holdover becomes important.

The importance of GPS-derived timing
A key application for GPS in telecommunications is to provide synchronization in wireless base stations. Base stations depend on timing to operate correctly, particularly for the handoff that occurs when a user moves from one cell to another. In these applications holdover is used in base stations to ensure continued operation while GPS is unavailable and to reduce the costs associated with emergency repairs, since holdover allows the site to continue to function correctly until maintenance can be performed at a convenient time. Some of the most stringent requirements come from the newer generation of wireless base stations, where phase accuracy targets as low as 1 μs need to be maintained for correct operation. However, the need for accurate timing has been an integral part of the history of wireless communication systems as well as wireline, and it has been suggested that the search for reliable and cost-effective timing solutions was spurred on by the need for CDMA to compete with lower-cost solutions. Within the base station, besides standard functions, accurate timing and the means to maintain it through holdover are vitally important for services such as E911. GPS as a source of timing is a key component not just in synchronization in telecommunications but in critical infrastructure in general. Of the 18 Critical Infrastructure and Key Resources (CIKR) sectors, 15 use GPS-derived timing to function correctly. One notable application where highly accurate timing (and the means to maintain it through holdover) is of importance is the use of synchrophasors in the power industry to detect line faults.

How GPS-derived timing can fail
GPS is sensitive to jamming and interference because the signal levels are so low and can easily be swamped by other sources, whether accidental or deliberate. Also, since GPS depends on line-of-sight signals, it can be disrupted by urban canyon effects, making GPS available to some locations only at certain times of the day, for example. A GPS outage, however, is not initially an issue, because clocks can go into holdover, riding out the interference for as long as the stability of the oscillator providing holdover will allow. The more stable the oscillator, the longer the system can operate without GPS.
Defining holdover
In synchronization in telecommunications applications, holdover is defined by ETSI as:

An operating condition of a clock which has lost its controlling input and is using stored data, acquired while in locked operation, to control its output. The stored data are used to control phase and frequency variations, allowing the locked condition to be reproduced within specifications. Holdover begins when the clock output no longer reflects the influence of a connected external reference, or transition from it. Holdover terminates when the output of the clock reverts to locked mode condition.

One can then regard holdover as a measure of the accuracy of, or the error acquired by, a clock when there is no controlling external reference to correct for errors. MIL-PRF-55310 defines clock accuracy as:

ΔT = ΔT_0 + (Δf/f)_0 · t + (1/2) · a · t^2 + (Δf/f)_E · t + ε(t)

where ΔT_0 is the synchronization error at t_0; Δf/f is the fractional frequency difference between the two clocks under comparison; ε(t) is the error due to random noise; (Δf/f)_0 is Δf/f at t_0; a is the linear aging rate; and (Δf/f)_E is the frequency difference due to environmental effects.

Similarly, ITU G.810 defines time error as:

x(t) = x_0 + y_0 · t + (D/2) · t^2 + φ(t)/(2πν_0)

where x(t) is the time error; x_0 is the time error at t_0; y_0 is the fractional frequency error at t_0; D is the linear fractional frequency drift rate; φ(t) is the random phase deviation component; and ν_0 is the nominal frequency.

Implementing holdover
In applications that require synchronization (such as wireless base stations), GPS clocks are often used, and in this context are often known as a GPSDO (GPS disciplined oscillator) or GPS TFS (GPS time and frequency source). NIST defines a disciplined oscillator as:

An oscillator whose output frequency is continuously steered (often through the use of a phase locked loop) to agree with an external reference. For example, a GPS disciplined oscillator (GPSDO) usually consists of a quartz or rubidium oscillator whose output frequency is continuously steered to agree with signals broadcast by the GPS satellites.

In a GPSDO, a GPS or GNSS signal is used as the external reference that steers an internal oscillator. In a modern GPSDO the GPS processing and steering functions are both implemented in a microprocessor, allowing a direct comparison between the GPS reference signal and the oscillator output. Among the building blocks of a GPS time and frequency solution, the oscillator is a key component, and typically it is built around an oven-controlled crystal oscillator (OCXO) or a rubidium-based clock. The dominant factors influencing the quality of the reference oscillator are taken to be aging and temperature stability. However, depending upon the construction of the oscillator, barometric pressure and relative humidity can have at least as strong an influence on the stability of a quartz oscillator. What is often referred to as "random walk" instability is actually a deterministic effect of environmental parameters; these can be measured and modeled to vastly improve the performance of quartz oscillators. The addition of a microprocessor to the reference oscillator can improve temperature stability and aging performance. During holdover, any remaining clock error caused by aging and temperature instability can be corrected by control mechanisms. A combination of a quartz-based reference oscillator (such as an OCXO) and modern correction algorithms can achieve good results in holdover applications. The holdover capability, then, is provided either by a free-running local oscillator, or by a local oscillator that is steered with software that retains knowledge of its past performance.
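As a concrete illustration of the clock models above, the sketch below (not from the source; all numerical values in it are assumed for the example) accumulates the deterministic terms of the ITU G.810 expansion to estimate how long an oscillator can hold a 1 μs phase budget:

```python
def holdover_time_error(t_s: float, x0_s: float = 0.0,
                        y0: float = 1e-11, d_per_s: float = 1e-15) -> float:
    """Deterministic part of the ITU G.810 clock model:
    x(t) = x0 + y0*t + (D/2)*t^2, ignoring the random phase deviation.
    x0 in seconds, y0 dimensionless, D in 1/s; returns seconds."""
    return x0_s + y0 * t_s + 0.5 * d_per_s * t_s ** 2

# Assumed OCXO-like values: residual fractional frequency offset y0 = 1e-11
# left after disciplining, and frequency drift (aging) D = 1e-15 per second.
budget = 1e-6  # 1 microsecond, the phase target quoted for newer base stations
t = 0.0
while abs(holdover_time_error(t)) < budget:
    t += 60.0  # evaluate once per simulated minute
print(f"1 us budget exhausted after ~{t / 3600:.1f} hours of holdover")
```

With these assumed numbers the budget survives roughly ten hours, which illustrates why oscillator aging and environmental behaviour, and the software that learns and corrects them, dominate holdover specifications.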
The earliest documentation of such an effort comes from the then National Bureau of Standards in 1968 [Allan, Fey, Machlan and Barnes, "An Ultra Precise Time Synchronization System Designed By Computer Simulation", Frequency], where an analog computer consisting of ball-disk integrators implemented a third-order control loop to correct for the frequency ageing of an oscillator. The first microprocessor implementation of this concept occurred in 1983 [Bourke, Penrod, "An Analysis of a Microprocessor Controlled Disciplined Frequency Standard", Frequency Control Symposium], where Loran-C broadcasts were used to discipline very high quality quartz oscillators as a caesium replacement in telecommunications wireline network synchronization. The basic aim of a steering mechanism is to improve the stability of a clock or oscillator while minimizing the number of times it needs calibration. In holdover, the learned behaviour of the OCXO is used to anticipate and correct for its future behaviour. Effective aging and temperature compensation can be provided by such a mechanism, and the system designer is faced with a range of choices of algorithms and techniques for this correction, including extrapolation, interpolation and predictive filters (including Kalman filters). Once the barriers of aging and environmental effects are removed, the only theoretical limitation to holdover performance in such a GPSDO is irregularity or noise in the drift rate, which is quantified using a metric such as Allan deviation or time deviation. The complexity of trying to predict the effects on holdover of systematic effects, such as aging and temperature stability, and of stochastic influences, such as random walk noise, has resulted in tailor-made holdover oscillator solutions being introduced to the market. See also Synchronization Synchronization in Synchronous optical networking Time transfer Timekeeping in Global Positioning System Precision Time Protocol References External links Time and Frequency Systems GPS Disciplined Oscillator Modules with Holdover Compensation Holdover Oscillators Disciplined Oscillator Options Synchronization
Holdover in synchronization applications
Engineering
1,826
5,239,257
https://en.wikipedia.org/wiki/Zeta%20Cephei
Zeta Cephei (ζ Cep, ζ Cephei) is a red supergiant star, located about 1000 light-years away in the constellation of Cepheus. Zeta Cephei marks the left shoulder of Cepheus, the King of Ethiopia. It is one of the fundamental stars of the MK spectral sequence, defined as type K1.5 Ib. Characteristics Zeta Cephei has a spectral classification of K1.5Ib, indicating that it is a lower luminosity red supergiant star. It is about 173 times larger than the Sun and has a surface temperature of 4,393 K. The luminosity of Zeta Cephei is approximately 10,000 times that of the Sun. At a distance of about 840 light-years, Zeta Cephei has an apparent magnitude (m) of 3.4 and an absolute magnitude (M) of -4.7. The star has a metallicity approximately 1.6 times that of the Sun; i.e., it contains 1.6 times as much heavy-element material as the Sun. At a mass of , Zeta Cephei might end its life in a core-collapse supernova, and has been listed as a likely pre-supernova candidate by a 2022 study. It could also provide observable pre-supernova neutrino signals, just hours before the core collapses. Possible companion star Hekker et al. (2008) have detected a periodicity of 533 days, hinting at the possible presence of an as yet unseen companion. It is listed as a possible eclipsing binary with a very small amplitude. References Cepheus (constellation) Cephei, Zeta Cephei, 21 210745 105199 8465 K-type supergiants Durchmusterung objects
Zeta Cephei
Astronomy
372
245,990
https://en.wikipedia.org/wiki/Commutative%20algebra
Commutative algebra, first known as ideal theory, is the branch of algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings; rings of algebraic integers, including the ordinary integers ℤ; and p-adic integers. Commutative algebra is the main technical tool of algebraic geometry, and many results and concepts of commutative algebra are strongly related to geometrical concepts. The study of rings that are not necessarily commutative is known as noncommutative algebra; it includes ring theory, representation theory, and the theory of Banach algebras.

Overview
Commutative algebra is essentially the study of the rings occurring in algebraic number theory and algebraic geometry. Several concepts of commutative algebra have been developed in relation with algebraic number theory, such as Dedekind rings (the main class of commutative rings occurring in algebraic number theory), integral extensions, and valuation rings. Polynomial rings in several indeterminates over a field are examples of commutative rings. Since algebraic geometry is fundamentally the study of the common zeros of polynomials in these rings, many results and concepts of algebraic geometry have counterparts in commutative algebra, and their names often recall their geometric origin; for example "Krull dimension", "localization of a ring", "local ring", "regular ring". An affine algebraic variety corresponds to a prime ideal in a polynomial ring, and the points of such an affine variety correspond to the maximal ideals that contain this prime ideal. The Zariski topology, originally defined on an algebraic variety, has been extended to the sets of the prime ideals of any commutative ring; for this topology, the closed sets are the sets of prime ideals that contain a given ideal. The spectrum of a ring is a ringed space formed by the prime ideals equipped with the Zariski topology, and the localizations of the ring at the open sets of a basis of this topology. This is the starting point of scheme theory, a generalization of algebraic geometry introduced by Grothendieck, which is strongly based on commutative algebra, and has induced, in turn, many developments of commutative algebra.

History
The subject, first known as ideal theory, began with Richard Dedekind's work on ideals, itself based on the earlier work of Ernst Kummer and Leopold Kronecker. Later, David Hilbert introduced the term ring to generalize the earlier term number ring. Hilbert introduced a more abstract approach to replace the more concrete and computationally oriented methods grounded in such things as complex analysis and classical invariant theory. In turn, Hilbert strongly influenced Emmy Noether, who recast many earlier results in terms of an ascending chain condition, now known as the Noetherian condition. Another important milestone was the work of Hilbert's student Emanuel Lasker, who introduced primary ideals and proved the first version of the Lasker–Noether theorem. The main figure responsible for the birth of commutative algebra as a mature subject was Wolfgang Krull, who introduced the fundamental notions of localization and completion of a ring, as well as that of regular local rings. He established the concept of the Krull dimension of a ring, first for Noetherian rings before moving on to expand his theory to cover general valuation rings and Krull rings.
To this day, Krull's principal ideal theorem is widely considered the single most important foundational theorem in commutative algebra. These results paved the way for the introduction of commutative algebra into algebraic geometry, an idea which would revolutionize the latter subject. Much of the modern development of commutative algebra emphasizes modules. Both ideals of a ring R and R-algebras are special cases of R-modules, so module theory encompasses both ideal theory and the theory of ring extensions. Though it was already incipient in Kronecker's work, the modern approach to commutative algebra using module theory is usually credited to Krull and Noether.

Main tools and results

Noetherian rings
A Noetherian ring, named after Emmy Noether, is a ring in which every ideal is finitely generated; that is, all elements of any ideal can be written as linear combinations of a finite set of elements, with coefficients in the ring. Many commonly considered commutative rings are Noetherian, in particular every field, the ring of the integers, and every polynomial ring in one or several indeterminates over them. The fact that polynomial rings over a field are Noetherian is called Hilbert's basis theorem. Moreover, many ring constructions preserve the Noetherian property. In particular, if a commutative ring is Noetherian, the same is true for every polynomial ring over it, and for every quotient ring, localization, or completion of the ring. The importance of the Noetherian property lies in its ubiquity, and also in the fact that many important theorems of commutative algebra require the rings involved to be Noetherian. This is the case, in particular, of the Lasker–Noether theorem, the Krull intersection theorem, and Nakayama's lemma. Furthermore, if a ring is Noetherian, then it satisfies the descending chain condition on prime ideals, which implies that every Noetherian local ring has a finite Krull dimension.

Primary decomposition
An ideal Q of a ring is said to be primary if Q is proper and whenever xy ∈ Q, either x ∈ Q or y^n ∈ Q for some positive integer n. In Z, the primary ideals are precisely the ideals of the form (p^e) where p is prime and e is a positive integer. Thus, a primary decomposition of (n) corresponds to representing (n) as the intersection of finitely many primary ideals; for example, (12) = (4) ∩ (3) in Z, with (4) = (2^2) and (3) both primary. The Lasker–Noether theorem may be seen as a certain generalization of the fundamental theorem of arithmetic: for any primary decomposition of I, the set of all radicals, that is, the set {Rad(Q1), ..., Rad(Qt)}, remains the same by the Lasker–Noether theorem. In fact, it turns out that (for a Noetherian ring) the set is precisely the assassinator of the module R/I; that is, the set of all annihilators of R/I (viewed as a module over R) that are prime.

Localization
The localization is a formal way to introduce "denominators" into a given ring or module. That is, it introduces a new ring/module out of an existing one so that it consists of fractions r/s, where the denominators s range over a given subset S of R. The archetypal example is the construction of the ring Q of rational numbers from the ring Z of integers.

Completion
A completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have a simpler structure than general ones, and Hensel's lemma applies to them.
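As a standard worked example of the last two constructions (not drawn from the article itself), consider the integers at a prime p:

```latex
% Localization and completion of \mathbb{Z} at the prime ideal (p).
\[
  \mathbb{Z}_{(p)} = \left\{ \tfrac{r}{s} : r, s \in \mathbb{Z},\ p \nmid s \right\}
  \quad\text{(localization: denominators range over } S = \mathbb{Z} \setminus (p)\text{)}
\]
\[
  \varprojlim_{n} \mathbb{Z}/p^{n}\mathbb{Z} = \mathbb{Z}_{p}
  \quad\text{(completion: the ring of $p$-adic integers)}
\]
```

The localization Z_(p) is a local ring with maximal ideal generated by p; completing it along the powers of that ideal yields the p-adic integers mentioned in the lead, and Hensel's lemma holds in the completion but not in the localization.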
Zariski topology on prime ideals
The Zariski topology defines a topology on the spectrum of a ring (the set of prime ideals). In this formulation, the Zariski-closed sets are taken to be the sets

V(I) = {P ∈ Spec(A) : P ⊇ I}

where A is a fixed commutative ring and I is an ideal. This is defined in analogy with the classical Zariski topology, where closed sets in affine space are those defined by the vanishing of a set of polynomial equations. To see the connection with the classical picture, note that for any set S of polynomials (over an algebraically closed field), it follows from Hilbert's Nullstellensatz that the points of V(S) (in the old sense) are exactly the tuples (a1, ..., an) such that the ideal (x1 - a1, ..., xn - an) contains S; moreover, these are maximal ideals and by the "weak" Nullstellensatz, an ideal of any affine coordinate ring is maximal if and only if it is of this form. Thus, V(S) is "the same as" the maximal ideals containing S. Grothendieck's innovation in defining Spec was to replace maximal ideals with all prime ideals; in this formulation it is natural to simply generalize this observation to the definition of a closed set in the spectrum of a ring.

Connections with algebraic geometry
Commutative algebra (in the form of polynomial rings and their quotients, used in the definition of algebraic varieties) has always been a part of algebraic geometry. However, in the late 1950s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme. Their local objects are affine schemes or prime spectra, which are locally ringed spaces, which form a category that is antiequivalent (dual) to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field k, and the category of finitely generated reduced k-algebras. The gluing is along the Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set-theoretic sense is then replaced by a Zariski topology in the sense of Grothendieck topology. Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology, and the two flat Grothendieck topologies: fppf and fpqc. Nowadays some other examples have become prominent, including the Nisnevich topology. Sheaves can furthermore be generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions, leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks. See also List of commutative algebra topics Glossary of commutative algebra Combinatorial commutative algebra Gröbner basis Homological algebra Notes References
Commutative algebra
Mathematics
2,175
26,028,424
https://en.wikipedia.org/wiki/Knut%20Urban
Knut W. Urban (born 25 June 1941 in Stuttgart) is a German physicist. He was the Director of the Institute of Microstructure Research at Forschungszentrum Jülich from 1987 to 2010. Knut Urban's research focuses on the field of aberration-corrected transmission electron microscopy (both regarding the further development of instruments and the control software), the examination of structural defects in oxides, and the physical properties of complex metallic alloys. He also works on Josephson effects in high-temperature superconductors and the application of these effects in SQUID systems and magnetometers, as well as on the application of Hilbert transform spectroscopy in examining the excitation of solids, liquids and gases on the gigahertz and terahertz scale. Besides his activities at Forschungszentrum Jülich, he was also professor of experimental physics at RWTH Aachen University before retirement.

Biography
Urban studied physics at the University of Stuttgart and was awarded a PhD in 1972 for his dissertation on the study of the damage caused by electron beams in a high-voltage electron microscope at low temperatures. He subsequently conducted research at the Max Planck Institute of Metals Research in Stuttgart until 1986. Amongst other tasks, he was involved in the installation of a 1.2-MV high-voltage microscope laboratory as well as in studies on the anisotropy of atomic displacement energy in crystals and on radiation-induced diffusion. In 1986 he was appointed professor of general material properties by the Department of Materials Science and Engineering at the University of Erlangen-Nuremberg. In 1987 Urban was appointed to the chair of experimental physics at RWTH Aachen University and simultaneously became the director of the Institute of Microstructure Research at Forschungszentrum Jülich. From 1996 to 1997 he was a visiting professor at the Institute for Advanced Materials Processing of Tohoku University in Sendai (Japan). Knut Urban was appointed one of two directors of the Ernst Ruska Centre for Microscopy and Spectroscopy with Electrons (ER-C) when it was founded in 2004 as a common competence platform of Forschungszentrum Jülich and RWTH Aachen University, as well as a national centre for users of high-resolution transmission electron microscopes. From 2004 to 2006 he was president of the German Physical Society (DPG), the world's largest organisation of physicists. He is a member of several advisory bodies, boards of trustees and senate committees of scientific institutions. Knut Urban formally retired as the Director of the Institute of Microstructure Research and the Ernst Ruska Centre (ER-C) in Jülich in 2010 and was appointed a JARA senior professor at RWTH Aachen University in 2012. In 2009, he was elected a member of the North Rhine-Westphalian Academy of Sciences, Humanities and the Arts. Knut Urban is married and has three daughters.
Awards and honours 1986 Acta Metallurgica Award (best paper of the year 1984) 1986 Carl Wagner Award, University of Göttingen 1996 Research Award of the Japanese Society for the Promotion of Science 1999 Heyn Medal of German Society for Materials Science (DGM) 2000 Honorary Member Materials Research Society of India 2000 Medal for Scientific Publishing of the German Physical Society (DPG) 2006 Von-Hippel Award of the US Materials Research Society (MRS) 2006 Honorary Member of US Materials Research Society (MRS) 2006 Karl Heinz Beckurts-Award for Scientific and Technical Innovation 2008 Honda Award for Ecotechnology (Honda Foundation, Japan) 2009 Appointment Member of the Academy of Arts and Sciences of the State of North Rhine-Westphalia, Germany 2009 Honorary Professor at Xi'an Jiaotong University, Xi'an, China 2011 Wolf Prize in Physics 2012 Honorary Member German Electron Microscopy Society 2014 BBVA Foundation Frontiers of Knowledge Award in Basic Sciences (BBVA Foundation, Madrid/Spain) 2015 Honorary Member German Physical Society 2015 Honorary Member Japanese Institute of Metals and Materials 2015 NIMS Award (National Institute of Materials Science, Tsukuba, Japan) 2018 Doctor honoris causa, Tel Aviv University 2020 Kavli Prize in Nanoscience (together with Maximilian Haider, Harald Rose and Ondrej Krivanek) 2020 Election as Foreign Member of the Norwegian Academy of Science and Letters References External links Knut Urban (Portrait) Knut Urban (CV) Institute of Microstructure Research at Forschungszentrum Jülich Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons 1941 births Living people 20th-century German physicists Scientists from Stuttgart Wolf Prize in Physics laureates Microscopists Kavli Prize laureates in Nanoscience 21st-century German physicists Presidents of the German Physical Society Academic staff of RWTH Aachen University Jülich Research Centre
Knut Urban
Chemistry
972
635,016
https://en.wikipedia.org/wiki/Index%20of%20evolutionary%20biology%20articles
This is a list of topics in evolutionary biology. A abiogenesis – adaptation – adaptive mutation – adaptive radiation – allele – allele frequency – allochronic speciation – allopatric speciation – altruism – : anagenesis – anti-predator adaptation – applications of evolution – apomorphy – aposematism – Archaeopteryx – aquatic adaptation – artificial selection – atavism B Henry Walter Bates – biological organisation – Black Queen hypothesis – Brassica oleracea – breed C Cambrian explosion – camouflage – Sean B. Carroll – catagenesis – gene-centered view of evolution – cephalization – Sergei Chetverikov – chronobiology – chronospecies – clade – cladistics – climatic adaptation – coalescent theory – co-evolution – co-operation – coefficient of relationship – common descent – convergent evolution – creation–evolution controversy – cultivar – conspecific song preference D Darwin (unit) – Charles Darwin – Darwinism – Darwin's finches – Richard Dawkins – directed mutagenesis – Directed evolution – directional selection – disruptive selection – Theodosius Dobzhansky – dog breeding – domestication – domestication of the horse E E. coli long-term evolution experiment – ecological genetics – ecological selection – ecological speciation – Endless Forms Most Beautiful – endosymbiosis – error threshold (evolution) – evidence of common descent – evolution – evolutionary arms race – evolutionary capacitance Evolution: of ageing – of the brain – of cetaceans – of complexity – of dinosaurs – of the eye – of fish – of the horse – of insects – of human intelligence – of mammalian auditory ossicles – of mammals – of monogamy – of sex – of sirenians – of tetrapods – of the wolf evolutionary developmental biology – evolutionary dynamics – evolutionary game theory – evolutionary history of life – evolutionary history of plants – evolutionary medicine – evolutionary neuroscience – evolutionary psychology – evolutionary radiation – evolutionarily stable strategy – evolutionary taxonomy – evolutionary tree – evolvability – experimental evolution – exaptation – extinction F Joe Felsenstein – R.A. Fisher – Fisher's reproductive value – fitness – fitness landscape – fixation index (FST) – fluctuating selection – E.B. Ford – fossil – frequency-dependent selection G Galápagos Islands – gene – gene-centric view of evolution – gene duplication – gene flow – gene pool – genetic drift – genetic hitchhiking – genetic recombination – genetic variation – genotype – gene–environment correlation – gene–environment interaction – genotype–phenotype distinction – Stephen Jay Gould – gradualism – Peter and Rosemary Grant – group selection H J. B. S. Haldane – W. D. Hamilton – Hardy–Weinberg principle – heredity – hierarchy of life – history of evolutionary thought – history of speciation – homologous chromosomes – homology (biology) – horizontal gene transfer – human evolution – human evolutionary genetics – human vestigiality – Julian Huxley – Thomas Henry Huxley I inclusive fitness – insect evolution – Invertebrate paleontology (a.k.a. 
invertebrate paleobiology or paleozoology) K karyotype – kin selection – Motoo Kimura – koinophilia L Jean-Baptiste Lamarck – Lamarckism – landrace – language – last universal common ancestor – level of support for evolution – Richard Lewontin – list of gene families – list of human evolution fossils – life-history theory – Wen-Hsiung Li – living fossils – Charles Lyell M macroevolution – macromutation – The Major Transitions in Evolution – maladaptation – The Malay Archipelago – mass extinctions – mating systems – John Maynard Smith – Ernst Mayr – Gregor Mendel – memetics – Mendelian inheritance – Mesozoic–Cenozoic radiation – microevolution – micropaleontology ( micropaleobiology) – Miller–Urey experiment – mimicry – Mitochondrial Eve – modern evolutionary synthesis – molecular clock – molecular evolution – molecular phylogeny – molecular systematics – mosaic evolution – most recent common ancestor – Hermann Joseph Muller – Muller's ratchet – mutation – mutational meltdown N natural selection – natural genetic engineering – nature versus nurture – negative selection – Neo-Darwinism – neutral theory of molecular evolution – Baron Franz Nopcsa – "Nothing in Biology Makes Sense Except in the Light of Evolution" O Susumu Ohno – Aleksandr Oparin – On The Origin of Species – Ordovician radiation – origin of birds – origin of language – orthologous genes (orthologs) P paleoanthropology – paleobiology – paleobotany – paleontology – paleozoology (of vertebrates – of invertebrates) – parallel evolution – paralogous genes (paralogs) – parapatric speciation – paraphyletic – particulate inheritance – peppered moth – peppered moth evolution – peripatric speciation – phenotype – phylogenetics – phylogeny – phylogenetic tree – Pikaia – Plant evolution – polymorphism (biology) – population – population bottleneck – population dynamics – population genetics – preadaptation – prehistoric archaeology – Principles of Geology – George R. Price – Price equation – punctuated equilibrium Q quantum evolution – quasispecies model R race (biology) – Red Queen hypothesis – recapitulation theory – recent African origin of modern humans – recombination – Bernhard Rensch – reinforcement (speciation) – reproductive coevolution in Ficus – reproductive isolation – r/K selection theory S selection – selective breeding – selfish DNA – The Selfish Gene – sexual selection – signalling theory – sociobiology – social effects of evolutionary theory – species – speciation – species flock – sperm competition – stabilizing selection – strain (biology) – subspecies – survival of the fittest – symbiogenesis – sympatric speciation – synapomorphy – systematics – George Gaylord Simpson – G. Ledyard Stebbins T Tiktaalik – timeline of evolution – trait (biological) – transgressive phenotype – transitional fossil – transposon – tree of life – triangle of U U unit of selection V variety (botany) – vertebrate paleontology (a.k.a. vertebrate paleobiology or paleozoology) – viral evolution – The Voyage of the Beagle – vestigiality W Alfred Russel Wallace – Wallace effect – Wallace Line – Wallacea – George C. Williams (biologist) – Edward O. Wilson – Sewall Wright Y Y-chromosomal Adam – Y-DNA haplogroups by ethnic groups See also List of biology topics List of biochemistry topics Evolutionary biology
Index of evolutionary biology articles
Biology
1,424
8,401,893
https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg%20method
In mathematics, the Runge–Kutta–Fehlberg method (or Fehlberg method) is an algorithm in numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is based on the large class of Runge–Kutta methods. The novelty of Fehlberg's method is that it is an embedded method from the Runge–Kutta family, meaning that it reuses the same intermediate calculations to produce two estimates of different accuracy, allowing for automatic error estimation. The method presented in Fehlberg's 1969 paper has been dubbed the RKF45 method, and is a method of order O(h^4) with an error estimator of order O(h^5). By performing one extra calculation, the error in the solution can be estimated and controlled by using the higher-order embedded method, allowing an adaptive stepsize to be determined automatically.

Butcher tableau for Fehlberg's 4(5) method
Any Runge–Kutta method is uniquely identified by its Butcher tableau. The embedded pair proposed by Fehlberg

The first row of coefficients at the bottom of the table gives the fifth-order accurate method, and the second row gives the fourth-order accurate method.

Implementing an RK4(5) Algorithm
The coefficients found by Fehlberg for Formula 1 (derivation with his parameter α2 = 1/3) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages. Fehlberg outlines a solution for a system of n differential equations of the form

dy_i/dt = f_i(t, y_1, ..., y_n), i = 1, ..., n,

to be solved iteratively for y(t + h), where h is an adaptive stepsize to be determined algorithmically. The solution is the weighted average of six increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f on the right-hand side of the differential equation:

k_i = h · f(t + A(i)·h, y + B(i,1)·k_1 + ... + B(i,i-1)·k_(i-1)), i = 1, ..., 6.

Then the weighted average is

y(t + h) = y(t) + CH(1)·k_1 + CH(2)·k_2 + CH(3)·k_3 + CH(4)·k_4 + CH(5)·k_5 + CH(6)·k_6,

and the estimate of the truncation error is

TE = |CT(1)·k_1 + CT(2)·k_2 + CT(3)·k_3 + CT(4)·k_4 + CT(5)·k_5 + CT(6)·k_6|.

At the completion of the step, a new stepsize is calculated:

h_new = 0.9 · h · (ε/TE)^(1/5).

If TE > ε, replace h with h_new and repeat the step. If TE ≤ ε, then the step is completed; replace h with h_new for the next step. The coefficients found by Fehlberg for Formula 2 (derivation with his parameter α2 = 3/8) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages. In another table in Fehlberg, coefficients for an RKF4(5) derived by D. Sarafyan are given.
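The procedure above translates directly into code. The sketch below is an illustration rather than a transcription of Fehlberg's Formula 1 or Formula 2 tables (which are not reproduced here); it uses the widely quoted classic RKF45 coefficients:

```python
# Classic RKF45 embedded pair (widely quoted coefficients, stated here as an
# illustration; Fehlberg's own Formula 1/Formula 2 tables differ in derivation).
A  = [0.0, 1/4, 3/8, 12/13, 1.0, 1/2]                       # nodes A(i)
B  = [[],
      [1/4],
      [3/32, 9/32],
      [1932/2197, -7200/2197, 7296/2197],
      [439/216, -8.0, 3680/513, -845/4104],
      [-8/27, 2.0, -3544/2565, 1859/4104, -11/40]]           # couplings B(i,j)
CH = [16/135, 0.0, 6656/12825, 28561/56430, -9/50, 2/55]     # 5th-order weights
CT = [1/360, 0.0, -128/4275, -2197/75240, 1/50, 2/55]        # error weights

def rkf45(f, t, y, t_end, h, eps=1e-8):
    """Integrate y' = f(t, y) from t to t_end with adaptive stepsize."""
    while t < t_end:
        h = min(h, t_end - t)
        k = []
        for i in range(6):
            yi = y + sum(B[i][j] * k[j] for j in range(i))
            k.append(h * f(t + A[i] * h, yi))
        te = abs(sum(CT[i] * k[i] for i in range(6)))        # truncation error TE
        h_new = 0.9 * h * (eps / te) ** 0.2 if te > 0 else 2 * h
        if te > eps:
            h = h_new            # reject the step, retry with the smaller h
            continue
        y += sum(CH[i] * k[i] for i in range(6))             # accept 5th-order value
        t += h
        h = h_new                # stepsize for the next step
    return y

# Example: y' = y with y(0) = 1, so y(1) should be e = 2.71828...
print(rkf45(lambda t, y: y, 0.0, 1.0, 1.0, h=0.1))
```

The 0.9 safety factor and the exponent 1/5 in h_new come directly from the stepsize formula above.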
See also List of Runge–Kutta methods Numerical methods for ordinary differential equations Runge–Kutta methods Notes References Fehlberg, Erwin (1968). Classical fifth-, sixth-, seventh-, and eighth-order Runge-Kutta formulas with stepsize control. NASA Technical Report 287. https://ntrs.nasa.gov/api/citations/19680027281/downloads/19680027281.pdf Fehlberg, Erwin (1969). Low-order classical Runge-Kutta formulas with stepsize control and their application to some heat transfer problems. Vol. 315. National Aeronautics and Space Administration. Fehlberg, Erwin (1970). Some experimental results concerning the error propagation in Runge-Kutta type integration formulas. NASA Technical Report R-352. https://ntrs.nasa.gov/api/citations/19700031412/downloads/19700031412.pdf Fehlberg, Erwin (1970). "Klassische Runge-Kutta-Formeln vierter und niedrigerer Ordnung mit Schrittweiten-Kontrolle und ihre Anwendung auf Wärmeleitungsprobleme," Computing (Arch. Elektron. Rechnen), vol. 6, pp. 61–71. Sarafyan, Diran (1966). Error Estimation for Runge-Kutta Methods Through Pseudo-Iterative Formulas. Technical Report No. 14, Louisiana State University in New Orleans, May 1966. Further reading Numerical differential equations Runge–Kutta methods Numerical analysis
Runge–Kutta–Fehlberg method
Mathematics
871
66,510,748
https://en.wikipedia.org/wiki/Emily%20Dawson
Emily Dawson is a teacher in the field of Science and Technology Studies. Career Dawson's research focuses on how people encounter and engage with science, with an emphasis on equity and social justice. She previously taught at King's College London, at the Royal College of Art and at the University of the West of England. Publications Equity, Exclusion and Everyday Science Learning: The Experiences of Minoritised Groups (2020) Awards Dawson was honoured with the Philip Leverhulme Prize in 2020 from the Leverhulme Trust. Dawson received the prize for her work on the sociology of science and education, focusing on how structural inequalities affect science experiences outside school, in everyday and popular-culture settings. References Year of birth missing (living people) Living people 21st-century British social scientists Academics of University College London Academics of the University of the West of England, Bristol Science and technology studies scholars
Emily Dawson
Technology
176
5,106,169
https://en.wikipedia.org/wiki/Manure-derived%20synthetic%20crude%20oil
Manure-derived synthetic crude oil is a synthetic bio-oil chemically engineered (converted) from animal or human manure. Research into the production of manure-derived synthetic fuel began with pig manure in 1996 at the University of Illinois at Urbana–Champaign, in the research team led by professors Yuanhui Zhang and Lance Schideman. They developed a method for converting raw pig manure into bio-oil through thermal depolymerization (thermochemical conversion). This process uses a thermochemical conversion reactor to apply heat and pressure to break down carbohydrate materials. As a result, bio-oil, methane and carbon dioxide are produced. With further research, large-scale chemical processing in a refinery-style environment could help process millions of gallons of "pig biocrude" per day. However, this technology is still in its infancy and could produce only of oil per of manure. In 2006, preparations began for the construction of a pilot plant, developed by Snapshot Energy, a start-up firm. According to tests conducted by the National Institute of Standards and Technology, pig manure biocrude produced by current technology contains 15% water, sulfur and char waste containing heavy metals, which should be removed to improve the quality of the oil. See also Alternative fuels Energy and the environment Poultry litter References Feces Sustainable technologies Biofuels Synthetic fuels
Manure-derived synthetic crude oil
Biology
286
44,847,374
https://en.wikipedia.org/wiki/Penicillium%20brefeldianum
Penicillium brefeldianum is an anamorph fungus species of the genus Penicillium which produces brefeldin A, a fungal metabolite. See also List of Penicillium species Further reading References brefeldianum Fungi described in 1967 Fungus species
Penicillium brefeldianum
Biology
63
57,623,219
https://en.wikipedia.org/wiki/Hypercolor%20%28physics%29
In particle physics, hypercolor is a hypothetical attractive force that binds prequarks together by the exchange of hypergluons, analogous to the exchange of gluons by the color force, which binds quarks together. See also Technicolor (physics) References Quantum chromodynamics
Hypercolor (physics)
Physics
64
4,973,543
https://en.wikipedia.org/wiki/Beta%20Camelopardalis
Beta Camelopardalis, Latinised from β Camelopardalis, is the brightest star in the northern constellation of Camelopardalis. It is bright enough to be faintly visible to the naked eye, having an apparent visual magnitude of 4.02. Based upon an annual parallax shift of 3.74 mas as seen from Earth, it is located roughly 870 light-years from the Sun. It is moving closer with a radial velocity of −1.90 km/s and is most likely a single star. This is a yellow-hued G-type supergiant/bright giant with a stellar classification of G1 Ib–IIa. It is an estimated 60 million years old and is spinning with a projected rotational velocity of 11.7 km/s. This is an unusually high rate of rotation for an evolved star of this type. One possible explanation is that it may have engulfed a nearby giant planet, such as a hot Jupiter. Beta Camelopardalis has 6.5 times the mass of the Sun and has expanded to around 58 times the Sun's radius. The star is radiating 1,592 times the Sun's luminosity from its enlarged photosphere at an effective temperature of . It is a source of X-ray emission. β Cam has two visual companions: a 7th-magnitude A5-class star at an angular separation of 84 arcseconds; and a 12th-magnitude star at 15 arcseconds. References External links HR 1603 CCDM J05034+6026 Image Beta Camelopardalis G-type supergiants G-type bright giants Double stars Camelopardalis Camelopardalis, Beta BD+60 0856 Camelopardalis, 10 031910 023522 1603
Beta Camelopardalis
Astronomy
363
63,735,602
https://en.wikipedia.org/wiki/Prix%20Paul%20Doistau%E2%80%93%C3%89mile%20Blutet
The Prix Paul Doistau–Émile Blutet is a biennial prize awarded by the French Academy of Sciences in the fields of mathematics and physical sciences since 1954. Each recipient receives 3000 euros. The prize is also awarded quadrennially in biology. The award is also occasionally awarded in other disciplines. List of laureates Mathematics 1958 Marc Krasner 1980 Jean-Michel Bony 1982 Jean-Pierre Ramis 1982 Gérard Maugin 1985 Dominique Foata 1986 Pierre-Louis Lions 1987 Pierre Bérard 1987 Lucien Szpiro 1999 Wendelin Werner 2001 Hélène Esnault 2004 Laurent Stolovitch 2006 Alice Guionnet 2008 Isabelle Gallagher 2010 Yves André 2012 Serge Cantat 2014 Sébastien Boucksom 2016 Hajer Bahouri 2018 Physical sciences 2002 2005 Mustapha Besbes 2007 2009 Hasnaa Chennaoui-Aoudjehane 2011 Henri-Claude Nataf 2013 2015 Philippe André 2019 Integrative biology 2000 Jérôme Giraudat 2004 Marie-Claire Verdus 2008 Hélène Barbier-Brygoo 2012 Olivier Hamant Mechanical and computational science 2000 Annie Raoult 2002 Gilles Francfort 2002 Jean-Jacques Marigo 2006 Hubert Maigre 2006 Andreï Constantinescu 2008 Pierre Comte 2010 Nicolas Triantafyllidis 2012 Élisabeth Guazzelli 2014 Jacques Magnaudet 2019 Denis Sipp Other disciplines 1967 Jacques Blamont 1975 1976 Martial Ducloy 1976 Arlette Nougarède 1981 Christian Bordé 1988 2019 References Awards of the French Academy of Sciences Awards established in the 1950s Mathematics awards
Prix Paul Doistau–Émile Blutet
Technology
308
54,061,735
https://en.wikipedia.org/wiki/Head%20%28hydrology%29
In hydrology, the head is the point on a watercourse up to which it has been artificially broadened and/or raised by an impoundment. Above the head of the reservoir, natural conditions prevail; below it, the water level above the riverbed has been raised by the impoundment and its flow rate reduced. When banks, barrages, weir sluices or dams are overcome (overtopped), a course with less friction than the natural one develops (mid-level and surface currents rather than bed and bank currents), resulting in flash flooding below. In principle, a distinction must be drawn between the head of a reservoir impounded by a dam, and the head of a works resulting from a barrage or canal locks. Head of a reservoir A head's location varies with the height of the water level against the dam. Since there is only an extremely low flow within the reservoir, and hence no water-level gradient, the head can be clearly seen: it is where the farthest watercourse discharges into the reservoir. Upstream of the actual reservoir there is likely to be a pre-dam, which typically has a constant water level, so the head is reinforced. The term does not apply to embankment (storage/settling) reservoirs, to which water is pumped from below. Head of a works On large rivers in all but arid climates, the head of a works is rarely fixed rigidly, as a significant flow rate and water gradient are sometimes seen within the impounded reach. The head can only be found by calculation or defined by observations with and without impoundment. Depending on the flow rate and the control of the barrage, locks or weir, its position will vary greatly and will not necessarily be where the so-called headworks are. Many rivers (such as the Moselle) are barraged many times to make them navigable and/or to avoid uncontrolled flooding. In such a case only the higher stretches of river are uninfluenced by impoundment; as to the other stretches, the river has long "level" pounds but no or few natural heads, instead having artificial structures up to the top head. Ideal management of the higher heads will allow headroom to keep back some flood-meadow water so as not to compound heavy precipitation and the resultant run-off downstream; corollary channels with spare capacity are a further mitigation where land is at a premium (such as the Jubilee River). Ideal management of the lowest head will allow daily timed openings, at least in flood events, to coincide with an outgoing (ebb) rather than a flood tide. See also Hydraulic head Staff (head) gauge Hydraulic engineering Hydrology
Head (hydrology)
Physics,Chemistry,Engineering,Environmental_science
540
42,248,073
https://en.wikipedia.org/wiki/Bell%20roof
A bell roof (bell-shaped roof, ogee roof, Philibert de l'Orme roof) is a roof form resembling the shape of a bell. Shapes Bell roofs may be round, multi-sided or square. A similar-sounding feature added to other roof forms at the eaves or walls is bell-cast, sprocketed or flared eaves, in which the roof flares upward, resembling the common shape of the bottom of a bell. Gallery See also Roof pitch Bochka roof References Roofs
Bell roof
Technology,Engineering
106
1,501,989
https://en.wikipedia.org/wiki/Orbital%20plane%20of%20reference
In celestial mechanics, the orbital plane of reference (or orbital reference plane) is the plane used to define orbital elements (positions). The two main orbital elements that are measured with respect to the plane of reference are the inclination and the longitude of the ascending node. Depending on the type of body being described, there are four different kinds of reference planes that are typically used: The ecliptic or invariable plane for planets, asteroids, comets, etc. within the Solar System, as these bodies generally have orbits that lie close to the ecliptic. The equatorial plane of the orbited body for satellites orbiting with small semi-major axes The local Laplace plane for satellites orbiting with intermediate-to-large semi-major axes The plane tangent to the celestial sphere for extrasolar objects On the plane of reference, a zero-point must be defined from which the angles of longitude are measured. This is usually defined as the point on the celestial sphere where the plane crosses the prime hour circle (the hour circle occupied by the First Point of Aries), also known as the equinox. See also Fundamental plane Plane (geometry) References Reference Planes Spherical astronomy Orbits Planes (geometry)
Orbital plane of reference
Mathematics
241
5,221,143
https://en.wikipedia.org/wiki/Leave%20the%20gate%20as%20you%20found%20it
Leave the gate as you found it (or leave all gates as found) is an important rule of courtesy in rural areas throughout the world. If a gate is found open, it should be left open, and if it is closed, it should be left closed. If a closed gate absolutely must be traversed, it should be closed again afterwards. It applies to visitors travelling onto or across farms, ranches, and stations. In low-rainfall areas, closing gates can cut livestock off from water supplies. For example, most of the land used for grazing in Australia has no natural water supplies, so drinking water for the stock must be supplied by the farmer or landowner, often by using a windmill to pump groundwater. Even visitors who know how a stock water system works may be unaware of breakdowns. During hot weather, cattle require large quantities of water to drink and can die in less than a day if they do not get it. Sheep need less water and can survive longer without it, but will die if cut off from water for several hot days. In all agricultural areas, farmers need to keep groups of livestock separate, for reasons including breeding for disease resistance and increased production, pest control, and controlling when ewes deliver their lambs. Unwanted mingling of flocks or herds can deprive a farmer of significant income. The original versions of the United Kingdom's Country Code advised visitors to always close gates. The revised Countryside Code now suggests that gates should be left as found. See also Leave No Trace Leaving the world a better place References External links Rules of the Trail Advice from the International Mountain Bicycling Association Camping on BLM Land A regional office of the U.S. Bureau of Land Management asks visitors to "Tread Lightly" Agriculture in society Environmental ethics
Leave the gate as you found it
Environmental_science
359
4,695,819
https://en.wikipedia.org/wiki/Gliese%20687
Gliese 687, or GJ 687 (Gliese–Jahreiß 687) is a red dwarf in the constellation Draco. This is one of the closest stars to the Sun and lies at a distance of . Even though it is close by, it has an apparent magnitude of about 9, so it can only be seen through a moderately sized telescope. Gliese 687 has a high proper motion, advancing 1.304 arcseconds per year across the sky. It has a net relative velocity of about 39 km/s. It is known to have a Neptune-mass planet. Old books and articles refer to it as Argelander Oeltzen 17415. Properties Gliese 687 has about 40% of the Sun's mass and nearly 50% of the Sun's radius. Compared to the Sun, it has a slightly higher proportion of elements with higher atomic numbers than helium. It seems to rotate every 60 days and exhibits some chromospheric activity. It displays no excess of infrared radiation that would indicate orbiting dust. Gliese 687 is a solitary red dwarf that emits X-rays. Planetary system In 2014, Gliese 687 was discovered to have a planet, Gliese 687 b, with a minimum mass of 18.394 Earth masses (which makes it comparable to Neptune), an orbital period of 38.14 days, a low orbital eccentricity, and an orbit inside the habitable zone. Another Neptune-mass planet candidate was discovered in 2020, in a further-out and much colder orbit. See also List of nearest stars and brown dwarfs List of exoplanets discovered in 2014 - Gliese 687 b List of exoplanets discovered in 2020 - Gliese 687 c References External links NEXXUS page Draco (constellation) Local Bubble M-type main-sequence stars 086162 0687 Planetary systems with two confirmed planets BD+68 0946
Gliese 687
Astronomy
406
11,421,308
https://en.wikipedia.org/wiki/Mir-BART1%20microRNA%20precursor%20family
The mir-BART1 microRNA precursor is found in Human herpesvirus 4 (Epstein–Barr virus) and Cercopithecine herpesvirus 15. mir-BART1 is found in all stages of infection, but expression is significantly elevated in the lytic stage. In Epstein–Barr virus, mir-BART1 is found in the intronic regions of the BART (BamHI-A region rightward transcript) gene, whose function is unknown. The mature sequence is excised from the 5' arm of the hairpin. References External links MicroRNA MicroRNA precursor families
Mir-BART1 microRNA precursor family
Chemistry
119
11,724,761
https://en.wikipedia.org/wiki/Trophic%20level
The trophic level of an organism is the position it occupies in a food web. Within a food web, a food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finishes with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a part of a wider food "web". Ecological communities with higher biodiversity form more complex trophic paths. The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.

History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).

Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers. Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis. Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores. Decomposers (detritivores) break down dead plant and animal material and wastes and release them again as energy and nutrients into the ecosystem for recycling. Decomposers, such as bacteria and fungi (mushrooms), feed on waste and dead matter, converting it into inorganic chemicals that can be recycled as mineral nutrients for plants to use again.

Trophic levels can be represented by numbers, starting at level 1 with plants. Further trophic levels are numbered subsequently according to how far the organism is along the food chain.
Level 1: Plants and algae make their own food and are called producers.
Level 2: Herbivores eat plants and are called primary consumers.
Level 3: Carnivores that eat herbivores are called secondary consumers.
Level 4: Carnivores that eat other carnivores are called tertiary consumers.
Apex predator: By definition, healthy adult apex predators have no predators (with members of their own species a possible exception) and are at the highest numbered level of their food web.

In real-world ecosystems, there is more than one food chain for most organisms, since most organisms eat more than one kind of food or are eaten by more than one type of predator. A diagram that sets out the intricate network of intersecting and overlapping food chains for an ecosystem is called its food web. Decomposers are often left off food webs, but if included, they mark the end of a food chain. Thus food chains start with primary producers and end with decay and decomposers. Since decomposers recycle nutrients, leaving them so they can be reused by primary producers, they are sometimes regarded as occupying their own trophic level.
The trophic level of a species may vary if it has a choice of diet. Virtually all plants and phytoplankton are purely phototrophic and are at exactly level 1.0. Many worms are at around 2.1; insects 2.2; jellyfish 3.0; birds 3.6. A 2013 study estimates the average trophic level of human beings at 2.21, similar to pigs or anchovies. This is only an average, and plainly both modern and ancient human eating habits are complex and vary greatly. For example, a traditional Inuit living on a diet consisting primarily of seals would have a trophic level of nearly 5. Biomass transfer efficiency In general, each trophic level relates to the one below it by absorbing some of the energy it consumes, and in this way can be regarded as resting on, or supported by, the next lower trophic level. Food chains can be diagrammed to illustrate the amount of energy that moves from one feeding level to the next in a food chain. This is called an energy pyramid. The energy transferred between levels can also be thought of as approximating to a transfer in biomass, so energy pyramids can also be viewed as biomass pyramids, picturing the amount of biomass that results at higher levels from biomass consumed at lower levels. However, when primary producers grow rapidly and are consumed rapidly, the biomass at any one moment may be low; for example, phytoplankton (producer) biomass can be low compared to the zooplankton (consumer) biomass in the same area of ocean. The efficiency with which energy or biomass is transferred from one trophic level to the next is called the ecological efficiency. Consumers at each level convert on average only about 10% of the chemical energy in their food to their own organic tissue (the ten-percent law). For this reason, food chains rarely extend for more than 5 or 6 levels. At the lowest trophic level (the bottom of the food chain), plants convert about 1% of the sunlight they receive into chemical energy. It follows from this that the total energy originally present in the incident sunlight that is finally embodied in a tertiary consumer is about 0.001%. Evolution Both the number of trophic levels and the complexity of relationships between them evolve as life diversifies through time, the exception being intermittent mass extinction events. Fractional trophic levels Food webs largely define ecosystems, and the trophic levels define the position of organisms within the webs. But these trophic levels are not always simple integers, because organisms often feed at more than one trophic level. For example, some carnivores also eat plants, and some plants are carnivores. A large carnivore may eat both smaller carnivores and herbivores; the bobcat eats rabbits, but the mountain lion eats both bobcats and rabbits. Animals can also eat each other; the bullfrog eats crayfish and crayfish eat young bullfrogs. The feeding habits of a juvenile animal, and, as a consequence, its trophic level, can change as it grows up. The fisheries scientist Daniel Pauly sets the values of trophic levels to one in plants and detritus, two in herbivores and detritivores (primary consumers), three in secondary consumers, and so on. The definition of the trophic level, TL, for any consumer species i is: TL_i = 1 + Σ_j (TL_j · DC_ij), where TL_j is the fractional trophic level of the prey j, and DC_ij represents the fraction of j in the diet of i. That is, the consumer trophic level is one plus the weighted average of how much different trophic levels contribute to its food.
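A minimal sketch of this weighted-average calculation in Python follows. The species and diet fractions are invented for illustration; only the formula TL_i = 1 + Σ_j (TL_j · DC_ij) and the ten-percent energy figures come from the text above.

```python
# Sketch: fractional trophic levels via Pauly's formula
# TL_i = 1 + sum_j(TL_j * DC_ij), where DC_ij is the fraction of prey j
# in the diet of consumer i. Hypothetical, acyclic food web.

diets = {
    "phytoplankton": {},                               # producer, TL = 1.0
    "zooplankton":   {"phytoplankton": 1.0},
    "anchovy":       {"zooplankton": 0.8, "phytoplankton": 0.2},
    "tuna":          {"anchovy": 0.9, "zooplankton": 0.1},
}

def trophic_level(species, diets, cache=None):
    """Recursively evaluate TL_i = 1 + sum_j(TL_j * DC_ij)."""
    if cache is None:
        cache = {}
    if species not in cache:
        cache[species] = 1.0 + sum(
            frac * trophic_level(prey, diets, cache)
            for prey, frac in diets.get(species, {}).items())
    return cache[species]

for s in diets:
    print(s, round(trophic_level(s, diets), 2))
# phytoplankton 1.0, zooplankton 2.0, anchovy 2.8, tuna 3.72

# Energy aside from the text: ~1% photosynthetic capture and ~10%
# transfer per level leave a tertiary consumer with roughly
# 0.01 * 0.1**3 = 1e-5, i.e. about 0.001% of the incident sunlight.
```

The recursion assumes no mutual predation (an acyclic web); webs with loops, like the bullfrog–crayfish example above, would need an iterative or linear-algebra solution instead.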
In the case of marine ecosystems, the trophic level of most fish and other marine consumers takes a value between 2.0 and 5.0. The upper value, 5.0, is unusual, even for large fish, though it occurs in apex predators of marine mammals, such as polar bears and orcas. In addition to observational studies of animal behavior and quantification of animal stomach contents, trophic level can be quantified through stable isotope analysis of animal tissues such as muscle, skin, hair, and bone collagen. This is because there is a consistent increase in the nitrogen isotopic composition at each trophic level caused by fractionations that occur with the synthesis of biomolecules; the magnitude of this increase in nitrogen isotopic composition is approximately 3–4‰. Mean trophic level In fisheries, the mean trophic level for the fisheries catch across an entire area or ecosystem is calculated for year y as: TL_y = Σ_i (TL_i · Y_iy) / Σ_i (Y_iy), where Y_iy is the annual catch of the species or group i in year y, and TL_i is the trophic level for species i as defined above. Fish at higher trophic levels usually have a higher economic value, which can result in overfishing at the higher trophic levels. Earlier reports found precipitous declines in mean trophic level of fisheries catch, in a process known as fishing down the food web. However, more recent work finds no relation between economic value and trophic level; and that mean trophic levels in catches, surveys and stock assessments have not in fact declined, suggesting that fishing down the food web is not a global phenomenon. However, Pauly et al. note that trophic levels peaked at 3.4 in 1970 in the northwest and west-central Atlantic, followed by a subsequent decline to 2.9 in 1994. They report a shift away from long-lived, piscivorous, high-trophic-level bottom fishes, such as cod and haddock, to short-lived, planktivorous, low-trophic-level invertebrates (e.g., shrimp) and small, pelagic fish (e.g., herring). This shift from high-trophic-level fishes to low-trophic-level invertebrates and fishes is a response to changes in the relative abundance of the preferred catch. They consider that this is part of the global fishery collapse, which finds an echo in the overfished Mediterranean Sea. Humans have a mean trophic level of about 2.21, about the same as a pig or an anchovy. FiB index Since biomass transfer efficiencies are only about 10%, it follows that the rate of biological production is much greater at lower trophic levels than it is at higher levels. Fisheries catch will therefore tend, at least to begin with, to increase as the mean trophic level of the catch declines and fisheries target species lower in the food web. In 2000, this led Pauly and others to construct a "Fisheries in Balance" index, usually called the FiB index. The FiB index is defined, for any year y, by: FiB_y = log[Y_y · (1/TE)^TL_y] − log[Y_0 · (1/TE)^TL_0], where Y_y is the catch at year y, TL_y is the mean trophic level of the catch at year y, Y_0 is the catch and TL_0 the mean trophic level of the catch at the start of the series being analyzed, and TE is the transfer efficiency of biomass or energy between trophic levels. The FiB index is stable (zero) over periods of time when changes in trophic levels are matched by appropriate changes in the catch in the opposite direction. The index increases if catches increase for any reason, e.g. higher fish biomass, or geographic expansion, and decreases when a decline in the mean trophic level of the catch is not compensated by a corresponding increase in catch. Such decreases explain the "backward-bending" plots of trophic level versus catch originally observed by Pauly and others in 1998.
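As a rough illustration, here is a Python sketch of the two formulas just defined. The species, catch tonnages, and trophic-level values are invented; the base-10 logarithm and TE = 0.10 are assumptions consistent with the nominal 10% transfer efficiency mentioned above.

```python
import math

def mean_trophic_level(catches, tls):
    """TL_y = sum_i(TL_i * Y_iy) / sum_i(Y_iy) for one year's catch."""
    return sum(tls[sp] * y for sp, y in catches.items()) / sum(catches.values())

def fib_index(catch_y, tl_y, catch_0, tl_0, te=0.10):
    """FiB_y = log10[Y_y * (1/TE)**TL_y] - log10[Y_0 * (1/TE)**TL_0]."""
    return (math.log10(catch_y * (1 / te) ** tl_y)
            - math.log10(catch_0 * (1 / te) ** tl_0))

tls = {"cod": 4.4, "herring": 3.2, "shrimp": 2.7}          # illustrative TLs
catch_1970 = {"cod": 800, "herring": 300, "shrimp": 100}   # invented tonnages
catch_1994 = {"cod": 200, "herring": 700, "shrimp": 500}

tl_0 = mean_trophic_level(catch_1970, tls)   # ~3.96, a high-TL catch
tl_y = mean_trophic_level(catch_1994, tls)   # ~3.19, "fishing down the web"
print(round(fib_index(sum(catch_1994.values()), tl_y,
                      sum(catch_1970.values()), tl_0), 2))   # ~ -0.70
# The negative value shows the drop in trophic level was not offset by a
# large enough rise in total catch - a "backward-bending" situation.
```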
Tritrophic and other interactions One aspect of trophic levels is called tritrophic interaction. Ecologists often restrict their research to two trophic levels as a way of simplifying the analysis; however, this can be misleading, because tritrophic interactions (such as plant–herbivore–predator) cannot always be understood by simply adding together pairwise interactions (plant–herbivore plus herbivore–predator, for example). Significant interactions can occur between the first trophic level (plant) and the third trophic level (a predator) in determining herbivore population growth, for example. Simple genetic changes may yield morphological variants in plants that then differ in their resistance to herbivores because of the effects of the plant architecture on enemies of the herbivore. Plants can also develop defenses against herbivores, such as chemical defenses. See also Cascade effect Energy flow (ecology) Marine trophic level Mesopredator release hypothesis Trophic cascade Trophic dynamics – Food web Trophic state index – applied to lakes References External links Trophic levels BBC. Last updated March 2004. Fisheries science Food chains Ecology
Trophic level
Biology
2,571
1,418,996
https://en.wikipedia.org/wiki/Grog%20%28clay%29
Grog, also known as firesand and chamotte, is a raw material usually made from crushed and ground potsherds, reintroduced into crude clay to temper it before making ceramic ware. It has a high percentage of silica and alumina. It is normally available as a powder or chippings, and is an important ingredient in Coade stone. Production It can be produced by firing selected fire clays to high temperatures before grinding and screening to specific particle sizes. An alternate method of production uses pitchers. The particle size distribution is generally coarser than that of the other raw materials used to prepare clay bodies. It tends to be porous and have low density. Properties Grog is composed of 40% minimum alumina, 30% minimum silica, 4% maximum iron(III) oxide, and up to 2% calcium oxide and magnesium oxide combined. Its melting point is approximately . Its maximum water absorption is 7%. Its thermal expansion coefficient is 5.2 mm/m, and its thermal conductivity is 0.8 W/(m·K) at 100 °C and 1.0 W/(m·K) at 1000 °C. It is not easily wetted by steel. Applications Grog is used in pottery and sculpture to add a gritty, rustic texture called "tooth"; it reduces shrinkage and aids even drying. This prevents defects such as cracking, crow's feet, patterning, and lamination. The coarse particles open the green clay body to allow gases to escape. Grog adds structural strength to hand-built and thrown pottery during shaping, although it can diminish fired strength. The finer the particles, the closer the clay bond, and the denser and stronger the fired product. "The strength in the dry state increases with grog down as fine as that passing the 100-mesh sieve, but decreases with material passing the 200-mesh sieve." About 20% grog is added to crude clay (in the dry form) before mixing with water. Adding grog to clay serves two primary functions: 1) it helps prevent cracking of the clay when the ceramic piece is being worked and when it dries, by reducing its plasticity; 2) it protects the ceramic piece from thermal shock during firing, particularly during sudden rises or drops in temperature, which can otherwise cause breakage.
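A minimal sketch of the dry-batch arithmetic implied by the "about 20%" figure above, assuming it means 20% of the total dry weight (if the source intends 20 parts grog per 100 parts clay, the fraction would differ); the batch size is hypothetical.

```python
# Sketch: proportioning grog into a dry clay body before water is added.
# Assumption: "about 20% grog" is read as 20% of the total dry weight.

def dry_batch(total_dry_kg, grog_fraction=0.20):
    """Split a dry batch into clay and grog by weight."""
    grog = total_dry_kg * grog_fraction
    return total_dry_kg - grog, grog

clay_kg, grog_kg = dry_batch(10.0)   # hypothetical 10 kg dry batch
print(f"clay {clay_kg:.1f} kg, grog {grog_kg:.1f} kg")   # clay 8.0 kg, grog 2.0 kg
```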
Substitutes for grog used in pottery are dried and sifted horse manure, sand collected from dry riverbeds (which has been sifted through a screen), or finely ground schist. Others make use of volcanic ash. Some natural clays already contain an admixture of some "natural temper", which is why the potters who use such clays do not add any temper of their own. In Middle and South Europe, grog is used to create fire-resistant chamotte-type bricks and mortar for the construction of fireplaces and old-style and industrial furnaces, and as a component of high-temperature sealants and adhesives. A typical example of domestic use is a pizza stone made from chamotte. Because the stone absorbs and retains heat, pizza or bread can be baked on it in a regular domestic oven; the advantage is supposed to be a more even heat. A normal commercial domestic oven cools down when the door is opened, but the stone remains hot, creating a more even bake. The stone can also absorb some moisture, making for a drier bake. Archaeology In archaeology, "grog" is crushed, fired pottery of any type that is added as a temper to unfired clay. Several pottery types from the European Bronze Age are typologized on the basis of their grog inclusions. The practice of adding grog to clay as a temper was widespread throughout many cultures and is mentioned in the writings of Hai Gaon (939–1038), who wrote in his commentary on the Mishnah (compiled in 189 CE): "ḥarsit [= grog], that which they grind [of potsherds] and make therewith clay is called [in Hebrew] ḥarsit." See also Grogg References External links What is Grog in Pottery? Ceramic materials Refractory materials Silicates
Grog (clay)
Physics,Engineering
882
65,447,939
https://en.wikipedia.org/wiki/GIHOC%20Distilleries
GIHOC Distilleries Company Limited was the first modern distillery to be established in West Africa. It was established by the pre-independence Industrial Development Corporation (IDC) in 1958 as the State Distilleries Corporation for the manufacture of alcoholic beverages. The managing director of GIHOC Distilleries Company Limited is Maxwell Kofi Jumah. History GIHOC Distilleries Company Limited was established in 1958 as the State Distilleries Corporation for the manufacture of alcoholic beverages. A decade after its establishment, in 1968, it became a division of the then Ghana Industrial Holding Corporation (GIHOC). In 1993, GIHOC Distilleries became, and still remains, a limited liability company wholly owned by the Government of Ghana. In 2014, Mrs Kay Kwao-Simmonds, who served as Managing Director of GIHOC Distilleries Company Ltd from April 2010 to June 2017, stated that the company hoped to expand into neighbouring Togo and Côte d'Ivoire by the end of 2015. Additionally, she said "the company already has three of its products on the Nigeria market". Operations As of January 2020, GIHOC had operations in the 16 regions of Ghana. It also has operations in Liberia, Nigeria and Côte d'Ivoire. In January 2020, the managing director of GIHOC Distilleries Company Limited, Maxwell Kofi Jumah, informed the press that: "GIHOC's South Africa will become operational in the month of February 2020 as processes leading to the official commissioning of the China and USA offices near completion." References External links Official website Drink companies of Ghana Distilleries
GIHOC Distilleries
Chemistry
366
8,294,746
https://en.wikipedia.org/wiki/Smale%27s%20problems
Smale's problems is a list of eighteen unsolved problems in mathematics proposed by Steve Smale in 1998 and republished in 1999. Smale composed this list in reply to a request from Vladimir Arnold, then vice-president of the International Mathematical Union, who asked several mathematicians to propose a list of problems for the 21st century. Arnold's inspiration came from the list of Hilbert's problems that had been published at the beginning of the 20th century. Table of problems In later versions, Smale also listed three additional problems, "that don't seem important enough to merit a place on our main list, but it would still be nice to solve them:" Mean value problem Is the three-sphere a minimal set (Gottschalk's conjecture)? Is an Anosov diffeomorphism of a compact manifold topologically the same as the Lie group model of John Franks? See also Millennium Prize Problems Simon problems Taniyama's problems Hilbert's problems Thurston's 24 questions References Unsolved problems in mathematics
Smale's problems
Mathematics
216
21,739,576
https://en.wikipedia.org/wiki/Pilling%E2%80%93Bedworth%20ratio
In corrosion of metals, the Pilling–Bedworth ratio (P–B ratio) is the ratio of the volume of the elementary cell of a metal oxide to the volume of the elementary cell of the corresponding metal (from which the oxide is created). On the basis of the P–B ratio, it can be judged whether the metal is likely to passivate in dry air by creation of a protective oxide layer. Definition The P–B ratio is defined as R_PB = V_oxide / V_metal = (M_oxide · ρ_metal) / (n · M_metal · ρ_oxide), where V is the molar volume, M is the atomic or molecular mass, n is the number of atoms of metal per molecule of the oxide, and ρ is the density. History N.B. Pilling and R.E. Bedworth suggested in 1923 that metals can be classed into two categories: those that form protective oxides, and those that cannot. They ascribed the protectiveness of the oxide to the volume the oxide takes up in comparison to the volume of the metal consumed to produce this oxide in a corrosion process in dry air. The oxide layer is unprotective if the ratio is less than unity, because the film that forms on the metal surface is porous and/or cracked. Conversely, metals with a ratio higher than 1 tend to be protective because they form an effective barrier that prevents the gas from further oxidizing the metal. Application On the basis of measurements, the following connection can be shown: R_PB < 1: the oxide coating layer is too thin, likely broken, and provides no protective effect (for example, magnesium); R_PB > 2: the oxide coating chips off and provides no protective effect (for example, iron); 1 < R_PB < 2: the oxide coating is passivating and provides a protective effect against further surface oxidation (for example, aluminium, titanium, and chromium-containing steels). However, the exceptions to the above P–B ratio rules are numerous. Many of the exceptions can be attributed to the mechanism of the oxide growth: the underlying assumption in the P–B ratio is that oxygen needs to diffuse through the oxide layer to the metal surface; in reality, it is often the metal ion that diffuses to the air–oxide interface. The P–B ratio is important when modelling the oxidation of nuclear fuel cladding tubes, which are typically made of zirconium alloys, as it defines how much of the cladding is consumed and weakened by oxidation. The P–B ratio of zirconium alloys can vary between 1.48 and 1.56, meaning that the oxide is more voluminous than the consumed metal.
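A short Python sketch of the reconstructed ratio and the rule-of-thumb bands above. The molar masses (g/mol) and densities (g/cm³) for aluminium and its oxide are standard handbook values quoted here for illustration; verify against a reference before relying on them.

```python
# Sketch: Pilling-Bedworth ratio from the reconstructed definition,
# R_PB = (M_oxide * rho_metal) / (n * M_metal * rho_oxide).

def pb_ratio(m_oxide, rho_oxide, n, m_metal, rho_metal):
    return (m_oxide * rho_metal) / (n * m_metal * rho_oxide)

def classify(r):
    # Rule-of-thumb bands from the text; real behaviour has many exceptions.
    if r < 1:
        return "too thin / likely broken - unprotective (e.g. Mg)"
    if r > 2:
        return "chips off - unprotective (e.g. Fe)"
    return "passivating - protective (e.g. Al, Ti, Cr steels)"

# Aluminium: Al2O3 (M ~ 101.96 g/mol, rho ~ 3.95 g/cm3), n = 2 Al atoms
# per oxide formula unit; Al metal (M ~ 26.98 g/mol, rho ~ 2.70 g/cm3).
r_al = pb_ratio(101.96, 3.95, 2, 26.98, 2.70)
print(f"Al: R_PB = {r_al:.2f} -> {classify(r_al)}")   # ~1.29, passivating
```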
Values References Corrosion prevention
Pilling–Bedworth ratio
Chemistry
523
62,670,862
https://en.wikipedia.org/wiki/Glenn%20Pool%20Oil%20Reserve
The discovery of the Glenn Pool Oil Reserve in 1905 brought the first major oil pipelines into Oklahoma, and instigated the first large scale oil boom in the state. Located near what was—at the time—the small town of Tulsa, Oklahoma, the resultant establishment of the oil fields in the area contributed greatly to the early growth and success of the city, as Tulsa became the petroleum and transportation center of the state, and the world. During the boom, several Creek Indian land allotment owners became millionaires; Oklahoma became the world's largest oil producer for years; and the area benefited from the generation of more wealth than the California Gold Rush and Nevada Silver Rush combined, as well as the increased investment capital and industrial infrastructure the boom brought with it. The town of Glenpool, Oklahoma was founded in 1906 as a direct result of the oil reserve's discovery. History Background Oil speculation was already rampant in the Tulsa region following the Red Fork discovery in 1901. One of the early successes was Galbreath's 125 barrel per day well, northeast of Red Fork. On 3 July 1901, Galbreath camped on the Glenn farm, when Bob Glenn showed Galbreath a limestone outcrop with traces of oil. Further progress awaited federal approval of an oil lease. At the turn of the 20th century, the federal government dissolved tribal land claims of the Indian Territory in favor of the distribution of parcels to private owners. Robert Galbreath, a speculator and wildcatter, began prospecting in the area in 1901, and made an initial agreement that year with the recipient of one of these land allotments, Ida Glenn (she being a Creek native) and her husband, Robert, to drill for oil on their farmland. Due to federal regulations of the time, however, it would be years before such drilling commenced. Following the change of oil leasing regulations affecting Native American land allotments enacted due to Oklahoma's pending statehood, Galbreath and a partner, Frank Chesley, finally began drilling on the Ida E. Glenn Number One drill-site in the autumn of 1905. Roy Dodd and Shorty Miller made up the cable-tool drilling crew. Discovery After almost giving up and conceding that the well was probably a "dry hole", Galbreath noticed signs of gas flow in early November and continued drilling. Due to the depth they had drilled by mid-November, the success of the well was doubtful. After seeing signs of oil in the well debris, however, the pair were encouraged and, once more, continued on. On November 22, at 5 AM, with the well deep into the layer of Bartlesville (or "Glenn") sandstone of the Boggy Formation, the two struck oil at a depth of . The oil soon flowed over the top of the derrick, and the "gusher" marked the discovery of Oklahoma's first major oil field. Galbreath, Chesley, Charles Colcord, and John Mitchell then formed the Creek Oil Company, and Chesley soon leased an additional 600 acres. Galbreath went on to drill 69 successful wells, with only one dry hole. The Ida E. Glenn Number One soon regularly produced 75–85 barrels of light, sweet crude oil a day. Galbreath, a veteran of the earlier Red Fork boom, wished to avoid the chaos which had followed that prior discovery and attempted to keep the drilling and subsequent discovery a secret, but to no avail. Several other speculators operating in the area noticed the activity at the farm. The area was immediately swarmed by oil and land speculators. Within a year, the Glenn Pool held over 125 oil- or gas-producing wells.
Characteristics Wildcat drilling took place over a wide area, which had the effect of quickly defining the core layout of the reserve, an area roughly four miles by two miles with a slope of about 40 feet per mile, and an average field thickness of 100 feet. The Glenn Pool Oil Reserve held an estimated 1 billion bbls of oil in place, with ultimate recoverable reserves of 400+ million bbls. The field grew from 80 acres to 8,000 acres during the first year. By 1907, natural flowing oil production ranged from to per year. Gas depletion caused by massive venting, however, decreased the gas pressure over the same period, and pumping for oil collection then became necessary. Total field production by 1907 exceeded , making Oklahoma that year the leading producer of oil, not only in the US but in the world. The area experienced a huge economic boom. Prices for basic goods and services, however, soared in the area. Consequences Oil spills, due to a lack of storage facilities, were common early on. Often, open pits were dug and filled with the oil, forming huge "oil lakes" which sometimes escaped their banks and flooded the countryside. During thunderstorms, these "lakes" sometimes caught fire following lightning strikes. Due to this unclean method of storage, the product from the reserve often sold for as little as 25 cents per barrel. Oklahoma Natural Gas, Prairie Oil and Gas Company, Gulf Oil Company, and the Texas Company quickly built large-diameter pipelines into the area, which by 1908 alleviated many of the infrastructure problems the rapid boom had caused. The Oklahoma oil boom created more wealth for speculators than the California gold rush and Colorado silver rush combined. Several of the Creek Nation land allotment owners in the vicinity became rich, almost overnight, and received regular royalty payments of over a million dollars a year following the discovery. One next-door neighbor of the Glenns, Thomas Gilcrease, became a multi-millionaire as a result of the oil production, and had 32 producing wells on his farm by 1917. Aftermath Harry Ford Sinclair (founder of Sinclair Oil and Refining Company) and J. Paul Getty (founder of Getty Oil Company) both got their start during the Glenn Pool boom. The town of Glenpool, Oklahoma, was founded in 1906 as support for the fledgling oil industry in the area, and had over 500 inhabitants by 1910. Glenpool today calls itself "...the town that made Tulsa famous..." The Glenns sold their farm and moved to California. Galbreath bought out Colcord and Mitchell, before Galbreath and Chesley sold their interests to J. Edgar Crosbie. Galbreath then focused on wildcatting the Bald Hill Field near Haskell, Oklahoma. The original well, the Ida E. Glenn Number One, was abandoned and filled in by Texaco in 1964. In the 21st century As of 2019, the field has produced more than of oil. The Glenn Pool Oil Reserve boundaries have shifted about one mile to the west of the original perimeter. The reserve is still producing flow from legacy wells, although at a significantly lower volume. Newer tight oil wells, especially since the introduction of "fracking" and "flooding" techniques for oil extraction, continue to regularly produce oil to this day. References Oil reserves Petroleum History of Tulsa County, Oklahoma Glenpool, Oklahoma
Glenn Pool Oil Reserve
Chemistry
1,440
11,421,219
https://en.wikipedia.org/wiki/Mir-166%20microRNA%20precursor
The plant mir-166 microRNA precursor is a small non-coding RNA gene. This microRNA (miRNA) has now been predicted or experimentally confirmed in a wide range of plant species. microRNAs are transcribed as ~70 nucleotide precursors and subsequently processed by the Dicer enzyme to give a ~22 nucleotide product. In this case the mature sequence comes from the 3' arm of the precursor, and both Arabidopsis thaliana and rice genomes contain a number of related miRNA precursors which give rise to almost identical mature sequences. The mature products are thought to have regulatory roles through complementarity to messenger RNA. References External links MIPF0000004 MicroRNA
Mir-166 microRNA precursor
Chemistry
147
72,012,955
https://en.wikipedia.org/wiki/Ganga%20Water%20Supply%20Scheme
Ganga Water Lift Project is a multi-phase drinking water project in Bihar, India. It is an ambitious project of the Chief Minister of Bihar, Nitish Kumar, to supply safe drinking water to water-stressed towns such as Gaya, Rajgir and Nawada, located in the southern part of the state, through a pipeline that lifts water from the Ganga river near Hathidah Ghat in Mokama in Patna district. The cost of the first phase of this project was initially approved at , and was later revised to . The Government of Bihar approved the first phase of the Ganga Water Lift Scheme (GWLS) of the Water Resources Department (WRD) in December 2019. The Ganga Water Lift Project is part of Nitish Kumar's 'Jal-Jivan-Hariyali Abhiyan', which aims to minimize the adverse effects of climate change. Project details The total length of the pipelines that supply Ganga water to the three towns is 190.90 km. The Ganga water is lifted near Hathidah Ghat in Mokama, and the pipeline runs alongside national and state highways. The main pipeline runs from Hathidah to Giriyak via Sarmera and Barbigha. From Giriyak, one pipeline goes to Rajgir, while another goes to Nawada. The water from the Ganga is brought to the Motnaje water treatment plant in Nawada district through a pipeline. In Gaya, the urban development & housing department (UDHD) will ensure supply of water to households through the pipeline. The public health & engineering department (PHED) will be responsible for Ganga water supply in Nawada. The length of the pipeline on the Hathidah–Motnaje–Tetar–Abgilla pipe route is 150 km. A third pipeline goes to Manpur (near Gaya) via Vanganga, Tapovan and Jethia. A major water storage point has been constructed near Manpur, and similar storage points are to be constructed for the other towns. The project is being completed in three phases. Ganga water will be supplied to Gaya, Bodhgaya and Rajgir in the first phase of the project; Nawada town will be covered in the second phase. Hyderabad-based infrastructure firm Megha Engineering & Infrastructures Ltd (MEIL) completed phase 1 of the Ganga Water Lift Project in 2022. See also Kaleshwaram Lift Irrigation Project Colorado River Aqueduct References Water treatment
Ganga Water Supply Scheme
Chemistry,Engineering,Environmental_science
497
11,215,528
https://en.wikipedia.org/wiki/Nadir%20Afonso%20artworks
This is a list of Nadir Afonso artworks: paintings, engravings, and architecture. All data was sourced from websites (linked to) and from the books and catalogues listed in the main Nadir Afonso article. Paintings Titles are shown in their original language as conceived by the artist, and translated if needed, except the titles made of place names, which are shown in the English form. Multiple-source inconsistencies are shown in the Notes column. The artist's method usually involves one or more studies before the final artwork, which sometimes is revealed only years later; the artist also sometimes returns to work on artworks after they are publicly exhibited. This explains some of the multiple or alternative dates. Early works Expressionism Surreal period Iris period Espacillimité series All paintings in this series are independent of each other, despite their common titles. Baroque period Brazil period Egyptian period Geometry Cities series Women series Unidentified Engravings Lithography Lithograph is on paper and the measures are given for the printed area. Serigraphy All serigraphs are on paper and all measures are given for the printed area unless otherwise stated. Others Architecture Own works Projects by Nadir Afonso. Le Corbusier Projects by Le Corbusier, collaboration by Nadir Afonso. Abstract art Architecture lists Lists of works of art
Nadir Afonso artworks
Engineering
274
3,599,995
https://en.wikipedia.org/wiki/Institut%20de%20Chimie%20des%20Substances%20Naturelles
The Institut de Chimie des Substances Naturelles ("Institute for the chemistry of natural substances"), or ICSN, is part of the Centre national de la recherche scientifique, France's most prominent public research organization. Located at Gif-sur-Yvette, near Paris, ICSN is France's largest state-run chemistry research institute. Built in 1959, it employs over 300 people and focuses on four research areas: Synthetic and methodological approaches in Organic Chemistry Natural products and medicinal chemistry Structural chemistry and structural biology Chemistry and biology of therapeutic targets References External links ICSN Official website (in French and English) Chemical industry in France Chemical research institutes French National Centre for Scientific Research Government agencies of France Research institutes in France
Institut de Chimie des Substances Naturelles
Chemistry
157
58,751,479
https://en.wikipedia.org/wiki/Electronic%20health%20records%20in%20the%20United%20States
Federal and state governments, insurance companies, and other large medical institutions are heavily promoting the adoption of electronic health records. The US Congress included a formula of both incentives (up to $44,000 per physician under Medicare, or up to $65,000 over six years under Medicaid) and penalties (i.e. decreased Medicare and Medicaid reimbursements to doctors who fail to use EMRs by 2015, for covered patients) for EMR/EHR adoption versus continued use of paper records as part of the Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as part of the American Recovery and Reinvestment Act of 2009. The 21st Century Cures Act, passed in 2016, prohibited information blocking, which had slowed interoperability. In 2018, the Trump administration announced the MyHealthEData initiative to further enable patients to access their health records. The federal Office of the National Coordinator for Health Information Technology leads these efforts. One VA study estimates that its electronic medical record system may improve overall efficiency by 6% per year, and that the monthly cost of an EMR may (depending on the cost of the EMR) be offset by the cost of only a few "unnecessary" tests or admissions. Jerome Groopman disputed these results, publicly asking "how such dramatic claims of cost-saving and quality improvement could be true". A 2014 survey of a sample of American College of Physicians members, however, found that family practice physicians spent 48 minutes more per day when using EMRs. 90% reported that at least 1 data management function was slower after EMRs were adopted, and 64% reported that note writing took longer. A third (34%) reported that it took longer to find and review medical record data, and 32% reported that it was slower to read other clinicians' notes. Coverage In a 2008 survey by DesRoches et al. of 4484 physicians (62% response rate), 83% of all physicians, 80% of primary care physicians, and 86% of non-primary care physicians had no EHRs. "Among the 83% of respondents who did not have electronic health records, 16%" had bought, but not yet implemented, an EHR system. The 2009 National Ambulatory Medical Care Survey of 5200 physicians (70% response rate) by the National Center for Health Statistics showed that 51.7% of office-based physicians did not use any EMR/EHR system. In the United States, the CDC reported that the EMR adoption rate had steadily risen to 48.3 percent at the end of 2009. This is an increase over 2008, when only 38.4% of office-based physicians reported using fully or partially electronic medical record systems (EMR). However, the same study found that only 20.4% of all physicians reported using a system described as minimally functional and including the following features: orders for prescriptions, orders for tests, viewing laboratory or imaging results, and clinical progress notes. As of 2013, 78 percent of office physicians were using basic electronic medical records. As of 2014, more than 80 percent of hospitals in the U.S. had adopted some type of EHR, though the type and mix of EHR data varies significantly within hospitals. Types of EHR data used in hospitals include structured data (e.g., medication information) and unstructured data (e.g., clinical notes). The healthcare industry spends only 2% of gross revenues on Health Information Technology (HIT), which is low compared to other information-intensive industries such as finance, which spend upwards of 10%.
The usage of electronic medical records can vary depending on who the user is and how they use the system. Electronic medical records can help improve the quality of medical care given to patients, yet many doctors and office-based physicians refuse to give up traditional paper records. Harvard University conducted an experiment testing how doctors and nurses use electronic medical records to keep their patients' information up to date. The studies found that electronic medical records were very useful: a doctor or a nurse was able to find a patient's information quickly and easily just by typing the name, even if it was misspelled. The usage of electronic medical records increases in some workplaces due to the ease of use of the system; the president of the Canadian Family Practice Nurses Association, by contrast, says that using electronic medical records can be time-consuming and not very helpful, owing to the complexity of the system. Beth Israel Deaconess Medical Center reported that doctors and nurses prefer more user-friendly software, citing the difficulty and time it takes for medical staff to input and to find a patient's information. One study measured how much patient information was actually recorded in EMRs and found that it was only about 44%, suggesting that EMRs are often used inefficiently. The cost of implementing an EMR system for smaller practices has also been criticized; data produced by the Robert Wood Johnson Foundation demonstrates that the first-year investment for an average five-person practice is $162,000, followed by about $85,000 in maintenance fees. Despite this, tighter regulations regarding meaningful use criteria and national laws (the Health Information Technology for Economic and Clinical Health Act and the Affordable Care Act) have resulted in more physicians and facilities adopting EMR systems. Software, hardware, and other services for EMR system implementation are provided at a cost by various companies, including Dell. Open-source EMR systems exist, but have not seen widespread adoption. Beyond financial concerns, there are a number of legal and ethical dilemmas created by increasing EMR use, including the risk of medical malpractice due to user error, server glitches that result in the EMR not being accessible, and increased vulnerability to hackers. Legal status Electronic medical records, like other medical records, must be kept in unaltered form and authenticated by the creator. Under data protection legislation, the responsibility for patient records (irrespective of the form they are kept in) is always on the creator and custodian of the record, usually a health care practice or facility. This role has been said to require changes such that the sole medico-legal record should be held elsewhere. The physical medical records are the property of the medical provider (or facility) that prepares them. This includes films and tracings from diagnostic imaging procedures such as X-ray, CT, PET, MRI, ultrasound, etc. The patient, however, according to HIPAA, has a right to view the originals, and to obtain copies under law. The Health Information Technology for Economic and Clinical Health Act (HITECH) (§2.A.III & B.4), a part of the 2009 stimulus package, set meaningful use of interoperable EHR adoption in the health care system as a critical national goal and incentivized EHR adoption.
The "goal is not adoption alone but 'meaningful use' of EHRs—that is, their use by providers to achieve significant improvements in care." Title IV of the act promises maximum incentive payments for Medicaid to those who adopt and use "certified EHRs" of $63,750 over 6 years beginning in 2011. Eligible professionals must begin receiving payments by 2016 to qualify for the program. For Medicare the maximum payments are $44,000 over 5 years. Doctors who do not adopt an EHR by 2015 will be penalized 1% of Medicare payments, increasing to 3% over 3 years. In order to receive the EHR stimulus money, the HITECH Act requires doctors to show "meaningful use" of an EHR system. As of June 2010, there were no penalty provisions for Medicaid. In 2017 the government announced its first False Claims Act settlement with an electronic health records vendor for misrepresenting its ability to meet “meaningful use” standards and therefore receive incentive payments. eClinicalWorks paid $155 million to settle charges that it had failed to meet all government requirements, failed to adequately test its software, failed to fix certain bugs, failed to ensure data portability, and failed to reliably record laboratory and diagnostic imaging orders. The government also alleged that eClinicalWorks paid kickbacks to influential customers who recommended its products. The case marks the first time the government applied the federal Anti-Kickback Statute law to the promotion and sale of an electronic health records system. The False Claims Act lawsuit was brought by a whistleblower who was a New York City employee implementing eClinicalWorks’ system at Rikers Island Correctional Facility when he became aware of the software flaws. His “qui tam” case was later joined by the government. Notably, CMS has said it will not punish eClinicalWorks clients that "in good faith" attested to using the software. Health information exchange (HIE) has emerged as a core capability for hospitals and physicians to achieve "meaningful use" and receive stimulus funding. Healthcare vendors are pushing HIE as a way to allow EHR systems to pull disparate data and function on a more interoperable level. Starting in 2015, hospitals and doctors will be subject to financial penalties under Medicare if they are not using electronic health records. Goals and objectives Improve care quality, safety, efficiency, and reduce health disparities Quality and safety measurement Clinical decision support (automated advice) for providers Patient registries (e.g., "a directory of patients with diabetes") Improve care coordination Engage patients and families in their care Improve population and public health Electronic laboratory reporting for reportable conditions (hospitals) Immunization reporting to immunization registries Syndromic surveillance (health event awareness) Ensure adequate privacy and security protections Predict future health conditions through machine learning before diagnoses Quality Studies call into question whether, in real life, EMRs improve the quality of care. 2009 produced several articles raising doubts about EMR benefits. A major concern is the reduction of physician-patient interaction due to formatting constraints. For example, some doctors have reported that the use of check-boxes has led to fewer open-ended questions. Meaningful use The main components of meaningful use are: The use of a certified EHR in a meaningful manner, such as e-prescribing. 
The use of certified EHR technology for the electronic exchange of health information to improve the quality of health care. The use of certified EHR technology to submit clinical quality and other measures. In other words, providers need to show they're using certified EHR technology in ways that can be measured significantly in quality and in quantity. The meaningful use of EHRs intended by the US government incentives is categorized as follows: Improve care coordination Reduce healthcare disparities Engage patients and their families Improve population and public health Ensure adequate privacy and security The Obama Administration's Health IT program intends to use federal investments to stimulate the market for electronic health records: Incentives: to providers who use IT Strict and open standards: to ensure users and sellers of EHRs work towards the same goal Certification of software: to provide assurance that the EHRs meet basic quality, safety, and efficiency standards The detailed definition of "meaningful use" is to be rolled out in 3 stages over a period of time until 2017. Details of each stage are hotly debated by various groups. Meaningful use Stage 1 The first steps in achieving meaningful use are to have a certified electronic health record (EHR) and to be able to demonstrate that it is being used to meet the requirements. Stage 1 contains 25 objectives/measures for Eligible Providers (EPs) and 24 objectives/measures for eligible hospitals. The objectives/measures have been divided into a core set and a menu set. EPs and eligible hospitals must meet all objectives/measures in the core set (15 for EPs and 14 for eligible hospitals). EPs must meet 5 of the 10 menu-set items during Stage 1, one of which must be a public health objective. The full core and menu requirements are listed below. Core Requirements: Use computerized order entry for medication orders. Implement drug-drug, drug-allergy checks. Generate and transmit permissible prescriptions electronically. Record demographics. Maintain an up-to-date problem list of current and active diagnoses. Maintain active medication list. Maintain active medication allergy list. Record and chart changes in vital signs. Record smoking status for patients 13 years old or older. Implement one clinical decision support rule. Report ambulatory quality measures to CMS or the States. Provide patients with an electronic copy of their health information upon request. Provide clinical summaries to patients for each office visit. Capability to exchange key clinical information electronically among providers and patient-authorized entities. Protect electronic health information (privacy & security) Menu Requirements: Implement drug-formulary checks. Incorporate clinical lab-test results into certified EHR as structured data. Generate lists of patients by specific conditions to use for quality improvement, reduction of disparities, research, and outreach. Send reminders to patients per patient preference for preventive/follow-up care Provide patients with timely electronic access to their health information (including lab results, problem list, medication lists, allergies) Use certified EHR to identify patient-specific education resources and provide them to the patient as appropriate. Perform medication reconciliation as relevant Provide a summary care record for transitions in care or referrals. Capability to submit electronic data to immunization registries and actual submission.
Capability to provide electronic syndromic surveillance data to public health agencies and actual transmission. To receive federal incentive money, participants in the Medicare EHR Incentive Program are required by CMS to "attest" that during a 90-day reporting period, they used a certified EHR and met Stage 1 criteria for meaningful use objectives and clinical quality measures. For the Medicaid EHR Incentive Program, providers follow a similar process using their state's attestation system. Meaningful use Stage 2 The government released its final ruling on achieving Stage 2 of meaningful use in August 2012. Eligible providers will need to meet 17 of 20 core objectives in Stage 2, and fulfill three out of six menu objectives. The required percentage of patient encounters that meet each objective has generally increased over the Stage 1 objectives. While Stage 2 focuses more on information exchange and patient engagement, many large EHR systems have this type of functionality built into their software, making it easier to achieve compliance. Also, for those eligible providers who have successfully attested to Stage 1, meeting Stage 2 should not be as difficult, as it builds incrementally on the requirements for the first stage. Meaningful use Stage 3 On March 20, 2015, CMS released its proposed rule for Stage 3 meaningful use. These new rules focus on some of the tougher aspects of Stage 2 and require healthcare providers to vastly improve their EHR adoption and care delivery by 2018. Barriers to adoption Costs The price of EMRs, and provider uncertainty regarding the return on investment they will derive from adoption, have a significant influence on EMR adoption. In a project initiated by the Office of the National Coordinator for Health Information Technology, surveyors found that hospital administrators and physicians who had adopted EMRs noted that any gains in efficiency were offset by reduced productivity as the technology was implemented, as well as the need to increase information technology staff to maintain the system. The U.S. Congressional Budget Office concluded that the cost savings may occur only in large integrated institutions like Kaiser Permanente, and not in small physician offices. They challenged the Rand Corporation's estimates of savings. Office-based physicians in particular may see no benefit if they purchase such a product—and may even suffer financial harm. Even though the use of health IT could generate cost savings for the health system at large that might offset the EMR's cost, many physicians might not be able to reduce their office expenses or increase their revenue sufficiently to pay for it. For example, the use of health IT could reduce the number of duplicated diagnostic tests. However, that improvement in efficiency would be unlikely to increase the income of many physicians. ...Given the ease at which information can be exchanged between health IT systems, patients whose physicians use them may feel that their privacy is more at risk than if paper records were used. Doubts have been raised about cost savings from EMRs by researchers at Harvard University, the Wharton School of the University of Pennsylvania, Stanford University, and others. Start-up costs In a survey by DesRoches et al. (2008), 66% of physicians without EHRs cited capital costs as a barrier to adoption, while 50% were uncertain about the investment. Around 56% of physicians without EHRs stated that financial incentives to purchase and/or use EHRs would facilitate adoption.
In 2002, initial costs were estimated to be $50,000–70,000 per physician in a 3-physician practice. Since then, costs have decreased with increasing adoption. A 2011 survey estimated a cost of $32,000 per physician in a 5-physician practice during the first 60 days of implementation. One case study by Miller et al. (2005) of 14 small primary-care practices found that the average practice paid for the initial and ongoing costs within 2.5 years. A 2003 cost-benefit analysis found that using EMRs for 5 years created a net benefit of $86,000 per provider. Some physicians are skeptical of the positive claims and believe the data is skewed by vendors and others with an interest in EHR implementation. Brigham and Women's Hospital in Boston, Massachusetts, estimated it achieved net savings of $5 million to $10 million per year following installation of a computerized physician order entry system that reduced serious medication errors by 55 percent. Another large hospital generated about $8.6 million in annual savings by replacing paper medical charts with EHRs for outpatients and about $2.8 million annually by establishing electronic access to laboratory results and reports. Maintenance costs Maintenance costs can be high. Miller et al. found the average estimated maintenance cost was $8500 per FTE health-care provider per year. Furthermore, software technology advances at a rapid pace. Most software systems require frequent updates, sometimes even server upgrades, and often at a significant ongoing cost. Some types of software and operating systems require full-scale re-implementation periodically, which disrupts not only the budget but also workflow. Costs for upgrades and associated regression testing can be particularly high where the applications are governed by FDA regulations (e.g. Clinical Laboratory systems). Physicians desire modular upgrades and ability to continually customize, without large-scale reimplementation. Training costs Training of employees to use an EHR system is costly, just as for training in the use of any other hospital system. New employees, permanent or temporary, will also require training as they are hired. In the United States, a substantial majority of healthcare providers train at a VA facility sometime during their career. With the widespread adoption of the Veterans Health Information Systems and Technology Architecture (VistA) electronic health record system at all VA facilities, fewer recently-trained medical professionals will be inexperienced in electronic health record systems. Older practitioners who are less experienced in the use of electronic health record systems will retire over time. Software quality and usability deficiencies The Healthcare Information and Management Systems Society, a very large U.S. health care IT industry trade group, observed that EMR adoption rates "have been slower than expected in the United States, especially in comparison to other industry sectors and other developed countries. A key reason, aside from initial costs and lost productivity during EMR implementation, is lack of efficiency and usability of EMRs currently available." The U.S. National Institute of Standards and Technology of the Department of Commerce studied usability in 2011 and lists a number of specific issues that have been reported by health care workers. The U.S. military's EMR "AHLTA" was reported to have significant usability issues. 
Lack of semantic interoperability In the United States, there are no standards for semantic interoperability of health care data; there are only syntactic standards. This means that while data may be packaged in a standard format (using the pipe notation of HL7, or the bracket notation of XML), it lacks definition, or linkage to a common shared dictionary. The addition of layers of complex information models (such as the HL7 v3 RIM) does not resolve this fundamental issue. As of 2018, Fast Healthcare Interoperability Resources was a leading interoperability standard, and the Argonaut Project is a privately sponsored interoperability initiative. In 2017, Epic Systems announced Share Everywhere, which lets providers access medical information through a portal; their platform was described as "closed" in 2014, with competitors sponsoring the CommonWell Health Alliance. The economics of sharing have been blamed for the lack of interoperability, as limited data sharing can help providers retain customers. Implementations In the United States, the Department of Veterans Affairs (VA) has the largest enterprise-wide health information system that includes an electronic medical record, known as the Veterans Health Information Systems and Technology Architecture (VistA). A key component of VistA is the VistA Imaging System, which provides comprehensive multimedia data from many specialties, including cardiology, radiology, and orthopedics. A graphical user interface known as the Computerized Patient Record System (CPRS) allows health care providers to review and update a patient's electronic medical record at any of the VA's over 1,000 healthcare facilities. CPRS includes the ability for licensed practitioners to place orders, including medications, special procedures, X-rays, patient care nursing orders, diets, and laboratory tests. The 2003 National Defense Authorization Act (NDAA) ensured that the VA and DoD would work together to establish a bidirectional exchange of reference-quality medical images. Initially, demonstrations were conducted only in El Paso, Texas, but capabilities have since been expanded to six locations with VA and DoD facilities. These facilities include the VA polytrauma centers in Tampa and Richmond, Denver, North Chicago, Biloxi, and the National Capital Area medical facilities. Radiological images such as CT scans, MRIs, and x-rays are being shared using the BHIE. Goals of the VA and DoD in the near future are to use several image sharing solutions (VistA Imaging and DoD Picture Archiving & Communications System (PACS) solutions). The Clinical Data Repository/Health Data Repository (CHDR) is a database that allows for the sharing of patient records, especially allergy and pharmaceutical information, between the Department of Veterans Affairs (VA) and the Department of Defense (DoD) in the United States. The program shares data by translating the various vocabularies of the information being transmitted, allowing all of the VA facilities to access and interpret the patient records. The Laboratory Data Sharing and Interoperability (LDSI) application is a new program being implemented to allow sharing at certain sites between the VA and DoD of "chemistry and hematology laboratory tests". Unlike the CHDR, the LDSI is currently limited in its scope. One early step toward implementing EHRs in the United States was the development of the Nationwide Health Information Network, which is still a work in progress.
This started with the North Carolina Healthcare Information and Communication Alliance, founded in 1994, which received funding from the Department of Health and Human Services. The Department of Veterans Affairs and Kaiser Permanente have a pilot program to share health records between their systems, VistA and HealthConnect, respectively. This software, called CONNECT, uses Nationwide Health Information Network standards and governance to make sure that health information exchanges are compatible with other exchanges being set up throughout the country. CONNECT is an open-source software solution that supports electronic health information exchange. The CONNECT initiative is a Federal Health Architecture project that was conceived in 2007 and initially built by 20 federal agencies; it now comprises more than 500 organizations, including federal agencies, states, healthcare providers, insurers, and health IT vendors. The US Indian Health Service uses an EHR similar to VistA called RPMS. VistA Imaging is also being used to integrate images and coordinate PACS into the EHR system. In Alaska, use of the EHR by the Kodiak Area Native Association has improved screening services and helped the organization reach all 21 clinical performance measures defined by the Indian Health Service as required by the Government Performance and Results Act. Privacy and confidentiality In the United States in 2011, there were 380 major data breaches involving 500 or more patients' records listed on the website kept by the United States Department of Health and Human Services (HHS) Office for Civil Rights. So far, from the first postings in September 2009 through the latest on 8 December 2012, there have been 18,059,831 "individuals affected," and even that massive number is an undercount of the breach problem. The civil rights office has not released the records of the tens of thousands of breaches it has received under a federal reporting mandate covering breaches that affect fewer than 500 patients per incident. Privacy concerns in healthcare apply to both paper and electronic records. According to the Los Angeles Times, roughly 150 people (from doctors and nurses to technicians and billing clerks) have access to at least part of a patient's records during a hospitalization, and 600,000 payers, providers and other entities that handle providers' billing data have some access also. Recent revelations of "secure" data breaches at centralized data repositories, in banking and other financial institutions, in the retail industry, and from government databases, have caused concern about storing electronic medical records in a central location. Records that are exchanged over the Internet are subject to the same security concerns as any other type of data transaction over the Internet. The Health Insurance Portability and Accountability Act (HIPAA) was passed in the US in 1996 to establish rules for access, authentications, storage and auditing, and transmittal of electronic medical records. This standard made restrictions for electronic records more stringent than those for paper records. However, there are concerns as to the adequacy of these standards. In the United States, information in electronic medical records is referred to as Protected Health Information (PHI) and its management is addressed under the Health Insurance Portability and Accountability Act (HIPAA) as well as many local laws.
HIPAA protects a patient's information; the information protected under the act includes the information doctors and nurses enter into the electronic medical record, conversations between a doctor and a patient that may have been recorded, and billing information. Under the act there is a limit on how much information can be disclosed and on who can see a patient's information. Patients are also entitled to a copy of their records if they desire, and to notification if their information is ever shared with third parties. Covered entities may disclose protected health information to law enforcement officials for law enforcement purposes as required by law (including court orders, court-ordered warrants, subpoenas) and administrative requests; or to identify or locate a suspect, fugitive, material witness, or missing person. Medical and health care providers experienced 767 security breaches resulting in the compromised confidential health information of 23,625,933 patients during the period of 2006–2012. One major issue that has arisen regarding the privacy of the US network for electronic health records is the strategy to secure the privacy of patients. Former US president George W. Bush called for the creation of networks, but federal investigators reported that there was no clear strategy to protect the privacy of patients as the promotion of electronic medical records expanded throughout the United States. In 2007, the Government Accountability Office reported that there was a "jumble of studies and vague policy statements but no overall strategy to ensure that privacy protections would be built into computer networks linking insurers, doctors, hospitals and other health care providers." The privacy threat posed by the interoperability of a national network is a key concern. One of the most vocal critics of EMRs, New York University Professor Jacob M. Appel, has claimed that the number of people who will need to have access to such a truly interoperable national system, which he estimates to be 12 million, will inevitably lead to breaches of privacy on a massive scale. Appel has written that while "hospitals keep careful tabs on who accesses the charts of VIP patients," they are powerless to act against "a meddlesome pharmacist in Alaska" who "looks up the urine toxicology on his daughter's fiance in Florida, to check if the fellow has a cocaine habit." This is a significant barrier to the adoption of an EHR. Accountability among all the parties involved in the processing of electronic transactions, including the patient, physician office staff, and insurance companies, is the key to the successful advancement of the EHR in the US. Supporters of EHRs have argued that there needs to be a fundamental shift in "attitudes, awareness, habits, and capabilities in the areas of privacy and security" of individuals' health records if adoption of an EHR is to occur. According to The Wall Street Journal, the DHHS takes no action on complaints under HIPAA, and medical records are disclosed under court orders in legal actions such as claims arising from automobile accidents. HIPAA has special restrictions on psychotherapy records, but psychotherapy records can also be disclosed without the client's knowledge or permission, according to the Journal. For example, Patricia Galvin, a lawyer in San Francisco, saw a psychologist at Stanford Hospital & Clinics after her fiance committed suicide. Her therapist had assured her that her records would be confidential.
But after she applied for disability benefits, Stanford gave the insurer her therapy notes, and the insurer denied her benefits based on what Galvin claims was a misinterpretation of the notes. Within the private sector, many companies are moving forward in the development, establishment, and implementation of medical record banks and health information exchange. By law, companies are required to follow all HIPAA standards and adopt the same information-handling practices that have been in effect for the federal government for years. This involves two ideas: standardized formatting of electronically exchanged data, and federalization of security and privacy practices among the private sector. Private companies have promised to have "stringent privacy policies and procedures." If protection and security are not part of the systems developed, people will neither trust the technology nor participate in it. There is also debate over ownership of data: private companies tend to value and protect data rights, while the patients referenced in these records may not know that their information is being used for commercial purposes. In 2013, reports based on documents released by Edward Snowden revealed that the NSA had succeeded in breaking the encryption codes protecting electronic health records, among other databases. In 2015, 4.5 million health records were hacked at UCLA Medical Center. In 2018, Social Indicators Research published scientific evidence that 173,398,820 (over 173 million) individuals were affected in the US from October 2008 (when the data were first collected) to September 2017 (when the data were uploaded for statistical analysis). Regulatory compliance In the United States, reimbursement for many healthcare services is based upon the extent to which specific work by healthcare providers is documented in the patient's medical record. Enforcement authorities in the United States have become concerned that functionality available in many electronic health records, especially copy-and-paste, may enable fraudulent claims for reimbursement. The authorities are concerned that healthcare providers may easily use these systems to create documentation of medical care that did not actually occur. These concerns came to the forefront in 2012, in a joint letter from the U.S. Departments of Justice and Health and Human Services to the American hospital community. The American Hospital Association responded, focusing on the need for clear guidance from the government regarding permissible and prohibited conduct using electronic health records. In December 2013, the U.S. HHS Office of the Inspector General (OIG) issued an audit report reiterating that vulnerabilities continued to exist in the operation of electronic health records. The OIG's 2014 Workplan indicates an enhanced focus on providers' use of electronic health records. Medical data breach The Security Rule, according to Health and Human Services (HHS), establishes a security framework for small practices as well as large institutions. All covered entities must have a written security plan. The HHS identifies three components as necessary for the security plan: administrative safeguards, physical safeguards, and technical safeguards. However, as noted above, medical and healthcare providers experienced 767 security breaches compromising the confidential health information of 23,625,933 patients between 2006 and 2012.
The Health Insurance Portability and Accountability Act requires safeguards to limit the number of people who have access to personal information. However, given the number of people who may have access to a patient's information as part of the operations and business of the health care provider or plan, there is no realistic way to estimate the number of people who may come across a given record. Additionally, law enforcement access is authorized under the act. In some cases, medical information may be disclosed without a warrant or court order. Breach notification The Security Rule that was adopted in 2005 did not require breach notification. However, notice might be required by state laws that apply to a variety of industries, including health care providers. In California, a law in place since 2003 meant that a HIPAA-covered organization's breach could trigger a notice even though notice was not required by the HIPAA Security Rule. Since 1 January 2009, California residents have been required to receive notice of a health information breach. Federal law and regulations now provide rights to notice of a breach of health information. The Health Information Technology for Economic and Clinical Health (HITECH) Act requires HHS and the Federal Trade Commission (FTC) to jointly study and report on privacy and data security of personal health information. HITECH also requires the agencies to issue breach notification rules that apply to HIPAA covered entities and Web-based vendors that store health information electronically. The FTC has adopted rules regarding breach notification for internet-based vendors. Vendors Vendors often focus on software for specific healthcare settings, including acute hospitals or ambulatory care. In the hospital market, Epic, Cerner, MEDITECH, and CSPI (Evident Thrive) had the top market shares at 28%, 26%, 9%, and 6% in 2018. For large hospitals with over 500 beds, Epic and Cerner had over 85% market share in 2019. In ambulatory care, Practice Fusion had the highest satisfaction, while in acute hospital care Epic scored relatively well. Interoperability has generally been lacking, but it is enhanced by certain compatibility features (e.g., Epic interoperates with itself via Care Everywhere) and, in some cases, by regional or national networks such as eHealth Exchange, CommonWell Health Alliance, and Carequality; in 2018, Epic and athenahealth were rated highly for interoperability. Vendors may use anonymized data for their own business or research purposes; for example, as of 2019 Cerner and AWS had partnered to use such data for a machine learning tool. History As of 2006, systems with computerized provider order entry (CPOE) had existed for more than 30 years, but by 2006 only 10% of hospitals had a fully integrated system. See also iMedicor Electronic health record References Healthcare in the United States Electronic health records
Electronic health records in the United States
Technology
7,281
65,811,071
https://en.wikipedia.org/wiki/Hunter%20Lab
Hunter Lab (also known as Hunter L,a,b) is a color space defined in 1948 by Richard S. Hunter. It was designed to be computed via simple formulas from the CIEXYZ space, but to be more perceptually uniform. Hunter named his coordinates L, a and b. Hunter Lab was a precursor to CIELAB, created in 1976 by the International Commission on Illumination (CIE), which named the coordinates for CIELAB as L*, a*, b* to distinguish them from Hunter's coordinates. Formulation L is a correlate of lightness and is computed from the Y tristimulus value using Priest's approximation to Munsell value: L = 100 √(Y/Yn), where Yn is the Y tristimulus value of a specified white object. For surface-color applications, the specified white object is usually (though not always) a hypothetical material with unit reflectance that follows Lambert's law. The resulting L will be scaled between 0 (black) and 100 (white); roughly ten times the Munsell value. Note that a medium lightness of 50 is produced by a luminance of 25, due to the square root proportionality. a and b are termed opponent color axes. a represents, roughly, redness (positive) versus greenness (negative). It is computed as: a = Ka (X/Xn − Y/Yn) / √(Y/Yn), where Ka is a coefficient that depends upon the illuminant (for D65, Ka is 172.30; see approximate formula below) and Xn is the X tristimulus value of the specified white object. The other opponent color axis, b, is positive for yellow colors and negative for blue colors. It is computed as: b = Kb (Y/Yn − Z/Zn) / √(Y/Yn), where Kb is a coefficient that depends upon the illuminant (for D65, Kb is 67.20; see approximate formula below) and Zn is the Z tristimulus value of the specified white object. Both a and b will be zero for objects that have the same chromaticity coordinates as the specified white objects (i.e., achromatic, grey, objects). Approximate formulas for Ka and Kb In the previous version of the Hunter Lab color space, Ka was 175 and Kb was 70. Hunter Associates Lab discovered that better agreement could be obtained with other color difference metrics, such as CIELAB (see above), by allowing these coefficients to depend upon the illuminants. Approximate formulae are: Ka = (175/198.04)(Xn + Yn) and Kb = (70/218.11)(Yn + Zn), which result in the original values for Illuminant C, the original illuminant with which the Lab color space was used. As an Adams chromatic valence space Adams chromatic valence color spaces are based on two elements: a (relatively) uniform lightness scale and a (relatively) uniform chromaticity scale. If we take as the uniform lightness scale Priest's approximation to the Munsell Value scale, which would be written in modern notation as L = 100 √(Y/Yn), and, as the uniform chromaticity coordinates, ca = (X/Xn − Y/Yn) / (Y/Yn) and cb = ke (Y/Yn − Z/Zn) / (Y/Yn), where ke is a tuning coefficient, we obtain the two chromatic axes: a = K·L·ca and b = K·L·cb, which is identical to the Hunter Lab formulas given above if we select K = Ka/100 and ke = Kb/Ka. Therefore, the Hunter Lab color space is an Adams chromatic valence color space. References Color space 1948 introductions
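A minimal Python sketch of the conversion just described, assuming tristimulus values on the usual 0–100 scale; Ka and Kb are computed from the approximate illuminant-dependent formulas above, and the default D65 white point is an assumed example value.

```python
import math

def xyz_to_hunter_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    """Convert CIEXYZ to Hunter L, a, b relative to the white point
    (Xn, Yn, Zn); the defaults assume a D65 white on a 0-100 scale."""
    Ka = (175.0 / 198.04) * (Xn + Yn)   # approximate illuminant-dependent
    Kb = (70.0 / 218.11) * (Yn + Zn)    # coefficients, as given above
    yr = Y / Yn
    L = 100.0 * math.sqrt(yr)           # Priest's approximation to Munsell value
    if yr == 0:                         # black: the chromatic axes collapse to zero
        return 0.0, 0.0, 0.0
    a = Ka * (X / Xn - yr) / math.sqrt(yr)
    b = Kb * (yr - Z / Zn) / math.sqrt(yr)
    return L, a, b

# A grey with the white point's chromaticity at luminance 25: L is 50
# (the square-root behaviour noted above) and a, b come out zero.
print(xyz_to_hunter_lab(95.047 * 0.25, 25.0, 108.883 * 0.25))
```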
Hunter Lab
Mathematics
658
12,122,436
https://en.wikipedia.org/wiki/C2H4O
The molecular formula C2H4O (molar mass: 44.05 g/mol, exact mass: 44.0262 u) may refer to: Acetaldehyde (ethanal) Ethenol (vinyl alcohol) Ethylene oxide (epoxyethane, oxirane)
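As a quick arithmetic check (not a reference computation), the quoted molar mass can be reproduced from standard atomic weights:

```python
# Rounded standard atomic weights; exact figures vary slightly by source.
atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}
composition = {"C": 2, "H": 4, "O": 1}  # C2H4O

molar_mass = sum(atomic_weight[el] * n for el, n in composition.items())
print(f"molar mass of C2H4O: {molar_mass:.2f} g/mol")  # ~44.05 g/mol, as above
```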
C2H4O
Chemistry
62
22,826,039
https://en.wikipedia.org/wiki/Tibor%20Szele
Tibor Szele (21 June 1918 – 5 April 1955) was a Hungarian mathematician working in combinatorics and abstract algebra. Szele was born in Debrecen. After graduating from the University of Debrecen, he became a researcher at the University of Szeged in 1946, then returned to the University of Debrecen in 1948, where he became a full professor in 1952. He worked especially in the theory of Abelian groups and ring theory. He generalized Hajós's theorem. He founded the Hungarian school of algebra. Tibor Szele received the Kossuth Prize in 1952. He died in Szeged. References A panorama of Hungarian Mathematics in the Twentieth Century, p. 601. External links Algebraists Combinatorialists Probability theorists 1918 births 1955 deaths University of Debrecen alumni Academic staff of the University of Debrecen 20th-century Hungarian mathematicians
Tibor Szele
Mathematics
181
8,906,418
https://en.wikipedia.org/wiki/Steel%20fence%20post
A steel fence post, also called (depending on design or country) a T-post, a Y-post, or variants on star post, is a type of fence post or picket. They are made of steel and are sometimes manufactured using durable rail steel. They can be used to support various types of wire or wire mesh. The end view of the post creates an obvious T, Y, or other shape. The posts are driven into the ground with a manual or pneumatic post pounder. All along the spine of the post there are studs or nubs that prevent the barbed wire or mesh from sliding up or down the post. They are generally designated as 1.01, 1.25 or 1.33, referring to the weight in pounds per lineal foot. They are commonly painted with a white tip on top; white improves the visibility of the fence line. When driving the post with a post pounder, the white top paint also serves as a visual cue that helps the user avoid raising the pounder too high while pounding. Raising the pounder too high allows it to lean towards the user and could lead to it striking them in the head. While T-posts are more common in the United States, Y-posts are more common in Australia and New Zealand, where they are sometimes called either star pickets or "Waratahs", after the company which registered a patent for them in 1926. In New Zealand, Waratahs are often used for trail blazing. In areas (such as the British Isles) where treated timber is relatively inexpensive, wooden fence posts are used and steel ones are unusual for agricultural purposes. In the British Isles, however, steel posts are often used for fencing into solid rock. In this case a hole is drilled into the rock, and the post is fixed using cement or epoxy. In Australia these are normally called star pickets; sizing is by length, normally with one notch on the top and holes down the length. They are often covered in a black bituminous coating. See also Agricultural fencing Stanchion References Building materials Metal fences Fence Steel objects
Steel fence post
Physics,Engineering
424
68,483,779
https://en.wikipedia.org/wiki/SWIM%20Protocol
The Scalable Weakly Consistent Infection-style Process Group Membership (SWIM) Protocol is a group membership protocol based on "outsourced heartbeats" used in distributed systems, first introduced by Indranil Gupta in 2001. It is a hybrid algorithm which combines failure detection with group membership dissemination. Protocol The protocol has two components, the Failure Detector Component and the Dissemination Component. The Failure Detector Component functions as follows: Every T time units, each node (N1) sends a ping to a random other node (N2) in its membership list. If N1 receives a response from N2, N2 is decided to be healthy and N1 updates its "last heard from" timestamp for N2 to be the current time. If N1 does not receive a response, N1 contacts k other nodes on its list and requests that they ping N2. If, after T units of time, no successful response is received, N1 marks N2 as failed. The Dissemination Component functions as follows: Upon detecting a failed node (N2), N1 sends a multicast message to the rest of the nodes in its membership list, with information about the failed node. Voluntary requests for a node to enter/leave the group are also sent via multicast. Properties The protocol provides the following guarantees: Strong Completeness: Full completeness is guaranteed (e.g. the crash-failure of any node in the group is eventually detected by all live nodes). Detection Time: The expected value of detection time (from node failure to detection) is T·(1/(1 − e^(−q))), where T is the length of the protocol period, and q is the fraction of non-faulty nodes in the group. Extensions The original SWIM paper lists the following extensions to make the protocol more robust: Suspicion: Nodes that are unresponsive to ping messages are not initially marked as failed. Instead, they are marked as "suspicious"; nodes which discover a "suspicious" node still send a multicast to all other nodes announcing the suspicion. If a "suspicious" node responds to a ping before some time-out threshold, an "alive" message is sent via multicast to remove the "suspicious" label from the node. Infection-Style Dissemination: Instead of propagating node failure information via multicast, protocol messages are piggybacked on the ping messages used to determine node liveness. This is equivalent to gossip dissemination. Round-Robin Probe Target Selection: Instead of randomly picking a node to probe during each protocol time step, the protocol is modified so that each node performs a round-robin selection of probe targets. This bounds the worst-case detection time of the protocol, without degrading the average detection time. See also Failure detector Crash (computing) References Fault-tolerant computer systems Distributed algorithms Distributed computing
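A single-process Python sketch of one protocol period of the failure-detector component described above. The class and the toy "alive" map are hypothetical scaffolding for illustration only; a real deployment would send pings over the network and piggyback dissemination on them, as the extensions describe.

```python
import random

class SwimProbe:
    """Toy, single-process model of one SWIM protocol period for node N1."""

    def __init__(self, members, alive):
        self.members = list(members)   # N1's membership list (peer names)
        self.alive = alive             # toy transport: name -> responds to pings?

    def ping(self, target):
        # Stand-in for a network round trip: succeeds iff the target responds.
        return self.alive.get(target, False)

    def protocol_period(self, k=3):
        """Direct-probe one random member; on silence, ask k peers to probe it.
        Returns the name of a node detected as failed, or None."""
        target = random.choice(self.members)
        if self.ping(target):
            return None                # healthy: update "last heard from" here
        helpers = random.sample([m for m in self.members if m != target],
                                min(k, len(self.members) - 1))
        # Indirect probes via the helpers; in this toy model a helper's probe
        # succeeds under exactly the same condition as a direct one.
        if any(self.ping(target) for _h in helpers):
            return None
        return target                  # real SWIM would now multicast/gossip this

alive = {"N2": True, "N3": True, "N4": False, "N5": True}
n1 = SwimProbe(alive.keys(), alive)
for _ in range(8):                     # run several periods; N4 is eventually probed
    failed = n1.protocol_period()
    if failed:
        print("detected failure of", failed)
```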
SWIM Protocol
Technology,Engineering
544
9,721,493
https://en.wikipedia.org/wiki/Jeanne%20Guillemin
Jeanne Harley Guillemin (March 6, 1943 – November 15, 2019) was an American medical anthropologist and author, who for 25 years taught at Boston College as a professor of Sociology and for over ten years was a senior fellow in the Security Studies Program at the Massachusetts Institute of Technology. She was an authority on biological weapons and published four books on the topic. Biography Born Jean Elizabeth Garrigan on March 6, 1943, in Brooklyn, New York City, she was raised in Rutherford, New Jersey and received a bachelor's degree (1968) in social psychology from Harvard University. In 1973, she completed a PhD in sociology and anthropology at Brandeis University. She taught at Boston College from 1972 until 2005. While at Boston College, Guillemin did extensive research on hospital technology and medical ethics, receiving fellowships to work on the U.S. Senate Finance Committee staff and at the Hastings Center for the Study of Ethics. She was also co-head of the National Library of Medicine's HealthAware Project, a joint project with Harvard Medical School to test how the internet could be used to educate people about preventive health measures. She married Robert Guillemin and had two sons, Robert and John. She divorced her first husband and, in 1986, married Matthew Meselson. In the 1980s, Guillemin became interested in the misuse of biomedical science by government weapons programs. She involved herself in two of her husband's investigations of alleged violations of international arms control agreements by the Soviet Union which involved germ weapons. The first was the "yellow rain" accusation by the United States against the USSR, to the effect that the Soviets enabled the Laotian army to use deadly mycotoxins to attack Hmong refugees allied with the US during the Vietnam War. This accusation had been disputed by Meselson in 1983 when he argued that the yellow material was actually bee feces mistaken for a biological weapon by those under attack and by certain US government scientists. (The issue remains disputed and the US government has not withdrawn the allegations, arguing that the controversy has not been fully resolved.) In 1992, Guillemin became part of Meselson's investigation into another Cold War controversy, the 1979 outbreak of anthrax in Sverdlovsk, a closed Soviet city in the Ural Mountains. The Soviet government claimed the cause was infected meat. Guillemin's interviews with the families of victims (64 people were recorded as dying) resulted in an epidemiological map showing the source to be an airborne release of anthrax spores from a military facility where, in violation of the 1972 Biological Weapons Convention, the testing of anthrax weapons had been in progress. In 1994, the results of this research were published in Science, and in 1999 her book on this research was published (Anthrax: The Investigation of a Deadly Outbreak, University of California Press). After 9/11, with the advent of the anthrax letter attacks, Guillemin was frequently asked by the media to explain the disease, based on her experience in Russia. In 2005 she published Biological Weapons: From State-sponsored Programs to Contemporary Bioterrorism (Columbia University Press), which offers a concise, comprehensive history of how anthrax and other microbes were developed as weapons over the course of the 20th century, resulting in potential bioterrorism. She turned her attention to the 2001 anthrax letter attacks after the 2008 suicide of the FBI's prime suspect, an anthrax scientist who worked for the U.S.
Army at Fort Detrick, Maryland. Her third book on biological weapons is about the letters and their impact on victims and government organizations. It is called American Anthrax: Fear, Crime, and the Investigation of the Nation's Deadliest Bioterror Attack, (Macmillan/Holt/Times, 2011). Guillemin joined the MIT Center for International Studies in 2006 as a research associate and senior advisor. In October 2019, she established an endowed fund to provide financial support to female PhD candidates studying international affairs. Guillemin died on November 15, 2019, at the age of 76. Books Guillemin, Jeanne, Urban Renegades: The Cultural Strategy of American Indians, Columbia University Press, 19?? (New edition, 1975). Guillemin, Jeanne Harley and Lynda Lytle Holmstrom, Mixed Blessings: Intensive Care for Newborns, Oxford University Press, 1986. Guillemin, Jeanne, Anthrax: The Investigation of a Deadly Outbreak, Berkeley, University of California Press, 1999. Guillemin, Jeanne, Anthrax and Smallpox: Comparison of Two Outbreaks, National Technical Information Service, 2002. Guillemin, Jeanne, Biological Weapons: From the Invention of State-sponsored Programs to Contemporary Bioterrorism, Columbia University Press, 2005. Guillemin, Jeanne, American Anthrax, Henry Holt and Company, LLC, 2011. Guillemin, Jeanne, Hidden Atrocities: Japanese Germ Warfare and American Obstruction of Justice at the Tokyo Trial, Columbia University Press, 2017. Guillemin wrote introductions to new editions of: Mead, Margaret, Kinship in the Admiralty Islands, In Anthropological Papers of the American Museum of Natural History, Volume 34, Issue 2 pages 181–358; American Museum of Natural History ( AMNH ), New York, 1934 [Transaction Publishers edition, 2001]. Brown, Fredric Joseph, Chemical Warfare: A Study in Restraints, Princeton University Press, 1968; [Transaction Publishers edition, 2005]. Guillemin edited: Guillemin, Jeanne (ed.), Anthropological Realities: Readings in the Science of Culture, Transaction Publishers, 1980. References Academics from Brooklyn Writers from Brooklyn American women anthropologists Medical anthropologists Boston College faculty People related to biological warfare 1943 births 2019 deaths Massachusetts Institute of Technology alumni American social sciences writers Brandeis University alumni Harvard College alumni People from Rutherford, New Jersey
Jeanne Guillemin
Biology
1,206
37,943,969
https://en.wikipedia.org/wiki/Plastic%20Disclosure%20Project
The Plastic Disclosure Project (PDP) is an organization founded to reduce the environmental impact caused by the rising use of plastics in products and packaging. It is listed as an entity of the Ocean Recovery Alliance, a 501(c)(3) organization in the United States. Similar to the Carbon Disclosure Project, the PDP encourages the measurement, disclosure, and management of plastics, as well as holding companies and individuals accountable for their use of plastics. Foundation The PDP was announced at the opening plenary session of the Clinton Global Initiative in 2010 as a preventative project that aims to address the issue of global plastic waste. Main goals The PDP specifies the following four main goals: Creation of an environment in which plastic applications are devoid of adverse environmental consequences. Implementation of regular annual reporting and evaluation of production and waste generation processes to enhance effective management strategies. Advocacy for the adoption of sustainable business practices in the realm of plastic utilization. Encouragement of innovative design approaches and inventive solutions for plastic-based products and packaging. Working process The PDP tells businesses to measure, manage, reduce, and benefit from plastic waste to create a world where plastic benefits consumers and businesses without negatively impacting the environment. It is based on the principle that to effectively manage and improve efficiency in plastic use, reuse, and recycling, businesses must first quantify their use of plastics. Annual disclosure requests are sent to companies that use plastic for goods and/or services on behalf of socially conscious investors and community stakeholders. It aims to connect solution providers with prospective companies to facilitate design and innovation. All types of organizations are invited to participate in PDP and commit to reducing their plastic footprint. Company disclosures The Plastic Disclosure Project (PDP) is an initiative that aims to track and reduce plastic waste generated by companies and institutions. Lush was the first participant to disclose its plastic waste data in 2011, followed by UC Berkeley in 2012, which was the first university to join the initiative. The project is managed by Campus Recycling and Refuse Services, along with the Office of Sustainability, with plans to assign interns to monitor plastic waste leaving the campus. Interest in this project has been expressed by companies from various countries. During the Plasticity Forum Rio '12, an alliance was formed between the Plastic Pollution Coalition and the PDP to collaborate on reducing plastic waste on university campuses worldwide. References External links Plastic Disclosure Project Ocean Recovery Alliance 501(c)(3) organizations Nature conservation organizations based in the United States Ocean pollution Plastics and the environment
Plastic Disclosure Project
Chemistry,Environmental_science
503
3,031,620
https://en.wikipedia.org/wiki/Hohlraum
In radiation thermodynamics, a hohlraum (; a non-specific German word for a "hollow space", "empty room", or "cavity") is a cavity whose walls are in radiative equilibrium with the radiant energy within the cavity. First proposed by Gustav Kirchhoff in 1860 and used in the study of black-body radiation (hohlraumstrahlung), this idealized cavity can be approximated in practice by a hollow container of any opaque material. The radiation escaping through a small perforation in the wall of such a container will be a good approximation of black-body radiation at the temperature of the interior of the container. Indeed, a hohlraum can even be constructed from cardboard, as shown by Purcell's Black Body Box, a hohlraum demonstrator. In spectroscopy, the hohlraum effect occurs when an object achieves thermodynamic equilibrium with an enclosing hohlraum. As a consequence of Kirchhoff's law, everything optically blends together and contrast between the walls and the object effectively disappears. Applications Hohlraums are used in high energy density physics (HEDP) and inertial confinement fusion (ICF) experiments to convert laser energy to thermal X-rays for imploding capsules, heating targets, and generating thermal radiation waves. They may also be used in nuclear weapon designs. Inertial confinement fusion The indirect drive approach to inertial confinement fusion is as follows: the fusion fuel capsule is held inside a cylindrical hohlraum. The hohlraum body is manufactured using a high-Z (high atomic number) element, usually gold or uranium. Inside the hohlraum is a fuel capsule containing deuterium and tritium (D-T) fuel. A frozen layer of D-T ice adheres inside the fuel capsule. The fuel capsule wall is synthesized using light elements such as plastic, beryllium, or high-density carbon, i.e. diamond. The outer portion of the fuel capsule explodes outward when ablated by the X-rays produced by the hohlraum wall upon irradiation by lasers. Due to Newton's third law, the inner portion of the fuel capsule implodes, causing the D-T fuel to be supercompressed, activating a fusion reaction. The radiation source (e.g., laser) is pointed at the interior of the hohlraum rather than at the fuel capsule itself. The hohlraum absorbs and re-radiates the energy as X-rays, a process known as indirect drive. The advantage of this approach, compared to direct drive, is that high mode structures from the laser spot are smoothed out when the energy is re-radiated from the hohlraum walls. The disadvantage is that low mode asymmetries are harder to control. It is important to be able to control both high mode and low mode asymmetries to achieve a uniform implosion. The hohlraum walls must have surface roughness less than 1 micron, and hence accurate machining is required during fabrication. Any imperfection of the hohlraum wall during fabrication will cause uneven, non-symmetrical compression of the fuel capsule inside the hohlraum during inertial confinement fusion. Surface finishing is therefore extremely important, since during ICF laser shots the intense pressure and temperature make the results highly sensitive to the roughness of the hohlraum texture. The fuel capsule must be precisely spherical, with texture roughness less than one nanometer, for fusion ignition to start; otherwise, instability will cause the fusion to fizzle. The fuel capsule contains a small fill hole, less than 5 microns in diameter, used to inject the capsule with D-T gas.
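Since the cavity radiates as a black body at its interior temperature, the power escaping through a small perforation can be estimated with the Stefan–Boltzmann law. The sketch below uses an assumed radiation temperature and hole size purely for illustration; neither figure comes from the text.

```python
import math

SIGMA = 5.670374419e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

def hole_power(T_kelvin, hole_diameter_m):
    """Black-body power radiated through a small hole of the given diameter
    in the wall of a cavity at temperature T (ideal hohlraum assumption)."""
    area = math.pi * (hole_diameter_m / 2.0) ** 2
    return SIGMA * T_kelvin ** 4 * area

# Assumed illustrative values: a 1 mm perforation and a radiation temperature
# of ~3.5e6 K (roughly 300 eV, a scale typical of ICF discussions).
print(f"{hole_power(3.5e6, 1e-3):.2e} W")
```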
The X-ray intensity around the capsule must be very symmetrical to avoid hydrodynamic instabilities during compression. Earlier designs had radiators at the ends of the hohlraum, but it proved difficult to maintain adequate X-ray symmetry with this geometry. By the end of the 1990s, target physicists developed a new family of designs in which the ion beams are absorbed in the hohlraum walls, so that X-rays are radiated from a large fraction of the solid angle surrounding the capsule. With a judicious choice of absorbing materials, this arrangement, referred to as a "distributed-radiator" target, gives better X-ray symmetry and target gain in simulations than earlier designs. Nuclear weapon design The term hohlraum is also used to describe the casing of a thermonuclear bomb following the Teller-Ulam design. The casing's purpose is to contain and focus the energy of the primary (fission) stage in order to implode the secondary (fusion) stage. Notes and references External links NIF Hohlraum – High resolution picture at Lawrence Livermore National Laboratory. Electromagnetic radiation Inertial confinement fusion
Hohlraum
Physics
1,038
12,928,899
https://en.wikipedia.org/wiki/Pollard%27s%20kangaroo%20algorithm
In computational number theory and computational algebra, Pollard's kangaroo algorithm (also Pollard's lambda algorithm, see Naming below) is an algorithm for solving the discrete logarithm problem. The algorithm was introduced in 1978 by the number theorist John M. Pollard, in the same paper as his better-known Pollard's rho algorithm for solving the same problem. Although Pollard described the application of his algorithm to the discrete logarithm problem in the multiplicative group of units modulo a prime p, it is in fact a generic discrete logarithm algorithm—it will work in any finite cyclic group. Algorithm Suppose G is a finite cyclic group of order n which is generated by the element α, and we seek to find the discrete logarithm x of the element β to the base α. In other words, one seeks x such that α^x = β. The lambda algorithm allows one to search for x in some interval [a, b]. One may search the entire range of possible logarithms by setting a = 0 and b = n − 1. 1. Choose a set S of positive integers of mean roughly √(b − a) and define a pseudorandom map f: G → S. 2. Choose an integer N and compute a sequence of group elements {x0, x1, ..., xN} according to: x0 = α^b and x_(i+1) = x_i · α^(f(x_i)) for i = 0, 1, ..., N − 1. 3. Compute d = f(x0) + f(x1) + ... + f(x_(N−1)). Observe that: x_N = x0 · α^d = α^(b+d). 4. Begin computing a second sequence of group elements {y0, y1, ...} according to: y0 = β and y_(i+1) = y_i · α^(f(y_i)), and a corresponding sequence of integers {d0, d1, ...} according to: d_i = f(y0) + f(y1) + ... + f(y_(i−1)). Observe that: y_i = y0 · α^(d_i) = β · α^(d_i). 5. Stop computing terms of {y_i} and {d_i} when either of the following conditions are met: A) y_j = x_N for some j. If the sequences {x_i} and {y_i} "collide" in this manner, then we have: β · α^(d_j) = α^(b+d), so that β = α^(b+d−d_j), x = b + d − d_j, and so we are done. B) d_i > b − a + d. If this occurs, then the algorithm has failed to find x. Subsequent attempts can be made by changing the choice of S and/or f. Complexity Pollard gives the time complexity of the algorithm as O(√(b − a)) group operations, using a probabilistic argument based on the assumption that f acts pseudorandomly. Since the interval [a, b] can be specified using O(log(b − a)) bits, this is exponential in the problem size (though still a significant improvement over the trivial brute-force algorithm that takes time O(b − a)). For an example of a subexponential time discrete logarithm algorithm, see the index calculus algorithm. Naming The algorithm is well known by two names. The first is "Pollard's kangaroo algorithm". This name is a reference to an analogy used in the paper presenting the algorithm, where the algorithm is explained in terms of using a tame kangaroo to trap a wild kangaroo. Pollard has explained that this analogy was inspired by a "fascinating" article published in the same issue of Scientific American as an exposition of the RSA public key cryptosystem. The article described an experiment in which a kangaroo's "energetic cost of locomotion, measured in terms of oxygen consumption at various speeds, was determined by placing kangaroos on a treadmill". The second is "Pollard's lambda algorithm". Much like the name of another of Pollard's discrete logarithm algorithms, Pollard's rho algorithm, this name refers to the similarity between a visualisation of the algorithm and the Greek letter lambda (λ). The shorter stroke of the letter lambda corresponds to the tame sequence {x_i}, since it starts from the position b to the right of x. Accordingly, the longer stroke corresponds to the wild sequence {y_i}, which "collides with" the first sequence (just like the strokes of a lambda intersect) and then follows it subsequently. Pollard has expressed a preference for the name "kangaroo algorithm", as this avoids confusion with some parallel versions of his rho algorithm, which have also been called "lambda algorithms". See also Dynkin's card trick Kruskal count Rainbow table References Further reading Number theoretic algorithms Computer algebra Logarithms
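A compact Python sketch of the algorithm above for the multiplicative group modulo a prime. The jump set (powers of two), the number of tame jumps N, and the toy demonstration parameters are heuristic assumptions, not prescriptions from Pollard's paper; as step 5B notes, a run can fail and must then be retried with a different map.

```python
import math

def kangaroo(alpha, beta, p, a, b):
    """Search for x in [a, b] with alpha**x == beta (mod p).
    Returns x, or None if this attempt failed (step 5B)."""
    # Jump set S = {2**0, ..., 2**(k-1)} with mean roughly sqrt(b - a).
    k = 1
    while (2 ** k - 1) / k < math.isqrt(b - a) + 1:
        k += 1
    f = lambda x: 1 << (x % k)        # pseudorandom map G -> S

    # Tame kangaroo: N jumps starting from alpha**b; the trap is alpha**(b+d).
    N = 4 * math.isqrt(b - a) + 1     # heuristic choice of N
    x, d = pow(alpha, b, p), 0
    for _ in range(N):
        j = f(x)
        x, d = (x * pow(alpha, j, p)) % p, d + j

    # Wild kangaroo: start from beta; each y_i equals beta * alpha**(d_i).
    y, dy = beta % p, 0
    while dy <= b - a + d:
        if y == x:                    # collision with the trap (step 5A)
            return b + d - dy
        j = f(y)
        y, dy = (y * pow(alpha, j, p)) % p, dy + j
    return None                       # failed; retry with another f (step 5B)

# Toy demonstration with assumed parameters: recover a discrete log mod 1019
# and verify it; a failed run prints None and would simply be retried.
p, alpha, x_true = 1019, 2, 525
beta = pow(alpha, x_true, p)
x = kangaroo(alpha, beta, p, 0, p - 2)
print(x, x is not None and pow(alpha, x, p) == beta)
```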
Pollard's kangaroo algorithm
Mathematics,Technology
736
22,079,789
https://en.wikipedia.org/wiki/Living%20Streets%20Aotearoa
Living Streets Aotearoa Inc. is the New Zealand organisation for people on foot, promoting walking-friendly communities. Living Streets Aotearoa is the national walking advocacy group with the vision of "more people choosing to walk more often." It promotes the concept of living streets, the use of roads for functions other than just vehicle access. The organisation is a voting member of the International Federation of Pedestrians. History Celia Wade-Brown, the inaugural President from 2002 to 2009, noticed that drivers, cyclists and government agencies met to discuss road safety, modal shift and funding, but that pedestrians and trampers were not part of the discussion. The organisation was founded to ensure that the voice of people on foot was heard, and evolved from Walk Wellington, which was set up in 1998 by a group of Wellingtonians with an interest in the rights of pedestrians and the benefits of walking. Living Streets Aotearoa was incorporated in 2002. The joint advocacy of Living Streets Aotearoa and cycle groups was pivotal in creating Getting there - on foot, by cycle - the New Zealand Walking and Cycling Strategy in 2005 and its subsequent (although at present only partial) implementation. There is a national executive committee and several local groups which advocate walking. The organisation received direct government funding until a change of government in 2008. It now relies entirely on subscriptions and grants. Funding for walking and pedestrian improvements is only available at local government level in New Zealand, and competes with many other priorities. The Local Government New Zealand discussion of the complex arrangements for funding transport sets out some of the issues. Main activities Living Streets works to develop walking-friendly communities throughout New Zealand and to promote the social, environmental, health and economic benefits of walking as a means of transport and recreation. Living Streets exists because the diverse needs and aspirations of people on foot are often overlooked. Walking is not consistently or fully integrated into decision-making in transport, urban design, public health and community development planning. Submissions on many policies and plans are a key activity and are made to national, local and other agencies to promote walkability and pedestrian-friendly environments. Promotion of the International Walking Charter has resulted in several local government councils adopting it and agreeing Walking Policies or plans. A biennial NZ walking conference was held in 2006, 2008 and 2010. The conference was combined with the NZ Cycling Conference in 2012, with subsequent joint "2WALKandCYCLE" conferences held in 2014, 2016, 2018, 2021 and 2024. Biennial Golden Foot Awards have been presented at the biennial conferences. Campaigns to improve walkability include calls to change the legislation so that vehicle users must give way to pedestrians when turning, and to re-signpost roads so that pedestrian exits are clearly marked. Living Streets Aotearoa supports the proposed new walkway across the Auckland Harbour Bridge, Skypath.
See also Public transport in New Zealand Te Araroa national walkway Federated Mountain Clubs of NZ References External links Official website Blind Foundation - includes mobility for people who are blind or have low vision CCSDisability - includes mobility for the physically disabled Walking Access Commission for Access Maps Living streets Medical and health organisations based in New Zealand Sustainable transport Traffic calming Transportation planning Walking
Living Streets Aotearoa
Physics
634
11,711,216
https://en.wikipedia.org/wiki/Tinsel%20wire
Tinsel wire is a type of electrical wire used for applications that require high mechanical flexibility but low current-carrying capacity. Tinsel wire is commonly used in cords of telephones, handsets, headphones, and small electrical appliances. It is far more resistant to metal fatigue failure than either stranded wire or solid wire. Construction Tinsel wire is produced by wrapping several strands of thin metal foil around a flexible nylon or textile core. Because the foil is very thin, the bend radius imposed on the foil is much greater than the thickness of the foil, leading to a low probability of metal fatigue. Meanwhile, the core provides high tensile strength without impairing flexibility. Typically, multiple tinsel wires are jacketed with an insulating layer to form one conductor. A cord is formed from several conductors in either a round profile or a flat cable. Connections Tinsel wire is commonly connected to equipment with crimped terminal lugs that pierce the insulation to make contact with the metal ribbons, rather than stripping insulation. Separated from the core, the individual ribbons are relatively fragile, and the core can be damaged by high temperatures. These factors make it difficult or impractical to terminate tinsel wire by soldering during equipment manufacture, although soldering is possible, with some difficulty, to repair a failed connection. However, the conductors tend to break at their junction with the rigid solder. Applications Tinsel wires or cords are used for telephony and audio applications in which frequent bending of electric cords occurs, such as for headsets and telephone handsets. It is also used in power cords for small appliances such as electric shavers or clocks, where stranded cable conductors of adequate mechanical size would be too stiff. Tinsel cords are recognized as type TPT or TST in the US and Canadian electrical codes, and are rated at 0.5 amperes. Manufacturers and suppliers Maeden Dacon Systems, Inc. Gavitt Wire & Cable Co., Inc. See also Litz wire References Electrical wiring Telephony equipment
Tinsel wire
Physics,Engineering
408
38,937,162
https://en.wikipedia.org/wiki/Guide%20to%20Pharmacology
The IUPHAR/BPS Guide to PHARMACOLOGY is an open-access website, acting as a portal to information on the biological targets of licensed drugs and other small molecules. The Guide to PHARMACOLOGY (with GtoPdb being the standard abbreviation) is developed as a joint venture between the International Union of Basic and Clinical Pharmacology (IUPHAR) and the British Pharmacological Society (BPS). This replaces and expands upon the original 2009 IUPHAR Database (standard abbreviation IUPHAR-DB). The Guide to PHARMACOLOGY aims to provide a concise overview of all pharmacological targets, accessible to all members of the scientific and clinical communities and the interested public, with links to details on a selected set of targets. The information featured includes pharmacological data, target and gene nomenclature, as well as curated chemical information for ligands. Overviews and commentaries on each target family are included, with links to key references. Background and development The Guide to PHARMACOLOGY was initially made available online in December 2011, with additional material released in July 2012. Maintained by a team of curators based at the University of Edinburgh, the Guide to PHARMACOLOGY is developed by an international network of contributors, including the editors of the Concise Guide to PHARMACOLOGY. As with the original IUPHAR-DB, the International Union of Basic and Clinical Pharmacology (IUPHAR) Committee on Receptor Nomenclature and Drug Classification (NC-IUPHAR) acts as the scientific advisory and editorial board for the database. Its network of over 500 specialist advisors (organized into ~90 subcommittees) contributes expertise and data. The current PI and grant holder of the GtoPdb project is Prof. Jamie A. Davies. The development and release of the first version of the GtoPdb in 2012 were described in an editorial published in the British Journal of Pharmacology entitled 'Guide to Pharmacology.org - an update'. The IUPHAR-DB is no longer being developed and all the information contained within this site is now available through the Guide to PHARMACOLOGY (IUPHAR-DB links should now re-direct). Content and features The target groups currently included on the Guide to PHARMACOLOGY are: Catalytic receptors Enzymes G protein-coupled receptors Ion channels Kinases Nuclear receptors Transporters Other protein targets including fatty acid-binding proteins, sigma receptors and adiponectin receptors Information for each target group is subdivided into families based on classification, with a separate data page for each family. Within each page, targets are arranged into lists of tables, with each table including the protein and gene nomenclature for the target with links to gene nomenclature databases, and listing selected ligands with activity at the target, including agonists, antagonists, inhibitors and radioligands. Pharmacological data and references are given and each ligand is hyperlinked to a ligand page displaying nomenclature and a chemical structure or peptide sequence, along with synonyms and relevant database links. The Guide to PHARMACOLOGY also includes a list of all ligand molecules included on the site, subdivided into categories including small organic molecules (including mammalian metabolites, hormones and neurotransmitters), synthetic organic molecules, natural products, peptides, inorganic molecules and antibodies. A complete list of all the approved drugs included on the website is also available via the ligand list.
The Guide to PHARMACOLOGY is being expanded to include clinical information on targets and ligands, in addition to educational resources. Search features on the website include quick and advanced search options, and receptor and ligand searches, including support for ligand structures using chemical structures. Other features include 'Hot topic' news items and a recent receptor-ligand pairing list. IUPHAR Guide to IMMUNOPHARMACOLOGY Between November 2015 and October 2018, the Wellcome Trust supported a project to develop the IUPHAR Guide to IMMUNOPHARMACOLOGY (GtoImmuPdb), based on the GtoPdb schema. The GtoImmuPdb is an open-access resource that brings an immunological perspective to the high-quality, expert-curated pharmacological data found in the existing IUPHAR/BPS Guide to PHARMACOLOGY. Protein targets and ligands relevant to immunopharmacology have been tagged and curated into GtoImmuPdb. These have also been associated with new immunological data types such as immunological processes, cell types, and disease. GtoImmuPdb provides a knowledge base that connects immunology with pharmacology, bringing added value and supporting research and development of drugs targeted at modulating immune, inflammatory or infectious components of the disease. The Concise Guide to PHARMACOLOGY The Guide to PHARMACOLOGY includes an online, open-access database version of the Concise Guide to PHARMACOLOGY, previously "The Guide to Receptors and Channels" available in HTML, PDF and printed formats. A hard copy summary of the online database is published as The Concise Guide to Pharmacology 2017/2018 as a series of papers as a bi-annual supplement to the British Journal of Pharmacology. Database links The Guide to PHARMACOLOGY includes links to other relevant resources via target and ligand pages on both the concise and detailed view pages. Many of these resources maintain reciprocal links with the relevant Guide to PHARMACOLOGY pages. HUGO Gene Nomenclature Committee Mouse Genome Informatics Rat genome database Ensembl UniProt Entrez PubChem ChemSpider ChEMBL ChEBI KEGG Online Mendelian Inheritance in Man (OMIM) DrugBank Protein Data Bank Future directions Following funding from the Wellcome Trust, from 2012 to 2015 the Guide to PHARMACOLOGY was expanded to include the biological targets of all prescription drugs and other likely targets of future small molecule drugs. Overviews of the key features of a wide range of targets are provided on the summary view pages, with detailed view pages providing more in-depth information on the properties of a selected subset of targets. As of January 2018 the Medicines for Malaria Venture is supporting a new extension to develop the Guide to Malaria Pharmacology. The core GtoPdb continues to be supported by the British Pharmacological Society. See also International Union of Basic and Clinical Pharmacology British Pharmacological Society Biological target British Journal of Pharmacology References External links IUPHAR/BPS Guide to PHARMACOLOGY 2021 edition of the IUPHAR/BPS Guide to PHARMACOLOGY (published in BJP) 2024 edition of the IUPHAR/BPS Concise Guide to PHARMACOLOGY (published in BJP) International Union of Basic and Clinical Pharmacology (IUPHAR) British Pharmacological Society (BPS) British Journal of Pharmacology Concise Guide to PHARMACOLOGY Wellcome Trust Medicines for Malaria Venture Biological databases Pharmacology Pharmacological classification systems
Guide to Pharmacology
Chemistry
1,448
11,127,931
https://en.wikipedia.org/wiki/Lepteutypa%20cupressi
Lepteutypa cupressi is a plant pathogen which causes a disease ("Cypress canker") in Cupressus, Thuja, and related conifer types. The name Seiridium cupressi (formerly Coryneum cupressi) is for the anamorph of this fungus, that is, it is used for the asexual form. Now that it is known to have a sexual stage the genus name Lepteutypa should take precedence. References External links USDA ARS Fungal Database Xylariales Fungal tree pathogens and diseases Fungi described in 1973 Fungus species
Lepteutypa cupressi
Biology
127
979,488
https://en.wikipedia.org/wiki/Ice%20rink
An ice rink (or ice skating rink) is a frozen body of water or an artificial sheet of ice where people can ice skate or play winter sports. Ice rinks are also used for exhibitions, contests and ice shows. The growth and increasing popularity of ice skating during the 1800s marked a rise in the deliberate construction of ice rinks in numerous areas of the world. The word "rink", of Scottish origin and meaning "course", was used to describe the ice surface used in the sport of curling, but was kept in use once the winter team sport of ice hockey became established. There are two types of ice rinks in prevalent use today: natural ice rinks, where freezing occurs from cold ambient temperatures, and artificial (or mechanically frozen) ice rinks, where a coolant produces cold temperatures underneath the water body (on which the game is played), causing the water body to freeze and then stay frozen. There are also synthetic ice rinks, where skating surfaces are made out of plastics. Besides recreational ice skating, some of its uses include: ice hockey, sledge hockey (also known as "Para ice hockey" or "sled hockey"), spongee (sponge hockey), bandy, rink bandy, rinkball, ringette, broomball (both indoor and outdoor versions), Moscow broomball, speed skating, figure skating, ice stock sport, curling, and crokicurl. However, Moscow broomball is typically played on a tarmac tennis court that has been flooded with water and allowed to freeze. The sports of broomball, curling, ice stock sport, spongee, Moscow broomball, and the game of crokicurl do not use ice skates of any kind. While technically not ice rinks, ice tracks and trails, such as those used in the sport of speed skating and recreational or pleasure skating, are sometimes referred to as "ice rinks". Etymology Rink, a Scottish word meaning "course", was used as the name of a place where curling was played. As curling is played on ice, the name has been retained for the construction of ice areas for other sports and uses. History Great Britain London, England Early attempts at constructing artificial ice rinks were first made in the "rink mania" of 1841–44. The technology for the maintenance of natural ice did not exist, therefore these early rinks used a substitute consisting of a mixture of hog's lard and various salts. An item in the May 8, 1844 issue of Eliakim Littell's Living Age headed "The Glaciarium" reported that "This establishment, which has been removed to Grafton Street East, Tottenham Court Road, was opened on Monday afternoon. The area of artificial ice is extremely convenient for such as may be desirous of engaging in the graceful and manly pastime of skating". By 1844, these venues fell out of fashion as customers grew tired of the "smelly" ice substitute. It wasn't until thirty years later that refrigeration technology developed to the point where natural ice could finally be feasibly used in rinks. The world's first mechanically frozen ice rink was the Glaciarium, opened by John Gamgee, a British veterinarian and inventor, in a tent in a small building just off the Kings Road in Chelsea, London, on 7 January 1876. Gamgee had become fascinated by the refrigeration technology he encountered during a study trip to America to look at Texas fever in cattle. In March of that same year it moved to a permanent venue at 379 Kings Road, where a rink was established. The rink was based on a concrete surface, with layers of earth, cow hair and timber planks.
Atop these were laid oval copper pipes carrying a solution of glycerine with ether, nitrogen peroxide and water. The pipes were covered by water and the solution was pumped through, freezing the water into ice. Gamgee discovered the process while attempting to develop a method to freeze meat for import from Australia and New Zealand, and patented it as early as 1870. Gamgee operated the rink on a membership-only basis and attempted to attract a wealthy clientele, experienced in open-air ice skating during winters in the Alps. He installed an orchestra gallery, which could also be used by spectators, and decorated the walls with views of the Swiss Alps. The rink initially proved a success, and Gamgee opened two further rinks later in the year: at Rusholme in Manchester and the "Floating Glaciarium" at Charing Cross in London, this last significantly larger. The Southport Glaciarium opened in 1879, using Gamgee's method. The Fens, England In the marshlands of The Fens, skating developed early as a winter pastime, as there were plenty of natural ice surfaces. This is the origin of Fen skating, and the area is said to be the birthplace of bandy. The Great Britain Bandy Association has its home in the area. Hungary In Austria-Hungary, the first ice skating rink opened in 1870 in the City Park of Budapest; it is still in operation to this day and is considered one of the largest in Europe. Germany In Germany, the first ice skating rink opened in 1882 in Frankfurt during a patent exhibition. It operated for two months; the refrigeration system was designed by Carl von Linde, and it was probably the first skating rink where ammonia was used as a refrigerant. Ten years later, a larger rink was permanently installed on the same site. United States Early indoor ice rinks Ice skating quickly became a favorite pastime and craze in several American cities around the mid-1800s, spawning the construction of several ice rinks. Two early indoor ice rinks made of mechanically frozen ice in the United States opened in 1894: the North Avenue Ice Palace in Baltimore, Maryland, and the Ice Palace in New York City. The St. Nicholas Rink (also known as the St. Nicholas Arena) was an indoor ice rink in New York City which existed from 1896 until its demolition in the 1980s. It was one of the earliest indoor ice rinks made of mechanically frozen ice in North America and gave ice skaters the opportunity to enjoy an extended skating season. The rink was used for pleasure skating and ice hockey, and was an important rink in the development of the sports of ice hockey and boxing in the United States. Oldest indoor artificial ice rink in use The oldest indoor artificial ice rink still in use in the United States is Matthews Arena (formerly Boston Arena) in Boston, Massachusetts, which was built between 1909 and 1910. The rink is located on the campus of Northeastern University. This American rink is the original home of the National Hockey League (NHL) Boston Bruins. The Bruins are the only remaining NHL team among the NHL's Original Six with their original home arena still in existence. Contemporary The Guidant John Rose Minnesota Oval is an outdoor ice rink in Roseville, Minnesota, that is large enough to allow ice skaters to play the sport of bandy. Its perimeter is used as an oval speed skating track. The facility was constructed between June and December 1993.
It is the only regulation-sized bandy field in North America and serves as the home of USA Bandy and its national bandy teams. The $3.9 million renovation project planned for the Guidant John Rose Minnesota Oval was set to be completed before the opening of the rink's 29th season on November 18, 2022. The oval measures 400 meters long and 200 meters wide, which makes it the largest artificial outdoor refrigerated sheet of ice in North America. It is a world-class facility that is primarily used for ice sports such as ice skating, ice hockey, speed skating, and bandy. The oval hosts several national and international competitions throughout the year, including the USA Cup in bandy. Canada The first building in Canada to be electrified was the Victoria Skating Rink, which opened in 1862 in Montreal, Quebec. The rink was created using natural ice. By the start of the twentieth century it had been described as "one of the finest covered rinks in the world" and was used during winter for pleasure skating, ice hockey, and skating sports. In summer months, the building was used for various other events. Types Natural ice Many ice rinks consist of, or are found on, open bodies of water such as lakes, ponds, canals, and sometimes rivers; these can be used only in the winter in climates where the surface freezes thickly enough to support human weight. Rinks can also be made in cold climates by enclosing a level area of ground, filling it with water, and letting it freeze. Snow may be packed to use as a containment material. An example of this type of "rink", which is a body of water converted into a skating trail during winter, is the Rideau Canal Skateway in Ottawa, Ontario. Artificial ice In any climate, an arena ice surface can be installed in a properly built space. This consists of a bed of sand or occasionally a slab of concrete, through (or on top of) which pipes run. The pipes carry a chilled fluid (usually either a salt brine or water with antifreeze, or in the case of smaller rinks, refrigerant) which can lower the temperature of the slab so that water placed atop will freeze. This method is known as 'artificial ice' to differentiate it from ice rinks made by simply freezing water in a cold climate, indoors or outdoors, although both types consist of frozen water. A more proper technical term is 'mechanically frozen' ice. An example of this type of rink is the outdoor rink at Rockefeller Center in New York. Construction Modern rinks have a specific procedure for preparing the surface. With the pipes cold, a thin layer of water is sprayed on the sand or concrete to seal and level it (or in the case of concrete, to keep it from being marked). This thin layer is painted white or pale blue for better contrast; markings necessary for hockey or curling are also placed, along with logos or other decorations. Another thin layer of water is sprayed on top of this. The ice is built up to a thickness of . (A rough worked example of the quantities involved follows this section.) Synthetic Synthetic rinks are constructed from a solid polymer material designed for skating using normal metal-bladed ice skates. High-density polyethylene (HDPE) and ultra-high-molecular-weight polyethylene (UHMW) are the only materials that offer reasonable skating characteristics, with UHMW synthetic rinks offering the most ice-like skating but also being the most expensive. A typical synthetic rink will consist of many panels of thin surface material assembled on top of a sturdy, level and smooth sub-floor (anything from concrete to wood or even dirt or grass) to create a large skating area. 
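The following sketch gives the rough arithmetic behind the flood-building process described under Construction above. Since the article's original figures are not preserved here, every number is an illustrative assumption: a 60 m × 30 m surface (a common Olympic-size hockey footprint), a 2.5 cm finished sheet, and 0.5 mm flood passes.

```python
# Rough arithmetic for flood-building an ice sheet. All figures are
# illustrative assumptions, not values from the article.
LENGTH_M, WIDTH_M = 60.0, 30.0     # assumed rink dimensions
TARGET_THICKNESS_M = 0.025         # assumed finished sheet: 2.5 cm
LAYER_THICKNESS_M = 0.0005         # assumed thickness of one flood pass
LATENT_HEAT_J_PER_KG = 334_000     # latent heat of fusion of water
WATER_DENSITY_KG_M3 = 1000.0

area_m2 = LENGTH_M * WIDTH_M
volume_m3 = area_m2 * TARGET_THICKNESS_M           # total water frozen
passes = TARGET_THICKNESS_M / LAYER_THICKNESS_M    # number of floods
# Latent heat only: chilling the water down to 0 degrees C and the
# rink's ongoing heat load would add to this figure.
freeze_energy_j = volume_m3 * WATER_DENSITY_KG_M3 * LATENT_HEAT_J_PER_KG

print(f"{volume_m3:.0f} m^3 of water, built up over {passes:.0f} passes")
print(f"~{freeze_energy_j / 3.6e9:.1f} MWh removed just to freeze the sheet")
```

Under these assumptions the sheet amounts to about 45 m³ of water frozen over roughly 50 passes, and removing its latent heat alone corresponds to about 4 MWh of refrigeration.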
Operation Periodically after the ice has been used, it is resurfaced using a machine called an ice resurfacer (sometimes colloquially referred to as a Zamboni, after a major manufacturer of such machinery). For curling, the surface is 'pebbled' by allowing loose drops of cold water to fall onto the ice and freeze into rounded peaks. Between events, especially if the arena is being used without need for the ice surface, it is either covered with a heavily insulated floor or melted by allowing the fluid in the pipes below the ice to warm. A highly specialized form of rink is used for speed skating; this is a large oval (or ring) much like an athletic track. Because of their limited use, speed skating ovals are far less common than hockey or curling rinks. Those skilled at preparing arena ice are often in demand for major events where ice quality is critical. The popularity of the sport of hockey in Canada has led its icemakers to be particularly sought after. One such team of professionals was responsible for placing a loonie coin under center ice at the 2002 Winter Olympics in Salt Lake City, Utah; as both Canadian teams (men's and women's) won their respective hockey gold medals, the coin was christened "lucky" and is now in the possession of the Hockey Hall of Fame after having been retrieved from beneath the ice. Standard rink sizes Bandy In bandy, the size of the playing field is x . For internationals, the size must not be smaller than . The variety rink bandy is played on ice hockey rinks. Figure skating The size of figure skating rinks can be quite variable, but the International Skating Union prefers Olympic-sized rinks for figure skating competitions, particularly for major events. These are . The ISU specifies that competition rinks must not be larger than this and not smaller than . Ice hockey Although there is a great deal of variation in the dimensions of actual ice rinks, there are basically two rink sizes in use at the highest levels of ice hockey. Historically, ice rinks were smaller than they are today. Official National Hockey League rinks are . The dimensions originate from the size of the Victoria Skating Rink in Montreal, Quebec, Canada. Official Olympic and International ice hockey rinks have dimensions of . Para ice hockey Sledge hockey (also known as "Para ice hockey" or "sled hockey") uses the same rink dimensions as ice hockey. Ringette Ringette utilizes most of the standard ice hockey markings used by Hockey Canada, but the ringette rink uses additional free-pass dots in each of the attacking zones and centre zone areas as well as a larger goal crease area. Two additional free-play lines (one in each attacking zone) are also required. A ringette rink is thus an ice rink designed for ice hockey which has been modified to enable ringette to be played. Though some ice surfaces are designed strictly for ringette, ice rinks with exclusive lines and markings for ringette are usually created only at venues hosting major ringette competitions and events. Most ringette rinks are found in Canada and Finland. The playing area, size, lines and markings of the standard Canadian ringette rink are similar to those of the average ice hockey rink in Canada, with certain modifications. Early in its history, ringette was played mostly on rinks constructed for ice hockey, broomball, figure skating, and recreational skating, and mostly on outdoor rinks, since few indoor ice rinks were available at the time. 
Broomball The organized format of broomball uses the rink dimensions defined by a standard Canadian ice hockey rink. Spongee The sport of spongee ("sponge hockey") does not use ice skates. A skateless outdoor winter variant of ice hockey, spongee has its own rule codes and is played strictly within the Canadian city of Winnipeg, where it has a cult following. The sport generally uses the rink dimensions defined by a standard Canadian ice hockey rink. Rinkball Rinkball rinks today typically use the measurements of an ice hockey rink, though they may be slightly larger, since the sport originated in Europe, where the bandy field influenced the size and development of smaller ice rinks. Tracks and trails Tracks and trails are occasionally referred to as ice rinks in spite of their differences. Ice skating tracks and ice skating trails are used for recreational exercise and sporting activities during the winter season, including distance ice skating. Ice trails form on natural bodies of water such as rivers, which freeze during winter, though some trails are made by clearing snow to create skating lanes on large frozen lakes. Ice trails are usually used for pleasure skating, though the sport and recreational activity of tour skating can involve ice skaters passing over ice trails and open areas created by frozen lakes. To date, speed skating and ice cross downhill are the only winter activities or sports in which ice skaters use tracks and lanes designed to include bends rather than a simple straightaway. Some ice rinks are constructed in a manner allowing for a speed skating track to be created around their outside perimeter. Tracks Speed skating track Speed skating tracks or "rinks" can be created either naturally or artificially and are made either outdoors or inside indoor facilities. Tracks may be created by having the lanes surround the exterior of an ice rink. The sport requires the use of a special type of racing skate, the speed skating ice skate. In speed skating, for short track, the official Olympic rink size is , with an oval ice track of in circumference. In long track speed skating the oval ice track is usually in circumference. Ice skating marathon tracks An ice skating marathon is a long-distance speed skating race which may be held on natural ice on canals and bodies of water such as lakes and rivers. Marathon skating is a discipline of speed skating which was founded in the Netherlands. The races involve at least five skaters who all start together on an ice rink with a minimum length of 333.33 meters or on a track: Minimum distance longer than 6.4 kilometers and up to 200 kilometers for skaters who have reached the age of 17 prior to July 1 preceding the skating season. Minimum distance longer than 4 kilometers and up to 20 kilometers for skaters who have reached the age of 13 but have not yet reached the age of 17 before July 1 preceding the skating season. Minimum distance of 2 kilometers and up to 10 kilometers for skaters who have not yet reached the age of 13 before July 1 preceding the skating season. Dutch skating tracks The Netherlands is home to the Elfstedentocht, a 200 km distance skating race whose track leads through the 11 different cities of Friesland, a northern province of the Netherlands. Skating tracks on natural ice are maintained by the towns and communities, who take care of the safety of the tracks. 
Ice cross downhill tracks Ice cross downhill (formerly known as "Red Bull Crashed Ice" or "Crashed Ice") is a winter extreme sporting event involving direct competitive downhill skating. Skaters race down a walled track which features sharp turns and high vertical drops. Trails Rideau Canal Skateway An example of an ice skating trail, or "rink", is the Rideau Canal Skateway in Ottawa, Ontario, Canada, estimated at and long, which is equivalent to 90 Olympic-size skating rinks. The rink is prepared by lowering the canal's water level and letting the canal water freeze. The rink is then resurfaced nightly by cleaning the ice of snow and flooding it with water from below the ice. The rink is recognized as the "world's largest naturally frozen ice rink" by the Guinness Book of World Records because "its entire length receives daily maintenance such as sweeping, ice thickness checks and there are toilet and recreational facilities along its entire length". Longest trail The longest ice skating trail is in Invermere, British Columbia, Canada, on the Lake Windermere Whiteway. The naturally frozen trail measures . Combined Outdoor ice skating activities and competitions involving distance travel, whether for recreation, exercise, competition or adventure, can involve frozen lakes, rivers, and canals. Tour skating The sport and recreational activity of tour skating ("Nordic skating" in North America) is strictly an outdoor activity for ice skaters. Nordic skating originated during the 1900s in Sweden. Ice skaters traverse naturally frozen bodies of water, which sometimes, but not always, include interconnected ice trails as well as frozen ponds, lakes, and even marsh areas. Tour skaters use a special ice skate with long blades. Elfstedentocht (Eleven cities tour) The Elfstedentocht (Eleven Cities Tour) is a long-distance tour skating event on natural ice, almost 200 km long, which is held both as a speed skating competition (with 300 contestants) and a leisure tour (with 16,000 skaters). It is the biggest ice-skating tour in the world and is held in the province of Friesland in the north of the Netherlands. The event leads past all eleven historical cities of the province and is held at most once a year, and only when the natural ice along the entire course is at least thick. It has sometimes been held in consecutive years, while at other times gaps between tours have exceeded 20 years. When the ice is suitable, the tour is announced and starts within 48 hours. The last Elfstedentocht was held in 1997. Laneways The sports of curling and ice stock sport are played on either ice rinks or simple ice surfaces with lanes marked out for play. Curling The sport of curling uses an ice rink known as a "curling rink" or curling sheet. Curling does not involve ice skating; play takes place in lanes. The curling sheet is a carefully prepared rectangular area of ice created to be as flat and level as possible. The ice surface dimensions are in length by in width. A curling sheet includes areas marked off in a manner specific to the sport, including the house, the button, hog lines, hacks, and shorter borders along the ends of the sheet called the backboards. The dimensions of an official curling sheet are defined by the World Curling Federation Rules of Curling. At major events, ice preparation and maintenance are extremely important. Curling clubs usually have an ice maker whose main job is to care for the ice. 
Ice stock sport Ice stock sport (sometimes spelled "Icestocksport" or called "Bavarian curling") is a winter sport comparable to curling. It is called Eisstockschießen in German. Although the sport is typically played on ice, summer competitions are held on asphalt. Other Crokicurl Crokicurl is a Canadian winter sport that is a large-scale hybrid of curling and the board game Crokinole. It is played outdoors by teams of two players who take turns trying to score points on a quadrant-shaped playing area marked off on a sheet of ice. The quadrant includes posts, a starting line, a wooden side-rail along the edge, and a 20-point "button". Depending on the area involved, players can score 5, 10, or 15 points. Outdoor ice Outdoor ice rinks and frozen ponds, rivers, and canals serve several purposes, allowing for physical activities during the winter season such as recreational ice skating and figure skating, and also functioning as an affordable place for players to engage in team winter sports such as ice hockey, bandy, rinkball, ringette, broomball, and spongee as a pastime. These areas and facilities also help individuals, youth sporting organizations, and families offset the expensive cost of indoor ice time. They are also used as part of outdoor winter festivals and to host pond hockey tournaments and the like. Decline Rinks The length of the outdoor ice skating season began to decline noticeably in North America in the early part of the 21st century, a trend that has been attributed in part to climate change. One of the consequences is reduced access to the outdoor facilities that give youth extended, low-cost opportunities to participate in ice-based sports, a problematic development considering that winter sports have become increasingly expensive over time, resulting in economic exclusion. RinkWatch RinkWatch is a citizen science program in Canada run by researchers at Wilfrid Laurier University in Waterloo, Ontario. Beginning in 2013, the program started collecting data on outdoor rinks and frozen ponds across North America. The objective is to better understand how climate change may be impacting the outdoor skating season. Tracks and trails Elfstedentocht, the world's biggest ice-skating tour, involving tour skating and speed skating, has been declared to be in danger of "extinction" due to climate change. The last Elfstedentocht was held in 1997. See also Bandy field Figure skating rink Ice hockey rink Speed skating rink Curling sheet Synthetic ice List of ice hockey arenas by capacity References External links The Ice Rink – A Brief History RinkWatch is a citizen science research initiative that asks people to help environmental scientists monitor winter weather conditions and study the long-term impacts of climate change. Comprehensive list of ice skating rinks in the U.S. and Canada Backyard Ice Rink Builder Community Playing field surfaces Sports venues Figure skating Bandy Ice hockey Sledge hockey Ringette Speed skating Broomball Sports rules and regulations
Ice rink
Engineering
4,903
61,024,303
https://en.wikipedia.org/wiki/C25H25NO2
The molecular formula C25H25NO2 (molar mass: 371.47 g/mol, exact mass: 371.1885 u) may refer to: JWH-081 JWH-164 Molecular formulas
C25H25NO2
Physics,Chemistry
65
16,875,690
https://en.wikipedia.org/wiki/Ingratiation
Ingratiation is a psychological technique in which an individual attempts to influence another person by becoming more likeable to their target. This term was coined by social psychologist Edward E. Jones, who further defined ingratiation as "a class of strategic behaviors illicitly designed to influence a particular other person concerning the attractiveness of one's personal qualities." Ingratiation research has identified some specific tactics of employing ingratiation: Complimentary Other-Enhancement: the act of using compliments or flattery to improve the esteem of another individual. Conformity in Opinion, Judgment, and Behavior: altering the expression of one's personal opinions to match the opinion(s) of another individual. Self-Presentation or Self-Promotion: explicit presentation of an individual's own characteristics, typically done in a favorable manner. Rendering Favors: performing helpful requests for another individual. Modesty: moderating the estimation of one's own abilities, sometimes seen as self-deprecation. Expression of Humor: any event shared by an individual with the target individual that is intended to be amusing. Instrumental Dependency: the act of convincing the target individual that the ingratiator is completely dependent upon them. Name-dropping: the act of referencing one or more other individuals in a conversation with the intent of using the reference(s) to increase perceived attractiveness or credibility. Research has also identified three distinct types of ingratiation, each defined by its ultimate goal. Regardless of the goal of ingratiation, the tactics of employment remain the same: Acquisitive ingratiation: ingratiation with the goal of obtaining some form of resource or reward from a target individual. Protective ingratiation: ingratiation used to prevent possible sanctions or other negative consequences elicited from a target individual. Significance ingratiation: ingratiation designed to cultivate respect and/or approval from a target individual, rather than an explicit reward. Ingratiation is sometimes confused with another social psychological term, impression management. Impression management is defined as "the process by which people control the impressions others form of them." While these terms may seem similar, it is important to note that impression management represents a larger construct of which ingratiation is a component. In other words, ingratiation is a method of impression management. Edward E. Jones: the Father of Ingratiation Ingratiation, as a topic in social psychology, was first defined and analyzed by social psychologist Edward E. Jones. In addition to his pioneering studies on ingratiation, Jones also helped develop some of the fundamental theories of social psychology, such as the fundamental attribution error and the actor-observer bias. Jones' first extensive studies of ingratiation were published in his 1964 book Ingratiation: A Social Psychological Analysis. In citing his reasons for studying ingratiation, Jones reasoned that ingratiation was an important phenomenon to study because it elucidated some of the central mysteries of social interaction and was also a stepping stone towards understanding other common social phenomena such as group cohesiveness. Tactics of ingratiation Complimentary other-enhancement is said to "involve communication of directly enhancing, evaluative statements" and is most closely associated with the practice of flattery. 
Most often, other-enhancement is achieved when the ingratiator exaggerates the positive qualities of the target while leaving out the negative qualities. According to Jones, this form of ingratiation is effective based on the Gestaltian axiom that it is hard for a person to dislike someone who thinks highly of them. In addition to this, other-enhancement seems to be most effective when compliments are directed at the target's sources of self-doubt. To disguise the obviousness of the flattery, the ingratiator may first talk negatively about qualities the target knows are weaknesses and then compliment him/her on a weak quality the target is unsure of. Conformity in opinion, judgment, and behavior is based on the tenet that people like those whose values and beliefs are similar to their own. According to Jones, ingratiation in the form of conformity can "range from simple agreement with expressed opinions to the most complex forms of behavior imitation and identification." Similar to other-enhancement, conformity is thought to be most effective when there is a change of opinion. When the ingratiator switches from a divergent opinion to an agreeing one, the target assumes the ingratiator values his/her opinion enough to change, in turn strengthening the positive feelings the target has for the ingratiator. Moreover, the target person is likely to be most appreciative of agreement when he wants to believe that something is true but is not sure that it is. Jones argues, therefore, that it is best to start by disagreeing on trivial issues and agreeing on issues where the target person needs affirmation. Self-presentation or self-promotion is the "explicit presentation or description of one's own attributes to increase the likelihood of being judged attractively". The ingratiator is one who models himself along the lines of the target person's suggested ideals. Self-presentation is said to be most effective when strengths are exaggerated and weaknesses minimized. This tactic, however, seems to depend on the normal self-image of the ingratiator. For example, those who are held in high esteem are regarded with more favor if they are modest, while those who are not are seen more favorably when they exaggerate their strengths. One can also present weakness in order to impress the target: by revealing weaknesses, one implies a sense of respect for and trust of the target. Interview responses such as "I am the kind of person who..." and "You can count on me to..." are examples of self-presentation techniques. Rendering favors is the act of performing helpful requests for another individual. This is a positive ingratiation tactic, as "persons are likely to be attracted to those who do nice things for them." By providing favors or gifts, the ingratiator promotes attraction in the target by making him/herself appear more favorable. In some instances, people may use favors or gifts with the goal of "...influencing others to give us the things we want more than they do, but giving them the things they want more than we do." Modesty is the act of moderating the estimation of one's own abilities. Modesty is seen as an effective ingratiation strategy because it provides a relatively less transparent format for the ingratiator to promote likeability. Modesty can sometimes take the form of self-deprecation, or deprecation directed toward one's self, which is the opposite of self-promotion. 
Instead of the ingratiator making him/herself seem more attractive in the eyes of the target individual, the goal of self-deprecation is to decrease the perceived attractiveness of the ingratiator. In doing so, the ingratiator hopes to receive pity from the target individual, and is thus able to persuade via such pity. Expression of humor is the intentional use of humor to create a positive affect with the target individual. The expression of humor is best employed when the ingratiator is of higher status than the target individual, such as from supervisor to employee. "As long as the target perceives the individual's joke as appropriate, funny, and having no alternative implications, then the joke will be taken in a positive as opposed to a negative manner." When humor is used by an individual of lower status within the setting, it may prove to be risky, inappropriate, and distracting, and may damage likeability as opposed to promoting it. Instrumental dependency is the act of instilling the impression upon the target individual that the ingratiator is completely dependent upon that individual. Similar to modesty, instrumental dependency works by creating a sense of pity for the ingratiator. While instrumental dependency as a process is similar to modesty or self-deprecation, it is defined separately due to the notion that instrumental dependency is typically task-dependent, meaning the ingratiator would insinuate that he/she is dependent upon the target individual for the completion of a specific task or goal. Name-dropping is the act of using the name of an influential person (or persons) as a reference while communicating with the target individual. Typically, name-dropping is done strategically, in such a manner that the reference(s) in question will be known and respected by the target individual. As a result, the target individual is likely to see the ingratiator as more attractive. Major empirical findings In business Seiter conducted a study that looked into the effect of ingratiation tactics on tipping behavior in the restaurant business. The study was done at two restaurants in Northern Utah, and the participant pool was 94 dining parties of 2 people each, equaling 188 participants in total. In order to ensure that the person paying the bill was complimented, the experimenters were told to genuinely compliment both members of the party. The data was collected by two female communication students, both aged 22, who worked part-time as waitresses. The results of the experiment supported the initial hypothesis that customers receiving compliments on their choice of dish would tip larger amounts than customers who received no compliment after ordering. A one-way ANOVA test was performed, and this test found significant differences in tipping behavior between the two conditions: customers who received compliments left larger tips (M = 18.94) than those who were not the recipients of ingratiation tactics (M = 16.41). Treadway, Ferris, Duke, Adams, and Thatcher explored the role of subordinate ingratiation and political skill in supervisors' impressions and ratings of interpersonal facilitation. Specifically, the researchers wanted to see if political skill and ingratiation interact in the business setting. 
"Political skill refer to the ability to exercise influence through the use of persuasion, manipulation, and negotiation" They hypothesized that employees who used high rates of ingratiation, and had low levels of political skill would have motivations more easily detectable by their supervisors. Treadway et al. found that ingratiation was only effective if the motivation was not discovered by the supervisor. In addition, the researchers found that when supervisors rating of an employee's use of ingratiation increased, their rating of an employee's use of interpersonal facilitation decreased. In conversation and interviews Godfrey conducted a study that looked into the difference between self-promoters and ingratiators. The study subjects consisted of 50 pairs of unacquainted, same sex students from Princeton University (25 male pairs, 25 female pairs). The pairs of students participated in two sessions of videotaped, 20-minute conversations, spaced one week apart. The first session was an unstructured conversation where the two subjects just talked about arbitrary topics. After the first conversation, one subject was randomly assigned to be the presenter. The presenter was asked to fill out a two-question survey that rated the likability and the competency of the other subject on a scale from 1 to 10. The second subject was assigned the role of the target, and was instructed to fill out a much longer survey about the other subject, which included the likability and competency scale, 41 trait attributes, and 7 emotions. In the second session, the presenters were asked to participate as an ingratiator or a self-promoter. They were both given specific directions: ingratiators were told to try to make the target like them, while the self-promoters were instructed to make the targets view them as extremely competent. The results show that the presenters only partly achieved their goal. Partners of ingratiators rated them as somewhat more likable after the second conversation than after the first conversation (Ms = 7.35 vs. 6.55) but no more competent (Ms = 5.80 vs. 5.85), whereas partners of self-promoters rated them as no more competent after the second conversation than after the first conversation (Ms = 5.25 vs. 5.05) but somewhat less likable (Ms = 5.15 vs. 5.85). Ingratiators gained in likability without sacrificing perceived competence, whereas self-promoters sacrificed likability with no gain in competency. Applications When ingratiation works Ingratiation can be a hard tactic to implicate, without having the target individual realize what you are trying to do. The tactics of ingratiation works well in different situations and settings. For example, “Tactics that match role expectations of low-status subordinates, such as opinion conformity, would appear to be better suited to exchanges between low-status ingratiators and high-status targets." Or, “The tactic of other enhancement would appear to be more appropriate for exchanges between high-status ingratiators and low-status targets because judgment and evaluation are congruent with a high-status supervisory role." Within a work setting, it is best to evaluate the situation to figure out which method of ingratiation is best to use. The ingratiator should also have some transparency to their method, so that the target individual is not suspicious of their motives. For example, ingratiating a target individual when it is uncharacteristic of your behavior or making it obvious that you are trying to ingratiate. 
“Given the strength of reciprocity as a social norm, it is possible that in situations in which the ingratiation attempt is interpreted by the target as 'ingratiation,' the most appropriate response might be to reciprocate the 'feigned' liking while forming more negative judgments and evaluations of the ingratiator.” Self-esteem and stress Ingratiation is a method that can be used to cope with job-related stress. Decreased self-esteem coupled with stress may cause an individual to use coping mechanisms, such as ingratiation. Self-affirmation and image maintenance are likely reactions when there is a threat to self-image. "Since self-esteem is a resource for coping with stress, it becomes depleted in this coping process and the individual becomes more likely to use ingratiation to protect, repair, or even boost self-image." Two models have been presented to describe self-esteem in relation to ingratiatory behaviors. In the self-esteem moderator model, stress leads to ingratiatory behavior and self-esteem moderates this relationship. In the mediation model, stress leads to decreased self-esteem, which in turn increases ingratiatory behaviors intended to restore one's self-image (a linear model). Empirical research supports the mediation model, while the theoretical literature supports the moderator model. Self-monitoring Turnley and Bolino's study worked as follows: "They had students complete a self-monitoring scale at the beginning of the project. At the conclusion of the project, participants indicated the extent to which they had engaged in each of the five impression-management tactics. Four days (two class periods) later, participants provided their perceptions of each of the other three members of their group. Each member of the four-person team, then, was evaluated by three teammates. Thus, given that there were 171 participants in the study, there were a total of 513 (171 × 3) student-student dyads. All of this information was collected before students received their grade on the project." Results revealed that high self-monitors were better able than their low self-monitoring peers to use ingratiation, self-promotion, and exemplification to achieve favorable images among their colleagues. “Specifically, when high self-monitors used these tactics, they were more likely to be seen as likeable, competent, and dedicated by the other members of their work groups. In contrast, low self-monitors appear to be less effective at using these tactics to obtain favorable images. In fact, the more low self-monitors used such tactics, the more likely they were to be seen as a sycophant, to be perceived as conceited, or to be perceived as egotistical by their work group colleagues.” High self-monitors are thus better able to use impression-management tactics, such as ingratiation, than low self-monitors. Social rejection Ingratiation can be applied to many real-world situations. As mentioned previously, research has delved into the areas of tipping in the restaurant business and conversations. Further research shows how ingratiation is applicable in the online dating community and in job interviews. In a study of social rejection in the online dating community, researchers tested whether ingratiation or hostility would be the first reaction of the rejected individual, and whether men or women would be more likely to ingratiate in different situations. 
The study showed that in cases where a woman had felt “close” to a potential dating partner through the mutual sharing of information and was then rejected, she was more likely than a man to engage in ingratiation. Furthermore, men were shown to be more willing to pay for a date (a willingness elicited by the researchers as a measure, not payment for an actual date) with a woman who had previously rejected them harshly than with a woman who had rejected them mildly. Both cases show that while men and women have different social and emotional investments, they are equally likely to ingratiate in a situation which is self-defining to them. In the workplace In another study, in the context of an interview, research showed that a combination of ingratiation and self-promotion tactics was more effective than using either one by itself, or neither, when trying to get hired by a potential employer. The most positive reviews and recommendations came from interviewers whose interviewees had used such a combination, and those interviewees were also the most likely to be given a job offer. However, when compared individually, self-promotion was more effective in producing such an outcome than ingratiation; this may be because the nature of an interview requires the individual being considered for the job to talk about their positive qualities and what they would add to the company. See also Impression management Reinforcement Superficial charm References Persuasion techniques Conformity
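As a concrete illustration of the one-way ANOVA reported in the Seiter tipping study above, the following Python sketch runs the same kind of test on fabricated tip-percentage data. Only the group means (18.94 vs. 16.41) come from the study; every individual value below is an illustrative assumption.

```python
# One-way ANOVA comparing mean tip percentages between a complimented
# and a non-complimented group, as in the Seiter study. The individual
# tip values are fabricated for illustration only.
from scipy.stats import f_oneway

complimented = [19.5, 18.0, 20.1, 17.8, 19.6, 18.7]    # tip %
no_compliment = [16.0, 17.2, 15.8, 16.9, 16.2, 16.4]   # tip %

# f_oneway returns the F statistic and the p-value; a small p-value
# indicates a significant difference between the group means.
f_stat, p_value = f_oneway(complimented, no_compliment)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

With two groups, a one-way ANOVA is equivalent to an independent-samples t-test (F = t²), which is why a single F statistic suffices to compare the two tipping conditions.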
Ingratiation
Biology
3,738
19,680,342
https://en.wikipedia.org/wiki/Meyer%27s%20law
Meyer's law is an empirical relation between the size of a hardness test indentation and the load required to leave the indentation. The formula was devised by Eugene Meyer of the Materials Testing Laboratory at the Imperial School of Technology, Charlottenburg, Germany, circa 1908. Equation It takes the form $P = k\,d^{\,n}$, where P is the pressure in megapascals, k is the resistance of the material to initial penetration, n is Meyer's index, a measure of the effect of the deformation on the hardness of the material, and d is the chordal diameter (diameter of the indentation). The index n usually lies between the values of 2, for fully strain-hardened materials, and 2.5, for fully annealed materials. It is roughly related to the strain hardening coefficient in the equation for the true stress-true strain curve by adding 2. Note, however, that below approximately d = 0.5 mm the value of n can surpass 3. Because of this, Meyer's law is often restricted to values of d greater than 0.5 mm up to the diameter of the indenter. The variables k and n are also dependent on the size of the indenter. Despite this, it has been found that the values obtained with indenters of different diameters D can be related using the equation $k_1 D_1^{\,n_1-2} = k_2 D_2^{\,n_2-2} = k_3 D_3^{\,n_3-2} = \cdots$. Meyer's law is often used to relate hardness values based on the fact that, if the load is quartered whenever the diameter of the indenter is halved, the resulting hardness values are the same. For instance, the hardness values are the same for a test load of 3000 kgf with a 10 mm indenter and for a test load of 750 kgf with a 5 mm diameter indenter. This relationship isn't perfect, but its percent error is relatively small. A modified form of this equation was put forth by Onitsch. See also Meyer hardness test References Notes Bibliography Hardness tests
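Since log P = log k + n log d is linear in log d, two indentation measurements with the same indenter suffice to determine Meyer's index and coefficient. The following Python sketch shows this, plus the quarter-load/half-diameter check quoted above; the measurement values are hypothetical.

```python
import math

# Meyer's law: P = k * d**n, with P the indentation load and d the
# chordal diameter of the indentation left by a spherical indenter.
# Hypothetical measurements (load in kgf, indentation diameter in mm).
(p1, d1), (p2, d2) = (500.0, 2.0), (2000.0, 3.7)

# log P = log k + n * log d, so two points fix n and k.
n = math.log(p2 / p1) / math.log(d2 / d1)
k = p1 / d1 ** n
print(f"Meyer index n = {n:.3f}, coefficient k = {k:.1f}")

# Scaling rule from the article: hardness numbers match when the load
# is quartered and the indenter diameter halved, e.g. 3000 kgf with a
# 10 mm ball vs. 750 kgf with a 5 mm ball. Brinell-style hardness
# scales as P / D**2 for geometrically similar indentations, so the
# ratio below should equal 1.
print(3000 / 10**2 / (750 / 5**2))  # -> 1.0
```

Note that the fitted n here (about 2.25) falls inside the 2–2.5 range the article gives for real materials, which is a quick sanity check on any pair of measurements.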
Meyer's law
Materials_science
364
41,323,976
https://en.wikipedia.org/wiki/Clostridium%20leptum
Clostridium leptum is a bacterium species in the genus Clostridium. It forms a subgroup of human fecal microbiota. Its reduction relative to other members of the gut microbiota has been observed in patients with inflammatory bowel disease. The genome of C. leptum has been sequenced. References External links Type strain of Clostridium leptum at BacDive - the Bacterial Diversity Metadatabase Gut flora bacteria Bacteria described in 1976 leptum
Clostridium leptum
Biology
102
17,527,026
https://en.wikipedia.org/wiki/Diboron%20tetrafluoride
Diboron tetrafluoride is the inorganic compound with the formula (BF2)2. A colorless gas, the compound has a half-life of days at room temperature. It is the most stable of the diboron tetrahalides, and does not appreciably decompose under standard conditions. Structure and bonding Diboron tetrafluoride is a planar molecule with a B–B bond distance of 172 pm. Although it is electron-deficient, the unsaturated boron centers are stabilized by pi-bonding with the terminal fluoride ligands. The compound is isoelectronic with oxalate. Synthesis and reactions Diboron tetrafluoride can be formed by treating boron monofluoride with boron trifluoride at low temperatures, taking care not to form higher polymers. Alternatively, diboron tetrachloride can be fluorinated with antimony trifluoride. Addition of diboron tetrafluoride to Vaska's complex was employed to produce an early example of a transition metal boryl complex: 2B2F4 + IrCl(CO)(PPh3)2 → Ir(BF2)3(CO)(PPh3)2 + ClBF2 Historical literature References External links Diboron tetrafluoride at webelements Fluorides Boron compounds Nonmetal halides Boron halides
Diboron tetrafluoride
Chemistry
304
7,813,665
https://en.wikipedia.org/wiki/Jet%20blast
Jet blast is the phenomenon of rapid air movement produced by the jet engines of aircraft, particularly on or before takeoff. A large jet-engine aircraft can produce winds of up to as far away as behind it at 40% maximum rated power. Jet blast can be a hazard to people or other unsecured objects, and can reach wind speeds comparable to those of a Category 5 hurricane, causing roof failure or total collapse in buildings, and severely damaging or destroying things like mobile homes, utility buildings, and trees. Despite the power and potentially destructive nature of jet blast, there are relatively few jet blast incidents. Due to the invisible nature of jet blast and the aerodynamic properties of light aircraft, light aircraft moving about airports are particularly vulnerable. Pilots of light aircraft frequently stay off to the side of the runway, rather than follow in the centre, to negate the effect of the blast. Occasionally, when the ground surface is badly chosen, jet blast from aircraft can rip up sections of asphalt weighing up to tens of kilograms, damaging the aircraft. Maho Beach in Sint Maarten is famous for its unique proximity to the runway of Princess Juliana International Airport, allowing people to experience jet blast, a practice that is discouraged by the local authorities. A tourist was killed on 12 July 2017 when she was knocked over by jet blast, causing her head to strike concrete. Skiathos Airport in Greece similarly allows people to experience jet blast, as its runway is located near a public road. Propeller planes are also capable of generating significant rearward winds, known as prop wash. See also Maho Beach, a beach in Saint Maarten popular for experiencing jet blast Air Moorea Flight 1121, a plane crash in 2007 in which investigators suspect jet blast was a factor References Aircraft aerodynamics Aircraft engines Aviation safety Aviation risks Jet engines
Jet blast
Technology
375
58,576,147
https://en.wikipedia.org/wiki/Extraterrestrial%20Sample%20Curation%20Center
The Planetary Material Sample Curation Facility (PMSCF), commonly known as the Extraterrestrial Sample Curation Center (ESCuC, 地球外試料キュレーションセンター), is the facility where the Japan Aerospace Exploration Agency (JAXA) conducts the curation work on extraterrestrial materials retrieved by sample-return missions. It works closely with Japan's Astromaterials Science Research Group. Its objectives include the documentation, preservation, preparation, and distribution of samples. All samples collected are made available for international distribution upon request. Overview The conceptual studies for JAXA's curation facility began in 2005, the specifications were decided in 2007, and the facility was completed in 2008, in time to receive the asteroid samples retrieved by the Hayabusa mission. The facility is composed of several cleanrooms rated from ISO 7 (for gowning and cleaning rooms) to the cleanest, ISO 5 (for sample handling and storage). The key feature of JAXA's ESCuC curation facility is the ability to observe, take out a portion of, and preserve a precious returned sample without exposing it to the terrestrial atmosphere and other contaminants. Due to the nature of the Hayabusa returned samples, the facility developed the capability to handle particles as small as 10 μm by using a system based on electrostatic micromanipulation within a clean chamber under either vacuum or an inert gas (usually nitrogen). The facility also features a wide variety of laboratories and analyzers, including XCT/XRD, TEM/STEM, EPMA, SIMS, FTIR, Raman, NAA, noble-gas-MS, and ToF-SIMS. The "Monitoring and Meeting Room" has recently been retrofitted to host the samples returned by the Hayabusa2 mission from the carbonaceous asteroid Ryugu. Catalogue Samples include: Asteroid 25143 Itokawa, retrieved by the Hayabusa mission. Meteorites and standard samples. Samples collected by the Tanpopo orbital experiment. Asteroid 162173 Ryugu, retrieved by Hayabusa2 and returned to Earth in December 2020. Future samples expected Asteroid 101955 Bennu, retrieved by the NASA OSIRIS-REx mission and returned to Earth on September 24, 2023. Similar facilities Other facilities dedicated to the curation of pristine returned extraterrestrial samples are the NASA Johnson Space Center Astromaterials Acquisition and Curation Office, the Russian Vernadsky Institute of Geochemistry and Analytical Chemistry in Moscow for Luna samples, and the CNSA curatorial facility for Chang'e 5 lunar samples. There is currently no curation facility for pristine returned samples in Europe, even though preparatory studies have been conducted in the recent past. See also Astrobiology Biocontainment safety level Extraterrestrial materials Molecules detected in outer space Planetary protection Planetary science References Meteorite mineralogy and petrology Planetary science 2008 establishments in Japan JAXA facilities
Extraterrestrial Sample Curation Center
Astronomy
602
9,910,614
https://en.wikipedia.org/wiki/WARP%20%28information%20security%29
Warning, Advice and Reporting Point (WARP) is a community or internal company-based service to share advice and information on computer-based threats and vulnerabilities. WARPs typically provide: Warning – A filtered warning service, where subscribers receive alerts and advisory information on only the subjects relevant to them. Advice – An advice brokering service, where members can ask and respond to questions in a trusted secure environment. Reporting – Central collection of information on incidents and problems in a trusted secure environment. The collected information may then be anonymised and shared amongst the membership. See also Information security management system British cyber security community External links UK WARP Official homepage with downloadable toolbox Computer security organizations
WARP (information security)
Technology
142
11,830,303
https://en.wikipedia.org/wiki/Dissipative%20soliton
Dissipative solitons (DSs) are stable solitary localized structures that arise in nonlinear spatially extended dissipative systems due to mechanisms of self-organization. They can be considered an extension of the classical soliton concept in conservative systems. An alternative terminology includes autosolitons, spots and pulses. Apart from aspects similar to the behavior of classical particles like the formation of bound states, DSs exhibit interesting behavior – e.g. scattering, creation and annihilation – all without the constraints of energy or momentum conservation. The excitation of internal degrees of freedom may result in a dynamically stabilized intrinsic speed, or periodic oscillations of the shape. Historical development Origin of the soliton concept DSs have been experimentally observed for a long time. Helmholtz measured the propagation velocity of nerve pulses in 1850. In 1902, Lehmann found the formation of localized anode spots in long gas-discharge tubes. Nevertheless, the term "soliton" was originally developed in a different context. The starting point was the experimental detection of "solitary water waves" by Russell in 1834. These observations initiated the theoretical work of Rayleigh and Boussinesq around 1870, which finally led to the approximate description of such waves by Korteweg and de Vries in 1895; that description is known today as the (conservative) KdV equation. On this background the term "soliton" was coined by Zabusky and Kruskal in 1965. These authors investigated certain well localised solitary solutions of the KdV equation and named these objects solitons. Among other things they demonstrated that in 1-dimensional space solitons exist, e.g. in the form of two unidirectionally propagating pulses with different size and speed and exhibiting the remarkable property that number, shape and size are the same before and after collision. Gardner et al. introduced the inverse scattering technique for solving the KdV equation and proved that this equation is completely integrable. In 1972 Zakharov and Shabat found another integrable equation and finally it turned out that the inverse scattering technique can be applied successfully to a whole class of equations (e.g. the nonlinear Schrödinger and sine-Gordon equations). From 1965 up to about 1975, a common agreement was reached: to reserve the term soliton for pulse-like solitary solutions of conservative nonlinear partial differential equations that can be solved by using the inverse scattering technique. Weakly and strongly dissipative systems With increasing knowledge of classical solitons, possible technical applicability came into perspective, with the most promising one at present being the transmission of optical solitons via glass fibers for the purpose of data transmission. In contrast to conservative systems, solitons in fibers dissipate energy and this cannot be neglected on an intermediate and long time scale. Nevertheless, the concept of a classical soliton can still be used in the sense that on a short time scale dissipation of energy can be neglected. On an intermediate time scale one has to take small energy losses into account as a perturbation, and on a long scale the amplitude of the soliton will decay and finally vanish. There are, however, various types of systems which are capable of producing solitary structures and in which dissipation plays an essential role for their formation and stabilization. 
Although research on certain types of these DSs has been carried out for a long time (for example, see the research on nerve pulses culminating in the work of Hodgkin and Huxley in 1952), since 1990 the amount of research has significantly increased. Possible reasons are improved experimental devices and analytical techniques, as well as the availability of more powerful computers for numerical computations. Nowadays, it is common to use the term dissipative solitons for solitary structures in strongly dissipative systems. Experimental observations Today, DSs can be found in many different experimental set-ups. Examples include Gas-discharge systems: plasmas confined in a discharge space which often has a lateral extension large compared to the main discharge length. DSs arise as current filaments between the electrodes and were found in DC systems with a high-ohmic barrier, AC systems with a dielectric barrier, and as anode spots, as well as in an obstructed discharge with metallic electrodes. Semiconductor systems: these are similar to gas-discharges; however, instead of a gas, semiconductor material is sandwiched between two planar or spherical electrodes. Set-ups include Si and GaAs pin diodes, n-GaAs, and Si p+−n+−p−n−, and ZnS:Mn structures. Nonlinear optical systems: a light beam of high intensity interacts with a nonlinear medium. Typically the medium reacts on rather slow time scales compared to the beam propagation time. Often, the output is fed back into the input system via single-mirror feedback or a feedback loop. DSs may arise as bright spots in a two-dimensional plane orthogonal to the beam propagation direction; one may, however, also exploit other effects like polarization. DSs have been observed for saturable absorbers, degenerate optical parametric oscillators (DOPOs), liquid crystal light valves (LCLVs), alkali vapor systems, photorefractive media, and semiconductor microresonators. If the vectorial properties of DSs are considered, a vector dissipative soliton can also be observed, for example in a fiber laser passively mode-locked through a saturable absorber. In addition, a multiwavelength dissipative soliton has been obtained in an all-normal-dispersion fiber laser passively mode-locked with a SESAM. It has been confirmed that, depending on the cavity birefringence, stable single-, dual- and triple-wavelength dissipative solitons can form in the laser; their generation mechanism can be traced back to the nature of the dissipative soliton. Chemical systems: realized either as one- and two-dimensional reactors or via catalytic surfaces, DSs appear as pulses (often as propagating pulses) of increased concentration or temperature. Typical reactions are the Belousov–Zhabotinsky reaction, the ferrocyanide-iodate-sulphite reaction as well as the oxidation of hydrogen, CO, or iron. Nerve pulses or migraine aura waves also belong to this class of systems. Vibrated media: vertically shaken granular media, colloidal suspensions, and Newtonian fluids produce harmonically or sub-harmonically oscillating heaps of material, which are usually called oscillons. Hydrodynamic systems: the most prominent realization of DSs are domains of convection rolls on a conducting background state in binary liquids. Another example is film dragging in a rotating cylindrical pipe filled with oil. Electrical networks: large one- or two-dimensional arrays of coupled cells with a nonlinear current–voltage characteristic. DSs are characterized by a locally increased current through the cells. 
Remarkably enough, phenomenologically the dynamics of the DSs in many of the above systems are similar in spite of the microscopic differences. Typical observations are (intrinsic) propagation, scattering, formation of bound states and clusters, drift in gradients, interpenetration, generation, and annihilation, as well as higher instabilities. Theoretical description Most systems showing DSs are described by nonlinear partial differential equations. Discrete difference equations and cellular automata are also used. Up to now, modeling from first principles followed by a quantitative comparison of experiment and theory has been performed only rarely and sometimes also poses severe problems because of large discrepancies between microscopic and macroscopic time and space scales. Often simplified prototype models are investigated which reflect the essential physical processes in a larger class of experimental systems. Among these are Reaction–diffusion systems, used for chemical systems, gas-discharges and semiconductors. The evolution of the state vector q(x, t) describing the concentration of the different reactants is determined by diffusion as well as local reactions: $\partial_t q = \underline{D}\,\Delta q + R(q)$. A frequently encountered example is the two-component Fitzhugh–Nagumo-type activator–inhibitor system $\partial_t u = d_u^2\,\Delta u + \lambda u - u^3 - \kappa_3 v + \kappa_1$, $\tau\,\partial_t v = d_v^2\,\Delta v + u - v$. Stationary DSs are generated by production of material in the center of the DSs, diffusive transport into the tails and depletion of material in the tails. A propagating pulse arises from production in the leading and depletion in the trailing end. Among other effects, one finds periodic oscillations of DSs ("breathing"), bound states, and collisions, merging, generation and annihilation. Ginzburg–Landau type systems for a complex scalar q(x, t) used to describe nonlinear optical systems, plasmas, Bose-Einstein condensation, liquid crystals and granular media. A frequently found example is the cubic-quintic subcritical Ginzburg–Landau equation $\partial_t q = (d_r + i d_i)\,\Delta q + \ell_r q + (c_r + i c_i)\,|q|^2 q + (q_r + i q_i)\,|q|^4 q$. To understand the mechanisms leading to the formation of DSs, one may consider the energy $\rho = |q|^2$, for which one may derive the continuity equation $\partial_t \rho + \nabla\cdot\vec{j} = S$, with a flux $\vec{j}$ and a source term S that follow from the field equation. One can thereby show that energy is generally produced in the flanks of the DSs and transported to the center and potentially to the tails where it is depleted. Dynamical phenomena include propagating DSs in 1d, propagating clusters in 2d, bound states and vortex solitons, as well as "exploding DSs". The Swift–Hohenberg equation is used in nonlinear optics, granular media dynamics, flames, and electroconvection. Swift–Hohenberg can be considered as an extension of the Ginzburg–Landau equation. It can be written as $\partial_t q = (d_r + i d_i)\,\Delta q + (s_r + i s_i)\,\Delta^2 q + \ell_r q + (c_r + i c_i)\,|q|^2 q + (q_r + i q_i)\,|q|^4 q$. For dr > 0 one essentially has the same mechanisms as in the Ginzburg–Landau equation. For dr < 0, in the real Swift–Hohenberg equation one finds bistability between homogeneous states and Turing patterns. DSs are stationary localized Turing domains on the homogeneous background. This also holds for the complex Swift–Hohenberg equations; however, propagating DSs as well as interaction phenomena are also possible, and observations include merging and interpenetration. Particle properties and universality DSs in many different systems show universal particle-like properties. To understand and describe the latter, one may try to derive "particle equations" for slowly varying order parameters like position, velocity or amplitude of the DSs by adiabatically eliminating all fast variables in the field description. 
This technique is known from linear systems; however, mathematical problems arise from the nonlinear models due to a coupling of fast and slow modes. Similar to low-dimensional dynamic systems, for supercritical bifurcations of stationary DSs one finds characteristic normal forms essentially depending on the symmetries of the system. E.g., for a transition from a symmetric stationary to an intrinsically propagating DS one finds the pitchfork normal form $\dot{v} = (\sigma - \sigma_0)\,v - |v|^2 v$ for the velocity v of the DS; here σ represents the bifurcation parameter and σ0 the bifurcation point. For a bifurcation to a "breathing" DS, one finds the Hopf normal form $\dot{A} = (\sigma - \sigma_0)\,A - |A|^2 A$ for the amplitude A of the oscillation. It is also possible to treat "weak interaction" as long as the overlap of the DSs is not too large. In this way, a comparison between experiment and theory is facilitated. Note that the above problems do not arise for classical solitons as inverse scattering theory yields complete analytical solutions. See also Clapotis Compacton, a soliton with compact support Fiber laser Freak waves may be a related phenomenon Graphene Nonlinear Schrödinger equation Nonlinear system Oscillon Peakon, a soliton with a non-differentiable peak Q-ball, a non-topological soliton Sine-Gordon equation Solitary waves in discrete media Soliton (optics) Soliton (topological) Soliton model of nerve impulse propagation Topological quantum number Vector soliton References Inline Books and overview articles N. Akhmediev and A. Ankiewicz, Dissipative Solitons, Lecture Notes in Physics, Springer, Berlin (2005) N. Akhmediev and A. Ankiewicz, Dissipative Solitons: From Optics to Biology and Medicine, Lecture Notes in Physics, Springer, Berlin (2008) H.-G. Purwins et al., Advances in Physics 59 (2010): 485 A. W. Liehr: Dissipative Solitons in Reaction Diffusion Systems. Mechanism, Dynamics, Interaction. Volume 70 of Springer Series in Synergetics, Springer, Berlin Heidelberg 2013, Solitons Self-organization Systems theory
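To make the drift-pitchfork picture above concrete, here is a minimal Python sketch that integrates the normal form for the soliton velocity. The bifurcation point σ0 = 1 and the other parameter values are illustrative assumptions, not values from the literature.

```python
import math

# Drift-pitchfork normal form for the velocity v of a dissipative
# soliton: dv/dt = (sigma - sigma0) * v - v**3. For sigma < sigma0 the
# resting soliton (v = 0) is stable; for sigma > sigma0 it starts to
# drift with terminal speed sqrt(sigma - sigma0).
def terminal_velocity(sigma, sigma0=1.0, v0=1e-3, dt=1e-2, steps=200_000):
    v = v0
    for _ in range(steps):
        v += dt * ((sigma - sigma0) * v - v**3)  # explicit Euler step
    return v

for sigma in (0.8, 1.0, 1.2):
    predicted = math.sqrt(max(sigma - 1.0, 0.0))
    print(f"sigma={sigma:.1f}: v -> {terminal_velocity(sigma):.4f} "
          f"(predicted {predicted:.4f})")
# Exactly at sigma = sigma0 the decay of v is algebraic rather than
# exponential, so the numerical value approaches 0 only slowly.
```

Running this shows the hallmark of the supercritical drift bifurcation: below σ0 the velocity relaxes to zero, while just above it the soliton settles at a small but finite drift speed growing as the square root of the distance from the bifurcation point.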
Dissipative soliton
Mathematics
2,623
45,224,498
https://en.wikipedia.org/wiki/Eduard%20Looijenga
Eduard Jacob Neven Looijenga (born 30 September 1948, Zaandam) is a Dutch mathematician who works in algebraic geometry and the theory of algebraic groups. He was a professor of mathematics at Utrecht University until his retirement in 2013. Looijenga studied mathematics at the University of Amsterdam beginning in 1965, and earned a master's degree there in 1971. He obtained a Dutch fellowship for two years of study at the Institut des Hautes Études Scientifiques in France, and then returned to the University of Amsterdam, earning a Ph.D. in 1974 under the supervision of Nicolaas Kuiper. After postdoctoral research at the University of Liverpool, he took a faculty position at the University of Nijmegen in 1975, returned as a professor to the University of Amsterdam in 1987, and moved again to Utrecht in 1991. Since his 2013 retirement, he has also held a professorship at Tsinghua University. In 1978, Looijenga was an invited speaker at the International Congress of Mathematicians. He became a member of the Royal Netherlands Academy of Arts and Sciences in 1995, and in 2012 he became one of the inaugural fellows of the American Mathematical Society. In 2013, a conference in honor of his retirement was held at Utrecht University. Publications References External links Home page 1948 births Living people 20th-century Dutch mathematicians 21st-century Dutch mathematicians University of Amsterdam alumni Academic staff of Radboud University Nijmegen Academic staff of Utrecht University Academic staff of the University of Amsterdam Academic staff of Tsinghua University Fellows of the American Mathematical Society Members of the Royal Netherlands Academy of Arts and Sciences People from Zaanstad Topologists Algebraic geometers
Eduard Looijenga
Mathematics
337
11,421,751
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORA28
In molecular biology, SNORA28 (also known as ACA28) is a member of the H/ACA class of small nucleolar RNA that guide the sites of modification of uridines to pseudouridines. References Further reading External links Small nuclear RNA
Small nucleolar RNA SNORA28
Chemistry
58
14,585,095
https://en.wikipedia.org/wiki/International%20Electrotechnical%20Exhibition
The 1891 International Electrotechnical Exhibition was held between 16 May and 19 October on the disused site of the three former Westbahnhöfe (western railway stations) in Frankfurt am Main, Germany. The exhibition featured the first long-distance transmission of high-power, three-phase electric current, which was generated 175 km away at Lauffen am Neckar. As a result of this successful field trial, three-phase current became established for electrical transmission networks throughout the world. History The "Elektrotechnische Gesellschaft" (Electrotechnical Society) was founded in Frankfurt in 1881 with the aim of promoting electricity and, in particular, furthering research into its application for industry and technology. Three years later, some ten manufacturers of electrical equipment had set themselves up in the city. In around 1890, some of the enterprises were established which would later become major firms in Frankfurt: Hartmann & Braun, Staudt & Voigt (from 1891 Voigt & Haefner) and W Lahmeyer & Co (from 1893 Elektrizitäts-AG, previously W Lahmeyer & Co). And it was in Frankfurt that the Second Industrial Revolution began to emerge – a revolution that would bring about fundamental changes similar to those created 100 years previously by the introduction of the steam engine to the world of work. In 1891, the German electrical industry was ready to demonstrate its capabilities to the world at the International Electrotechnical Exhibition. A site was chosen – that of the former western stations between the city and the new main station, which had been completed in 1888. Prompted by the Paris "Exposition Universelle" (World Fair) of 1889, Leopold Sonnemann, publisher of the Frankfurter Zeitung newspaper, interested the Electrotechnical Society in the idea of an exhibition. The Society expressed an interest and started preparations in the same year. However, there was another consideration apart from the setting up of an international exhibition – Frankfurt had an urgent problem to solve. The construction of a central power station had been under discussion in the city's political and technical committees since 1886. However, agreement had still to be reached over the type of current, and opinions were divided between direct current, alternating current and three-phase current. It fell to the exhibition to demonstrate a commercially viable method for the transmission of electricity. Three-phase current would be transmitted at high voltage from Lauffen am Neckar to Frankfurt with a loss of no more than 25%. This took centre stage at the exhibition and was evidenced in the large three-section entrance gate. The central section took the form of an arch bearing the inscription "Power Transmission Lauffen–Frankfurt 175 km." Rectangular panels flanked the arch: the one to the right carrying the name of the "Allgemeine Electricitätsgesellschaft" ("AEG" – General Electricity Company), which had been founded in 1887; the left-hand panel displayed the name of the "Maschinenfabrik Oerlikon" (Oerlikon Engineering Works). The entire entrance was illuminated with 1000 light bulbs and an electrically powered waterfall provided a further attraction. With 1,200,000 visitors from all over the world, the exhibition was an out-and-out success. The cost of a one-day entry ticket for an adult amounted to a considerable 15 marks. As far as Germany was concerned, the International Electrotechnical Exhibition settled once and for all the question of the most economical means of transmitting electrical energy.
When the exhibition closed, the power station at Lauffen continued in operation – providing electricity for the administrative capital, Heilbronn, thus making Heilbronn the first place to be equipped with a power supply using three-phase AC. The name of the local power company (ZEAG) bears testimony to this event. The Frankfurt city council constructed its own power station near the harbour; yet another was built by a private company in the suburb of Bockenheim. Equipment A hydraulic turbine at Lauffen powered a three-phase alternator with a revolving field. The alternator revolved at 150 revolutions per minute, and had a rotating field magnet with 32 poles. It was rated at 300 hp and had a terminal voltage of 55 volts. The frequency of the current was 40 Hz. Power from the alternator was stepped up to 8000 volts for transmission by oil-insulated transformers. Later tests were carried out with transmission voltages up to 25,000 volts (between phases). The transmission line was erected with the assistance of the German Post Office and used about 60 tonnes of copper wire, 4 mm in diameter. At the exhibition, the voltage was stepped down by further oil-filled transformers and connected to motors and a motor-generator system for lamps. Overall efficiency from turbine to load was an average of 75%, which resolved many doubts about the practicality of long-distance electric power transmission. Gallery See also War of the currents References Bibliography Jürgen Steen (Hg.): "Eine neue Zeit ..!", Die Internationale Elektrotechnische Ausstellung 1891. Frankfurt am Main 1991 (Ausstellungskatalog Historisches Museum Frankfurt am Main), Horst A. Wessel (Hg.): Moderne Energie für eine neue Zeit, siebtes VDE-Kolloquium am 3. und 4. September 1991 anlässlich der VDE-Jubiläumsveranstaltung "100 Jahre Drehstrom" in Frankfurt am Main (= Geschichte der Elektrotechnik, Bd.11). Berlin/Offenbach 1991, Volker Rödel: Fabrikarchitektur in Frankfurt am Main 1774–1924, Frankfurt 1986, S.30f., Sabine Hock: Mehr Licht für Frankfurt, Oskar von Miller brachte Frankfurt auf den Weg zur Elektrifizierung, Wochendienst Nr. 16 vom 26.04.2005, hg. v. Presse- und Informationsamt der Stadt Frankfurt am Main External links History of electrical engineering 1891 in Germany World's fairs in Germany 1891 in science 19th century in Frankfurt Festivals established in 1891
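The quoted conductor dimensions allow a rough order-of-magnitude check of the line's ohmic losses, and show why raising the transmission voltage was decisive. The Python sketch below computes the per-conductor resistance and the I²R loss at the quoted initial (8,000 V) and test (25,000 V) voltages. The copper resistivity, the assumption that the full 300 hp rating is transmitted, and the unity power factor are inputs supplied here for illustration, so the percentages need not match the quoted 75% overall efficiency, which also reflects transformer losses and the actual operating conditions.

```python
import math

rho_cu = 1.72e-8                    # resistivity of copper, ohm*m (textbook value)
length_m = 175_000.0                # line length quoted in the text, 175 km
area = math.pi * (4.0e-3 / 2) ** 2  # cross-section of the 4 mm diameter conductor
r_cond = rho_cu * length_m / area   # resistance of one phase conductor (~240 ohm)

p_load = 300 * 745.7                # alternator rating, 300 hp, taken as transmitted power

for v_line in (8_000.0, 25_000.0):          # quoted initial and test voltages
    i = p_load / (math.sqrt(3) * v_line)    # balanced load, unity power factor assumed
    p_loss = 3 * i ** 2 * r_cond            # total I^2 R loss over the three conductors
    print(f"{v_line/1e3:.0f} kV: {i:.1f} A per line, "
          f"loss {p_loss/1e3:.0f} kW ({100 * p_loss / p_load:.0f}% of load)")
```

Because the loss scales with the square of the current, tripling the voltage cuts the ohmic loss by roughly a factor of ten under these assumptions.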
International Electrotechnical Exhibition
Engineering
1,259
18,832,275
https://en.wikipedia.org/wiki/Drug%20eruption
In medicine, a drug eruption is an adverse drug reaction of the skin. Most drug-induced cutaneous reactions are mild and disappear when the offending drug is withdrawn. These are called "simple" drug eruptions. However, more serious drug eruptions may be associated with organ injury such as liver or kidney damage and are categorized as "complex". Drugs can also cause hair and nail changes, affect the mucous membranes, or cause itching without outward skin changes. The use of synthetic pharmaceuticals and biopharmaceuticals in medicine has revolutionized human health, allowing us to live longer lives. Consequently, the average human adult is exposed to many drugs over longer treatment periods throughout a lifetime. This unprecedented rise in pharmaceutical use has led to an increasing number of observed adverse drug reactions. There are two broad categories of adverse drug reactions. Type A reactions are known side effects of a drug that are largely predictable and are called pharmacotoxicologic. Type B, or hypersensitivity, reactions are often immune-mediated and reproducible with repeated exposure to normal dosages of a given drug. Unlike type A reactions, the mechanism of type B or hypersensitivity drug reactions is not fully elucidated. However, there is a complex interplay between a patient's inherited genetics, the pharmacotoxicology of the drug and the immune response that ultimately gives rise to the manifestation of a drug eruption. Because the manifestation of a drug eruption is complex and highly individual, there are many subfields in medicine that are studying this phenomenon. For example, the field of pharmacogenomics aims to prevent the occurrence of severe adverse drug reactions by analyzing a person's inherited genetic risk. As such, there are clinical examples of inherited genetic alleles that are known to predict drug hypersensitivities and for which diagnostic testing is available. Classification Some of the most severe and life-threatening examples of drug eruptions are erythema multiforme, Stevens–Johnson syndrome (SJS), toxic epidermal necrolysis (TEN), hypersensitivity vasculitis, drug-induced hypersensitivity syndrome (DIHS), erythroderma and acute generalized exanthematous pustulosis (AGEP). These severe cutaneous drug eruptions are categorized as hypersensitivity reactions and are immune-mediated. There are four types of hypersensitivity reactions and many drugs can induce one or more hypersensitivity reactions. By appearance The most common type of eruption is a morbilliform (resembling measles) or erythematous rash (approximately 90% of cases). Less commonly, the appearance may also be urticarial, papulosquamous, pustular, purpuric, bullous (with blisters) or lichenoid. Angioedema can also be drug-induced (most notably, by angiotensin converting enzyme inhibitors). By mechanism The underlying mechanism can be immunological (such as in drug allergies) or non-immunological (for example, in photodermatitis or as a side effect of anticoagulants). A fixed drug eruption is the term for a drug eruption that occurs in the same skin area every time the person is exposed to the drug. Eruptions can occur frequently with a certain drug (for example, with phenytoin), or be very rare (for example, Sweet's syndrome following the administration of colony-stimulating factors). By drug The culprit can be either a prescription drug or an over-the-counter medication.
Examples of common drugs causing drug eruptions are antibiotics and other antimicrobial drugs, sulfa drugs, nonsteroidal anti-inflammatory drugs (NSAIDs), biopharmaceuticals, chemotherapy agents, anticonvulsants and psychotropic drugs. Common examples include photodermatitis due to local NSAIDs (such as piroxicam) or due to antibiotics (such as minocycline), fixed drug eruption due to acetaminophen or NSAIDs (ibuprofen), and the rash following ampicillin in cases of mononucleosis. Certain drugs are less likely to cause drug eruptions (rates estimated to be ≤3 per 1000 patients exposed). These include: digoxin, aluminum hydroxide, multivitamins, acetaminophen, bisacodyl, aspirin, thiamine, prednisone, atropine, codeine, hydrochlorothiazide, morphine, insulin, warfarin and spironolactone. Diagnosis and screening tests Drug eruptions are diagnosed mainly from the medical history and clinical examination. However, they can mimic various other conditions, thus delaying diagnosis (for example, in drug-induced lupus erythematosus, or the acne-like rash caused by erlotinib). A skin biopsy, blood tests or immunological tests can also be useful. Drug reactions have characteristic timing. The typical amount of time it takes for a rash to appear after exposure to a drug can help categorize the type of reaction. For example, acute generalized exanthematous pustulosis usually occurs within 4 days of starting the culprit drug. Drug Reaction with Eosinophilia and Systemic Symptoms usually occurs between 15 and 40 days after exposure. Toxic epidermal necrolysis and Stevens–Johnson syndrome typically occur 7–21 days after exposure. Anaphylaxis occurs within minutes. Simple exanthematous eruptions occur between 4 and 14 days after exposure. TEN and SJS are severe cutaneous drug reactions that involve the skin and mucous membranes. To accurately diagnose this condition, a detailed drug history is crucial. Often, several drugs may be causative and allergy testing may be helpful. Sulfa drugs are well known to induce TEN or SJS in certain people. For example, HIV patients have an increased incidence of SJS or TEN compared to the general population and have been found to express low levels of the drug-metabolizing enzyme responsible for detoxifying sulfa drugs. Genetics plays an important role in predisposing certain populations to TEN and SJS. As such, there are some FDA-recommended genetic screening tests available for certain drugs and ethnic populations to prevent the occurrence of a drug eruption. The best-known example is carbamazepine (an anticonvulsant used to treat seizures) hypersensitivity, associated with the presence of the HLA-B*1502 genetic allele in Asian populations. DIHS is a delayed-onset drug eruption, often occurring a few weeks to 3 months after initiation of a drug. Worsening of systemic symptoms occurs 3–4 days after cessation of the offending drug. There are genetic risk alleles that are predictive of the development of DIHS for particular drugs and ethnic populations. The most important of these is abacavir (an antiviral used in the treatment of HIV) hypersensitivity, associated with the presence of the HLA-B*5701 allele in people of European and African descent in the United States and in Australians. AGEP is often caused by antimicrobial, anti-fungal or antimalarial drugs. Diagnosis is often carried out by patch testing.
This testing should be performed within one month after resolution of the rash, and patch test results are interpreted at multiple time points: 48 hours, 72 hours, and even later at 96 hours and 120 hours, in order to improve the sensitivity. See also Fixed drug reaction List of cutaneous conditions List of human leukocyte antigen alleles associated with cutaneous conditions Stevens–Johnson syndrome References External links Clinical pharmacology
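The characteristic onset windows quoted above lend themselves to a simple lookup structure. The Python sketch below encodes exactly the time ranges stated in this entry and returns which reaction types are compatible with an observed delay; it illustrates the categorization logic only and is not a diagnostic tool.

```python
# Onset windows after drug exposure, in days, as quoted in the text above.
# (Anaphylaxis, which occurs within minutes, is handled separately.)
ONSET_WINDOWS = {
    "acute generalized exanthematous pustulosis (AGEP)": (0, 4),
    "simple exanthematous eruption": (4, 14),
    "TEN / Stevens-Johnson syndrome": (7, 21),
    "DRESS (eosinophilia and systemic symptoms)": (15, 40),
}

def compatible_reactions(days_since_exposure: float) -> list[str]:
    """Return reaction types whose quoted onset window contains the delay."""
    return [name for name, (lo, hi) in ONSET_WINDOWS.items()
            if lo <= days_since_exposure <= hi]

# A 10-day latency is compatible with both a simple exanthematous
# eruption and TEN/SJS, which is why the drug history remains crucial.
print(compatible_reactions(10))
```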
Drug eruption
Chemistry
1,599
2,202,538
https://en.wikipedia.org/wiki/Dimetridazole
Dimetridazole is a drug that combats protozoan infections. It is a drug of the nitroimidazole class. It was formerly commonly added to poultry feed, which led to residues of the drug being found in eggs. Because it is suspected of being carcinogenic, its use has been legally restricted, but residues are still found in eggs. It is now banned as a livestock feed additive in many jurisdictions, for example in the European Union, Canada, and the United States. In the US, the Food and Drug Administration bans it for extralabel use. See also Metronidazole Nimorazole References Antiparasitic agents Nitroimidazole antibiotics
Dimetridazole
Biology
141
44,817,245
https://en.wikipedia.org/wiki/Ship%20load
Ship load is a United Kingdom unit of weight for coal, equal to 20 keels. External links NIST Special Publication 811, Guide for the Use of the International System of Units (SI) References Imperial units Units of mass
Ship load
Physics,Mathematics
48
8,688,139
https://en.wikipedia.org/wiki/Electronic%20circuit%20design
Electronic circuit design comprises the analysis and synthesis of electronic circuits. Methods To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Linear circuits, that is, circuits wherein the outputs are linearly dependent on the inputs, can be analyzed by hand using complex analysis. Simple nonlinear circuits can also be analyzed in this way. Specialized software has been created to analyze circuits that are either too complicated or too nonlinear to analyze by hand. Circuit simulation software allows engineers to design circuits more efficiently, reducing the time, cost and risk of error involved in building circuit prototypes. Some of these make use of hardware description languages such as VHDL or Verilog. Network simulation software More complex circuits are analyzed with circuit simulation software such as SPICE and EMTP. Linearization around operating point When faced with a new circuit, the software first tries to find a steady-state solution wherein all the nodes conform to Kirchhoff's current law and the voltages across, and currents through, each element of the circuit conform to the voltage/current equations governing that element. Once the steady-state solution is found, the software can analyze the response to perturbations using piecewise approximation, harmonic balance or other methods. Piecewise linear approximation Software such as the PLECS interface to Simulink uses piecewise linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time. Synthesis Simple circuits may be designed by connecting a number of elements or functional blocks such as integrated circuits. More complex digital circuits are typically designed with the aid of computer software. Logic circuits (and sometimes mixed-mode circuits) are often described in hardware description languages such as VHDL or Verilog, then synthesized using a logic synthesis engine. See also Circuit design Integrated circuit design References Electronic design Design
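As a concrete illustration of predicting voltages and currents in a linear circuit, the Python sketch below solves the node-voltage equations (Kirchhoff's current law at every unknown node) for a small resistor network by assembling and solving a conductance matrix. The topology and component values are invented for the example; real simulators such as SPICE build the same kind of matrix automatically, with nonlinear elements handled by iteration.

```python
import numpy as np

# Example: a 10 V source feeding two unknown nodes through resistors.
#   Vsrc --R1-- node1 --R2-- node2 --R3-- ground, and R4 from node1 to ground.
Vs, R1, R2, R3, R4 = 10.0, 1e3, 2.2e3, 4.7e3, 3.3e3

# Kirchhoff's current law at each unknown node gives G @ v = i,
# where G is the conductance matrix and i the injected source currents.
G = np.array([[1/R1 + 1/R2 + 1/R4, -1/R2],
              [-1/R2,              1/R2 + 1/R3]])
i = np.array([Vs / R1, 0.0])

v = np.linalg.solve(G, i)
print(f"node voltages: v1 = {v[0]:.3f} V, v2 = {v[1]:.3f} V")
print(f"source current: {(Vs - v[0]) / R1 * 1e3:.3f} mA")
```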
Electronic circuit design
Engineering
428
496,309
https://en.wikipedia.org/wiki/Assay
An assay is an investigative (analytic) procedure in laboratory medicine, mining, pharmacology, environmental biology and molecular biology for qualitatively assessing or quantitatively measuring the presence, amount, or functional activity of a target entity. The measured entity is often called the analyte, the measurand, or the target of the assay. The analyte can be a drug, biochemical substance, chemical element or compound, or cell in an organism or organic sample. An assay usually aims to measure an analyte's intensive property and express it in the relevant measurement unit (e.g. molarity, density, functional activity in enzyme international units, degree of effect in comparison to a standard, etc.). If the assay involves exogenous reactants (the reagents), then their quantities are kept fixed (or in excess) so that the quantity and quality of the target are the only limiting factors. The difference in the assay outcome is used to deduce the unknown quality or quantity of the target in question. Some assays (e.g., biochemical assays) may be similar to chemical analysis and titration. However, assays typically involve biological material or phenomena that are intrinsically more complex in composition or behavior, or both. Thus, reading of an assay may be noisy and involve greater difficulties in interpretation than an accurate chemical titration. On the other hand, older generation qualitative assays, especially bioassays, may be much more gross and less quantitative (e.g., counting death or dysfunction of an organism or cells in a population, or some descriptive change in some body part of a group of animals). Assays have become a routine part of modern medical, environmental, pharmaceutical, and forensic technology. Other businesses may also employ them at the industrial, curbside, or field levels. Assays in high commercial demand have been well investigated in research and development sectors of professional industries. They have also undergone generations of development and sophistication. In some cases, they are protected by intellectual property regulations such as patents granted for inventions. Such industrial-scale assays are often performed in well-equipped laboratories and with automated organization of the procedure, from ordering an assay to pre-analytic sample processing (sample collection, necessary manipulations e.g. spinning for separation, aliquoting if necessary, storage, retrieval, pipetting, aspiration, etc.). Analytes are generally tested in high-throughput autoanalyzers, and the results are verified and automatically returned to ordering service providers and end-users. These are made possible through the use of an advanced laboratory informatics system that interfaces with multiple computer terminals with end-users, central servers, the physical autoanalyzer instruments, and other automata. Etymology According to Etymology Online, the verb assay means "to try, endeavor, strive, test the quality of"; from Anglo-French assaier, from assai (noun), from Old French essai, "trial". Thus the noun assay means "trial, test of quality, test of character" (from mid-14th century), from Anglo-French assai; and its meaning "analysis" is from the late 14th century. For assay of currency coins this literally meant analysis of the purity of the gold or silver (or whatever the precious component) that represented the true value of the coin. 
This might have translated later (possibly after the 14th century) into a broader usage of "analysis", e.g., in pharmacology, analysis for an important component of a target inside a mixture—such as the active ingredient of a drug inside the inert excipients in a formulation that previously was measured only grossly by its observable action on an organism (e.g., a lethal dose or inhibitory dose). General steps An assay (analysis) is never an isolated process, as it must be accompanied by pre- and post-analytic procedures. Both the communication order (the request to perform an assay plus related information) and the handling of the specimen itself (the collecting, documenting, transporting, and processing done before beginning the assay) are pre-analytic steps. Similarly, after the assay is completed the results must be documented, verified and communicated—the post-analytic steps. As with any multi-step information handling and transmission system, the variation and errors in reporting final results entail not only those intrinsic to the assay itself but also those occurring in the pre-analytic and post-analytic procedures. While the analytic steps of the assay itself get much attention, it is the steps that get less attention—the pre-analytic and post-analytic procedures—that typically accumulate the most errors; e.g., pre-analytic steps in medical laboratory assays may contribute 32–75% of all lab errors. Assays can be very diverse, but generally involve the following general steps: Sample processing and manipulation in order to selectively present the target in a discernible or measurable form to a discrimination/identification/detection system. It might involve a simple centrifugal separation or washing or filtration or capture by some form of selective binding or it may even involve modifying the target e.g. epitope retrieval in immunological assays or cutting down the target into pieces e.g. in mass spectrometry. Generally there are multiple separate steps done before an assay, which are called pre-analytic processing. But some of the manipulations may be an inseparable part of the assay itself and will thus not be considered pre-analytic. Target-specific discrimination/identification principle: to discriminate from background (noise) of similar components and specifically identify a particular target component ("analyte") in a biological material by its specific attributes. (e.g. in a PCR assay a specific oligonucleotide primer identifies the target by base pairing based on the specific nucleotide sequence unique to the target). Signal (or target) amplification system: The presence and quantity of that analyte is converted into a detectable signal generally involving some method of signal amplification, so that it can be easily discriminated from noise and measured - e.g. in a PCR assay among a mixture of DNA sequences only the specific target is amplified into millions of copies by a DNA polymerase enzyme so that it can be discerned as a more prominent component compared to any other potential components. Sometimes the concentration of the analyte is too large and in that case the assay may involve sample dilution or some sort of signal diminution system which is a negative amplification. Signal detection (and interpretation) system: A system of deciphering the amplified signal into an interpretable output that can be quantitative or qualitative. It can rely on very crude visual or manual methods or on very sophisticated electronic digital or analog detectors.
Signal enhancement and noise filtering may be done at any or all of the steps above. Since the more downstream a step/process during an assay, the higher the chance of carrying over noise from the previous process and amplifying it, multiple steps in a sophisticated assay might involve various means of signal-specific sharpening/enhancement arrangements and noise reduction or filtering arrangements. These may simply be in the form of a narrow band-pass optical filter, or a blocking reagent in a binding reaction that prevents nonspecific binding or a quenching reagent in a fluorescence detection system that prevents "autofluorescence" of background objects. Assay types based on the nature of the assay process Time and number of measurements taken Depending on whether an assay takes a single reading or timed readings at multiple time points, an assay may be: An end point assay, in which a single measurement is performed after a fixed incubation period; or A kinetic assay, in which measurements are performed multiple times over a fixed time interval. Kinetic assay results may be visualized numerically (for example, as a slope parameter representing the rate of signal change over time), or graphically (for example, as a plot of the signal measured at each time point). For kinetic assays, both the magnitude and shape of the measured response over time provide important information. A high throughput assay can be either an endpoint or a kinetic assay, usually done on an automated platform in 96-, 384- or 1536-well microplate formats (High Throughput Screening). Such assays are able to test large numbers of compounds or analytes, or make functional biological readouts in response to stimuli and/or the compounds being tested. Number of analytes detected Depending on how many targets or analytes are being measured: Usual assays are simple, single-target assays, which is the default unless an assay is described as multiplex. Multiplex assays are used to simultaneously measure the presence, concentration, activity, or quality of multiple analytes in a single test. The advent of multiplexing enabled rapid, efficient sample testing in many fields, including immunology, cytochemistry, genetics/genomics, pharmacokinetics, and toxicology. Result type Depending on the quality of the result produced, assays may be classified into: Qualitative assays, i.e. assays which generally give just a pass or fail, a positive or negative, or some similarly small number of qualitative gradations rather than an exact quantity. Semi-quantitative assays, i.e. assays that give the read-out in an approximate fashion rather than an exact number for the quantity of the substance. Generally they have a few more gradations than just two outcomes, positive or negative, e.g. scoring on a scale of 1+ to 4+ as used for blood grouping tests based on RBC agglutination in response to grouping reagents (antibody against blood group antigens). Quantitative assays, i.e. assays that give an accurate and exact numeric measure of the amount of a substance in a sample. An example of such an assay used in coagulation testing laboratories for the most common inherited bleeding disease, von Willebrand disease, is the VWF antigen assay, where the amount of VWF present in a blood sample is measured by an immunoassay. Functional assays, i.e. assays that try to quantify the functioning of an active substance rather than just its quantity.
The functional counterpart of the VWF antigen assay is the ristocetin cofactor assay, which measures the functional activity of the VWF present in a patient's plasma by adding exogenous formalin-fixed platelets and gradually increasing quantities of a drug named ristocetin while measuring agglutination of the fixed platelets. A similar assay, but used for a different purpose, is called Ristocetin Induced Platelet Aggregation or RIPA, which tests the response of a patient's endogenous live platelets to ristocetin (exogenous) & VWF (usually endogenous). Sample type and method Depending on the general substrate on which the assay principle is applied: Bioassay: when the response is biological activity of live objects. Examples include in vivo, whole organism (e.g. mouse or other subject injected with a drug) ex vivo body part (e.g. leg of a frog) ex vivo organ (e.g. heart of a dog) ex vivo part of an organ (e.g. a segment of an intestine). tissue (e.g. limulus lysate) cell (e.g. platelets) Ligand binding assay when a ligand (usually a small molecule) binds a receptor (usually a large protein). Immunoassay when the response is an antigen–antibody binding type reaction. Signal amplification Depending on the nature of the signal amplification system, assays may be of numerous types, to name a few: Enzyme assay: Enzymes may be tested by their highly repeating activity on a large number of substrates when loss of a substrate or the making of a product has a measurable attribute like color, absorbance at a particular wavelength of light, electrochemiluminescence or electrical/redox activity. Light detection systems that may use amplification e.g. by a photodiode or a photomultiplier tube or a cooled charge-coupled device. Radioisotope-labeled substrates, as used in radioimmunoassays and equilibrium dialysis assays, which can be detected by amplification in gamma counters, on X-ray plates, or with a phosphorimager Polymerase Chain Reaction Assays that amplify a DNA (or RNA) target rather than the signal Combination Methods Assays may utilize a combination of the above and other amplification methods to improve sensitivity, e.g. the enzyme-linked immunoassay (EIA), also called the enzyme-linked immunosorbent assay (ELISA). Detection method or technology Depending on the nature of the detection system, assays can be based on: Colony forming or virtual colony count: e.g. by multiplying bacteria or proliferating cells. Photometry / spectrophotometry When the absorbance of a specific wavelength of light passing through a fixed path-length through a cuvette of liquid test sample is measured and the absorbance is compared with a blank and standards with graded amounts of the target compound. If the emitted light is of a specific visible wavelength it may be called colorimetry, or it may involve specific wavelength of light e.g. by use of laser and emission of fluorescent signals of another specific wavelength which is detected via very specific wavelength optical filters. Transmittance of light may be used to measure e.g. clearing of opacity of a liquid created by suspended particles due to decrease in number of clumps during a platelet agglutination reaction. Turbidimetry when the opacity of straight-transmitted light passing through a liquid sample is measured by detectors placed straight across the light source.
Nephelometry where a measurement of the amount of light scattering that occurs when a beam of light is passed through the solution is used to determine size and/or concentration and/or size distribution of particles in the sample. Reflectometry When the color of light reflected from a (usually dry) sample or reactant is assessed, e.g. the automated readings of the strip urine dipstick assays. Viscoelastic measurements e.g. viscometry, elastography (e.g. thromboelastography) Counting assays: e.g. optic flow cytometric cell or particle counters, or coulter/impedance principle based cell counters Imaging assays, which involve image analysis manually or by software: Cytometry: When the size statistics of cells is assessed by an image processor. Electric detection, e.g. involving amperometry, voltammetry or coulometry, may be used directly or indirectly for many types of quantitative measurements. Other physical property based assays may use an osmometer, a viscometer or ion-selective electrodes Syndromic testing Assay types based on the targets being measured DNA Assays for studying interactions of proteins with DNA include: DNase footprinting assay Filter binding assay Gel shift assay Protein Bicinchoninic acid assay (BCA assay) Bradford protein assay Lowry protein assay Secretion assay RNA Nuclear run-on Ribosome profiling Cell counting, viability, proliferation or cytotoxicity assays A cell-counting assay may determine the number of living cells, the number of dead cells, or the ratio of one cell type to another, such as enumerating and typing red versus different types of white blood cells. This is measured by different physical methods (light transmission, electric current change). Other methods use biochemical probes of cell structure or physiology (stains). Another application is to monitor cell culture (assays of cell proliferation or cytotoxicity). A cytotoxicity assay measures how toxic a chemical compound is to cells. MTT assay Cell Counting Kit-8 (WST-8 based cell viability assay) SRB (Sulforhodamine B) assay CellTiter-Glo® Luminescent Cell Viability Assay Cell counting instruments and methods: CASY cell counting technology, Coulter counter, Electric cell-substrate impedance sensing Cell viability assays: resazurin method, ATP test, Ethidium homodimer assay (detect dead or dying cells), Bacteriological water analysis, Clonogenic assays, ... Environmental or food contaminants Bisphenol F Aquatic toxicity tests Surfactants An MBAS assay indicates anionic surfactants in water with a bluing reaction. Other cell assays Many cell assays have been developed to assess specific parameters or responses of cells (biomarkers, cell physiology). Techniques used to study cells include: reporter assays using e.g.
Luciferase, calcium signaling assays using Coelenterazine, CFSE or Calcein Immunostaining of cells on slides by Microscopy (ImmunoHistoChemistry or Fluorescence), on microplates by photometry including the ELISpot (and its variant FluoroSpot) to enumerate B-Cells or antigen-specific cells, in solution by Flow cytometry Molecular biology techniques such as DNA microarrays, in situ hybridization, combined with PCR, Computational genomics, and Transfection; Cell fractionation or Immunoprecipitation Migration assays, Chemotaxis assay Secretion assays Apoptosis assays such as the DNA laddering assay, the Nicoletti assay, caspase activity assays, and Annexin V staining Chemosensitivity assays measure the number of tumor cells that are killed by a cancer drug Tetramer assays detect the presence of antigen-specific T-cells Gentamicin protection assay or survival assay or invasion assay to assess the ability of pathogens (bacteria) to invade eukaryotic cells Metastasis Assay Enhancer-FACS-seq, the technique using a cell sorting process before DNA sequencing Petrochemistry Crude oil assay Virology The HPCE-based viral titer assay uses a proprietary, high-performance capillary electrophoresis system to determine baculovirus titer. The Trofile assay is used to determine HIV tropism. The viral plaque assay is used to calculate the number of viruses present in a sample. In this technique the number of viral plaques formed by a viral inoculum is counted, from which the actual virus concentration can be determined. Cellular secretions A wide range of cellular secretions (say, a specific antibody or cytokine) can be detected using the ELISA technique. The number of cells which secrete those particular substances can be determined using a related technique, the ELISPOT assay. Drugs Testing for Illegal Drugs Radioligand binding assay Quality When multiple assays measure the same target, their results and utility may or may not be comparable depending on the nature of the assays and their methodology, reliability etc. Such comparisons are possible through study of general quality attributes of the assays e.g. principles of measurement (including identification, amplification and detection), dynamic range of detection (usually the range of linearity of the standard curve), analytic sensitivity, functional sensitivity, analytic specificity, positive and negative predictive values, turn-around time i.e. time taken to finish a whole cycle from the pre-analytic steps till the end of the last post-analytic step (report dispatch/transmission), throughput i.e. number of assays done per unit time (usually expressed as per hour) etc. Organizations or laboratories that perform assays for professional purposes, e.g. medical diagnosis and prognostics, environmental analysis, forensic proceedings, pharmaceutical research and development, must undergo well-regulated quality assurance procedures including method validation, regular calibration, analytical quality control, proficiency testing, test accreditation and test licensing, and must document appropriate certifications from the relevant regulating bodies in order to establish the reliability of their assays, especially to remain legally acceptable and accountable for the quality of the assay results and also to convince customers to use their assay commercially/professionally. List of BioAssay databases Bioactivity databases Bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs.
Protocol databases Protocol databases correlate results from bioassays to their metadata about experimental conditions and protocol designs. See also Analytical chemistry MELISA Multiplex (assay) Pharmaceutical chemistry Titration References External links This includes a detailed, technical explanation of contemporaneous metallic ore assay techniques. Biochemistry Laboratory techniques Titration
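To illustrate the endpoint/kinetic distinction described above, the Python sketch below reduces a series of timed readings from a hypothetical kinetic assay to a single slope parameter by least-squares regression, and contrasts it with the single endpoint reading. The time points and signal values are fabricated for the example.

```python
import numpy as np

# Fabricated kinetic readout: signal (e.g. absorbance) at fixed time points.
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])        # seconds
signal = np.array([0.05, 0.11, 0.18, 0.24, 0.31])   # arbitrary units

# Kinetic result: rate of signal change, from a least-squares linear fit.
slope, intercept = np.polyfit(t, signal, 1)
print(f"kinetic result: {slope:.4f} AU/s")

# Endpoint result, by contrast, is just the final reading after incubation.
print(f"endpoint result: {signal[-1]:.2f} AU")
```

In practice the fit would be restricted to the linear portion of the response curve, since, as noted above, the shape of a kinetic trace carries information of its own.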
Assay
Chemistry,Biology
4,378
1,688,876
https://en.wikipedia.org/wiki/Lysochrome
A lysochrome is a soluble dye used for histochemical staining of lipids, which include triglycerides, fatty acids, and lipoproteins. Lysochromes such as Sudan IV dissolve in the lipid and show up as colored regions. The dye does not stick to any other substrates, so a quantification or qualification of lipid presence can be obtained. The name was coined by the biologist John Baker in his book "Principles of Biological Microtechnique", published in 1958, from the Greek words lysis (solution) and chroma (colour). References Biochemistry methods Lipids Histochemistry
Lysochrome
Chemistry,Biology
139
53,296,282
https://en.wikipedia.org/wiki/Cloudbleed
Cloudbleed was a Cloudflare buffer overflow disclosed by Project Zero on February 17, 2017. Cloudflare's code disclosed the contents of memory that contained the private information of other customers, such as HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data. As a result, data from Cloudflare customers was leaked to all other Cloudflare customers that had access to server memory. This occurred, according to numbers provided by Cloudflare at the time, more than 18,000,000 times before the problem was corrected. Some of the leaked data was cached by search engines. Discovery The discovery was reported by Google's Project Zero team. Tavis Ormandy posted the issue on his team's issue tracker and said that he informed Cloudflare of the problem on February 17. In his own proof-of-concept attack he got a Cloudflare server to return "private messages from major dating sites, full messages from a well-known chat service, online password manager data, frames from adult video sites, hotel bookings. We're talking full https requests, client IP addresses, full responses, cookies, passwords, keys, data, everything." Similarities to Heartbleed In its effects, Cloudbleed is comparable to the 2014 Heartbleed bug, in that it allowed unauthorized third parties to access data in the memory of programs running on web servers, including data which had been shielded while in transit by TLS. Cloudbleed also likely impacted as many users as Heartbleed since it affected a content delivery network serving nearly two million websites. Tavis Ormandy, first to discover the vulnerability, immediately drew a comparison to Heartbleed, saying "it took every ounce of strength not to call this issue 'cloudbleed'" in his report. Reactions Cloudflare On Thursday, February 23, 2017, Cloudflare wrote a post noting that: The bug was serious because the leaked memory could contain private information and because it had been cached by search engines. We have also not discovered any evidence of malicious exploits of the bug or other reports of its existence. The greatest period of impact was from February 13 and February 18 with around 1 in every 3,300,000 HTTP requests through Cloudflare potentially resulting in memory leakage (that’s about 0.00003% of requests). Cloudflare acknowledged that the memory could have leaked as early as September 22, 2016. The company also stated that one of its own private keys, used for machine-to-machine encryption, was leaked. It turned out that the underlying bug that caused the memory leak had been present in our Ragel-based parser for many years but no memory was leaked because of the way the internal NGINX buffers were used. Introducing cf-html subtly changed the buffering which enabled the leakage even though there were no problems in cf-html itself. John Graham-Cumming, Cloudflare's CTO, noted that Cloudflare clients, such as Uber and OkCupid, weren't directly informed of the leaks due to the security risks involved in the situation. “There was no backdoor communication outside of Cloudflare — only with Google and other search engines,” he said. Graham-Cumming also said that "Unfortunately, it was the ancient piece of software that contained a latent security problem and that problem only showed up as we were in the process of migrating away from it." He added that his team has already begun testing their software for other possible issues. 
Google Project Zero team Tavis Ormandy initially stated that he was "really impressed with Cloudflare's quick response, and how dedicated they are to cleaning up from this unfortunate issue." However, when Ormandy pressed Cloudflare for additional information, "They gave several excuses that didn't make sense," before sending a draft that "severely downplays the risk to customers." Uber Uber stated that the impact on its service was very limited. An Uber spokesperson added "only a handful of session tokens were involved and have since been changed. Passwords were not exposed." OKCupid OKCupid CEO Elie Seidman said: "CloudFlare alerted us last night of their bug and we've been looking into its impact on OkCupid members. Our initial investigation has revealed minimal, if any, exposure. If we determine that any of our users has been impacted we will promptly notify them and take action to protect them." Fitbit Fitbit stated that they had investigated the incident and only found that a "handful of people were affected". They recommended that concerned customers should change their passwords and clear session tokens by revoking and re-adding the app to their account. 1Password In a blog post, Jeffery Goldberg stated that no data from 1Password would be at risk due to Cloudbleed, citing the service's use of Secure Remote Password protocol (SRP), in which the client and server prove their identity without sharing any secrets over the network. 1Password data is additionally encrypted using keys derived from the user's master password and a secret account code, which Goldberg claims would protect the credentials even if 1Password's own servers were breached. 1Password did not suggest users change their master password in response to a potential breach involving the bug. Remediation Many major news outlets advised users of sites hosted by Cloudflare to change their passwords, as even accounts protected by multi-factor authentication could be at risk. Passwords of mobile apps too could have been impacted. Researchers at Arbor Networks, in an alert, suggested that "For most of us, the only truly safe response to this large-scale information leak is to update our passwords for the Web sites and app-related services we use every day...Pretty much all of them." Inc. Magazine cybersecurity columnist, Joseph Steinberg, however, advised people not to change their passwords, stating that "the current risk is much smaller than the price to be paid in increased 'cybersecurity fatigue' leading to much bigger problems in the future." References External links List of domains using Cloudflare DNS on GitHub Simple website that lets you check for affected domains quickly A Chrome extension that checks bookmarks against potentially affected domains Cloudbleed explained-How the biggest web cache leak on internet happened Quantifying the impact of CloudBleed bug Internet security Software bugs 2017 in computing Cloud infrastructure attacks and failures Cloudflare Computer security exploits
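The class of bug behind Cloudbleed can be illustrated without any of Cloudflare's actual code, which was generated C operating on NGINX buffers. According to the published analysis, the overrun involved an end-of-buffer check that a parser could step over rather than a strict bounds comparison; the Python sketch below reproduces that failure mode schematically, with an invented buffer and parser, purely for illustration.

```python
# A shared buffer: the first 16 bytes belong to the request being parsed;
# the rest is unrelated data that should never be exposed.
buffer = b"<html><b>hi</b>" + b"\x00" + b"SECRET-COOKIE=abc123"
end = 16  # index one past the last byte of the current request

def scan(advance_by: int) -> bytes:
    """Copy bytes until the cursor *equals* `end` -- the flawed check."""
    out, p = bytearray(), 0
    while p != end:              # BUG: '!=' can be stepped over; 'p < end' could not
        if p >= len(buffer):     # edge of the simulated memory region
            break
        out += buffer[p:p + advance_by]
        p += advance_by          # a multi-byte token may jump from 15 straight to 18
    return bytes(out)

print(scan(1))   # advancing one byte at a time stops exactly at the boundary
print(scan(3))   # advancing three bytes at a time skips p == 16 and leaks onward
```

A strict inequality in the loop condition makes the overrun impossible regardless of the token width, which is why bounds checks are conventionally written that way.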
Cloudbleed
Technology,Engineering
1,354
8,591,873
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Triangulum
This is a list of notable stars in the constellation Triangulum, sorted by decreasing brightness. See also List of stars by constellation References List Triangulum
List of stars in Triangulum
Astronomy
33
19,189,326
https://en.wikipedia.org/wiki/Rs1800955
In genetics, rs1800955 (also written as C-521T and -521C/T) is a single nucleotide polymorphism (SNP). It is located in the promoter region of the DRD4 gene. This gene codes for the dopamine receptor D4. Due to the dopamine hypothesis of schizophrenia, the SNP has been investigated for a link to schizophrenia, and it may be slightly associated with this disorder. The SNP has also been investigated with respect to novelty seeking, a personality trait that may be measured with the Temperament and Character Inventory. A 2008 meta-analysis indicates a possible, though rather small, association between novelty seeking and C-521T. References Other Table 4 from "The analysis of 51 genes in DSM-IV combined type attention deficit hyperactivity disorder: association signals in DRD4, DAT1 and 16 other genes" SNPs on chromosome 11
Rs1800955
Biology
187
23,658,801
https://en.wikipedia.org/wiki/Pejorative
A pejorative word, phrase, slur, or derogatory term is a word or grammatical form expressing a negative or disrespectful connotation, a low opinion, or a lack of respect toward someone or something. It is also used to express criticism, hostility, or disregard. Sometimes, a term is regarded as pejorative in some social or ethnic groups but not in others or may be originally pejorative but later adopt a non-pejorative sense (or vice versa) in some or all contexts. Etymology The word pejorative is derived from a Late Latin past participle stem of , meaning "to make worse", from "worse". Pejoration and melioration In historical linguistics, the process of an inoffensive word becoming pejorative is a form of semantic drift known as pejoration. An example of pejoration is the shift in meaning of the word silly from meaning that a person was happy and fortunate to meaning that they are foolish and unsophisticated. The process of pejoration can repeat itself around a single concept, leaping from word to word in a phenomenon known as the euphemism treadmill, for example as in the successive pejoration of the terms bog-house, privy-house, latrine, water closet, toilet, bathroom, and restroom (US English). When a term begins as pejorative and eventually is adopted in a non-pejorative sense, this is called melioration or amelioration. One example is the shift in meaning of the word nice from meaning a person was foolish to meaning that a person is pleasant. When performed deliberately, it is described as reclamation or reappropriation. Examples of words that have been reclaimed by portions of the communities they target are queer, faggot and dyke, which began being re-appropriated as positive descriptors in the early 1990s by activist groups. However, due to their history and – in some regions – continued use as pejoratives, there remain LGBT individuals who are uncomfortable with having these terms applied to them. The use of the racial slur nigger (specifically the -a variant) by African Americans is often viewed as another act of reclamation, though much like the latter in the LGBT movement, there exists a vocal subset of people of Sub-Saharan African descent who object to the use of the word under any circumstances. See also Approbative Defamation Dysphemism Fighting words Insult Judgmental language List of ethnic slurs List of religious slurs Profanity References Further reading External links Connotation Criticisms Harassment and bullying Prejudice and discrimination
Pejorative
Biology
546
4,835,156
https://en.wikipedia.org/wiki/Green%20trading
Green trading encompasses all forms of environmental financial trading, including carbon dioxide, sulfur dioxide (acid rain), nitrogen oxide (ozone), renewable energy credits, and energy efficiency (negawatts). All these emerging and established environmental financial markets have one thing in common: making profits in the emerging emissions-offset economy by investing in "clean technology". Green trading claims to accelerate change to a cleaner environment by using market-based incentives whose application is global. Some examples, such as the carbon market or the market for SO2, suggest that market-based systems are more likely to be environmentally effective because market systems will direct abatement to relatively larger and more heavily utilized sources with relatively high emission intensities. Many current projects to advance green technology are recipients of funding generated through the voluntary carbon offset market in the United States. Though currently not required to do so, many companies are seeking ways to clean up their environmental impact. Bad energy practices that they cannot eliminate, they may offset, knowing that they are funding projects that are actively developing cleaner energy practices and increasing energy efficiency for the future. In November 2008, in a unique partnership initiated by Verus Carbon Neutral, 17 businesses of Atlanta's Virginia Highland came together to establish themselves as the first carbon-neutral zone in the United States. Their efforts now fund the Valley Wood Carbon Sequestration Project, the first such project to be verified through the Chicago Climate Exchange. See also Carbon emission trading Eco-capitalism Eco-investing Low-carbon economy Natural resource economics References Financial markets Environmental economics
Green trading
Environmental_science
310
11,472,733
https://en.wikipedia.org/wiki/Saccharopine%20dehydrogenase
In molecular biology, the protein domain saccharopine dehydrogenase (SDH), also named saccharopine reductase, is an enzyme involved in the metabolism of the amino acid lysine, via an intermediate substance called saccharopine. The saccharopine dehydrogenase enzyme can be classified under several EC numbers. It has an important function in lysine metabolism and catalyses a reaction in the alpha-aminoadipic acid pathway. This pathway is unique to fungal organisms; therefore, this molecule could be useful in the search for new antibiotics. This protein family also includes saccharopine dehydrogenase and homospermidine synthase. It is found in prokaryotes, eukaryotes and archaea. Function Simplistically, SDH uses NAD+ as an oxidant to catalyse the reversible, pyridine nucleotide-dependent oxidative deamination of the substrate, saccharopine, in order to form the products, lysine and alpha-ketoglutarate. This can be described by the following equation: saccharopine + NAD+ + H2O ⇌ lysine + alpha-ketoglutarate + NADH + H+. Saccharopine dehydrogenase catalyses the condensation of l-alpha-aminoadipate-delta-semialdehyde (AASA) with l-glutamate to give an imine, which is reduced by NADPH to give saccharopine. In some organisms this enzyme is found as a bifunctional polypeptide with lysine ketoglutarate reductase. Homospermidine synthase (HSS) catalyses the synthesis of the polyamine homospermidine from 2 mol of putrescine in an NAD+-dependent reaction. Structure There appear to be two protein domains of similar size. One domain is a Rossmann fold that binds NAD+/NADH, and the other is relatively similar. Both domains contain a six-stranded parallel beta-sheet surrounded by alpha-helices and loops (alpha/beta fold). Clinical significance Deficiencies are associated with hyperlysinemia. References Protein domains Protein families
Saccharopine dehydrogenase
Biology
476
70,224,579
https://en.wikipedia.org/wiki/Sunobinop
Sunobinop (developmental code names V117957; IMB-115) is a high-affinity small-molecule nociceptin receptor partial agonist. As of February 2024, it is under clinical investigation for the treatment of insomnia/alcohol use disorder, interstitial cystitis, and overactive bladder syndrome. It was previously also under investigation for the treatment of fibromyalgia. Pharmacology Sunobinop has nanomolar affinity (Ki) and efficacy (EC50) at human recombinant nociceptin/orphanin-FQ peptide (NOP) receptors. It has a high degree of functional selectivity for the NOP receptor. Sunobinop is a low-affinity antagonist at human mu and kappa opioid receptors, and is a low-affinity weak partial agonist at human delta opioid receptors. Clinical trials Sunobinop was generally well tolerated in 3 studies involving 70 healthy subjects at doses that ranged from 0.6 to 30 mg. The most prominent adverse event was dose-dependent sedation/somnolence, which was more common at doses greater than 10 mg. In these studies, most of the absorbed sunobinop was excreted unchanged via rapid renal elimination. The safety and effectiveness of sunobinop have not been evaluated by the FDA. There is no guarantee that sunobinop will successfully complete development or gain FDA approval. See also List of investigational sleep drugs § Nociceptin receptor agonists Nociceptin receptor References Bridged heterocyclic compounds Carboxylic acids Experimental psychiatric drugs Nociceptin receptor agonists Quinoxalines
Sunobinop
Chemistry
348
4,250,298
https://en.wikipedia.org/wiki/Auxiliary%20field
In physics, and especially quantum field theory, an auxiliary field is one whose equations of motion admit a single solution. Therefore, the Lagrangian describing such a field $A$ contains an algebraic quadratic term and an arbitrary linear term, while it contains no kinetic terms (derivatives of the field): $\mathcal{L}_{\text{aux}} = \tfrac{1}{2}(A, A) + (f(\varphi), A)$. The equation of motion for $A$ is $A(\varphi) = -f(\varphi)$, and the Lagrangian becomes $\mathcal{L}_{\text{aux}} = -\tfrac{1}{2}(f(\varphi), f(\varphi))$. Auxiliary fields generally do not propagate, and hence the content of any theory can remain unchanged in many circumstances by adding such fields by hand. If we have an initial Lagrangian $\mathcal{L}$ describing a field $\varphi$, then the Lagrangian describing both fields is $\mathcal{L}' = \mathcal{L} + \mathcal{L}_{\text{aux}} = \mathcal{L} - \tfrac{1}{2}(f(\varphi), f(\varphi))$. Therefore, auxiliary fields can be employed to cancel quadratic terms in $\varphi$ in $\mathcal{L}$ and linearize the action. Examples of auxiliary fields are the complex scalar field F in a chiral superfield, the real scalar field D in a vector superfield, the scalar field B in BRST and the field in the Hubbard–Stratonovich transformation. The quantum mechanical effect of adding an auxiliary field is the same as the classical, since the path integral over such a field is Gaussian. To wit: $\int_{-\infty}^{\infty} dA\, e^{-\frac{1}{2}A^2 + Af} = \sqrt{2\pi}\, e^{\frac{f^2}{2}}$. See also Bosonic field Fermionic field Composite Field References Quantum field theory
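The Gaussian integral above follows from completing the square; the display below makes the step explicit (a standard identity written out here, not a quotation from the text):

```latex
\int_{-\infty}^{\infty} dA \; e^{-\frac{1}{2}A^{2} + Af}
  \;=\; \int_{-\infty}^{\infty} dA \; e^{-\frac{1}{2}\left(A - f\right)^{2} + \frac{f^{2}}{2}}
  \;=\; \sqrt{2\pi}\; e^{\frac{f^{2}}{2}} .
```

Since the shift A → A + f leaves the integration measure invariant, integrating the auxiliary field out of the path integral reproduces exactly the same substitution as using its classical equation of motion, which is why the quantum and classical treatments agree.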
Auxiliary field
Physics
244
75,955,442
https://en.wikipedia.org/wiki/Holmium%20nitride
Holmium nitride is a binary inorganic compound of holmium and nitrogen with the chemical formula . Synthesis To produce holmium nitride nanoparticles, a plasma arc discharge technique can be employed. In this process, holmium granules are placed in a copper crucible, which acts as the anode, while a tungsten cathode is used. Before starting, the furnace is evacuated, and this step is repeated twice to eliminate most of the air. Afterward, the furnace is filled with argon and nitrogen gas to a total pressure of 4 kPa with a mixture ratio of 80% nitrogen to 20% argon. Once the furnace is prepared, a current of ~220 A is applied at ~50 V, creating an arc plasma that generates holmium vapor. The holmium vapor then reacts with the nitrogen gas in the surrounding environment, resulting in the formation of holmium nitride nanoparticles. Physical properties The compound forms crystals of the cubic system. References Nitrides Holmium compounds Nitrogen compounds
Holmium nitride
Chemistry
212
9,110
https://en.wikipedia.org/wiki/Diophantus
Diophantus of Alexandria (born ; died ) was a Greek mathematician, who was the author of two main works: On Polygonal Numbers, which survives incomplete, and the Arithmetica in thirteen books, most of it extant, made up of arithmetical problems that are solved through algebraic equations. His Arithmetica influenced the development of algebra by the Arabs, and his equations influenced modern work in both abstract algebra and computer science. The first five books of his work are purely algebraic. Furthermore, recent studies of Diophantus's work have revealed that the method of solution taught in his Arithmetica matches later medieval Arabic algebra in its concepts and overall procedure. Diophantus was among the earliest mathematicians who recognized positive rational numbers as numbers, by allowing fractions for coefficients and solutions. He coined the term παρισότης (parisotēs) to refer to an approximate equality. This term was rendered as adaequalitas in Latin, and became the technique of adequality developed by Pierre de Fermat to find maxima for functions and tangent lines to curves. Although not the earliest, the Arithmetica has the best-known use of algebraic notation to solve arithmetical problems coming from Greek antiquity, and some of its problems served as inspiration for later mathematicians working in analysis and number theory. In modern use, Diophantine equations are algebraic equations with integer coefficients for which integer solutions are sought. Diophantine geometry and Diophantine approximations are two other subareas of number theory that are named after him. Biography Diophantus was born into a Greek family and is known to have lived in Alexandria, Egypt, during the Roman era; he was born between AD 200 and 214 and died around 284 or 298. Much of our knowledge of the life of Diophantus is derived from a 5th-century Greek anthology of number games and puzzles created by Metrodorus. One of the problems (sometimes called his epitaph) states: Here lies Diophantus, the wonder behold. Through art algebraic, the stone tells how old: 'God gave him his boyhood one-sixth of his life, One twelfth more as youth while whiskers grew rife; And then yet one-seventh ere marriage begun; In five years there came a bouncing new son. Alas, the dear child of master and sage After attaining half the measure of his father's life chill fate took him. After consoling his fate by the science of numbers for four years, he ended his life.' This puzzle implies that Diophantus' age x can be expressed as x = x/6 + x/12 + x/7 + 5 + x/2 + 4, which gives a value of 84 years (a short computational check appears after this passage). However, the accuracy of the information cannot be confirmed. In popular culture, this puzzle was Puzzle No. 142 in Professor Layton and Pandora's Box, one of the hardest puzzles in the game, which needed to be unlocked by solving other puzzles first. Arithmetica Arithmetica is the major work of Diophantus and the most prominent work on premodern algebra in Greek mathematics. It is a collection of problems giving numerical solutions of both determinate and indeterminate equations. Of the original thirteen books of which Arithmetica consisted only six have survived, though there are some who believe that four Arabic books discovered in 1968 are also by Diophantus. Some Diophantine problems from Arithmetica have been found in Arabic sources. It should be mentioned here that Diophantus never used general methods in his solutions.
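As referenced above, the epitaph arithmetic can be verified with a few lines of Python; this check (using sympy, purely illustrative and not part of the article) confirms the age of 84:

```python
from sympy import Rational, symbols, solve

x = symbols("x")  # Diophantus' age at death
# boyhood: x/6, youth: x/12, until marriage: x/7, a son born 5 years later,
# the son lived x/2 years, and Diophantus died 4 years after his son.
lifetime = (Rational(1, 6)*x + Rational(1, 12)*x + Rational(1, 7)*x
            + 5 + Rational(1, 2)*x + 4)

print(solve(lifetime - x, x))  # [84]
```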
Hermann Hankel, a renowned German mathematician, made the following remark regarding Diophantus: In our author (Diophantos) not the slightest trace of a general, comprehensive method is discernible; each problem calls for some special method which refuses to work even for the most closely related problems. For this reason it is difficult for the modern scholar to solve the 101st problem even after having studied 100 of Diophantos's solutions. History Like many other Greek mathematical treatises, Diophantus was forgotten in Western Europe during the Dark Ages, since the study of ancient Greek, and literacy in general, had greatly declined. The portion of the Greek Arithmetica that survived, however, was, like all ancient Greek texts transmitted to the early modern world, copied by, and thus known to, medieval Byzantine scholars. Scholia on Diophantus by the Byzantine Greek scholar John Chortasmenos (1370–1437) are preserved together with a comprehensive commentary written by the earlier Greek scholar Maximos Planudes (1260 – 1305), who produced an edition of Diophantus within the library of the Chora Monastery in Byzantine Constantinople. In addition, some portion of the Arithmetica probably survived in the Arab tradition (see above). In 1463 the German mathematician Regiomontanus wrote: No one has yet translated from the Greek into Latin the thirteen books of Diophantus, in which the very flower of the whole of arithmetic lies hidden. Arithmetica was first translated from Greek into Latin by Bombelli in 1570, but the translation was never published. However, Bombelli borrowed many of the problems for his own book Algebra. The editio princeps of Arithmetica was published in 1575 by Xylander. The Latin translation of Arithmetica by Bachet in 1621 became the first Latin edition that was widely available. Pierre de Fermat owned a copy, studied it and made notes in the margins. A later 1895 Latin translation by Paul Tannery was said by Thomas L. Heath to be an improvement; Heath used it in the 1910 second edition of his English translation. Margin-writing by Fermat and Chortasmenos The 1621 edition of Arithmetica by Bachet gained fame after Pierre de Fermat wrote his famous "Last Theorem" in the margins of his copy: If an integer n is greater than 2, then a^n + b^n = c^n has no solutions in non-zero integers a, b, and c. I have a truly marvelous proof of this proposition which this margin is too narrow to contain. Fermat's proof was never found, and the problem of finding a proof for the theorem went unsolved for centuries. A proof was finally found in 1994 by Andrew Wiles after working on it for seven years. It is believed that Fermat did not actually have the proof he claimed to have. Although the original copy in which Fermat wrote this is lost today, Fermat's son edited the next edition of Diophantus, published in 1670. Even though the text is otherwise inferior to the 1621 edition, Fermat's annotations, including the "Last Theorem", were printed in this version. Fermat was not the first mathematician so moved to write in his own marginal notes to Diophantus; the Byzantine scholar John Chortasmenos (1370–1437) had written "Thy soul, Diophantus, be with Satan because of the difficulty of your other theorems and particularly of the present theorem" next to the same problem. Other works Diophantus wrote several other books besides Arithmetica, but only a few of them have survived.
The Porisms Diophantus himself refers to a work which consists of a collection of lemmas called The Porisms (or Porismata), but this book is entirely lost. Although The Porisms is lost, we know three lemmas contained there, since Diophantus refers to them in the Arithmetica. One lemma states that the difference of the cubes of two rational numbers is equal to the sum of the cubes of two other rational numbers, i.e. given any a and b, with a > b, there exist c and d, all positive and rational, such that a^3 - b^3 = c^3 + d^3. Polygonal numbers and geometric elements Diophantus is also known to have written on polygonal numbers, a topic of great interest to Pythagoras and Pythagoreans. Fragments of a book dealing with polygonal numbers are extant. A book called Preliminaries to the Geometric Elements has been traditionally attributed to Hero of Alexandria. It has been studied recently by Wilbur Knorr, who suggested that the attribution to Hero is incorrect, and that the true author is Diophantus. Influence Diophantus' work has had a large influence in history. Editions of Arithmetica exerted a profound influence on the development of algebra in Europe in the late sixteenth and through the 17th and 18th centuries. Diophantus and his works also influenced Arab mathematics and were of great fame among Arab mathematicians. Diophantus' work created a foundation for work on algebra and in fact much of advanced mathematics is based on algebra. How much he affected India is a matter of debate. Diophantus has been considered "the father of algebra" because of his contributions to number theory, mathematical notations and the earliest known use of syncopated notation in his book series Arithmetica. However this is usually debated, because Al-Khwarizmi was also given the title of "the father of algebra"; nevertheless both mathematicians were responsible for paving the way for algebra today. Diophantine analysis Today, Diophantine analysis is the area of study where integer (whole-number) solutions are sought for equations, and Diophantine equations are polynomial equations with integer coefficients to which only integer solutions are sought. It is usually rather difficult to tell whether a given Diophantine equation is solvable. Most of the problems in Arithmetica lead to quadratic equations. Diophantus looked at 3 different types of quadratic equations: ax^2 + bx = c, ax^2 = bx + c, and ax^2 + c = bx. The reason why there were three cases to Diophantus, while today we have only one case, is that he did not have any notion for zero and he avoided negative coefficients by considering the given numbers a, b, c to all be positive in each of the three cases above. Diophantus was always satisfied with a rational solution and did not require a whole number, which means he accepted fractions as solutions to his problems. Diophantus considered negative or irrational square root solutions "useless", "meaningless", and even "absurd". To give one specific example, he calls the equation 4 = 4x + 20 'absurd' because it would lead to a negative value for x. One solution was all he looked for in a quadratic equation. There is no evidence that suggests Diophantus even realized that there could be two solutions to a quadratic equation. He also considered simultaneous quadratic equations. Mathematical notation Diophantus made important advances in mathematical notation, becoming the first person known to use algebraic notation and symbolism. Before him everyone wrote out equations completely.
Diophantus introduced an algebraic symbolism that used an abridged notation for frequently occurring operations, and an abbreviation for the unknown and for the powers of the unknown. Mathematical historian Kurt Vogel states: The symbolism that Diophantus introduced for the first time, and undoubtedly devised himself, provided a short and readily comprehensible means of expressing an equation... Since an abbreviation is also employed for the word 'equals', Diophantus took a fundamental step from verbal algebra towards symbolic algebra. Although Diophantus made important advances in symbolism, he still lacked the necessary notation to express more general methods. This caused his work to be more concerned with particular problems rather than general situations. Some of the limitations of Diophantus' notation are that he only had notation for one unknown and, when problems involved more than a single unknown, Diophantus was reduced to expressing "first unknown", "second unknown", etc. in words. He also lacked a symbol for a general number n. Where we would write (12 + 6n)/(n^2 - 3), Diophantus has to resort to constructions like: "... a sixfold number increased by twelve, which is divided by the difference by which the square of the number exceeds three". Algebra still had a long way to go before very general problems could be written down and solved succinctly. See also Erdős–Diophantine graph Diophantus II.VIII Polynomial Diophantine equation Notes References Sources Allard, A. "Les scolies aux arithmétiques de Diophante d'Alexandrie dans le Matritensis Bibl.Nat.4678 et les Vatican Gr.191 et 304" Byzantion 53. Brussels, 1983: 682–710. Bachet de Méziriac, C.G. Diophanti Alexandrini Arithmeticorum libri sex et De numeris multangulis liber unus. Paris: Lutetiae, 1621. Bashmakova, Izabella G. Diophantos. Arithmetica and the Book of Polygonal Numbers. Introduction and Commentary Translation by I.N. Veselovsky. Moscow: Nauka [in Russian]. Christianidis, J. "Maxime Planude sur le sens du terme diophantien "plasmatikon"", Historia Scientiarum, 6 (1996) 37–41. Christianidis, J. "Une interpretation byzantine de Diophante", Historia Mathematica, 25 (1998) 22–28. Czwalina, Arthur. Arithmetik des Diophantos von Alexandria. Göttingen, 1952. Heath, Sir Thomas, Diophantos of Alexandria: A Study in the History of Greek Algebra, Cambridge: Cambridge University Press, 1885, 1910. Robinson, D. C. and Luke Hodgkin. History of Mathematics, King's College London, 2003. Rashed, Roshdi. L'Art de l'Algèbre de Diophante. éd. arabe. Le Caire : Bibliothèque Nationale, 1975. Rashed, Roshdi. Diophante. Les Arithmétiques. Volume III: Book IV; Volume IV: Books V–VII, app., index. Collection des Universités de France. Paris (Société d'Édition "Les Belles Lettres"), 1984. Sesiano, Jacques. The Arabic text of Books IV to VII of Diophantus' translation and commentary. Thesis. Providence: Brown University, 1975. Sesiano, Jacques. Books IV to VII of Diophantus' Arithmetica in the Arabic translation attributed to Qusṭā ibn Lūqā, Heidelberg: Springer-Verlag, 1982. Σταμάτης, Ευάγγελος Σ. Διοφάντου Αριθμητικά. Η άλγεβρα των αρχαίων Ελλήνων. Αρχαίον κείμενον – μετάφρασις – επεξηγήσεις. Αθήναι, Οργανισμός Εκδόσεως Διδακτικών Βιβλίων, 1963. Tannery, P. L. Diophanti Alexandrini Opera omnia: cum Graecis commentariis, Lipsiae: In aedibus B.G. Teubneri, 1893-1895 (online: vol. 1, vol. 2) Ver Eecke, P. Diophante d'Alexandrie: Les Six Livres Arithmétiques et le Livre des Nombres Polygones, Bruges: Desclée, De Brouwer, 1921. Wertheim, G.
Die Arithmetik und die Schrift über Polygonalzahlen des Diophantus von Alexandria. Übersetzt und mit Anmerkungen von G. Wertheim. Leipzig, 1890. Further reading Bashmakova, Izabella G. "Diophante et Fermat", Revue d'Histoire des Sciences 19 (1966), pp. 289–306 Bashmakova, Izabella G. Diophantus and Diophantine Equations. Moscow: Nauka 1972 [in Russian]. German translation: Diophant und diophantische Gleichungen. Birkhauser, Basel/ Stuttgart, 1974. English translation: Diophantus and Diophantine Equations. Translated by Abe Shenitzer with the editorial assistance of Hardy Grant and updated by Joseph Silverman. The Dolciani Mathematical Expositions, 20. Mathematical Association of America, Washington, DC. 1997. Bashmakova, Izabella G. "Arithmetic of Algebraic Curves from Diophantus to Poincaré", Historia Mathematica 8 (1981), 393–416. Bashmakova, Izabella G., Slavutin, E.I. History of Diophantine Analysis from Diophantus to Fermat. Moscow: Nauka 1984 [in Russian]. Rashed, Roshdi, Houzel, Christian. Les Arithmétiques de Diophante : Lecture historique et mathématique, Berlin, New York : Walter de Gruyter, 2013. Rashed, Roshdi, Histoire de l’analyse diophantienne classique : D’Abū Kāmil à Fermat, Berlin, New York : Walter de Gruyter. External links Diophantus's Riddle Diophantus' epitaph, by E. Weisstein Norbert Schappacher (2005). Diophantus of Alexandria : a Text and its History. Review of Sesiano's Diophantus Review of J. Sesiano, Books IV to VII of Diophantus' Arithmetica, by Jan P. Hogendijk Latin translation from 1575 by Wilhelm Xylander Scans of Tannery's edition of Diophantus at wilbourhall.org 3rd-century births 3rd-century deaths 3rd-century Greek people 3rd-century Egyptian people Roman-era Alexandrians Diophantus of Alexandria Ancient Greeks in Egypt Egyptian mathematicians Diophantus of Alexandria 3rd-century writers 3rd-century mathematicians
Diophantus
Mathematics
3,740
53,821,839
https://en.wikipedia.org/wiki/AsrC%20small%20RNA
AsrC (Antisense RNA of rseC) is a cis-encoded antisense RNA of rseC (an activator gene of the sigma factor RpoE) described in Salmonella enterica serovar Typhi. It was discovered by deep sequencing and its transcription was confirmed by Northern blot. AsrC is an 893 bp sequence that covers all of the rseC coding region in the reverse direction of transcription. It increases the level of rseC mRNA and protein, indirectly activating RpoE. RpoE can promote flagellar gene expression and motility. Consistent with this, expression of AsrC increased bacterial swimming motility, possibly because AsrC promotes the expression of genes related to motility. References Non-coding RNA
AsrC small RNA
Chemistry
159
61,680,054
https://en.wikipedia.org/wiki/Severnside%20Sirens
The Severnside Sirens are a system of civil defense sirens located along the South Severn Estuary coastline from Redcliffe Bay to Pilning, northwest of Bristol. They are activated by Avon and Somerset Police in the event of a potential incident at one of the COMAH sites located in the area, mainly in and near Avonmouth. The system was set up in 1997 following a fire at the Albright and Wilson site in 1996. Severnside Sirens Trust Severnside Sirens Trust Limited is the organisation responsible for maintaining the system. It is a registered company (number 3348008) and charity (number 1063224) and was incorporated on 9 April 1997. The trust's activities are funded by the three local authorities whose constituents the sirens serve (North Somerset Council, Bristol City Council, and South Gloucestershire Council) and by donations from the organisations running the COMAH sites themselves. Sirens The sirens themselves are mounted on dedicated poles and all but one are manufactured by the Federal Signal Corporation. Most of them are Federal Signal Modulators. They are operated via radio signal from a control system at Avon and Somerset Police Headquarters in Portishead. Testing The sirens are tested at 1500 on the 3rd of every month. The test comprises 3 minutes of the alert warning (a continuous, stepped, rising tone), 1 minute of silence, and 1 minute of the all-clear siren (a continuous constant tone). Local volunteers monitor the sirens on test day. References External links Severnside Sirens Trust - the charitable organisation responsible for running the system Warning systems Sirens Emergency population warning systems
Severnside Sirens
Technology,Engineering
312
6,317,347
https://en.wikipedia.org/wiki/207%20%28number%29
207 (two hundred [and] seven) is the natural number following 206 and preceding 208. It is an odd composite number with a prime factorization of 3^2 × 23. In mathematics 207 is a Wedderburn-Etherington number. There are exactly 207 different matchstick graphs with eight edges. 207 is a deficient number, as 207's proper divisors (divisors not including the number itself) only add up to 105: 1 + 3 + 9 + 23 + 69 = 105. References Integers
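The deficiency claim is easy to verify by enumerating the proper divisors directly; a minimal Python check (illustrative, not part of the article):

```python
n = 207

# Proper divisors exclude the number itself.
proper_divisors = [d for d in range(1, n) if n % d == 0]

print(proper_divisors)       # [1, 3, 9, 23, 69]
print(sum(proper_divisors))  # 105, which is less than 207, so 207 is deficient
```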
207 (number)
Mathematics
93
17,981,631
https://en.wikipedia.org/wiki/Papulacandin%20B
Papulacandin B is a papulacandin isolated from a strain of Papularia sphaerosperma. It is a molecule with antifungal activity. See also Echinocandin References Antifungals Phenol glycosides Resorcinols Spiro compounds Oxygen heterocycles
Papulacandin B
Chemistry
72
11,471,451
https://en.wikipedia.org/wiki/Phomopsis%20asparagi
Phomopsis asparagi is a fungal plant pathogen that causes phomopsis blight in asparagus. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Stem vegetable diseases asparagi Fungus species
Phomopsis asparagi
Biology
51
5,023,862
https://en.wikipedia.org/wiki/Arsenic%20trichloride
Arsenic trichloride is an inorganic compound with the formula AsCl3, also known as arsenous chloride or butter of arsenic. This poisonous oil is colourless, although impure samples may appear yellow. It is an intermediate in the manufacture of organoarsenic compounds. Structure AsCl3 is a pyramidal molecule with C3v symmetry. The As-Cl bond is 2.161 Å and the angle Cl-As-Cl is 98°25′ ± 30′. AsCl3 has four normal modes of vibration: ν1(A1) 416, ν2(A1) 192, ν3(E) 393, and ν4(E) 152 cm−1. Synthesis This colourless liquid is prepared by treatment of arsenic(III) oxide with hydrogen chloride followed by distillation: As2O3 + 6 HCl → 2 AsCl3 + 3 H2O It can also be prepared by chlorination of arsenic at 80–85 °C, but this method requires elemental arsenic. 2 As + 3 Cl2 → 2 AsCl3 Arsenic trichloride can be prepared by the reaction of arsenic oxide and sulfur monochloride. This method requires simple apparatus and proceeds efficiently: 2 As2O3 + 6 S2Cl2 → 4 AsCl3 + 3 SO2 + 9 S A convenient laboratory method is refluxing arsenic(III) oxide with thionyl chloride: As2O3 + 3 SOCl2 → 2 AsCl3 + 3 SO2 Arsenic trichloride can also be prepared by the reaction of hydrochloric acid and arsenic(III) sulfide. As2S3 + 6 HCl → 2 AsCl3 + 3 H2S Reactions Hydrolysis gives arsenous acid and hydrochloric acid: AsCl3 + 3 H2O → As(OH)3 + 3 HCl Although AsCl3 is less moisture sensitive than PCl3, it still fumes in moist air. AsCl3 undergoes redistribution upon treatment with As2O3 to give the inorganic polymer AsOCl. With chloride sources, AsCl3 also forms salts containing the anion [AsCl4]−. Reactions with potassium bromide and potassium iodide give arsenic tribromide and arsenic triiodide, respectively. AsCl3 is useful in organoarsenic chemistry; for example, triphenylarsine is derived from AsCl3: AsCl3 + 6 Na + 3 C6H5Cl → As(C6H5)3 + 6 NaCl The chemical weapons called Lewisites are prepared by the addition of arsenic trichloride to acetylene: AsCl3 + C2H2 → ClCH=CHAsCl2 Safety Inorganic arsenic compounds are highly toxic, and AsCl3 especially so because of its volatility and solubility (in water). It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. References Arsenic(III) compounds Arsenic halides Chlorides
Arsenic trichloride
Chemistry
651
23,797,849
https://en.wikipedia.org/wiki/Laminar%20sublayer
The laminar sublayer, also called the viscous sublayer, is the region of a mainly turbulent flow that is near a no-slip boundary and in which viscous shear stresses are important. As such, it is a type of boundary layer. The existence of the viscous sublayer can be understood from the fact that the flow velocity decreases towards the no-slip boundary, so that viscous effects dominate close to it. The laminar sublayer is important for river-bed ecology: below the laminar-turbulent interface, the flow is stratified, but above it, it rapidly becomes well-mixed. This threshold can be important in providing homes and feeding grounds for benthic organisms. Whether the roughness due to the bed sediment or other factors is smaller or larger than this sublayer has an important bearing in hydraulics and sediment transport. Flow is defined as hydraulically rough if the roughness elements are larger than the laminar sublayer (thereby perturbing the flow), and as hydraulically smooth if they are smaller than the laminar sublayer (and therefore ignorable by the main body of the flow). References Fluid mechanics
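The article gives no formula, but a common textbook scaling puts the edge of the viscous sublayer at a dimensionless wall distance y+ = y·u*/ν of roughly 5, where u* is the shear (friction) velocity and ν the kinematic viscosity. A rough Python sketch under that assumption, with purely illustrative input values:

```python
# Rough estimate of viscous-sublayer thickness from the textbook scaling
# y+ = y * u_star / nu, taking the sublayer edge at y+ ~ 5.
# Both inputs are assumed, illustrative values, not figures from the article.
nu = 1.0e-6    # kinematic viscosity of water at ~20 C, m^2/s
u_star = 0.05  # assumed shear (friction) velocity for a river flow, m/s

delta_v = 5 * nu / u_star  # sublayer thickness, m
print(f"viscous sublayer thickness ~ {delta_v * 1e3:.2f} mm")  # ~0.10 mm

# Bed roughness much larger than delta_v -> hydraulically rough flow;
# much smaller than delta_v -> hydraulically smooth flow.
```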
Laminar sublayer
Chemistry,Engineering
231
227,584
https://en.wikipedia.org/wiki/Wildlife%20garden
A wildlife garden (or habitat garden or backyard restoration) is an environment created to serve as a sustainable haven for surrounding wildlife. Wildlife gardens contain a variety of habitats that cater to native and local plants, birds, amphibians, reptiles, insects, mammals and so on, and are meant to sustain locally native flora and fauna. This type of gardening goes by several other names, prominent ones being habitat, ecology, and conservation gardening. Both public and private gardens can be specifically transformed to attract the native wildlife, and in doing so, provide a natural array of support through available shelter and sustenance. This method of gardening can be a form of restoration in private gardens as much as in public ones, since such gardens contribute to habitat connectivity through their scattered locations and increase overall habitat availability. Establishing a garden that emulates the environment before the residence was built and/or renders the garden similar to intact wild areas nearby (rewilding) will allow natural systems to interact and establish an equilibrium, ultimately minimizing the need for gardener maintenance and intervention. Wildlife gardens can also play an essential role in biological pest control, promote biodiversity and native plantings, and generally benefit the wider environment. Environmental benefits include the reduction of pest populations through natural biological pest control, which lessens the need for pesticides. Habitat gardens also provide an ecosystem service by intercepting rainfall and recharging aquifers. Purpose Anthropogenic activities such as land development and urbanization are major drivers of habitat destruction, causing habitat loss and displacing wildlife as a result of increasing fragmentation. Fragmentation, constant land disturbance (such as the heavy use of pesticides), and other human factors contributing to habitat loss all contribute to large declines in wildlife populations and biodiversity. By converting private green spaces in residential and commercial areas to wildlife or habitat gardens, residents can collectively assist restorative and conservation efforts in providing more spaces for wildlife to survive, and potentially strengthen ecological resilience in urban areas. Planning Planning a successful wildlife garden requires consideration of the area's surroundings and a focus on overall ecological functionality. Vegetative structure and complexity play an important role in the benefits the landscape will provide to the wildlife, with the varying plants serving as sources of food and cover for survival. In particular, planting native vegetation creates greater diversity in yards by providing habitat for birds, pollinators such as bees, and other wildlife, which helps their populations grow. There are countless ways in which wildlife gardens can be built or converted, as long as food, water, shelter, and space are provided. The process will usually involve removing invasive species to replace with native species, retaining leaf litter as well as mature trees, assuring varying distribution of vegetation complexity and structure, and implementing other habitat elements such as ponds to include water sources.
True to the nature of a habitat in the wild varying depending on its environment and the species inhabiting it, a wildlife garden can be built to resemble a desired habitat, with strategic features meant to attract desired birds or pollinators. Habitats Building a successful garden suitable for local wildlife is best accomplished through the use of multiple three-dimensional habitats with diverse structures that provide places for animals to nest and hide. Wildlife gardens may contain a range of habitats, including: Log piles – Preferably located in a shady area, a pile of logs is a sanctuary for insects and other invertebrates, as well as reptiles and amphibians. The organic structure is a shelter for both protection and breeding. In addition to logs, garden debris may also be added around the garden to be used as a natural mulch, fertilizer, weed control, soil amendment, and habitat for arthropod predators. Bird feeding stations and bird houses – A place for birds to eat and take shelter will increase the number of birds in the garden, which play a key role in biological pest control. Not only will food and shelter increase the survival rate of birds, but it will also ensure that they are healthy enough for a successful breeding season. Bug boxes and bee hotels – Bundles of hollow stems (elderberry, Joe-Pye weed, bamboo) can be hung up as an alternate place of shelter and breeding for beneficial insects, such as the mason bee, which are valuable pollinators. Sources of water – A water feature, such as a pond, has the potential to support a large biodiversity of wildlife. To maximize the amount of wildlife attracted to the water feature, it should consist of ranging depths. Shallow areas are used by birds to drink and by insects and amphibians to lay eggs. Deeper areas provide habitat for aquatic insects and a place for amphibians, or even fish, to swim. Pollinators – Flowers rich in nectar will attract bees and butterflies into the garden, which is of particular importance given the dramatic reduction in pollinator populations in the US, Europe and elsewhere. Wildflower meadows are an alternative option for lawns in the garden and will serve as a sanctuary for pollinators. However, pollinating plants should not be confused with plants suitable for butterfly breeding. Plant diversity – The garden should include a range of plant types to act as different habitats. A balance between ground cover, shrub, understory, and canopy species will allow different sized wildlife shelters, varying in height, that fit their individual needs. It is particularly important to use species that are native to the area or state, as native plants will more reliably be suited to insects and other invertebrates than many non-native plants; increased variety of insects is valuable both for its own sake and for birds and other predators. Programs like the National Wildlife Federation's wildlife-yard certification scheme recognize yards that contribute to the habitat of diverse species, support higher native plant diversity, and diversify the bird population. Horizontal structure is an important principle to plan for when constructing habitats, as the landscape will naturally change over time, since wildlife gardening requires less human maintenance such as mowing.
Vegetation changes occur in successions, with a meadow eventually becoming a forest in its final stage after gradually being replaced with woody species; to achieve horizontal structure, vegetation must be arranged and interspersed in these different stages with some proximity, so that different wildlife species will be supported. Vertical structure is also essential to the construction of the habitat. It is the inclusion of layers of plants arranged in such a way that they provide an efficient level of diversity as well as purpose in the arrangement, so that a broader range of flora and fauna is supported. An example of vertical structure is a wildlife garden that includes a mulch layer, herbaceous layer, shrub layer, and tree layer. All layers can support various wildlife species as well as enhancing the diversity of plant life in a residential yard. A vertical structure also enhances essential natural processes such as maintaining soil temperature, protection from erosion, decomposition, replenishment of nutrients, and additions to the food web. Choice of plants Although some exotics may also be included, as discussed in the previous section, wild gardens usually feature mostly native species. Generally, these will be a part of the pre-existing natural ecology of an area, making them easier to grow than most exotic species. Choosing native plants comes with an array of benefits for both plant and animal diversity, especially the ability to support native insect and fungal populations. Ornamental plants on the market tend to lean toward "pest-free" plants, making it hard for native insects to adapt, and ultimately reducing their food supply. Decreases in insect populations due to excessive ornamental planting will discourage bird populations from inhabiting the particular area. Invasive species can always prove problematic in the garden due to the absence of natural predators and their ability to reproduce rapidly. Without any measures of control, invasive species can easily overtake native species in the garden. Addressing invasive plants can be done in a variety of ways; however, to ensure the least amount of damage to the surrounding ecosystem, this is best done by cutting down the plant. The debris from the invasive species can be piled and used as a home for smaller critters. In Australia, it has been found that invasive species such as lantana (Lantana camara) can also provide refuge for bird species such as the superb fairywren (Malurus cyaneus) and silvereye (Zosterops lateralis), in the absence of native plant equivalents. Careful thought about how to balance invasive species management with what is best for urban biodiversity is needed for the best outcome in your garden. A wildlife garden should be dense enough in native plant species that there is enough ground coverage for species varying in size to find cover (for hiding or shade, among other things) and shelter. Creating shade is also important in any wildlife garden. Leaf litter, or material that has fallen from a plant on to the ground, creates the perfect mulch and fertilization for a wildlife garden. Leaf litter can soak up excess water from heavy rainfall during the fall and winter time, contain that moisture and slowly release it into surrounding native plants to help them during the spring and summer time. It may also help to add native forbs, herbaceous flowering plants, to provide additional food for the wildlife.
In the US, one example of a native forb is the tapertip hawksbeard (Crepis acuminata), a yellow-flowering plant that is native and common in the west. The tapertip hawksbeard is low in abundance and is needed for sage-grouse to thrive. Complications with wildlife gardens In theory, with proper planning, a wildlife garden can successfully provide habitat for desired wildlife and can attract many pollinators, essentially boosting local species biodiversity. However, a wildlife garden can also become a habitat sink, instead accomplishing the opposite of its intended purpose. Many wildlife gardens will have native vegetation planted due to the benefits it offers to the local fauna, as well as its convenience to humans because of its easy maintenance. It is important to consider when planning these gardens that if there are no similar native plants neighboring the intended location, the garden may indeed attract desired wildlife, but its visibility may also attract unwanted predators. As the local species population grows due to the newly provided habitat, predators may take advantage of the sudden influx in prey populations, and might show up unexpectedly to strike. In cases such as this, the wildlife garden instead becomes a habitat sink; thus it is important to plan carefully and take precautions, while always expecting the unexpected. Benefits of a wildlife garden Beautifying your home or community, the satisfaction of creative effort, and the health benefits of spending time outdoors are just some of the benefits of wildlife gardening. Research has found that a positive feedback loop is built as wildlife choose to visit and enjoy the wildlife gardens in people's homes, leaving the owners with a sense of satisfaction, fulfillment, and affirmation. Living in the city can result in a loss of connection with nature, and reduce the desire to seek this interaction in our daily lives. This disconnect with nature can reduce the empathy and care we have for species other than ourselves, as we cannot see our impacts on them if we do not interact with them. Wildlife gardening can enhance urban biodiversity as well as connection to nature. If done in large enough proportions, wildlife gardens can form wildlife corridors. As urban biodiversity continues to decline, it has been said that wildlife gardens will need to become the new 'nature': gardening has taken on a role that transcends the needs of the gardener, playing a major part in sustaining the country's wildlife, and this allows the owners of wildlife gardens to truly make a difference. Social and human well-being benefits Wildlife gardeners report that wildlife gardening has provided them with benefits such as a reduction in stress and anxiety, an improvement in overall mental well-being, the making of social connections, and a sense of accomplishment at witnessing their efforts prove successful once different species begin to interact with their gardens. There are also several known positive effects that come from interacting with nature, resulting in beneficial reactions from the human body. Immediate positive effects are increased physical activity and mental stimulation from the physical labor humans put into the gardens, but there are positive effects that happen internally within the human body as well.
Some examples include visual or olfactory contact with flora or any kind of nature, which stimulates the parasympathetic nervous system, and an association between increased attention and exposure to plant and animal diversity, implying a reduction in anxiety. National Wildlife Habitat Certification The U.S. National Wildlife Federation provides a Certified Wildlife Habitat program whose main goal is to certify homeowners that provide additional habitat for wildlife residing in urban areas dominated by the human population. In order to be a part of the program, one must first fill out the certification application that the National Wildlife Federation has created. The application form includes a check-list that homeowners must complete as each element becomes available to wildlife in their wildlife garden. There are five key components on the check-list: sustainable garden practices (such as being without harmful pesticides or fertilizers and practicing techniques such as composting), sources of food, sources of water, places to take cover/hide, and space to raise potential offspring. It is important as well, when considering types of food to include, to consider those from categories such as seeds from flowers or trees, nectar, twigs, fruit such as berries, pollen, and sap. There are additional specifications for each property depending on the size of the yard and the region/area that the home is in. Homeowners' associations have also been working towards aiding the increase of biodiversity, specifically of plant and bird species, and encouraging participants and other homeowners to do so. Residential wildlife gardens can help strengthen connections between humans and the environment, both its abiotic and biotic features. Wildlife gardens are very necessary to restoration efforts, and with more effort and collaborative work they can be even more effective as an urban footprint that helps offset the negative environmental effects of urban development. The National Wildlife Federation goes far beyond certifying homeowners' yards, also certifying balconies (in apartments, for example), workplaces (near or in the buildings), schools (class gardens or rooftops), farms, and community gardens. In the Netherlands Wildlife gardens in the Netherlands are called "heemtuinen". The first was created in 1925: Thijsse's Hof (Garden of Thijsse) in Bloemendaal, near Haarlem. It was given to Jac. P. Thijsse on the occasion of his 60th birthday, and still exists today. The garden gives a display of about 800 plants native to the dune region of South Kennemerland, in which the garden is situated. It is said to be one of the oldest wildlife gardens of its sort in the world. Nowadays, some 25 wildlife gardens exist in the Netherlands. See also List of garden types Backyard Wildlife Habitat Butterfly gardening Climate-friendly gardening Native plant gardening Natural landscaping Permaculture Wilderness (garden history) References External links National Wildlife Federation: Garden for wildlife (USA) Wild Ones: Native Plants, Natural Landscapes (USA) Wild about Gardens (UK) In-depth guide to all aspects of wildlife gardening (UK) Wildscaping (California) Backyards for Wildlife (Adelaide, Australia) Comprehensive wildlife gardening guide for US, Canada and UK Gardens for Wildlife Victoria (Australia) Types of garden Organic gardening Conservation projects Ecological restoration Habitats Wildlife
Wildlife garden
Chemistry,Engineering,Biology
3,200
12,590,397
https://en.wikipedia.org/wiki/RD1
RD1 or 0140+326 RD1 is a distant galaxy that once held the title of most distant known galaxy. RD1 was discovered in March 1998 at z = 5.34 and was the first object found to exceed redshift 5. It bested the previous recordholders, a pair of galaxies at z = 4.92 lensed by the galaxy cluster CL 1358+62 (CL 1358+62 G1 and CL 1358+62 G2). It was the most distant object known for a few months in 1998, until BR1202-0725 LAE was discovered at z = 5.64. Distance measurements The "distance" of a faraway galaxy depends on the chosen distance measurement. With a redshift of 5.34, light from this galaxy is estimated to have taken around 12.5 billion years to reach us. But since this galaxy is receding from Earth, the present comoving distance is estimated to be around 26 billion light-years. References Galaxies Triangulum
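These figures can be reproduced approximately with a standard cosmology library; a minimal Python sketch using astropy's Planck 2018 parameters (the exact numbers depend on the cosmological parameters assumed, which the article does not state):

```python
from astropy.cosmology import Planck18  # Planck 2018 flat Lambda-CDM parameters

z = 5.34  # redshift of RD1

# Light-travel (lookback) time: roughly 12.6 Gyr with these parameters,
# consistent with the article's "around 12.5 billion years".
print(Planck18.lookback_time(z))

# Present-day comoving distance: roughly 8,000 Mpc, i.e. ~26 billion light-years.
print(Planck18.comoving_distance(z).to("lyr"))
```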
RD1
Astronomy
217
7,891,226
https://en.wikipedia.org/wiki/He%20Fell%20into%20a%20Dark%20Hole
"He Fell into a Dark Hole" is a science fiction short story by American writer Jerry Pournelle. Set in his CoDominium future alternative history, it was originally published in the magazine Analog Science Fiction and Fact issue of March 1973. The story was reprinted in Warrior: There will be War, Volume V, edited by Pournelle and John F. Carr. The story tells of a CoDominium spaceship sent on a search and rescue mission. However, the mission becomes complicated due to a natural phenomenon. Background The short story is set in the late 21st century. In Pournelle's fictional milieu, this is the era of the CoDominium. The CD is an alliance between the United States of America and the Union of Soviet Socialist Republics, who jointly control Earth and an interstellar empire. To keep their rule over the world, the CoDominium routinely suppresses all scientific research and development. Scientists are censored, spied upon, and can even be deported off Earth. Interstellar travel is possible thanks to the Alderson Drive. The Drive allows ships to jump to star systems thanks to a fifth force. Stars generate this force, and a pathway for instantaneous travel can be created between stars. However, like many science fiction jump drives, these jumps are limited so a ship can only jump from an Alderson Point, a certain location in space. Ships can only travel from and to these points. Plot summary On Ceres, Bartholomew Ramsey, captain of the CDSS Daniel Webster, meets secretly with Vice Admiral Sergei Lermontov. Five years earlier, Ramsey's son and wife Barbara Jean disappeared in space on a passenger liner using a new Alderson point. Several ships were sent to investigate, but they too vanished. Recently, Grand Senator Grant, Barbara Jean's father, had disappeared. He was on a frigate, captained by his nephew, that used the point from which ships never returned. Lermontov needs to find Grant, whose political support could prevent severe cuts in the navy's budget. However, no one knows why ships keep on disappearing. An illegal physicist named Marie Ward provides an explanation: a black hole. Due to restrictions, research on black holes has not been conducted, and few people are experts on the subject. Alderson jumps work by jumping to the closest star. If an undetected black hole were between two stars, a ship would arrive near the black hole instead. The missing ships could have been trapped by the black hole's gravity. The Daniel Webster, with Ward aboard, travels to the black hole and finds several of the missing ships. Many of the crews and passengers of the ships are alive, including the Grants and Ramsey's family. Barbara Jean married Commander James Harriman, who has led the survivors for five years. Ward develops a theory that could allow the Daniel Webster and the survivors to jump out of the system. However, the plan requires a spaceship to go into the black hole. Harriman volunteers and successfully pilots one of the crippled ships into the black hole. The theory works, and the survivors escape to the nearest star. Continuity Lermontov and the Grants make other appearances in the Falkenberg's Legion books, playing important roles throughout the series. The story title is a reference to a variant of the nursery song "The Bear went over the Mountain." References External links CoDominium series Works originally published in Analog Science Fiction and Fact 1973 short stories Fictional empires Fiction about faster-than-light travel Fiction about black holes Fiction set on Ceres (dwarf planet)
He Fell into a Dark Hole
Physics
732
2,911,349
https://en.wikipedia.org/wiki/Central%20tolerance
In immunology, central tolerance (also known as negative selection) is the process of eliminating any developing T or B lymphocytes that are autoreactive, i.e. reactive to the body itself. Through elimination of autoreactive lymphocytes, tolerance ensures that the immune system does not attack self peptides. Lymphocyte maturation (and central tolerance) occurs in primary lymphoid organs such as the bone marrow and the thymus. In mammals, B cells mature in the bone marrow and T cells mature in the thymus. Central tolerance is not perfect, so peripheral tolerance exists as a secondary mechanism to ensure that T and B cells are not self-reactive once they leave primary lymphoid organs. Peripheral tolerance is distinct from central tolerance in that it occurs after developing immune cells exit the primary lymphoid organs (the thymus and bone marrow) and enter the periphery. Function Central tolerance is essential to proper immune cell functioning because it helps ensure that mature B cells and T cells do not recognize self-antigens as foreign microbes. More specifically, central tolerance is necessary because T cell receptors (TCRs) and B cell receptors (BCRs) are made by cells through random somatic rearrangement. This process, known as V(D)J recombination, is important because it increases receptor diversity, which increases the likelihood that B cells and T cells will have receptors for novel antigens. Junctional diversity occurs during recombination and serves to further increase the diversity of BCRs and TCRs. The production of random TCRs and BCRs is an important method of defense against microbes, owing to the high mutation rate of microbes. This process also plays an important role in promoting the survival of a species, because there will be a variety of receptor arrangements within a species – this enables a very high chance of at least one member of the species having receptors for a novel antigen. While the process of somatic recombination is essential to a successful immune defense, it can lead to autoreactivity. For example, lack of functional RAG1/2, enzymes necessary for somatic recombination, has been linked to development of immune cytopenias in which antibodies are produced against the patient's blood cells. Due to the nature of random receptor recombination, there will be some BCRs and TCRs produced that recognize self antigens as foreign. This is problematic, since these B and T cells would, if activated, mount an immune response against self if not killed or inactivated by central tolerance mechanisms. Therefore, without central tolerance, the immune system could attack self, which is not sustainable and could result in an autoimmune disorder. Mechanism The result of central tolerance is a population of lymphocytes that do not mount an immune response towards self-antigens. These cells use their TCR or BCR specificity to recognize foreign antigens, in order to play their specific roles in the immune reaction against those antigens. In this way, the mechanisms of central tolerance ensure that lymphocytes that would recognise self-antigens in a way that could endanger the host are not released into the periphery. It is of note that T cells, despite tolerance mechanisms, are at least to some extent self-reactive. TCRs of conventional T cells must be able to recognize parts of major histocompatibility complex (MHC) molecules (MHC class I in the case of CD8+ T cells or MHC class II in the case of CD4+ T cells) to create a proper interaction with the antigen-presenting cell.
Furthermore, TCRs of regulatory T cells (Treg cells) are directly reactive towards self-antigens (although their self-reactivity is not very strong) and use this autoreactivity to regulate immune reactions by suppressing the immune system when it should not be active. Importantly, lymphocytes can only develop tolerance towards antigens that are present in the bone marrow (for B cells) and thymus (for T cells). T cell T cell progenitors (also called thymocytes) are created in the bone marrow and then migrate to the thymus, where they continue their development. During this development, the thymocytes perform V(D)J recombination; some of the developing T cell clones produce a TCR that is completely nonfunctional (unable to bind peptide-MHC complexes) and some produce a TCR that is self-reactive and could therefore promote autoimmunity. These "problematic" clones are therefore removed from the pool of T cells by specific mechanisms. First, during "positive selection" the thymocytes are tested for whether their TCR works properly, and those with nonfunctional TCRs are removed by apoptosis. The mechanism has its name because it selects for survival only those thymocytes whose TCRs do interact with peptide-MHC complexes on antigen presenting cells in the thymus. During the late stage of positive selection, another process called "MHC restriction" (or lineage commitment) takes place. In this process the thymocytes whose TCRs recognize MHCI (MHC class I) molecules become CD4- CD8+ and thymocytes whose TCRs recognize MHCII (MHC class II) become CD4+ CD8-. Subsequently, the positively selected thymocytes go through "negative selection", which tests the thymocytes for self-reactivity. The cells that are strongly self-reactive (and therefore prone to attacking the host cells) are removed by apoptosis. Thymocytes that are still self-reactive, but only slightly, develop into T regulatory (Treg) cells. Thymocytes that are not self-reactive become mature naïve T cells. Both the Treg and mature naïve T cells subsequently migrate to the secondary lymphoid organs. The negative selection has its name because it selects for survival only those thymocytes whose TCRs do not interact (or interact only slightly) with peptide-MHC complexes on antigen presenting cells in the thymus. Two other terms, recessive and dominant tolerance, are also important regarding T cell central tolerance. Both terms refer to two possible ways of establishing tolerance towards a particular antigen (typically a self antigen). "Recessive tolerance" means that the antigen is tolerated via deletion of those T cells that would facilitate an immune response against the antigen (deletion of autoreactive cells in negative selection). "Dominant tolerance" means that the T cell clones specific for the antigen are deviated into Treg cells and therefore suppress the immune response against the antigen (Treg selection during negative selection). Steps of T cell tolerance Development of T cell progenitors T cell precursors originate from bone marrow (BM). The population of the earliest hematopoietic progenitors does not bear markers of differentiated cells (for that reason they are called Lin-, "lineage negative") but expresses molecules such as SCA1 (stem cell antigen) and KIT (receptor for stem cell factor SCF). Based on these markers the cells are called LSKs (Lineage-SCA1-KIT).
This population can be further divided, based on expression of markers such as CD150 and FMS-related tyrosine kinase 3 (FLT3), into CD150+ FLT3- hematopoietic stem cells (HSCs) and CD150- FLT3low multipotent progenitors (MPPs). The HSCs are "true hematopoietic stem cells" because they have the ability of self-renewal (generating new HSCs) and also have the potential to differentiate into all blood cell types. The direct descendants of HSCs are the more mature multipotent progenitors (MPPs), which proliferate extensively and can differentiate into all blood cell types but are not capable of self-renewal (they do not have the ability to indefinitely generate new MPPs, and therefore HSCs are needed for the generation of new MPPs). Some of the MPPs further upregulate expression of FLT3 (becoming CD150- FLT3high) and start to upregulate genes specific for the lymphoid lineage (for example Rag1), but remain Lin-. These progenitors (which still belong to the LSK cells) consist of two similar populations termed lymphoid-primed MPPs (LMPPs) and early lymphoid progenitors (ELPs). The LMPPs/ELPs subsequently give rise to common lymphoid progenitors (CLPs). These cells (FLT3high LIN- KITlow) do not belong to the LSK pool; they are more mature and more prone towards the lymphoid lineage, meaning that under normal circumstances they will ultimately give rise to T or B cells or other lymphocytes (NK cells). But since they are only progenitors, their cell fate is not strictly predetermined and they still have the ability to differentiate into other lineages. Migration into the thymus Progenitors from bone marrow (BM), even the HSCs, have the ability to randomly exit the BM into the bloodstream and thus can be readily detected there. Therefore, after being generated, the T cell progenitors exit the BM and are randomly carried by the blood throughout the body. At the moment they reach postcapillary venules in the thymic cortico-medullary junction, they start slowing down and rolling on the endothelium, because all the progenitors, including LSK cells, express on their surface the glycoprotein PSGL1, which is a ligand for P-selectin, expressed on the thymic endothelium. But out of all the aforementioned T cell progenitors, only the LMPPs/ELPs and CLPs express the chemokine receptors CCR7 and CCR9 that enable them to enter the thymus. The thymic endothelium expresses the chemokines CCL19 and CCL21, which are ligands for CCR7, and CCL25, which is a ligand for CCR9. The final part of thymic entry is not yet fully understood. A suggested model is that receptor sensing of chemokines by the progenitors activates their integrins (suggested integrins are VLA-4 and LFA-1), which engage with ligands on the endothelium. This interaction stops the rolling, leads to cellular arrest and finally to transmigration along the chemokine gradient inside the thymus. Therefore, all the progenitors will be rolling on the thymic endothelium, but only the LMPPs/ELPs and CLPs will enter the thymus, because only they have the proper receptor equipment to do so. The mechanism is highly similar to the transmigration used by leukocytes to enter lymph nodes or inflamed tissues. Early thymic development From the moment LMPPs/ETPs and CLPs enter the thymus at the corticomedullary junction, they are referred to as thymus settling progenitors (TSPs). The TSPs proliferate extensively and start to migrate to the subcapsular zone of the thymus. It is not clear what signals drive the migration.
One possibility is that they migrate along chemokine gradients, using CXCR4, CCR7 and CCR9 receptors, but the migration can also be driven only by interactions of integrins, other cells and the ECM (extra-cellular matrix), without direct involvement of chemokines. As they migrate towards the subcapsular zone, the TSPs continue their differentiation, which is driven mainly by the thymic microenvironment. Out of the many signals the TSPs and other subsequent precursors receive from the microenvironment, Notch signalling is especially important in driving their differentiation fate. The precursors express the Notch1 receptor, which is activated by ligands present in the thymic tissue. The subsequent activation of the Notch pathway leads to gradual loss of the progenitors' capability to generate other cell lineages, and they ultimately become capable of creating only T cells, but this comes at the later stages of the differentiation. At the stage of TSPs, the progenitors still retain the capacity to create both lymphoid and myeloid cells. Given their capability to generate other cell lineages (mainly in vitro), it is even debated whether they can physiologically, at least partially, contribute to the generation of other cell types present in the thymus, mainly plasmacytoid dendritic cells (pDCs). But this has not yet been clearly proven. DN to DP stages In the next step, the TSPs give rise to early thymic precursors (ETPs), also called double negative 1 (DN1) cells. The term "double negative" refers to the fact that at this stage the precursors express neither CD4 nor CD8 coreceptors (sometimes they are even termed "triple negative" because they also do not express the CD3 complex). The DN stages can be distinguished by the expression of the surface markers CD44 and CD25, with the DN1 cells being CD44+ CD25-. Similarly to the TSPs, the DN1 cells are still capable of generating other cell types aside from T cells, such as B cells, NK cells, DCs and macrophages (lymphoid and myeloid lineages). But, due to Notch signalling, they start committing towards the T cell lineage by expressing transcription factors (TFs) such as GATA3 and TCF1. Subsequently, the DN1 cells differentiate into DN2 cells, which are CD44+ and CD25+. The DN2 stage can be further divided into two substages, DN2a and DN2b. The transition from the earlier DN2a substage to the later DN2b is also called commitment, because it is at this moment that the T cell precursors finally and completely lose their ability to generate other cell lineages; from that moment they can (even in vitro) only differentiate into T cells. After the commitment, at the DN2b substage, the precursors also start to produce the CD3 complex (the signalling component of the future TCR receptor complex). Next, the precursors continue their differentiation into the DN3 phase, in which they are CD44- CD25+. At this stage, the cells finally arrive at the subcapsular zone of the thymus, further proliferate and, most importantly, start to express Rag1 and Rag2 (recombinases of the V(D)J recombination of T or B cell receptors). Therefore, it is the DN3 stage at which the T cell precursors start to build their TCRs. It is also at this stage that the precursors decide whether they become an αβ or a γδ T cell. There are two possible models of how this decision step is made. The first possibility is that the cell fate is simply determined during the development of the precursor by a commitment similar to that seen in the development of other cell lineages.
It is also at this stage that the precursors decide whether they will become αβ or γδ T cells. There are two possible models of how this decision is made. The first possibility is that the cell fate is simply determined during the development of the precursor by a commitment similar to that seen in the development of other cell lineages: some T cell precursors commit to the γδ T cell fate and accordingly recombine a γδTCR at this step, while others commit to the αβ T cell fate and likewise recombine an αβTCR. The other, more generally accepted model is that the commitment is determined during TCR rearrangement and formation itself. Since V(D)J recombination is a step-by-step process, the precursors first recombine their genes to produce a γδTCR. At this point, the strength of the signal produced by the newly formed TCR is decisive. If the γδTCR is properly formed and receives a strong signal by interacting with ligands present in the thymus, the precursor continues its development into a γδ T cell through specific selection processes. If the precursor receives only a weak signal, γδTCR formation is abandoned and recombination towards an αβTCR begins. These precursors first recombine the TCRβ chain and combine it with an invariant surrogate TCRα chain and the CD3 complex formed in the previous stages to create the so-called pre-TCR. With this premature TCR they enter a process called β-selection. This is a checkpoint at which the progenitor must receive a positive signal from the pre-TCR in order to survive. The cells also need a signal through CXCR4 (whose ligand is CXCL12), which here serves not to direct migration but as a survival signal, together with Notch signalling. The β-selection step therefore tests whether the TCRβ chain is properly formed and functional. It can also be understood as a positive selection specific to the TCRβ chain alone (the TCRα chain has not yet been formed); a check for self-reactivity is not part of this step and comes later, especially in the medulla. Cells that do not create a functional γδTCR or pre-TCR, or that do not successfully pass β-selection, are removed by apoptosis. Cells that successfully pass β-selection continue their development into the DN4 stage, stop expressing CD25 (becoming CD44- CD25-) and begin migrating into the thymic cortex. Again, it is not completely clear what drives this migration. Probably the receptors CXCR4 and CCR9 on the DN4 cells drive migration along gradients of the chemokines CXCL12 and CCL25, although other models have been proposed in which movement into the cortex results mainly from the dynamics of the extensively proliferating cells or from fluid currents in the thymus, without direct involvement of chemokine-driven migration. The DN4 cells subsequently begin to express the CD8 and CD4 coreceptors, becoming CD8+ CD4+ DP cells ("double positive", because they express both coreceptors). Once in the thymic cortex, the DP cells finalize the rearrangement of the TCRα chain, which results in production of the complete αβTCR complex and marks the cells as ready to enter positive selection, which takes place in the thymic cortex. During positive selection, T cells are checked for their ability to bind peptide-MHC complexes with adequate affinity. If a T cell cannot bind the MHC class I or MHC class II complex, it does not receive survival signals and dies by apoptosis. T cell receptors with sufficient affinity for peptide-MHC complexes are selected for survival. Depending on whether a T cell binds MHC class I or class II, it becomes a CD8+ or a CD4+ T cell, respectively. Positive selection occurs in the thymic cortex with the help of thymic epithelial cells that display surface MHC class I and MHC class II molecules.
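Positive selection, together with the negative selection described next, acts like a band-pass filter on TCR affinity for self peptide-MHC: too little affinity means death by neglect, too much means deletion as dangerously self-reactive, and an intermediate signal permits survival, with the MHC class bound fixing the CD4/CD8 lineage. The sketch below is a deliberately simplified toy model; the numeric thresholds are arbitrary placeholders, not measured affinities.

```python
# Toy band-pass model of thymocyte selection.
#   affinity < LOW  -> death by neglect (fails positive selection)
#   affinity > HIGH -> clonal deletion  (fails negative selection)
#   otherwise       -> survival; the MHC class bound sets the lineage
LOW, HIGH = 0.2, 0.8  # arbitrary placeholder thresholds

def select_thymocyte(affinity, mhc_class):
    if affinity < LOW:
        return "apoptosis (death by neglect)"
    if affinity > HIGH:
        return "apoptosis (clonal deletion)"
    return "CD8+ T cell" if mhc_class == 1 else "CD4+ T cell"

print(select_thymocyte(0.10, 1))  # apoptosis (death by neglect)
print(select_thymocyte(0.50, 2))  # CD4+ T cell
print(select_thymocyte(0.95, 1))  # apoptosis (clonal deletion)
```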
During negative selection, T cells are tested for their affinity to self. If they bind a self peptide, they are signaled to apoptose (a process called clonal deletion). The thymic epithelial cells display self antigens to the T cells to test their affinity for self. The transcriptional regulators AIRE and Fezf2 play important roles in the expression of self tissue antigens on the thymic epithelial cells. Negative selection occurs at the cortico-medullary junction and in the thymic medulla. The T cells that do not bind self, but do recognize antigen/MHC complexes, and are either CD4+ or CD8+, migrate to secondary lymphoid organs as mature naïve T cells. Regulatory T cells (Treg cells) are another type of T cell that matures in the thymus. Selection of Treg cells occurs in the thymic medulla and is accompanied by transcription of FOXP3. Treg cells are important for regulating autoimmunity by suppressing the immune system when it should not be active. B cell Immature B cells in the bone marrow undergo negative selection when they bind self peptides. Properly functioning B cell receptors recognize non-self antigens, such as pathogen-associated molecular patterns (PAMPs). Main outcomes of autoreactivity of BCRs Apoptosis (clonal deletion) Receptor editing: the self-reactive B cell changes specificity by rearranging its genes and develops a new BCR that does not respond to self; this process gives the B cell a chance to edit its BCR before it is signaled to apoptose or becomes anergic. Induction of anergy (a state of non-reactivity) Genetic diseases Genetic defects in central tolerance can lead to autoimmunity. Autoimmune polyendocrinopathy syndrome type I is caused by mutations in the human gene AIRE. This leads to a lack of expression of peripheral antigens in the thymus, and hence a lack of negative selection against key peripheral proteins such as insulin. Multiple autoimmune symptoms result. History Central tolerance was first described by Ray Owen in 1945, when he noticed that dizygotic twin cattle did not produce antibodies when one of the twins was injected with the other's blood. His findings were confirmed by later experiments by Hasek and Billingham, and the results were explained by Burnet's clonal selection hypothesis. Burnet and Medawar won the Nobel Prize in 1960 for their work in explaining how immune tolerance works. See also Autoimmunity Immunology Peripheral tolerance References Immunology
Central tolerance
Biology
4,498
33,488,032
https://en.wikipedia.org/wiki/Graphitic%20carbon%20nitride
Graphitic carbon nitride (g-C3N4) is a family of carbon nitride compounds with a general formula close to C3N4 (albeit typically with non-zero amounts of hydrogen) and two major substructures, based on heptazine and poly(triazine imide) units, which, depending on reaction conditions, exhibit different degrees of condensation, properties and reactivities. Preparation Graphitic carbon nitride can be made by polymerization of cyanamide, dicyandiamide or melamine. The polymeric C3N4 structure formed first, melon, with pendant amino groups, is a highly ordered polymer. Further reaction leads to more condensed and less defective C3N4 species based on tri-s-triazine (C6N7) units as elementary building blocks. Graphitic carbon nitride can also be prepared by electrodeposition on a Si(100) substrate from a saturated acetone solution of cyanuric trichloride and melamine (1:1.5 ratio) at room temperature. Well-crystallized graphitic carbon nitride nanocrystallites can also be prepared via a benzene-thermal reaction between C3N3Cl3 and NaNH2 at 180–220 °C for 8–12 h. More recently, a synthesis of graphitic carbon nitrides by heating a mixture of melamine and uric acid at 400–600 °C in the presence of alumina has been reported. The alumina favors deposition of the graphitic carbon nitride layers on its exposed surface. This method can be regarded as a form of in situ chemical vapor deposition (CVD). Characterization Crystalline g-C3N4 can be characterized by identifying the triazine ring in the products using X-ray photoelectron spectroscopy (XPS), photoluminescence spectra and Fourier transform infrared (FTIR) spectroscopy (peaks at 800 cm−1, 1310 cm−1 and 1610 cm−1). Properties Due to their special semiconductor properties, carbon nitrides show unexpected catalytic activity for a variety of reactions, such as the activation of benzene, trimerization reactions, and the activation of carbon dioxide (artificial photosynthesis). Uses A commercial graphitic carbon nitride is available under the brand name Nicanite. In its micron-sized graphitic form, it can be used for tribological coatings, biocompatible medical coatings, chemically inert coatings, insulators and energy storage solutions. Graphitic carbon nitride has been reported as one of the best hydrogen storage materials. It can also be used as a support for catalytic nanoparticles. Areas of interest Due to their properties (primarily large, tuneable band gaps and efficient intercalation of salts), graphitic carbon nitrides are under research for a variety of applications: Photocatalysts Decomposition of water to H2 and O2 Degradation of pollutants Large band gap semiconductor Heterogeneous catalyst and support The significant resilience of carbon nitrides, combined with their surface and intralayer reactivities, makes them potentially useful catalysts relying on their labile protons and Lewis base functionalities. Modifications such as doping, protonation and molecular functionalisation can be exploited to improve selectivity and performance. Nanoparticle catalysts supported on gCN are under development for both proton exchange membrane fuel cells and water electrolyzers.
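For the photocatalysis applications above, the band gap determines the longest wavelength the material can absorb, via λ = hc/E. As a quick sanity check (a plain unit conversion, not a figure from the article), the ~2.7 eV band gap quoted below corresponds to an absorption edge of about 460 nm, i.e. blue visible light:

```python
# Band gap (eV) -> absorption edge wavelength (nm): lambda = h*c / E.
HC_EV_NM = 1239.84  # h*c in eV*nm

def absorption_edge_nm(band_gap_ev):
    return HC_EV_NM / band_gap_ev

print(round(absorption_edge_nm(2.7)))  # ~459 nm, inside the visible
                                       # range (~380-750 nm)
```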
Despite having some advantages, such as a moderate band gap (2.7 eV), absorption of visible light and flexibility, graphitic carbon nitride still has limitations for practical applications: low efficiency of visible-light utilization, a high recombination rate of the photogenerated charge carriers, low electrical conductivity and a small specific surface area (<10 m2 g−1). One of the most attractive approaches to addressing these shortcomings is doping graphitic carbon nitride with carbon nanomaterials such as carbon nanotubes. First, carbon nanotubes have a large specific surface area, so they provide more sites at which the charge carriers can separate, thereby decreasing the recombination rate of the charge carriers and increasing the activity of the reduction reaction. Second, carbon nanotubes show high electron-conducting ability, which means they can endow graphitic carbon nitride with a better visible-light response and efficient charge-carrier separation and transfer, thereby improving its electronic properties. Third, carbon nanotubes can be regarded as a kind of narrow-band-gap semiconductor material, also known as a photosensitizer, which can extend the range of light absorption of the semiconductor photocatalytic material, thereby enhancing its utilization of visible light. Energy storage materials Because Li can intercalate not only between the layers but also into intralayer voids, providing more sites than in graphite, gCN can store a large amount of Li, making it potentially useful for rechargeable batteries. See also Beta carbon nitride References Nitrides Inorganic carbon compounds Semiconductor materials Catalysts
Graphitic carbon nitride
Chemistry
1,052
47,040,821
https://en.wikipedia.org/wiki/Giuseppe%20Resnati
Giuseppe Resnati (born 26 August 1955) is an Italian chemist with interests in supramolecular chemistry and fluorine chemistry. He has a particular focus on self-assembly processes driven by halogen bonds, chalcogen bonds, and pnictogen bonds. His results on attractive non-covalent interactions in which atoms act as electrophiles, thanks to the anisotropic distribution of electron density typical of bonded atoms, prompted a systematic rationalization and categorization of many different weak bonds formed by many elements of the p- and d-blocks of the periodic table. Education and professional positions Resnati was born in Monza, Italy. He obtained his PhD in Industrial Chemistry at the University of Milan in 1988 with Prof. Carlo Scolastico, with a thesis on asymmetric synthesis via chiral sulfoxides. After a period of activity at the Italian National Research Council, in 2001 he became professor of chemistry for materials at the Politecnico di Milano. Research interests His research interests cover, or have covered, the following topics: enantioselective synthesis of mono- and polyfluorinated compounds and synthesis via perfluorinated reagents (perfluorinated oxaziridines as powerful yet selective oxidizing agents) fluorinated contrast agents for magnetic resonance imaging intermolecular forces and their use in crystal engineering, supramolecular chemistry, Borromean rings, and self-assembly processes in the design and preparation of functional materials halogen bonding and iodine chemistry; chalcogen bonding green chemistry Honors and awards van der Waals Prize 2021 (awarded in 2022 by the 2nd International Conference on Noncovalent Interactions, ICNI-2022) RSC-SCI Award Lectureship in the Chemical Sciences (awarded in 2010 by the Royal Society of Chemistry/Società Chimica Italiana) Intermolecular Interactions and Structural Aspects in Organic Chemistry Award (awarded in 2008 by the Società Chimica Italiana) Corrado Fuortes award (awarded in 1986 by the Istituto Lombardo Accademia di Scienze e Lettere) Invited professor at the University of Strasbourg (2012, Strasbourg, France) Invited professor at Nagoya University (2001, Nagoya, Japan) Invited professor at Paris-Sud University (1996, Châtenay-Malabry, France) Senior NATO Fellowship (1989-1990, Clemson University, SC, USA) Member of the Academia Europaea (since 2012) Member of the International Advisory Board of the Journal of Fluorine Chemistry (Elsevier, 2001-2023); of Crystals (MDPI, 2015 onwards); of Sustainable Chemistry & Pharmacy (Elsevier, 2017-2019) Topic Editor of Crystal Growth & Design (ACS) (2012 onwards) Member of the International Steering Committee of the: -International Conference on Noncovalent Interactions (ICNI) from ICNI-1 (Lisbon, Portugal; 2019) onwards; -International Symposium on Fluorine Chemistry (ISFC) from ISFC-15 (Vancouver, Canada; 1997) to ISFC-23 (Quebec, Canada; 2023); -International Meeting on Halogen Chemistry (HalChem) from HalChem-V (Cagliari, Italy; 2010) onwards; -European Symposium on Fluorine Chemistry (ESFC) from ESFC-11 (Bled, Slovenia; 1995) onwards Chair of the: -21st International Symposium on Fluorine Chemistry (23-28 August 2015, Como, Italy); -1st International Symposium on Halogen Bonding (ISXB-1) (18-22 June 2014, Porto Cesareo, Lecce, Italy) Member of the National Organizing Committee of the 6th International IUPAC Conference on Green Chemistry (4-8 September 2016, Venice, Italy); member of the International Scientific Committee of the 2nd Green & Sustainable Chemistry Conference, 14–17 May
2017, Berlin, Germany; member of the Committee of the Faraday Discussion "Halogen Bonding in Supramolecular and Solid State Chemistry", 10–12 July 2017, Ottawa, Canada Coordinator of the UNESCO UNITWIN Network "GREENOMIcS - Green Chemistry Excellence from the Baltic Sea to the Mediterranean Sea and Beyond" (2017 onwards) President of the Rotary Club Rozzano Parco Sud in the Rotarian year 2006-07; assistant to the Governor of District 2050 (south Lombardy) from 2009 to 2013; advisor to the District 2050 Governor for the founding of Rotary Club Morimondo Abbazia (2012), Club charter member (2013) and president (2015–16) Knight of Magistral Grace of the Sovereign Military Order of Malta (2022) Knight Commander of the Equestrian Order of the Holy Sepulchre of Jerusalem (2020) Knight of Merit with Silver Star of the Sacred Military Constantinian Order of Saint George (SMOCSG) (2020); Grand Officer of the Order of Saints Maurice and Lazarus (OSSML) (2022); Knight of the Order of Prince Danilo I (2022). References 1955 births People from Monza Living people University of Milan alumni Members of Academia Europaea Members of the European Academy of Sciences and Arts 20th-century Italian chemists 21st-century Italian chemists Organic chemists Academic staff of the Polytechnic University of Milan Knights of Malta Members of the Order of the Holy Sepulchre Grand Officers of the Order of Saints Maurice and Lazarus National Research Council (Italy) people
Giuseppe Resnati
Chemistry
1,123
7,482,576
https://en.wikipedia.org/wiki/Tyrosine%20%28data%20page%29
References Chemical data pages Chemical data pages cleanup
Tyrosine (data page)
Chemistry
10
2,737,674
https://en.wikipedia.org/wiki/Bullet-nose%20curve
In mathematics, a bullet-nose curve is a unicursal quartic curve with three inflection points, given by the equation \(a^2 y^2 - b^2 x^2 = x^2 y^2\). The bullet curve has three double points in the real projective plane, at \(x = 0\) and \(y = 0\), at \(x = 0\) and \(z = 0\), and at \(y = 0\) and \(z = 0\), and is therefore a unicursal (rational) curve of genus zero. If \(f(z) = \sum_{n=0}^{\infty} \binom{2n}{n} z^{2n+1} = z + 2z^3 + 6z^5 + 20z^7 + \cdots\), then \(y = \pm 2b \, f\!\left(\frac{x}{2a}\right)\) are the two branches of the bullet curve at the origin. References Plane curves Quartic curves
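The two-branch series form can be checked directly from the implicit equation; the short derivation below is added here for completeness and is not part of the original article text. Solving for \(y\) and expanding with the central binomial generating function \((1-4u)^{-1/2} = \sum_{n \ge 0} \binom{2n}{n} u^n\) gives:

```latex
a^2 y^2 - b^2 x^2 = x^2 y^2
  \;\Longrightarrow\; y^2 (a^2 - x^2) = b^2 x^2
  \;\Longrightarrow\; y = \pm \frac{b x}{\sqrt{a^2 - x^2}}.
% Expand with (1 - 4u)^{-1/2} = \sum_{n \ge 0} \binom{2n}{n} u^n,  u = x^2/(4a^2):
y = \pm \frac{b x}{a} \left(1 - \frac{x^2}{a^2}\right)^{-1/2}
  = \pm\, b \sum_{n=0}^{\infty} \binom{2n}{n} \frac{x^{2n+1}}{4^n a^{2n+1}}
  = \pm\, 2b \, f\!\left(\frac{x}{2a}\right).
```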
Bullet-nose curve
Mathematics
89
37,025,957
https://en.wikipedia.org/wiki/HD%2029573
HD 29573 is a binary star system in the constellation Eridanus. It has a combined apparent visual magnitude of 4.99, making it visible to the naked eye. Based upon its annual parallax shift, it is located 229 light years from the Sun. The system is moving further away from Earth with a heliocentric radial velocity of +3 km/s. The binary nature of this system was discovered through observations made with the Hipparcos spacecraft. The pair orbit each other with a period of 41 years and an eccentricity of 0.8. The magnitude 5.19 primary component has a stellar classification of A1, has 2.28 times the mass of the Sun, and is a suspected chemically peculiar star. The magnitude 7.22 secondary has 1.56 times the Sun's mass and a classification of F2. The system has a possible infrared excess due to circumstellar dust. References A-type main-sequence stars Eridanus (constellation) Durchmusterung objects Gliese and GJ objects 029573 021644 1483 Binary stars
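The quoted period and component masses are enough to estimate the size of the relative orbit via Kepler's third law in solar units, a³ = (M₁ + M₂)P². The article does not state a semi-major axis, so the value below is derived from the quoted figures rather than sourced:

```python
# Kepler's third law in solar units: a^3 [AU^3] = (M1 + M2) [Msun] * P^2 [yr^2].
# Inputs from the article: P = 41 yr, M1 = 2.28 Msun, M2 = 1.56 Msun.
P_yr = 41.0
M_total = 2.28 + 1.56  # solar masses

a_au = (M_total * P_yr**2) ** (1.0 / 3.0)
print(f"semi-major axis of the relative orbit ~ {a_au:.1f} AU")  # ~18.6 AU
```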
HD 29573
Astronomy
229