id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
14,705,292 | https://en.wikipedia.org/wiki/Floyd%27s%20triangle | Floyd's triangle is a triangular array of natural numbers used in computer science education. It is named after Robert Floyd. It is defined by filling the rows of the triangle with consecutive numbers, starting with a 1 in the top left corner:

 1
 2  3
 4  5  6
 7  8  9 10
11 12 13 14 15
The problem of writing a computer program to produce this triangle has been frequently used as an exercise or example for beginning computer programmers, covering the concepts of text formatting and simple loop constructs.
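As an illustration, here is a minimal Python version of that exercise (the row count and the column width are arbitrary choices); it also prints each row's sum, which equals n(n² + 1)/2 as noted under Properties below:

```python
def floyd_triangle(rows):
    """Print Floyd's triangle, one row per line, with each row's sum."""
    width = len(str(rows * (rows + 1) // 2))  # width of the largest entry
    n = 1
    for r in range(1, rows + 1):
        row = list(range(n, n + r))           # r consecutive numbers
        n += r
        line = " ".join(str(v).rjust(width) for v in row)
        print(f"{line}   (row sum = {sum(row)})")

floyd_triangle(5)
```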
Properties
The numbers along the left edge of the triangle are the lazy caterer's sequence and the numbers along the right edge are the triangular numbers. The nth row sums to n(n² + 1)/2, the magic constant of an n × n magic square.
Summing up the row sums in Floyd's triangle reveals the doubly triangular numbers, triangular numbers with an index that is triangular:
1 = 1 = T(T(1))
1 + 2 + 3 = 6 = T(T(2))
1 + 2 + 3 + 4 + 5 + 6 = 21 = T(T(3))
Each number in the triangle is smaller than the number directly below it by the index of its row.
See also
Pascal's triangle
References
External links
Floyd's triangle at Rosetta code
Triangles of numbers
Computer programming
Computer science education | Floyd's triangle | Mathematics,Technology,Engineering | 248 |
24,509,242 | https://en.wikipedia.org/wiki/Gymnopilus%20odini | Gymnopilus odini is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
odini
Fungi of North America
Taxa named by Elias Magnus Fries
Fungus species | Gymnopilus odini | Biology | 46 |
8,552,468 | https://en.wikipedia.org/wiki/Alanosine | Alanosine (also called SDX-102) is a substance that has been studied for the treatment of pancreatic cancer. It is an antimetabolite. It is one of a few experimental treatments for patients with pancreatic cancer when the main chemotherapeutic regimen of gemcitabine is no longer effective.
References
Antimetabolites
Amino acid derivatives
2,3-Diaminopropionic acids
Nitrosamines
Experimental cancer drugs
Toxic amino acids | Alanosine | Chemistry | 108 |
26,339,232 | https://en.wikipedia.org/wiki/Primordial%20narcissism | Psychiatrist Ernst Simmel first defined primordial narcissism in 1944. Simmel's fundamental thesis is that the most primitive stage of libidinal development is not the oral, but the gastro-intestinal one. Mouth and anus are merely to be considered as the terminal parts of this organic zone. Simmel terms the psychological condition of prenatal existence "primordial narcissism". It is the vegetative stage of the pre-ego, identical with the id. At this stage there is complete instinctual repose, manifested in unconsciousness. Satiation of the gastro-intestinal zone, the representative of the instinct of self-preservation, can bring back this complete instinctual repose, which, under pathological conditions, can become the aim of the instinct.
Contrary to Christopher Lasch, Bernard Stiegler argues in his book Acting Out that consumer capitalism is in fact destructive of what he calls primordial narcissism, without which it is not possible to extend love to others.
In other words, he is referring to the natural state of an infant as a fetus and in the first few days of its life, before it has learned that other people exist besides itself, and therefore cannot possibly be aware that they are human beings with feelings.
References
Narcissism
Psychoanalysis | Primordial narcissism | Biology | 275 |
20,743,925 | https://en.wikipedia.org/wiki/TRPP3 | Polycystic kidney disease 2-like 2 protein (PKD2L2) also known as transient receptor potential polycystic 5 (TRPP5) is a protein that in humans is encoded by the PKD2L2 gene.
TRPP5 is a member of the transient receptor potential channel family of proteins.
See also
TRPP
References
Further reading
Ion channels | TRPP3 | Chemistry | 78 |
29,949,355 | https://en.wikipedia.org/wiki/Phase%20telescope | A phase telescope or Bertrand lens is an optical device used in aligning the various optical components of a light microscope. In particular it allows observation of the back focal plane of the objective lens and its conjugated focal planes. The phase telescope/Bertrand lens is inserted into the microscope in place of an eyepiece to move the intermediate image plane to a point where it can be observed.
Phase telescopes are primarily used for aligning the optical components required for Köhler illumination and phase contrast microscopy. For Köhler illumination the light source and condenser diaphragm should appear in focus at the back focal plane of the objective lens. For phase contrast microscopy the phase ring (at the back focal plane of the objective) and the annulus (at the back focal plane of the condenser lens) should appear in focus and in alignment.
Bertrand lenses find use in creating interference figures and assisting in aligning a microscope to generate interference figures. The name Bertrand lens commemorates the French mineralogist Émile Bertrand (1844–1909), for whom the mineral bertrandite is also named.
References
Microscopy | Phase telescope | Chemistry | 219 |
73,947,499 | https://en.wikipedia.org/wiki/Megan%20Robertson%20%28scientist%29 | ''For the Australian former rowing coxswain, see Megan Robertson.’'
Megan L. Robertson is a professor of chemical and biomolecular engineering at the University of Houston, noted for her work in polymer chemistry toward achieving "green birth, green life, and green death": using recycling and biosourced oils and fatty acids to develop new elastomers intended to replace petrochemically sourced materials.
Education
Robertson earned her B.S. in Chemical Engineering at Washington University in St. Louis and her Ph.D. in Chemical Engineering at the University of California, Berkeley working under the direction of Prof. Nitash Balsara. After working at Rohm and Haas (now Dow Chemical) as a senior scientist for two years, she joined the group of Marc Hillmyer at the University of Minnesota as a postdoctoral research associate.
Career
In 2010 she joined the Department of Chemical and Biomolecular Engineering at the University of Houston, and in 2021 she became a full professor. She has received funding from the Department of Defense to investigate chitin-based bulletproof coatings and leads an interdisciplinary team funded through the Welch Foundation to transform polyolefin plastic waste into useful materials. Her most cited work, published in Science, is a review on the topic of plastics and recycling. She is an Associate Editor at the journal Macromolecules and is on the editorial advisory board of the European Polymer Journal. She is a member of the National Academies of Sciences, Engineering, and Medicine Board on Chemical Sciences and Technology.
Awards and recognition
2014 – NSF CAREER Award
2015 – Kavli Fellow of the National Academy of Sciences
2017 – PMSE Young Investigator
2018 – Sparks–Thomas award from the ACS Rubber Division
2022 – Fellow of the American Chemical Society
2023 – National Science Foundation Special Creativity Award
References
Living people
Polymer scientists and engineers
Women materials scientists and engineers
Bioplastics
Biomaterials
Fellows of the American Chemical Society
Year of birth missing (living people)
University of Houston faculty
UC Berkeley College of Engineering alumni
McKelvey School of Engineering alumni | Megan Robertson (scientist) | Physics,Chemistry,Materials_science,Technology,Biology | 425 |
77,892,541 | https://en.wikipedia.org/wiki/HD%2018742 | HD 18742 (proper name Ayeyarwady) is an 8th-magnitude subgiant star in the constellation of Eridanus. It is orbited by one confirmed exoplanet, the super-Jupiter HD 18742 b (proper name Bagan), and possibly by another Jovian planetary candidate (HD 18742 c).
Stellar characteristics
HD 18742 is a yellow subgiant star with a spectral type of G8/K0 IV. Its precise physical parameters vary from publication to publication, with calculated radii ranging between 4.08 and 6.34 solar radii, and mass estimates falling mostly between 1.36 and 1.73 solar masses, though a 2017 paper suggests a significantly higher value. Estimates of its effective temperature and luminosity also differ, with luminosity values of 13.2 or 20.7 times that of the Sun, and the star is thought to be about 2.3 to 2.5 billion years old. Seen from Earth, it has an apparent magnitude of 7.81, making it visible with binoculars and, with effort, by the naked eye under the darkest skies.
Nomenclature
In 2019, the Republic of the Union of Myanmar was assigned the right to give the HD 18742 system a proper name as part of the IAU100 NameExoWorlds project, organized to celebrate the hundredth anniversary of the International Astronomical Union (IAU), which granted the right to name an exoplanetary system to every state and territory in the world. Names were submitted and selected within Myanmar and then presented to the IAU to be officially recognized. On 17 December 2019, the IAU announced that HD 18742 and its confirmed planet, b, were named Ayeyarwady and Bagan, respectively.
Ayeyarwady was named after a river of the same name, the longest and most important river in Myanmar. Bagan refers to one of the ancient cities of the country located right beside the Ayeyarwady, which was listed as a UNESCO World Heritage Site in 2019.
Planetary system
In 2011, radial-velocity observations made at the W. M. Keck Observatory revealed the existence of one exoplanet around HD 18742. The planet, HD 18742 b, is thought to be a gas giant with a minimum mass of 3.362 Jupiter masses, orbiting its host star at a distance of 1.82 AU. Its orbit is nearly circular (i.e., it has a low eccentricity), similar to the planets in the Solar System.
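As a rough consistency check (not from the article), Kepler's third law in solar units, P² = a³/M, gives the orbital period implied by the 1.82 AU separation; the stellar mass below is an assumed mid-range value from the estimates quoted above:

```python
import math

a = 1.82        # semi-major axis of HD 18742 b, AU (from the article)
m_star = 1.5    # assumed stellar mass, solar masses (article range ~1.36-1.73)

period_years = math.sqrt(a**3 / m_star)  # Kepler's third law in solar units
print(f"P = {period_years:.2f} yr = {period_years * 365.25:.0f} days")
# With masses of 1.36-1.73 solar masses the period spans roughly 680-770 days,
# clearly distinct from the ~900-day candidate signal discussed below.
```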
Other than the Doppler shifts caused by HD 18742 b, the radial-velocity measurements used to discover the planet also exhibited an additional linear trend. Utilizing data gathered at the Keck Observatory between 2007 and 2015, Luhn et al. subtracted the effects of HD 18742 b from the radial-velocity curve, revealing a signal with a period of about 900 days, possibly caused by another, similar planet. Though the existence of such a planet would provide a far better match to the observed curve, the signal remains only a planetary candidate, since the planet would be in a 9:10 resonance with HD 18742 b, a non-physical resonance not previously observed. Follow-up observations are expected to reveal the true nature of the system.
See also
List of proper names of stars
List of proper names of exoplanets
List of stars in Eridanus
List of exoplanets discovered in 2011
References
External links
Ayeyarwady
Eridanus (constellation)
018742
013993
BD-21 00533
K-type subgiants
G-type subgiants
Planetary systems with one confirmed planet
J03001065-2048091
Planetary systems
Stars | HD 18742 | Astronomy | 739 |
72,525,295 | https://en.wikipedia.org/wiki/New%20Zealand%20Society%20of%20Industrial%20Designers | The New Zealand Society of Industrial Designers, known as NZSID, formed in 1959, was a professional body for designers in New Zealand. Its membership was multi-disciplinary, representing designers in all branches of design for industry—interior, product, furniture, graphic, packaging, exhibition, apparel, design education, design management... It was rebranded New Zealand Society of Designers (NZSD) and reconstituted on 28 May 1988 with a full-time office, the Designers Secretariat, from 1 August, and The Best New Zealand Graphic Design Awards scheme from 1 October.
The Society merged with the New Zealand Association of Interior Designers (NZAID) to form a new society, the Designers Institute of New Zealand (DINZ), in April 1991, which was incorporated on 23 August 1991. NZSID and NZAID were formally dissolved as incorporated societies on 11 August and 10 October 2000 respectively.
Regional groups
Three regional groups (branches) were established on 18 February 1967—two in the North Island, following the boundary of Auckland Province, and one in the South Island:
New Zealand Society of Industrial Designers, Northern Region (Auckland)
New Zealand Society of Industrial Designers, Central Region (Wellington)
New Zealand Society of Industrial Designers, Southern Region (Christchurch)
Officers
Presidents
1959–1959: Hugh Johansen (Provisional Chairman)
1959–1960: Robert Ellis
1960–1962: Peter Parsons †
1962–1963: Paul Beadle
1963–1965: Keith Mosheim
1965–1969: Douglas Heath
1969–1971: Noel Tritton
1971–1973: Don Haynes
1973–1977: Keverne Trevelyan
1977–1981: Michael Smythe
1981–1984: Peter Haythornthwaite
1984–1986: Monica Schaer-Vance
1986–1988: Rudi Schwarz
1988–1992: Mark Adams
† Unconfirmed
Vice-presidents, councillors, secretaries and treasurers (A-Z)
Some members serving various terms, 1959–1992, with indication of office (VP, C, S, T):
Mark Adams (C), Maurice Askew (VP, C), Paul Beadle (VP, C), Jan Beck (C), A. J. Bisley (C), Frank Carpay (C), Mark Cleverly (C), James Coe (VP, C), Kate Coolahan (C), John Crichton (C), Gary Couchman (C), K. Crook (C), John Densem (C), W. J. E. Dodds (C), Gray Dixon (C), Robert Drake (C, S), H. B. Ellis (C), B. Ellis (C), E. Fox (C), Hamish Keith (C), Stephen Green, Peder Hansen (C, S), Don Hatcher (C), Don Haynes (VP, C, S, T), K. Hawkins (C), Max Hailstone (C), Peter Haythornthwaite (C, S), Douglas Heath (VP, C), Gifford Jackson (C), J. Laird (C), Don Little (C, S), Gerry Luhman (C), Clive Luscombe (C), M. J. Mason (C), Stan Mauger (C), Lindsay Missen (C), Keith Mosheim (VP), Geoff Nees (VP, C), Michael Penck (C), Peter Parsons (C, S), G. Percy (C), Mark Pennington (C), Ben Petts (C), G. Preston (C), Don Ramage (C), P. Richings (C), Jolyon Saunders (VP, C, S, T), Monica Schaer (C), Rudi Schwarz (C), Ann Shanks (C), Graham Simpson (C), Michael Smythe (C, S), Richard T. Te One (C), Ray Thorburn (C), Keverne Trevelyan (C), Noel Tritton (VP, C), Bill Tunnicliffe (C), Rowland Walsh (C), Elly van de Wijdeven (C), Erwin T. Winkler (C), Tony Winter (C, S), John Woodruffe (C, T), B. Yap (C), Edward J. Zagorski (C)
Executive Director, Designers Secretariat
1988–1992: Michael Smythe
Publications
SID Scene (1970–). Nos. 1–. Christchurch: New Zealand Society of Industrial Designers Inc.; Designprint Press Ltd. Bi-monthly membership newsletter.
Designz (November 1973 – November/December 1985). Original series nos. 1–39. Auckland: New Zealand Society of Industrial Designers Inc. – via Auckland Libraries; National Library of New Zealand.
Designz: Magazine of the New Zealand Society of Designers Inc. (September 1988–December 1990). New series nos. 1–7. New Zealand Society of Designers Inc. ISSN: 1170-6686 – via Auckland Libraries; National Library of New Zealand.
References
External links
Designers Institute of New Zealand
Design institutions
New Zealand design
Learned societies of New Zealand
Organizations established in 1959
Arts organizations established in 1959 | New Zealand Society of Industrial Designers | Engineering | 1,079 |
14,638,086 | https://en.wikipedia.org/wiki/Ilya%20Zbarsky | Ilya Borisovich Zbarsky (8 November 1913 – 9 November 2007) was a Soviet and Russian biochemist who served as the head of Lenin's Mausoleum from 1956 to 1989. In 1989, owing to his age, he was appointed adviser to the directorate of the institute. He was the son of Boris Zbarsky, who helped mummify Lenin's body in 1924. Zbarsky was a member of the Russian Academy of Medical Sciences.
With Samuel Hutchinson, he was the author of the book Lenin's Embalmers.
He died on 9 November 2007, in Moscow.
References and sources
Ilya Borisovich Zbarsky biography
References
1913 births
2007 deaths
20th-century Russian chemists
People from Kamianets-Podilskyi
Academicians of the Russian Academy of Medical Sciences
Academicians of the USSR Academy of Medical Sciences
Moscow State University alumni
Recipients of the Order of Friendship of Peoples
Recipients of the Order of the Red Banner of Labour
Molecular biologists
Russian biochemists
Russian Jews
Soviet biochemists
Soviet Jews | Ilya Zbarsky | Chemistry | 210 |
1,724,209 | https://en.wikipedia.org/wiki/Sensor%20web | Sensor web is a type of sensor network that heavily utilizes the World Wide Web and is especially suited for environmental monitoring.
OGC's Sensor Web Enablement (SWE) framework defines a suite of web service interfaces and communication protocols abstracting from the heterogeneity of sensor (network) communication.
Definition
The term "sensor web" was first used by Kevin Delin of NASA in 1997,
to describe a novel wireless sensor network architecture where the individual pieces could act and coordinate as a whole. In this sense, the term describes a specific type of sensor network: an amorphous network of spatially distributed sensor platforms (pods) that wirelessly communicate with each other. This amorphous architecture is unique since it is both synchronous and router-free, making it distinct from the more typical TCP/IP-like network schemes. A pod as a physical platform for a sensor can be orbital or terrestrial, fixed or mobile and might even have real time accessibility via the Internet. Pod-to-pod communication is both omni-directional and bi-directional where each pod sends out collected data to every other pod in the network. Hence, the architecture allows every pod to know what is going on with every other pod throughout the sensor web at each measurement cycle. The individual pods (nodes) were all hardware equivalent and Delin's architecture did not require special gateways or routing to have each of the individual pieces communicate with one another or with an end user. Delin's definition of a sensor web was an autonomous, stand-alone, sensing entity – capable of interpreting and reacting to the data measured – that does not necessarily require the presence of the World Wide Web to function.
As a result, on-the-fly data fusion, such as false-positive identification and plume tracking, can occur within the sensor web itself and the system subsequently reacts as a coordinated, collective whole to the incoming data stream. For example, instead of having uncoordinated smoke detectors, a sensor web can react as a single, spatially dispersed, fire locator.
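A conceptual sketch of this behavior, assumed for illustration rather than taken from Delin's implementation: every pod rebroadcasts all the data it holds each synchronous cycle, so readings hop outward until every pod has the whole network's picture.

```python
def run_cycles(neighbors, readings, cycles):
    # neighbors: pod -> set of pods in radio range; readings: pod -> value
    known = {pod: {pod: readings[pod]} for pod in neighbors}
    for _ in range(cycles):
        outgoing = {pod: dict(known[pod]) for pod in neighbors}  # snapshot
        for pod, peers in neighbors.items():
            for peer in peers:
                known[peer].update(outgoing[pod])  # broadcast to each neighbor
    return known

# A 4-pod chain A-B-C-D: each pod only hears its direct neighbors.
neighbors = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
readings = {"A": 21.0, "B": 22.5, "C": 19.8, "D": 20.1}
# After three cycles, pod D holds all four readings despite hearing only C.
print(sorted(run_cycles(neighbors, readings, cycles=3)["D"].items()))
```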
The term "sensor web" has also morphed into sometimes being associated with an additional layer connecting sensors to the World Wide Web.
The Sensor Web Enablement (SWE) initiative of the Open Geospatial Consortium (OGC) defines service interfaces which enable an interoperable usage of sensor resources by enabling their discovery, access, tasking, as well as eventing and alerting. By defining standardized service interfaces, a sensor web based on SWE services hides the heterogeneity of an underlying sensor network, its communication details and various hardware components, from the applications built on top of it.
OGC's SWE initiative defines the term "sensor web" as an infrastructure enabling access to sensor networks and archived sensor data that can be discovered and accessed using standard protocols and application programming interfaces. Through this abstraction from sensor details, their usage in applications is facilitated.
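As an illustration of the service-interface idea, the sketch below builds a GetObservation request for a hypothetical Sensor Observation Service (SOS) endpoint; the URL and the offering/property identifiers are placeholders, while the parameter names follow the SOS 2.0 key-value-pair binding.

```python
from urllib.parse import urlencode

endpoint = "https://example.org/sos"  # placeholder endpoint, not a real service
params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "network-temperature",      # placeholder offering identifier
    "observedProperty": "air_temperature",  # placeholder property identifier
}
url = endpoint + "?" + urlencode(params)
print(url)
# urllib.request.urlopen(url) on a conforming SOS 2.0 server would return an
# Observations & Measurements (O&M) encoded document with the sensor data.
```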
Characteristics of Delin's sensor web architecture
Delin designed a sensor web as a web of interconnected pods. All pods in a sensor web are equivalent in hardware (there are no special "gateway" or "slave" pods). Nevertheless, there are additional functions that pods can perform besides participating in the general sensor web function. Any pod of a sensor web can act as a portal pod, providing users access to the sensor web (both input and output).
Access can be provided by RF modem, cell phone connections, laptop connections, or even an Internet Server. In some cases, a pod will have an attached removable memory unit (such as a USB stick or a laptop) that stores collected data.
The term of mother pod refers to the pod that contains the master clock of the synchronous sensor web system. The mother pod has no special hardware associated with it, its designation as a mother is merely based on the ID number associated with the pod. Often the mother pod serves as a primary portal point to the Internet, but this is done only for deployment convenience. Early papers referenced the mother pod as "a prime node" if it additionally contained special hardware for a particular type of input/output device (say an RF modem).
Because of the inherent hopping of data within a sensor web, a pod with no attached sensors can be deployed as a relay with the single purpose of facilitating communication between the other pods and to expand the communication range to a particular end-point (such as a mother pod). Sensors can be attached to relay pods at a later time and relays can also serve as portal pods.
Each pod usually contains:
one or more sensor leading to one or more data channel,
a processing unit such as a micro-controller or microprocessor,
a two-way communication component such as a radio and antenna (radio ranges are typically limited by government spectrum requirements; unlicensed bands will allow for communication of a few hundred yards in unobstructed areas, although line of sight is not a requirement),
an energy source such as a battery coupled with a solar cell,
a package to protect components against a sometimes harsh environment.
Each pod also typically requires a support such as a pole or tripod.
The number of pods may vary, with examples of sensor webs with 12 to 30 pods. The shape of a sensor web may impact its usefulness; for instance, a particular deployment made sure each pod was in range to communicate with at least two other pods. Sensor web measurement cycles have typically been between 30 seconds and 15 minutes for deployed systems thus far. Sensor webs consisting of pods have been deployed that have spanned miles and run continuously for years. Sensor webs have been fielded in harsh environments (including deserts, mountain snowpacks, and Antarctica) for the purposes of environmental science, and have also proved valuable in urban search and rescue and infrastructure protection. The technology is not only monitoring the environment but sometimes also controlling the environment by actuating devices.
See also
Internet of Things
Web of Things
Observations and Measurements
Open Geospatial Consortium
Semantic Sensor Web
Sensor Grid: Sensor Web Meets Grid Computing
SensorML
References
Further reading
Sensor Webs, K.A. Delin, S.P. Jackson, and R.R. Some NASA Tech Briefs 1999, 23, 90 open access publication.
The Sensor Web: Distributed Sensing for Collective Action, Kevin A. Delin Sensors Online July 2006, 18 open access publication.
The Sensor Web: A Distributed, Wireless Monitoring System, Kevin A. Delin Sensors Online April 2004, 21 open access publication.
New Generation Sensor Web Enablement, Arne Bröring, Johannes Echterhoff, Simon Jirka, Ingo Simonis, Thomas Everding, Christoph Stasch, Steve Liang, Rob Lemmens Sensors 2011, Volume 11, Number 3, 2652-2699 open access publication.
Environmental Studies with the Sensor Web: Principles and Practice, Kevin A. Delin, Shannon P. Jackson, David W. Johnson, Scott C. Burleigh, Richard R.Woodrow, J. Michael McAuley, James M. Dohm, Felipe Ip, Ty P.A. Ferré, Dale F. Rucker, Victor R. Baker Sensors 2005, 5, 103-117 open access publication.
OGC Sensor Web Enablement: Overview and High Level Architecture, Botts, Percivall, Reed, and Davidson OGC White Paper, July 2006 open access publication.
Open Sensor Web Architecture: Core Services, Xingchen Chu, Tom Kobialka, Bohdan Durnota, and Rajkumar Buyya, Proceedings of the 4th International Conference on Intelligent Sensing and Information Processing (ICISIP 2006, IEEE Press, Piscataway, New Jersey, USA, pp. 98–103), Dec. 15-18, 2006, Bangalore, India. open access publication.
A SensorWeb Middleware with Stateful Services for Heterogeneous Sensor Networks,Tom Kobialka, Rajkumar Buyya, Christopher Leckie, and Rao Kotagiri, Proceedings of the 3rd International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP 2007, IEEE Press, Piscataway, New Jersey, USA), Dec. 3-6, 2007, Melbourne, Australia. open access publication.
External links
SensorWare Systems - The company spun out of NASA to commercialize the sensor web technology.
Sensor Web Alliance – An organization that is developing a collaborative research platform called the Sensor Web Alliance (SWA). The aim is to pool resources in the SWA, coordinate research and allow participating organisations to share IP, which will spread risk and lower the cost of entry.
SenseWeb Project – A Microsoft Research project that lets users visualize and query real-time data using a geographical interface such as Windows Live Local and allows data owners to easily publish their live data using a web service interface.
GeoSensor Web Lab, University of Calgary – A university research lab that is developing a GIS infrastructure for the Sensor Web and its applications. Several sensor web applications have been developed and deployed for environmental and agricultural applications. Project information, publications, and demo videos can be found on this site.
52°North – An open partnership organization that develops interoperable web services and data encoding models, which constitute the technical building blocks of Spatial Data Infrastructures (SDIs).
SWSL at the Institute for Geoinformatics (IFGI) of the University of Muenster – The Sensor web, Web-based geoprocessing, and Simulation Lab (SWSL) is a university lab working on the building blocks of the geosensor web to make all different kinds of sensors discoverable, accessible and taskable in an interoperable way.
OGC SWE – Since 2002, the Open Geospatial Consortium (OGC) has had a focused Sensor Web Enablement (SWE) activity. From the OGC perspective, a sensor web refers to web accessible sensor networks and archived sensor data that can be discovered and accessed using standard protocols and application program interfaces (APIs).
Open SensorWeb Architecture Project – The project focuses on the development of service-oriented middleware for SensorWeb that integrates sensor networks and distributed computing environments such as computational grids.
Sensorweb Research Laboratory – A research lab at Georgia State University, developing sensor web systems and applying them to scientific and social applications, such as environment monitoring, smart environments, and smart grid applications.
SEPS Project – The Self-adaptive Earth Predictive System (SEPS) concept combines Earth System Models (ESM) and Earth observations (EO) into one system through standard Web services. This is a collaborative project that consists of scientists from the Center for Spatial Information Science and Systems (CSISS) of George Mason University, NASA GSFC, and UMBC.
Networks
Web Geographic information systems
Sensors | Sensor web | Technology,Engineering | 2,200 |
71,672,577 | https://en.wikipedia.org/wiki/Page%20Analysis%20and%20Ground%20Truth%20Elements | Page Analysis and Ground Truth Elements (PAGE) is an XML standard for encoding digitised documents. Comparable to ALTO, it allows the organisation and structure of a page and its contents to be described.
PAGE XML can be used to describe:
page content (regions, lines of text, words, glyphs, reading order, text content, ...)
the evaluation of the layout analysis (evaluation profiles, evaluation results, ...)
the cutting of the document image (cutting grids)
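To make the structure concrete, here is a minimal, hand-written PAGE document (placeholder file name, coordinates, and text) parsed with Python's standard library; the namespace shown is the published 2013-07-15 version of the schema.

```python
import xml.etree.ElementTree as ET

PAGE_NS = "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"
sample = f"""<?xml version="1.0" encoding="UTF-8"?>
<PcGts xmlns="{PAGE_NS}">
  <Page imageFilename="scan_0001.png" imageWidth="1200" imageHeight="1800">
    <TextRegion id="r1">
      <Coords points="100,100 1100,100 1100,400 100,400"/>
      <TextLine id="r1l1">
        <Coords points="110,120 1090,120 1090,180 110,180"/>
        <TextEquiv><Unicode>First transcribed line.</Unicode></TextEquiv>
      </TextLine>
    </TextRegion>
  </Page>
</PcGts>"""

root = ET.fromstring(sample)
ns = {"pc": PAGE_NS}
# Walk every text line, printing its polygon outline and its transcription.
for line in root.iterfind(".//pc:TextLine", ns):
    coords = line.find("pc:Coords", ns).get("points")
    text = line.find("pc:TextEquiv/pc:Unicode", ns).text
    print(line.get("id"), coords, "->", text)
```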
The format is developed by the Pattern Recognition & Image Analysis Lab (PRIMA) at the University of Salford in Manchester.
It was designed to be used in conjunction with automatic segmentation and transcription techniques (OCR and HTR): indeed, PAGE aims to support each of the different steps in the processing chain for image document analysis (from image enhancement to layout analysis to OCR).
The PAGE XML schema is notably used as an export and import format by automatic transcription software such as eScriptorium and Transkribus. It is also an export format used by Kraken, a turnkey OCR system optimised for documents in historical and non-Latin scripts.
References
External links
Documentation
Encoding example
Documentation of the PAGE XML Format for Page Content in the OCR-D project, funded by Deutsche Forschungsgemeinschaft.
Documentation "Page Content - Ground Truth and Storage"
Documentation "Evaluation - Metadata, Profile and Results"
Documentation "Dewarping - Ground Truth and Storage"
XML-based standards
Optical character recognition
Handwriting recognition | Page Analysis and Ground Truth Elements | Technology | 317 |
3,784,078 | https://en.wikipedia.org/wiki/FireWire%20camera | FireWire cameras use the IEEE 1394 bus standard for the transmission of audio, video and control data. FireWire is Apple Computer's trademark for the IEEE 1394 standard.
FireWire cameras are available in the form of photo cameras and video cameras, which provide image and audio data. A special form of video cameras is used in the domains of industry, medicine, astronomy, microscopy and science. These special cameras do not provide audio data.
Structure
The basic structure of FireWire cameras is based on the following six modules:
Optics
FireWire cameras are based on CCD or CMOS chips. The light-sensitive area of these chips, as well as their pixels, is small. In the case of cameras with integrated optics, we can assume that the optics are adapted to these chips.
However, in the domains of professional, and semi-professional photography, as well as in the domain of special cameras, interchangeable optics are often used. In these cases, a system specialist has to adapt the optics and the chip to the application (see System integration). Besides normal lenses, such interchangeable lenses may be microscopes, endoscopes, telescopes, etc. With the exception of the standard C-mount and CS-mount, the mounts of interchangeable optics are company-specific.
Signal capture
Since the function of a FireWire camera depends upon electrical signals, the module "signal capture" transforms the incident light, as well as the incident sound into electrons. In the case of light, this process is performed by a CCD or CMOS chip. The transformation of the sound is performed by a microphone.
Digitization
The first step of the image's digitization results from the structure of a CCD or CMOS chip. It dissects the image into pixels. If a pixel has collected many photons, it creates a high voltage. Should there only be a few photons, a low voltage is created. "Voltage" is an analog value. Therefore, during the digitization's second step, the voltage has to be transformed into a digital value by an A/D converter. Now the raw digital image is available.
A microphone transforms the sound into a voltage. An A/D converter transforms these analog values into digital ones.
Signal enhancement
The creation of color is based on a color filter, which is located in front of the CCD or CMOS chip. It is red, green or blue and changes its color from pixel to pixel. Therefore, the filter is called a color filter array or, after its inventor, Bayer filter. Using these raw digital images, the module "signal enhancement" creates an image, which meets aesthetic requirements. The same is true for the audio data.
In the final step, the module compresses the image and audio data and outputs them - in the case of video cameras - as a DV data stream. In the case of photo cameras, single images may be output and, if applicable, voice comments as files.
The application domains of industry, medicine, astronomy, microscopy and science often use special monochrome cameras. They forgo any signal enhancement and thus output the digital image data in its raw state.
Some special models of color cameras are only capable of outputting raw digital image data. Such cameras are called ColorRAW or Bayer cameras. They are often used in industry, medicine, astronomy, microscopy, and science. In form of photo cameras, they are used by professional photographers. Semi-professional photo cameras often offer an optional RAW mode.
The enhancement of the raw digital data takes place outside the camera on a computer and therefore the user is able to adapt it to a particular application.
Interface
The first three modules are part of any digital camera. The interface is the module that characterizes the FireWire camera. It is based on the IEEE 1394 standard, defined by the organization "Institute of Electrical and Electronics Engineers". This standard defines a bus, which transmits:
time-critical data (for example, video) and
data whose integrity is of critical importance (for example, parameters or files).
It allows the simultaneous use of up to 63 different devices (cameras, scanners, video recorders, hard disks, DVD drives, etc.).
Other standards, called "protocols", define the behavior of these devices. FireWire cameras mostly use one of the following protocols:
AV/C AV/C stands for "Audio Video Control" and defines the behavior of DV devices, for example, video cameras and video recorders. It is a standard defined by the 1394 Trade Association. The Audio/Video Working Group is in charge of it.
DCAM DCAM stands for "1394-based Digital Camera Specification" and defines the behavior of cameras that output uncompressed image data without audio. It is a standard, defined by the 1394 Trade Association. The IIDC (Instrumentation and Industrial Control Working Group) is in charge of it.
IIDC IIDC is often used synonymously with DCAM.
SBP-2 SBP-2 stands for "Serial Bus Protocol 2" and defines the behavior of mass storage devices, such as hard disks. It is an ANSI standard maintained by NCITS.
Devices that use the same protocol are able to communicate with each other. A typical example is the connection of a video camera and a video recorder. Thus, in contrast to the USB bus, there is no need to use a controlling computer. If a computer is used, it has to be compatible with the protocols of the device with which it is to communicate (please cf. Exchanging data with computers).
Control
The controlling module coordinates the other ones. The user may specify its behavior by:
switches outside the camera,
the FireWire bus, using application software or
a hybrid of the first two cases.
Photo cameras
Professional and semi-professional photo cameras, and especially digital camera backs, offer FireWire interfaces to transfer image data and to control the camera.
The image data's transfer is based on the protocol SBP-2. In this mode, the camera behaves as an external hard disk and thus enables the simple exchange of image files with a computer (please cf. Exchanging data with computers).
To increase the work efficiency in a photo studio, additionally photo cameras and digital backs are controllable via the FireWire bus. Usually the camera manufacturer does not publish the protocol used in this mode. Therefore, camera control requires a specialized piece of software provided by the camera manufacturer, which mostly is available for Macintosh and Windows computers.
Video cameras
Although compatibility to the FireWire bus is only found in high-end photo cameras, it has usually been present in home-user level video cameras. Video cameras are mostly based on the protocol AV/C. It defines the flow of audio and video data, as well as the camera's control signals.
The majority of video cameras only provides the output of audio and video data via the FireWire bus ("DVout"). Additionally, some video cameras are able to record audio and video data ("DVout/DVin"). Video cameras exchange their data with computers and/or video recorders.
Special cameras
In the domains of industry, medicine, astronomy, microscopy and science FireWire cameras are often used not for aesthetic, but rather for analytical purposes. They output uncompressed image data, without audio. These cameras are based on the protocol DCAM (IIDC) or on company specific protocols.
Due to their field of application, their behavior is considerably different from photo cameras or video cameras:
Their case is small and built mainly from metal and do not follow aesthetic, but rather functional design constraints.
The vast majority of special cameras does not offer integrated optics, but a standardized lens mount called "C-mount" or "CS-mount". This standard is not only used by lenses, but also by microscopes, telescopes, endoscopes and other optical devices.
Recording aids, such as autofocus or image stabilization are not available.
Special cameras often utilize monochrome CCD or CMOS chips.
Special cameras often do not apply an infrared cut filter or optical low-pass filters, thereby avoiding their effect on the image.
Special cameras output image data streams and single images, which are captured using an external trigger signal. In this way, these cameras can be integrated into industrial processes.
Mass storage devices are not available since the images have to be analyzed more or less immediately by the computer connected to the camera.
The vast majority of special cameras is controlled by application software, installed on a computer. Therefore, the cameras do not have external switches.
Application software is rarely available off-the-shelf. It usually has to be adapted to the specific application. Therefore, camera manufacturers offer programming tools designed for their cameras. If a camera uses the standard protocol DCAM (IIDC), it can also be used with third-party software. A lot of industrial computers and embedded systems are compatible to the DCAM (IIDC) protocol (please cf. Structure / Interface and Exchanging data with computers).
In comparison to photo or video cameras, these special cameras are very complicated. However, it makes no sense to use them in an isolated manner. They are, like other sensors, only components of a bigger system (please cf. System integration).
Exchanging data with computers
FireWire cameras are able to exchange data with any other FireWire device, as long as both devices use the same protocol (please cf. Structure / Interface). Depending upon the specific camera, these data are:
Image and audio files (protocol: SBP-2)
Image and audio data flows (protocol: AV/C or DCAM (IIDC))
Parameters to control the camera (protocol: AV/C or DCAM (IIDC))
If the camera is to communicate with a computer, this computer has to have a FireWire interface and to use the camera's protocol. The old days of FireWire cameras were dominated by company-specific solutions. Some specialists offered interface boards and drivers which were accessible only by their own application software. Following this approach, the application software is in charge of the protocol. Since this solution utilizes the computing resources in a very efficient manner, it is still used in the context of highly specialized, industrial projects. This strategy often leads to problems when using other FireWire devices, such as hard disks. Open systems avoid this disadvantage.
Open systems are based on a layer model. The behavior of the single layers (interface board, low level driver, high level driver and API) follows the constraints of the respective operating system manufacturer. Application software is allowed to access operating system APIs, but never should access any level lower. In the context of FireWire cameras, the high level drivers are responsible for the protocol. The low level drivers and the interface boards put the definitions of the standard IEEE 1394 into effect. The advantage of this strategy is the simple realization of application software, which is independent of hardware and specific manufacturers.
Especially in the domains of photo cameras and special cameras hybrids between open and company specific systems are used. The interface boards and the low level drivers typically adhere to the standard, while the levels above are company specific.
The basic characteristic of open systems is not to use the APIs of the hardware manufacturers, but those of the operating system. For Apple and Microsoft the subject "image and sound" is of high importance. Accordingly, their APIs, QuickTime and DirectX, are very well known. However, in the public perception they are reduced to the reproduction of audio and video. Actually, they are powerful APIs that are also responsible for image acquisition.
Under Linux this API is called video4linux. It is less powerful than QuickTime and DirectX and therefore additional APIs exist besides video4linux:
Photo cameras Photo cameras usually use Linux' infrastructure for mass storage devices. One of the typical applications is digiKam.
Video cameras Video cameras are accessed by various APIs. The image to the right depicts the access of the video editing software Kino to the libavc1394 API. Kino also accesses other APIs which are not shown in the image to simplify matters.
Special cameras The most important API for special cameras is libdc1394. The image to the right depicts the access of the application software Coriander to this API. Coriander controls FireWire cameras that are based on the protocol DCAM (IIDC) and acquires their images.
In order to simplify the use of video4linux and the dedicated APIs, the meta API unicap has been developed. It covers their bits and pieces with the aid of a simple programming model.
System integration
Often FireWire cameras are only a cog in a bigger system. Typically, a system specialist uses a number of different components to solve a particular problem. There are two basic approaches to do this:
The problem at hand is interesting enough for a group of users. The typical indicator of this situation is the off-the-shelf availability of application software. Studio photography is an example.
The problem at hand is only of interest to a particular application. In such cases, there is typically no application software available off-the-shelf. Therefore, it has to be written by a system specialist. The gauging of a steel plate is an example.
Many aspects of system integration are not directly related to FireWire cameras. For example, illumination has a very strong influence on the quality of the acquired images. This holds true for both aesthetic and analytical applications.
However, in the context of the realization of application software, there is a special feature, which is typical for FireWire cameras. It is the availability of standardized protocols, such as AV/C, DCAM, IIDC and SBP-2 (please cf. Structure / Interface and Exchanging data with computers). Using these protocols, the software is written independently from any particular camera and manufacturer.
By leaving the realization of the protocol to the operating system, and by enabling access to a set of APIs, software can be developed independently from hardware. If, for instance, under Linux a piece of application software uses the API libdc1394 (please cf. Exchanging data with computers), it can access all FireWire cameras that use the protocol DCAM (IIDC). Using the API unicap additionally permits access to other video sources, such as frame grabbers.
See also
FireWire
Camera
Video camera
Digital Video
Optics
Lens
Microscope
Telescope
Endoscope
Lens mount
External links
1394 Trade Association
Complete list of Firewire cameras
Supplier overview
Full line of Firewire IEEE1394a and IEEE1394b cameras and peripherals
Imaging Solutions Group FireWire Cameras
FireWire video cameras - for industrial, scientific and medical applications
[Legend of the article's illustrations: camera categories (photo cameras, video cameras, special cameras); operating system APIs (QuickTime, DirectX, ActiveX); operating system APIs under Linux (video4linux, libavc1394, libdc1394, unicap); application software under Linux (ucview, digiKam, Kino, Coriander).]
Videography
Cameras
Film and video technology | FireWire camera | Technology | 3,059 |
74,836,769 | https://en.wikipedia.org/wiki/Julie%20Diani | Julie Diani is a French academic specialised in the characterization and simulation of polymeric materials. She is the CNRS Research Director at École Polytechnique’s Solid Mechanics Laboratory, and holder of the Arkema Design and Modeling of Innovative Materials Chair.
Education
Diani earned a B.S. in Applied Mathematics and her S.M. degree in Mechanical Engineering at the Pierre et Marie Curie University. She completed her doctoral degree in Materials Science and Engineering at the École Normale Supérieure de Cachan.
Career
She joined CNRS in 2000. From 2004 to 2006, she was a visiting researcher at the University of Colorado, Boulder.
Diani's most cited works include a review of the Mullins effect and a constitutive model for Shape-memory polymers.
Personal life
Diani is the daughter of two math teachers, and she practices judo and cycling.
Awards and recognition
2015 - Sparks–Thomas award from the ACS Rubber Division
References
Polymer scientists and engineers
Women materials scientists and engineers
Year of birth missing (living people)
Living people
Pierre and Marie Curie University alumni
École Normale Supérieure alumni
Research directors of the French National Centre for Scientific Research | Julie Diani | Chemistry,Materials_science,Technology | 237 |
66,381,155 | https://en.wikipedia.org/wiki/Clear%20Mobile | Clear Mobile is a mobile telephone network running as a mobile virtual network operator (MVNO) using Vodafone's Irish network. Vodafone owns Clear Mobile. Clear Mobile was launched on 14 January 2021.
Products and services
Since launch, Clear Mobile has offered one product, a SIM-only mobile contract. The package is post-paid and includes unlimited calls to Irish mobiles and landlines, unlimited texts to Irish mobiles, unlimited 4G data with a maximum download speed of 5 Mbit/s, and 10 GB of EU data.
Customer service
Clear Mobile has no customer service phone lines. All support is via social media and online channels.
References
Telecommunications
Irish companies established in 2021
Mobile virtual network operators
Mobile telecommunications networks | Clear Mobile | Technology | 150 |
52,373,783 | https://en.wikipedia.org/wiki/Biman%20Bagchi | Biman Bagchi is an Indian scientist currently serving as a SERB-DST National Science Chair Professor and Honorary Professor at the Solid State and Structural Chemistry Unit of the Indian Institute of Science. He is a theoretical physical chemist and biophysicist known for his research in the area of statistical mechanics; particularly in the study of phase transition and nucleation, solvation dynamics, mode-coupling theory of electrolyte transport, dynamics of biological macromolecules (proteins, DNA etc.), protein folding, enzyme kinetics, supercooled liquids and protein hydration layer. He is an elected fellow of the Indian National Science Academy, the Indian Academy of Sciences, The World Academy of Sciences and an International honorary member of the American Academy of Arts and Sciences. Along with several scientific articles, he has authored three books, (i) Molecular Relaxation in Liquids, (ii) Water in Biological and Chemical Processes: From Structure and Dynamics to Function, and (iii) Statistical Mechanics for Chemistry and Materials Science.
Biography
Bagchi was born in 1954 to Binay K. Bagchi, a school principal and his homemaker/part-time teacher wife, Abha, in Kolkata in the Indian state of West Bengal. He graduated in chemistry from Presidency College, Kolkata (present-day Presidency University) in 1974 and obtained a master's degree from Rajabazar Science College, Calcutta University in 1976. He earned a PhD at Brown University in 1980, working with Julian Gibbs and did his post-doctoral studies at James Franck Institute of the University of Chicago as a research associate. There he worked with renowned chemists such as David W. Oxtoby, Graham Fleming and Stuart Rice till 1983 when he shifted to the laboratory of Robert Zwanzig of the University of Maryland for a one-year stint. Bagchi returned to India in 1984 and joined Indian Institute of Science (IISc) at their Solid State and Structural Chemistry Unit as a lecturer and established his research group.
Research
In an academic career spanning more than three decades, Bagchi has travelled over a wide landscape of physical chemistry, chemical physics, and biophysical chemistry, and his contributions have often helped build up an area from its foundations. This was done by maintaining close collaboration with experimental research groups both in India and abroad. He often developed theories that used sophisticated theoretical approaches (such as mode-coupling theory) to extend traditional and established theories and methods (like Kramers' theory of barrier crossing dynamics, FRET, and electrochemistry) and explain emerging experimental and simulation results.
Professor Bagchi has published more than 480 articles that have received more than 24,000 citations. His work has been published in reputed journals such as Nature, PNAS, PRL, JACS, JPC and Chemical Reviews. He has also authored two well-known monographs, published by Oxford University Press (NY) [Molecular Relaxation in Liquids] and Cambridge University Press (UK) [Water in Biological and Chemical Processes: From Structure and Dynamics to Function], and a third major text on statistical mechanics published by Taylor & Francis/CRC Press. Bagchi has delivered lectures at national and international levels. He is also associated with a number of science journals as a member of their editorial boards. He has authored 22 major review articles that are partly pedagogical and have influenced generations of physical and theoretical chemists.
Some representative examples of his seminal contributions are discussed below:
(i) In the 1970s and 1980s, it was realized that a large number of ultrafast processes could show the usual dynamical characteristics of an activated reaction but occur in the absence of any activation barrier to the reactive motion. Prof. Bagchi developed the first and, to date, the most successful theory of barrierless chemical reactions. This theory explained how one can speak of a reaction rate even in the absence of a barrier.
(ii) Solvation dynamics of polar solutes in dipolar liquids (like water and ethanol) was a topic of huge contemporary interest from the mid-eighties to the late nineties. A continuum model of the solvent with a frequency-dependent dielectric function was developed by Bagchi that predicted a relaxation time, later called the longitudinal relaxation time, which is faster than the dielectric relaxation time of the solvent, thus providing a first-time explanation of the experimentally observed fast relaxation of the time-dependent solvation energy. However, the continuum model could not explain the ultrafast sub-100 fs solvation observed by Fleming et al. Bagchi explained this by developing a microscopic theory which included intermolecular correlations and also the contribution of the translational motions of solvent molecules.
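For a sense of the scale involved, the continuum-model prediction mentioned above is tau_L = (eps_inf / eps_s) * tau_D; plugging in approximate textbook parameters for water (assumed here, not taken from Bagchi's papers) shows solvation roughly an order of magnitude faster than dielectric relaxation.

```python
# Approximate room-temperature values for water (textbook estimates).
eps_s = 78.3    # static dielectric constant
eps_inf = 4.9   # high-frequency dielectric constant
tau_D = 8.3     # Debye dielectric relaxation time, picoseconds

tau_L = (eps_inf / eps_s) * tau_D  # longitudinal relaxation time
print(f"tau_L = {tau_L:.2f} ps (vs tau_D = {tau_D} ps)")  # ~0.5 ps
```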
(iii) The dielectric relaxation theories prior to mid-eighties considered primarily rotational modes. Bagchi and co-workers came up with a microscopic theory of frequency and wave vector dependent dielectric function which included both rotational and translational degrees of freedom. Translational modes were shown to play a hidden role in dielectric relaxation. Due to the presence of orientational correlations, the longitudinal and transverse dielectric functions exhibit vastly different relaxation times at finite wave vectors. This was indeed an important result because, in many dynamical processes, it is the finite wave vector response of the solvent that matters the most. A self-consistent theory was developed for the dielectric friction and dielectric relaxation. It was shown that the presence of translational contributions can make dielectric relaxation more Debye-like for cases where only rotational contributions give rise to a highly non-Debye form of dielectric relaxation.
Awards and honors
The Indian National Science Academy awarded Bagchi the INSA Medal for Young Scientists in 1986; the Academy would honor him again in 1990 with A. K. Bose Memorial Medal and with an elected fellowship in 1995. He received the Homi Bhabha fellowship in 1989 before the Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 1991. The same year, the Indian Academy of Sciences elected him as their fellow and he became an elected fellow of The World Academy of Sciences in 2004. In between, he received the G. D. Birla Award in 1997, TWAS Prize in 1998, the Alumni Excellence Award in Research of the Indian Institute of Science in 2002 and Goyal Prize in Chemistry in 2003. He was selected as J. C. Bose National Fellow in 2006 and the several award orations he has delivered include B. C. Laha Memorial Lecture of 2001, conducted by Indian Association for the Cultivation of Science and Mizushima-Raman Lecture of 2006, jointly organized by the Department of Science and Technology and Japan Society for the Promotion of Science. The Journal of Physical Chemistry published a festschrift on Bagchi by way of their August 2015 issue. He has been selected as the 2021 recipient of the Joel Henry Hildebrand Award in the Theoretical and Experimental Chemistry of Liquids, by the American Chemical Society (ACS). He was also selected for the prestigious Alexander von Humboldt Foundation’s Humboldt Science Research Award (2019) in recognition of his work in chemical sciences.
Selected bibliography
Books
Chapters
Notes
References
External links
Further reading
Recipients of the Shanti Swarup Bhatnagar Award in Chemical Science
1946 births
Fellows of the Indian Academy of Sciences
Fellows of the Indian National Science Academy
20th-century Indian chemists
Scientists from Kolkata
Bengali scientists
Indian theoretical chemists
TWAS fellows
Presidency University, Kolkata alumni
University of Calcutta alumni
Brown University alumni
University of Chicago alumni
Living people
TWAS laureates | Biman Bagchi | Chemistry | 1,525 |
11,552,627 | https://en.wikipedia.org/wiki/Hendersonia%20creberrima | Hendersonia creberrima is a fungal plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pleosporales
Fungus species | Hendersonia creberrima | Biology | 39 |
1,855,722 | https://en.wikipedia.org/wiki/Acousto-optic%20modulator | An acousto-optic modulator (AOM), also called a Bragg cell or an acousto-optic deflector (AOD), uses the acousto-optic effect to diffract and shift the frequency of light using sound waves (usually at radio-frequency). They are used in lasers for Q-switching, telecommunications for signal modulation, and in spectroscopy for frequency control. A piezoelectric transducer is attached to a material such as glass. An oscillating electric signal drives the transducer to vibrate, which creates sound waves in the material. These can be thought of as moving periodic planes of expansion and compression that change the index of refraction. Incoming light scatters (see Brillouin scattering) off the resulting periodic index modulation and interference occurs similar to Bragg diffraction. The interaction can be thought of as a three-wave mixing process resulting in sum-frequency generation or difference-frequency generation between phonons and photons.
Principles of operation
A typical AOM operates under the Bragg condition, where the incident light arrives at the Bragg angle, measured from the direction perpendicular to the sound wave's propagation.
Diffraction
When the incident light beam is at the Bragg angle, a diffraction pattern emerges where an order of diffracted beam occurs at each angle θ that satisfies:

sin θ = mλ / (2Λ)

Here, m = ..., −2, −1, 0, +1, +2, ... is the order of diffraction, λ is the wavelength of light in vacuum, and Λ is the wavelength of the sound. Note that the m = 0 order travels in the same direction as the incident beam.
Diffraction from a sinusoidal modulation in a thin crystal mostly results in the m = −1, 0 and +1 diffraction orders. Cascaded diffraction in medium-thickness crystals leads to higher orders of diffraction. In thick crystals with weak modulation, only phase-matched orders are diffracted; this is called Bragg diffraction. The angular deflection can range from 1 to 5000 beam widths (the number of resolvable spots). Consequently, the deflection is typically limited to tens of milliradians.
The angular separation between adjacent orders for Bragg diffraction is twice the Bragg angle, i.e. Δθ = 2θ_B ≈ λ/Λ for small angles.
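A numerical illustration with assumed, typical values (633 nm light and an 80 MHz acoustic wave in fused silica; none of these numbers is specified in the article):

```python
import math

wavelength = 633e-9   # optical wavelength in vacuum, m (HeNe laser, assumed)
v_sound = 5960.0      # longitudinal sound velocity in fused silica, m/s
f_sound = 80e6        # RF drive frequency, Hz (assumed)

Lambda = v_sound / f_sound                       # acoustic wavelength, m
theta_b = math.asin(wavelength / (2 * Lambda))   # Bragg angle, rad
print(f"acoustic wavelength = {Lambda * 1e6:.1f} um")
print(f"Bragg angle = {theta_b * 1e3:.2f} mrad; "
      f"order separation = {2 * theta_b * 1e3:.2f} mrad")
# The ~8.5 mrad separation is consistent with the tens-of-milliradians
# deflection limit mentioned above.
```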
Intensity
The amount of light diffracted by the sound wave depends on the intensity of the sound. Hence, the intensity of the sound can be used to modulate the intensity of the light in the diffracted beam. Typically, the intensity that is diffracted into the m = 1 order can be varied between 15% and 99% of the input light intensity. Likewise, the intensity of the m = 0 order can be varied between 0% and 80%.
An expression of the efficiency in the $m = 1$ order is:

$$\eta = \sin^2\!\left(\frac{\Delta\phi}{2}\right),$$

where the external phase excursion

$$\Delta\phi = \frac{2\pi}{\lambda}\sqrt{\frac{M\,L\,P_{\mathrm{RF}}}{2H}},$$

with $M$ the acousto-optic figure of merit of the material, $L$ and $H$ the length and height of the transducer, and $P_{\mathrm{RF}}$ the applied RF power.

To obtain the same efficiency for different wavelengths, the RF power in the AOM has to be proportional to the square of the wavelength of the optical beam. Note that this formula also tells us that, when we start at a high RF power $P_{\mathrm{RF}}$, it might be higher than the first peak in the sine-squared function, in which case, as we increase the power, we would settle at the second peak with a very high RF power, leading to overdriving the AOM and potential damage to the crystal or other components. To avoid this problem, one should always start with a very low RF power and slowly increase it to settle at the first peak.
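The overdriving advice can be made concrete with a short sketch. Here the material figure of merit, transducer geometry and wavelength are folded into a single hypothetical calibration constant `p_peak`, the RF power at the first efficiency maximum:

```python
import math

def eta(p_rf, p_peak):
    """First-order efficiency eta = sin^2((pi/2) * sqrt(P/P_peak)):
    the phase excursion grows as sqrt(P), so eta peaks at P = P_peak
    and then falls again -- i.e. the AOM is overdriven past the first peak."""
    return math.sin(0.5 * math.pi * math.sqrt(p_rf / p_peak)) ** 2

p_peak = 1.0  # watts; hypothetical device calibration, not from this article
for p in (0.1, 0.5, 1.0, 1.5, 2.0):
    print(f"P = {p:.1f} W -> eta = {eta(p, p_peak):.3f}")
```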
Note that there are two configurations that satisfy the Bragg condition: if the component of the incident beam's wavevector along the sound wave's propagation direction goes against the sound wave, the Bragg diffraction/scattering process will result in the maximum efficiency into the $m = +1$ order, which has a positive frequency shift; however, if the incident beam goes along the sound wave, the maximum diffraction efficiency into the $m = -1$ order is achieved, which has a negative frequency shift.
Frequency
One difference from Bragg diffraction is that the light is scattering from moving planes. A consequence of this is that the frequency of the diffracted beam in order $m$ will be Doppler-shifted by an amount equal to the frequency of the sound wave $F$:

$$f \rightarrow f + mF.$$
This frequency shift can be also understood by the fact that energy and momentum (of the photons and phonons) are conserved in the scattering process. A typical frequency shift varies from 27 MHz, for a less-expensive AOM, to 1 GHz, for a state-of-the-art commercial device. In some AOMs, two acoustic waves travel in opposite directions in the material, creating a standing wave. In this case the spectrum of the diffracted beam contains multiple frequency shifts, in any case integer multiples of the frequency of the sound wave.
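As a quick worked example of the shift (the optical and drive frequencies below are assumed, illustrative values):

```python
f_opt = 473.8e12   # ~633 nm HeNe laser frequency, Hz (assumed example)
f_rf  = 80e6       # acoustic drive frequency, Hz (assumed example)

for m in (-1, 0, 1):
    # Each diffracted order is shifted by m times the sound frequency.
    print(f"m = {m:+d}: f = {f_opt + m * f_rf:.6e} Hz")
```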
Phase
In addition, the phase of the diffracted beam will also be shifted by the phase of the sound wave. The phase can be changed by an arbitrary amount.
Polarization
Collinear transverse acoustic waves or perpendicular longitudinal waves can change the polarization. The acoustic waves induce a birefringent phase-shift, much like in a Pockels cell. The acousto-optic tunable filter, especially the dazzler, which can generate variable pulse shapes, is based on this principle.
Mode-locking
Acousto-optic modulators are much faster than typical mechanical devices such as tiltable mirrors. The time it takes an AOM to shift the exiting beam is roughly limited to the transit time of the sound wave across the beam (typically 5 to 100 ns). This is fast enough to create active mode locking in an ultrafast laser. When faster control is necessary, electro-optic modulators are used. However, these require very high voltages (e.g. 1–10 kV), whereas AOMs offer more deflection range, simple design, and low power consumption (less than 3 W).
Applications
Q-switching
Regenerative amplifiers
Cavity dumping
Modelocking
Laser Doppler vibrometer
Film scanner
Confocal microscopy
Synthetic array heterodyne detection
Hyperspectral Imaging
See also
Acousto-optics
Acousto-optic deflector
Acousto-optical spectrometer
Electro-optic modulator
Jeffree cell
Liquid crystal tunable filter
Photoelasticity
Pockels effect
References
External links
Olympus Microscopy Resource Center
Optical devices | Acousto-optic modulator | Materials_science,Engineering | 1,278 |
16,926,862 | https://en.wikipedia.org/wiki/Villa%20rustica | Villa rustica was the term used by the ancient Romans to denote a farmhouse or villa set in the countryside with an agricultural section, which applies to the vast majority of Roman villas. In some cases they were at the centre of a large agricultural estate, sometimes called a latifundium. The adjective rustica distinguished it from the much rarer suburban resort villa, or otium villa, built purely for leisure and luxury and typically located in the Bay of Naples. The villa rustica thus served both as a residence of the landowner and his family (and servants) and as a farm management centre. It would often comprise separate buildings to accommodate farm labourers as well as sheds and barns for animals and crops.
The villa rustica's design varied, but it usually consisted of two parts: the pars urbana (main house) and the pars rustica (farm area).
List of villae rusticae
Austria
, Altheim, Austria
Bosnia-Herzegovina
Mogorjelo
Bulgaria
Villa Armira, Ivaylovgrad
Italy
Villa Boscoreale
Villa dei Volusii, Fiano Romano
Portugal
Castelo da Lousa
Roman Villa of Rabaçal
Roman ruins of Quinta da Abicada
Centum Cellas
Villa of Torre de Palma
Villa of Cerro da Vila
Roman ruins of Pisões
Roman ruins of São Cucufate
Roman Ruins of Milreu (Estoi)
Roman Villa of Sendim
Turkey
Gökkale
Üçayaklı ruins
United Kingdom
Bignor Roman Villa
Borough Hill Roman villa
Brading Roman Villa
Chedworth Roman Villa
Crofton Roman Villa
Fishbourne Roman Palace
Gadebridge Park Roman Villa
Littlecote Roman Villa
Llantwit Major Roman Villa
Low Ham Roman Villa
Lullingstone Roman Villa
Newport Roman Villa
North Leigh Roman Villa
Piddington Roman Villa
Woodchester Roman Villa
France
Villa Rustica, Coustaty
Montmaurin
Germany
Baden-Württemberg
Villa Rustica, Baden-Baden-Haueneberstein, Roman settlement at Wohlfahrtsberg
Villa Rustica at Bondorf, Böblingen
, Konstanz
, Lörrach
Villa Rustica at Eigeltingen
Villa Rustica at Gaggenau-Bad Rotenfels / Oberweier
Villa urbana at Grenzach-Wyhlen (Museum Römervilla)
, Zollernalbkreis
Villa urbana at Heitersheim
, Sigmaringen
Villa Rustica at Hirschberg
, Sigmaringen
Villa Rustica at Karlsruhe-Durlach
Villa Rustica at Langenau
, Sigmaringen
, Heilbronn
Villa Rustica at Mühlacker
Villa Rustica at Nagold
Villa Rustica at Oberndorf-Bochingen
, Rems-Murr-Kreis
, Heilbronn
, Rhein-Neckar-Kreis
, Tuttlingen
, Heilbronn
Villa Rustica Bietigheim-Weilerlen at Bietigheim-Bissingen, Ludwigsburg
Bavaria
, Stadt München
Villa Rustica at Großberghofen, Dachau
, Donau-Ries
Villa Rustica at Hüssingen
Villa Rustica Kohlhunden, Ostallgäu
, Stadt Starnberg
(Naturpark Altmühltal)
Villa Rustica (Nassenfels), Eichstätt
, Freising
, Ingolstadt
Villa Rustica (Peiting), Weilheim-Schongau
, Oberallgäu
Hesse
Groß-Umstadt-Heubach,
, Odenwald
Rodau, Zwingenberg, "Kleine Weide"
Northrhine-Westphalia
Villae Rusticae at Eschweiler, Aachen
, Eschweiler, Aachen
Villae Rusticae near Hambach surface mine, Düren
at Sollig, Nettersheim-Roderath, Euskirchen
Rheinland-Palatine
, Bad Dürkheim-Ungstein
, Bad Kreuznach
, Eifelkreis Bitburg-Prüm
, Fließem, Eifelkreis Bitburg-Prüm
Villa Rustica at Sarresdorf (Gerolstein)
, Mainz-Bingen
Villa Rustica at Herschweiler-Pettersheim, Kusel
(Mosel), Trier-Saarburg
, Mainz-Bingen
, Trier-Saarburg
Saarland
Roman Villa Borg
Reinheim
Roman villa at Nennig
Serbia
Gornja Bukovica village near Valjevo, villa rustica from the 4th century AD
Switzerland
Aargau
Basel-Landschaft
Genf
Jura
Solothurn
Waadt
Zürich
Irgenhausen Castrum (built on the remains of a former villa rustica)
Villa in Wetzikon - Kempten
References
External links
Villa Rustica - open-air museum at Hechingen (Germany)
Architectural history | Villa rustica | Engineering | 992 |
72,429,750 | https://en.wikipedia.org/wiki/Laura%20I.%20Gomez | Laura I. Gómez is a computer scientist known for founding Atipica, a company whose recruiting software screens job candidates in a way that reduces bias.
Early life and education
Gómez was born in León, Guanajuato, México and moved to California when she was eight years old. She got her first software engineering internship at the age of seventeen, working at Hewlett-Packard after receiving a work permit. She earned a Bachelor of Human Development and Family Studies from the University of California, Berkeley and a Master of Latin American Studies from the University of California, San Diego.
Career
Gómez worked with several start-ups and big technology companies, including YouTube, Google, and Twitter. She was one of Twitter's early employees, and her work there centered on bringing Spanish into the user interface. Gómez has also discussed the use of social media as a way to practice while learning a new language.
Gómez was a founding member of Project Include, a non-profit led by Ellen Pao that advocates for inclusion in the technology field. Project Include funded Gómez's start-up, Atipica, an organization which applies artificial and human intelligence to sort job candidates in a manner that reduces bias. Over time, Atipica was backed by Kapor Capital, Precursor Ventures, and True Ventures. One of the perks provided by Atipica is paid time off for employees supporting a political cause. The funding Gómez raised for Atipica was the largest financing round for a Latinx founder in Silicon Valley. As of 2023, Gómez was working on Proyecto Solace, a mental health initiative for Latinx peoples.
Awards and honors
Gómez was recognized by the Department of State and former Secretary of State Hillary Clinton for her work in the TechWomen Program.
References
Living people
Computer scientists
University of California, Berkeley alumni
University of California, San Diego alumni
Year of birth missing (living people) | Laura I. Gomez | Technology | 391 |
46,824,914 | https://en.wikipedia.org/wiki/Furthering%20Asbestos%20Claims%20Transparency%20Act%20of%202015 | The Furthering Asbestos Claim Transparency (FACT) Act of 2015 (old bill number- , now Section 3 of ) is a bill introduced in the U.S. House of Representatives by Congressman Blake Farenthold that would require asbestos trusts in the United States to file quarterly reports about the payouts they make and personal information on the victims who receive them in a publicly accessible database. The legislation would also allow defendant corporations in asbestos cases to demand information from the trusts for any reason.
Purpose
The intent of the bill, according to The Wall Street Journal, is to bring "transparency" to a system that is susceptible to abuse. However, according to The New York Times, the bill would limit or greatly slow down the ability of trusts to award compensation to victims by instituting additional requirements for filing and reporting.
Mesothelioma and other asbestos-related diseases often appear decades after exposure to asbestos.
A Wall Street Journal investigative report found hundreds of alleged inconsistencies between claims filed with trusts and in-court cases. The report also cited many implausible claims, such as claims that children were exposed to asbestos while working in industrial settings.
The Government Accountability Office (GAO) studied asbestos trusts and concluded that the trusts operate without meaningful federal oversight. The GAO directly asked the asbestos trusts if their own audits uncovered fraud. In response, the trusts replied that they did not find any fraud among the more than $20 billion in claims paid out.
Background
1930s
In the 1930s corporations that manufactured asbestos knew that their product caused serious adverse health effects and even alarming rates of death among workers. Fearing the liability ramifications, the industry engaged in a large cover-up to hide the dangers to exposed workers. Although litigation by sick and injured victims began during this period, companies demanded confidentiality regarding settlement agreements. As a result, it took 45 years before the lethal nature of the product came to light.
1980s
In 1981, federal scientists published data showing that asbestos contributed to more cancer cases than any other workplace exposure. The Centers for Disease Control and Prevention estimates that over 3,000 Americans still die from asbestos-related diseases and cancers every year. Mesothelioma victims most often die 4–18 months after diagnosis. The United States has not banned asbestos, and it is still widely used in many common products.
1994 law
In 1994, Congress passed legislation, amending §524 of the Bankruptcy Code, to establish special trusts to compensate past and future asbestos victims while allowing companies seeking bankruptcy protection to remain operating and economically healthy. Unlike other companies that file for Chapter 11 bankruptcy protection, companies filing under this special section of the Bankruptcy Code need not be insolvent, but rather merely must be "named as a defendant in personal injury, wrongful death, or property-damage actions seeking recovery for damages allegedly caused by the presence of, or exposure to, asbestos or asbestos-containing products". This legislation allowed the asbestos defendant to continue to operate as a profitable company while providing for yet unknown victims. Before these bankruptcy amendments were passed "courts had to find inventive ways to include future claimants in the proceeding." "The goal of the trusts is to compensate present and future claimants, equitably and outside the court system, by managing the debtor company's assets assumed by the trust as part of the bankruptcy reorganization," while relieving the company of any future asbestos related liability.
Once a trust has been established, the company has, essentially, already conceded liability. Therefore, to make a claim against a trust, a victim need not prove liability as in a normal lawsuit, but rather must show evidence of the diagnosis of an asbestos related disease/cancer, evidence of the place of exposure, and medical documentation discussing the extent that asbestos played in the illness.
Procedural history
The FACT Act of 2015 was introduced in the U.S. House of Representatives by Rep. Blake Farenthold of Texas (TX-27) on January 26, 2015 and assigned to the House Judiciary Committee. A hearing on the FACT Act was held on February 4, 2015 by the United States House Judiciary Subcommittee on Regulatory Reform, Commercial and Antitrust Law. On May 14, the bill was voted out of the Judiciary Committee, 19-9, and was sent to be voted on by the full House of Representatives.
In December 2015, the FACT Act was added onto another U.S. House bill, H.R. 1927 (the Fairness in Class Action Litigation Act), and became Section 3 of H.R. 1927. The bill was renamed the "Fairness in Class Action Litigation and Furthering Asbestos Claim Transparency Act of 2016".
On January 8, 2016, the U.S. House of Representatives passed H.R. 1927 by a vote of 211 to 188. The vote was largely along party lines, with no Democrats voting for it and sixteen Republicans voting against it.
According to a Statement of Administration Policy, issued by the Office of Management and Budget on January 6, 2016, "The [Obama] Administration strongly opposes House passage of H.R. 1927 because it would impair the enforcement of important federal laws, constrain access to the courts, and needlessly threaten the privacy of asbestos victims." It continues, "if the president were presented with H.R.1927, his senior advisers would recommend that he veto the bill."
Similar versions of the FACT Act have passed the House of Representatives in previous Republican-controlled sessions, including in 2013, when the bill passed the House but was never voted on by the Senate.
Provisions
According to the Congressional Research Service, the FACT Act requires an asbestos trust to file with the court "quarterly reports, available on the public docket, which describe each demand the trust has received from a claimant and the basis for any payment made to that claimant."
In addition, the legislation "requires such reports, upon written request, and subject to payment (demanded at the option of the trust) for any reasonable cost incurred by it, to provide any information related to payment from, and demands for payment from, the trust to any party to any action in law or equity concerning liability for asbestos exposure."
Debate
Veterans
The FACT Act of 2015 would have disproportionately affected veterans suffering from asbestos related illnesses and cancers. "Veterans who received compensation from an asbestos trust could have their work histories, compensation amount and partial Social Security numbers posted in bulk on a court's public docket." While veterans make up only 8% of our population, 30% of mesothelioma victims were exposed to asbestos while serving the country.
According to the Veterans Administration, "veterans who served in any of the following occupations may have been exposed to asbestos: mining, milling, shipyard work, insulation work, demolition of old buildings, carpentry and construction, manufacturing and installation of products such as flooring and roofing."
In an op-ed written for The Hill, Blake Farenthold (TX-27), the bill's sponsor in the House of Representatives, argued that without it, asbestos trusts will run out of money and, as a result, will not be able to pay the veterans, first responders, workers and other Americans who will become sick in the future from asbestos exposure that occurred in the past. In response to Mr. Farenthold's op-ed, leadership from three major veterans organizations responded with an article titled "Farenthold has his facts wrong: The FACT Act hurts veterans." Representatives from the Military Order of the Purple Heart of the U.S.A. (MOPH), American Veterans (AMVETS), and the Association of the United States Navy (AUSN) explained that the FACT Act of 2015 "would be extremely detrimental to our members and all veterans who were exposed to asbestos while serving their country, in addition to their family members. Veterans disproportionately make up those who are dying and afflicted with mesothelioma and other asbestos-related illnesses and injuries."
According to Stars and Stripes, a newspaper that reports on matters affecting members of the United States Armed Forces, "at least 16 national veterans groups came out strongly against it prior to the vote, warning it would allow companies to delay the claims of terminally ill veterans while exposing their sensitive personal information to identity theft."
In January 2016, several veterans organizations have come out in opposition to the legislation, explaining (in an Op-Ed) "it will add significant time and delay in paying claims to our veterans and their families by putting burdensome and costly reporting requirements on trusts." Groups signed onto the op-ed include, Air Force Sergeants Association (AFSA), Air Force Women Officers Associated (AFWOA), American Veterans (AMVETS), Association of the United States Navy (AUSN), Commissioned Officers Association of the U.S. Public Health Service, Fleet Reserve Association (FRA), Jewish War Veterans of the USA (JWV), Marine Corps Reserve Association (MCRA), Military Officers Association of America (MOAA), Military Order of the Purple Heart (MOPH), National Association for Uniformed Services (NAUS), National Defense Council, Naval Enlisted Reserve Association (NERA), The Retired Enlisted Association (TREA), U.S. Coast Guard Chief Petty Officers Association, U.S. Army Warrant Officers Association, Vietnam Veterans of America (VVA).
In a letter to the Senate Judiciary Committee the veterans organizations called the FACT Act "a cynical ploy by the asbestos industry to avoid compensating its victims who are seeking justice in court."
Privacy
The FACT Act would require public disclosure of personal information of asbestos victims. Under the legislation, the only information shielded from the disclosure is "confidential medical records" and a victim's "full social security number." According to a 2013 piece published in The Hill's Congress Blog by Susan Vento, widow of Congressman Bruce Vento, and Judy van Ness, with the Asbestos Cancer Victims' Rights Campaign, information that may be requested and publicly disclosed includes the victim's name, address, date of birth, last four digits of their social security number, information about children and spouses, employment history, salary history, information about their medical condition, and personal finances.
Patient and consumer advocates have expressed concern that making this information easily accessible could leave sick and dying asbestos victims and their families, vulnerable to identity theft and other frauds. In a coalition letter to Congress, consumer advocates raised the possibility of "predators, con artists and unscrupulous businesses" scouring the public disclosures for asbestos victims personal information.
Speaking with reporters on a January 2016 conference call, Senator Chuck Schumer explained that requiring disclosure of their personal information online would deter "veterans from filing for the money they are owed but would also result in an egregious violation of their privacy and expose them to identity theft," and called the bill "offensive invasion of the privacy of those who defended this country."
HIPAA
HIPAA laws that protect patient privacy apply only to a distinct group of "covered entities" and do not apply to asbestos trusts. During one U.S. House Judiciary Committee hearing, Congressman Hank Johnson (D-GA) offered an amendment that would have required trusts to abide by HIPAA laws, but the measure failed 12-18.
Compliance costs
Trust representatives have explained, in letters to Congress, that the reporting and compliance requirements of the FACT Act would require large victim trusts to spend tens of thousands of additional hours every year to compile the lists and respond to all information requests allowed by this legislation. Funds used to fulfill these new requirements would come from the limited pool of money available to pay claims.
A study by RAND found that the asbestos trusts are already underfunded. They reported the median payment from asbestos trusts to victims is 25% of the value of the claim. Some payments are as low as 1.1 percent of the claim's value.
Several asbestos trusts have expressed strong opposition to this legislation, in part because of the burdensome administrative costs that will significantly reduce recoveries for future trust claimants.
Payment delays
Cancer patient rights advocates say the FACT Act will cause long delays in payment of trust claims. Elihu Inselbuch, an expert on trust management and partner at Caplin & Drysdale, explained that because trusts will be buried in additional paperwork, the FACT Act "would slow down or stop the process by which the trusts review and pay claims, such that many victims would die before receiving compensation, since victims of mesothelioma typically only live for 4 to 18 months after their diagnosis." In many cases, "the delays in trust payment will force dying plaintiffs, who are in desperate need of funds, to settle for lower amounts with solvent defendants.… Delay is a weapon for asbestos defendants."
Support and opposition
Supporters
This bill is sponsored by Republicans in the House of Representatives and Senate. The FACT Act is also supported by the United States Chamber of Commerce, which listed the bill as one of their top legislative priorities of the 114th legislative session. In their letter to Congress, the U.S. Chamber claims this legislation is needed to shine a light on asbestos victim trusts and prevent "double-dipping" by victims.
Supporters include:
60 Plus Association
Air Force Association, Department of Indiana
American Insurance Association
American Military Society
Arizona Chamber of Commerce & Industry
Arizona Manufacturers Council
BCA
Civil Justice Association of California
Coalition for Common Sense
Cost of Freedom, Indiana Chapter
Florida Chamber of Commerce
Florida Justice Reform Institute
Georgia Chamber of Commerce
Hamilton County Veterans
Illinois Chamber of Commerce
Lawsuit Reform Alliance of New York
Louisiana Association of Business and Industry
Michigan Chamber of Commerce
Military Officers Association of America, Indianapolis Chapter
Missing in America Project of Indiana
National Association of Manufacturers
National Association of Mutual Insurance Companies
National Black Chamber of Commerce
New Jersey Civil Justice Institute
North Carolina Chamber of Commerce
Pennsylvania Chamber of Business & Industry
Reserve Officers Association, Department of Indiana
Save Our Veterans
South Carolina Civil Justice Coalition
Taxpayers Protection Alliance
Texas Civil Justice League
Texas Coalition of Veterans Organizations
The Cost of Freedom, Inc. of Indiana
TLR
U.S. Chamber Institute for Legal Reform
U.S. Chamber of Commerce
Veteran Resource List
West Virginia Business & Industry Council
West Virginia Chamber
Wisconsin Manufacturers & Commerce
Opponents
Opponents of the legislation dispute the motivation of the law and the need to prevent double dipping. 'Double-dippers' are alleged to file multiple claims that may not be congruent with one another, and thus may be considered fraudulent. In testimony to Congress, asbestos trust expert Elihu Inselbuch explained that the idea that victims can double-dip is largely false: "When an asbestos victim recovers from each defendant whose product contributed to their disease, that victim is in no way 'double-dipping;' rather they are recovering a portion of their damages from each of the corporations who harmed them. In fact, each trust is responsible for and pays for only its own share of the damages." In 2011, the Government Accountability Office investigated possible frauds in the asbestos victim trusts and could not find one example of a fraudulent claim.
In October 2015, in a piece written for The Hill titled "Farenthold has his facts wrong: The FACT Act hurts veterans," representatives from the Military Order of the Purple Heart of the U.S.A. (MOPH), American Veterans (AMVETS), and the Association of the United States Navy (AUSN) explained their organizations' strong opposition to the "legislation due to its detrimental and disparate impact on men and women in uniform." They explained that the FACT Act of 2015 "would be extremely detrimental to our members and all veterans who were exposed to asbestos while serving their country, in addition to their family members. Veterans disproportionately make up those who are dying and afflicted with mesothelioma and other asbestos-related illnesses and injuries." In addition, according to Stars and Stripes, 16 veterans groups came out in strong opposition to the FACT Act.
Various groups that oppose the legislation include:
AFL-CIO
Alliance for Justice
AFSCME
American Association for Justice
American Veterans
Asbestos Disease Awareness Organization
Asbestos Cancer Victims' Rights Campaign
Association of the United States Navy
Center for Effective Government
Center for Justice & Democracy
Communications Workers of America
ConnectiCOSH
Connecticut Center for Patient Safety
Constitutional Alliance
Consumer Action
Consumer Watchdog
Environmental Working Group Action Fund
Essential Information
Government Accountability Project
International Association of Fire Fighters
Knox Area Workers' Memorial Day Committee
Labor & Employment Committee of the National Lawyers Guild
Maine Labor Group on Health
Military Order of the Purple Heart of the U.S.A.
National Association of Consumer Advocates- NACA
National Educational Association
National Employment Lawyers Association
National Consumers League
National Council for Occupational Safety and Health
New England Regional Council of Carpenters
New Jersey State Industrial Union Council
New Solutions: A Journal of Environmental and Occupational Health Policy
NHCOSH
Occupational Health Clinical Centers, Syracuse, New York
Occupational Safety & Health Law Project
OpentheGovernment.org
Patient Privacy Rights
Privacy Rights Clearinghouse
Privacy Times
Protect All Children's Environment
Public Citizen
RICOSH
SafeWork Washington
TURN-The Utility Reform Network
U.S. Public Interest Research Group (U.S. PIRG)
United Steelworkers
Western New York Council on Occupational Safety and Health
Worksafe
World Privacy Forum
References
External links
Proposed legislation of the 114th United States Congress
Asbestos | Furthering Asbestos Claims Transparency Act of 2015 | Environmental_science | 3,552 |
24,994,666 | https://en.wikipedia.org/wiki/Nishina%20Memorial%20Prize | The Nishina Memorial Prize is the oldest and most prestigious physics award in Japan.
Information
Since 1955, the Nishina Memorial Prize has been awarded annually by the Nishina Memorial Foundation. The Foundation was established to commemorate Yoshio Nishina, who was the founding father of modern physics research in Japan and a mentor of the first two Japanese Nobel Laureates, Hideki Yukawa and Sin-Itiro Tomonaga.
The prize, consisting of ¥500,000 (about US$5,000) and a certificate, is bestowed upon young scientists who have made substantial contributions in the field of atomic and sub-atomic physics research. As of 2024, six Nobel Prizes have been awarded to prior Nishina recipients: Leo Esaki, Makoto Kobayashi, Toshihide Maskawa, Masatoshi Koshiba, Shuji Nakamura and Takaaki Kajita.
Laureates
Notable Nishina laureates are:
1955: Kazuhiko Nishijima
1957: Ryogo Kubo (1977 Boltzmann Medal)
1959: Leo Esaki (1973 Nobel Prize, 1998 Japan Prize)
1963: Chushiro Hayashi
1968: Jun Kondo
1969: Hisashi Matsuda, Hiroyuki Ikezi, Kyoji Nishikawa
1972: Kyozi Kawasaki (2001 Boltzmann Medal)
1974: Bunji Sakita
1976: Susumu Okubo (2006 Wigner Medal)
1978: Akito Arima
1979: Makoto Kobayashi (2008 Nobel Prize), Toshihide Maskawa (2008 Nobel Prize)
1982: Akira Tonomura (1998 Benjamin Franklin Medal)
1985: Sumio Iijima (2008 Kavli Prize)
1987: Masatoshi Koshiba (2002 Nobel Prize), Yoji Totsuka (2007 Benjamin Franklin Medal)
1989: Ken'ichi Nomoto (2019 Hans A. Bethe Prize)
1990: Yoshinori Tokura
1992: Yoshihisa Yamamoto
1996: Shuji Nakamura (2002 Benjamin Franklin Medal, 2008 Prince of Asturias Award, 2014 Nobel Prize)
1997: Anthony Ichiro Sanda
1999: Kenzo Inoue, Akira Kakuto, Takaaki Kajita (2015 Nobel Prize), Yasunobu Nakamura
2009: Hirosi Ooguri
2012: Hideo Hosono
2013: Hidetoshi Katori, Yoshiro Takahashi, Takahiko Kondo, Tomio Kobayashi, Shoji Asai
2014: Yuji Matsuda, Takashi Kobayashi, Tsuyoshi Nakaya
2015: Shinsei Ryu, Akira Furusaki, Tohru Motobayashi, Hiroyoshi Sakurai
2016: Tadashi Takayanagi
2017: Hiroki Takesue, Chihaya Adachi, Mahito Kohmoto
2018: Masaru Shibata, Koichiro Tanaka
2019: Yoshihiro Iwasa, Shigeru Yoshida, Aya Ishihara
2020: Kazushi Kanoda, Kazuma Nakazawa
2021: Takahisa Arima, Tsuyoshi Kimura, Masato Takita, Satoshi Miyazaki
2022: Eiji Saitoh, Eiichiro Komatsu
2023: Atsuko Ichikawa
Nishina Asia Award
In 2012, the foundation established a parallel prize called the Nishina Asia Award. This prize was meant for "outstanding achievement by young Asian scientists" (outside Japan) in fundamental physics. The prize was given to one physicist each year.
2013: Shiraz Minwalla
2014: Yuanbo Zhang
2015: Ke He
2016: Seok Kim
2017: Hongming Wen
2018: Yu-tin Huang
2019: Chao-Yang Lu
2020: Ying Jiang
2021: Wang Yao
2022: Suvrat Raju
See also
List of physics awards
References
Early career awards
Academic awards
Physics awards
Japanese science and technology awards
Awards established in 1955
1955 establishments in Japan | Nishina Memorial Prize | Technology | 776 |
61,190,522 | https://en.wikipedia.org/wiki/C7H5BrO | The molecular formula C7H5BrO may refer to:
Benzoyl bromide
Bromobenzaldehydes
2-Bromobenzaldehyde
3-Bromobenzaldehyde
4-Bromobenzaldehyde
Bromotropone | C7H5BrO | Chemistry | 67 |
6,223,779 | https://en.wikipedia.org/wiki/Ignition%20switch | An ignition switch, starter switch or start switch is a switch in the control system of a motor vehicle that activates the main electrical systems for the vehicle, including "accessories" (radio, power windows, etc.). In vehicles powered by internal combustion engines, the switch provides power to the starter solenoid and the ignition system components (including the engine control unit and ignition coil), and is frequently combined with the starter switch which activates the starter motor.
Historically, ignition switches were key switches that require the proper key to be inserted in order to unlock the switch functions. These mechanical switches remain common in modern vehicles, now often combined with an immobiliser that activates the switch functions only when a transponder signal in the key is detected. However, many new vehicles are equipped with so-called "keyless" systems, which replace the key switch with a push button that also requires a transponder signal.
The ignition locking system may be sometimes bypassed by disconnecting the wiring to the switch and manipulating it directly; this is known as hotwiring.
Replacing an ignition switch is generally a simple repair that can be completed without much specialist knowledge. The switches are mainly vehicle-specific, plug-and-play parts.
See also
List of auto parts
Remote keyless system
Smart key
References
Vehicle parts | Ignition switch | Technology | 262 |
11,422,320 | https://en.wikipedia.org/wiki/Togavirus%205%E2%80%B2%20plus%20strand%20cis-regulatory%20element | The Togavirus 5′ plus strand cis-regulatory element is an RNA element which is thought to be essential for both plus and minus strand RNA synthesis.
The genus Alphavirus belongs to the family Togaviridae. Alphaviruses contain secondary structural motifs in the 5′ UTR that allow them to avoid detection by IFIT1.
See also
Rubella virus 3′ cis-acting element
References
External links
Cis-regulatory RNA elements | Togavirus 5′ plus strand cis-regulatory element | Chemistry | 87 |
24,859,499 | https://en.wikipedia.org/wiki/The%20Arabidopsis%20Information%20Resource | The Arabidopsis Information Resource (TAIR) is a community resource and online model organism database of genetic and molecular biology data for the model plant Arabidopsis thaliana, commonly known as mouse-ear cress.
TAIR integrates information about the Arabidopsis genome, genes, gene products, natural variants, mutant alleles and plant phenotypes and research literature. Data in TAIR can be retrieved using simple and advanced searches, bulk query and download tools, and in collections of prepared text files. The Arabidopsis genome and annotations can be visualized using the interactive SeqViewer and GBrowse tools. TAIR’s biocurators are responsible for acquiring and integrating data from the research literature (functional annotation) as well as for assisting the community in using Arabidopsis data and tools. TAIR collaborates with the Arabidopsis Biological Resource Consortium (ABRC) to allow researchers to search, browse and order seed and DNA stocks. The ABRC's mission is to acquire, preserve and distribute seed and DNA resources that are useful to the Arabidopsis research community. TAIR’s community includes over 28,000 registered users and the website draws about 60,000 unique visitors per month.
TAIR is hosted by Phoenix Bioinformatics and funded by subscriptions.
TAIR funding history
From its inception in 1999 to 2013, TAIR was primarily funded by the National Science Foundation (Grant No. DBI-0850219). In response to the end of NSF funding, a core group of TAIR staff founded the non-profit organization, Phoenix Bioinformatics, with the aim of finding creative solutions to database sustainability. In September 2013, with the support of Phoenix, TAIR transitioned to subscription revenues. Subscription fees are used to fund continuous data curation and improvements to TAIR’s database and tools. TAIR offers a variety of subscription options to access the full, up-to-date resource.
To ensure the greatest community access to data, and promote data reuse, subscriber-only data in TAIR is made available to the public one year after its initial release on the TAIR site.
References
External links
The Arabidopsis Information Resource
Model organism databases
Arabidopsis thaliana | The Arabidopsis Information Resource | Biology | 469 |
2,900,261 | https://en.wikipedia.org/wiki/Telesis | Telesis (from the Greek τέλεσις /telesis/) or "planned progress" was a concept and neologism coined by the American sociologist Lester Frank Ward (often referred to as the "father of American sociology"), in the late 19th century to describe directed social advancement via education and the scientific method. The term has since been adopted as the name of numerous groups, schools, and businesses.
Architecture and planning
A group of architects, landscape architects, and urban planners from the Bay Area, founded in late 1939 through the merging of two groups of architects, one from San Francisco and the other from the University of California, Berkeley, called themselves Telesis. Philosophically, the group also evolved from several larger international architectural movements, which included CIAM (Congrès International d'Architecture Moderne) and MARS (Modern Architectural Research Group).
Their stated aim was to research the development and implications of what architectural critic Lewis Mumford called the Second Bay Area Regional Style. As set forth in their founding statement, the group believed that "People and the Land make up the environment which has four distinct parts--a place to Live, Work, Play, and the Services which integrate these and make them operate. These components must be integrated in the community and urban region through rational planning, and through the use of modern building technology."—from The Things Telesis Has Found Important
Noted Telesis members included William Wurster, Catherine Bauer Wurster, Vernon DeMars, Thomas Church, Garrett Eckbo, Grace McCann Morley, Geraldine Knight Scott, Joseph Allen Stein, Jack Hillmer, Francis Violich, and T. J. Kent, Jr. In addition to internal research and working groups that investigated such topics as speculative housing, industrial design, and the relationship of the physical environment of the San Francisco Bay Area to indigenous architectural styles, the group also organized several influential exhibitions on contemporary architecture and planning with the support of the San Francisco Museum of Art. Professional and personal papers from many of Telesis's members are collected in the Environmental Design Archives at the University of California, Berkeley.
Sociology
The mechanics of society fall under two general groups: social statics and social dynamics. Social dynamics is further divided into social genesis and social telesis. Social telesis may be further divided into individual telesis and collective telesis.
Telesis: Progress consciously planned and produced by intelligently directed effort.
Social telesis: The intelligent direction of social activity towards the achievement of a desired and understood end.
Collective telesis: Adaptation of means to ends by society.
Individual telesis: The conscious adaptation of conduct by an individual to the achievement of his own consciously apprehended ends.
See also
Cultural Creatives
Allied Telesis
Pacific Telesis
Polytely
Telos (philosophy)
References
External links
Sociology: The Outlines as Set Forth in Lester F. Ward's New Handbook, New York Times, 11 June 1898
Architectural theory
Concepts in epistemology
Ontology
Urbanization | Telesis | Engineering | 605 |
3,674,853 | https://en.wikipedia.org/wiki/Ethnomycology | Ethnomycology is the study of the historical uses and sociological impact of fungi and can be considered a subfield of ethnobotany or ethnobiology. Although in theory the term includes fungi used for such purposes as tinder, medicine (medicinal mushrooms) and food (including yeast), it is often used in the context of the study of psychoactive mushrooms such as psilocybin mushrooms, the Amanita muscaria mushroom, and the ergot fungus.
American banker Robert Gordon Wasson pioneered interest in this field of study in the late 1950s, when he and his wife became the first Westerners on record allowed to participate in a mushroom velada, held by the Mazatec curandera María Sabina. The biologist Richard Evans Schultes is also considered an ethnomycological pioneer. Later researchers in the field include Terence McKenna, Albert Hofmann, Ralph Metzner, Carl Ruck, Blaise Daniel Staples, Giorgio Samorini, Keewaydinoquay Peschel, John Marco Allegro, Clark Heinrich, John W. Allen, Jonathan Ott, Paul Stamets, Casey Brown and Juan Camilo Rodríguez Martínez.
Besides mycological determination in the field, ethnomycology depends to a large extent on anthropology and philology. One of the major debates among ethnomycologists is Wasson's theory that the Soma mentioned in the Rigveda of the Indo-Aryans was the Amanita muscaria mushroom. Following his example similar attempts have been made to identify psychoactive mushroom usage in many other (mostly) ancient cultures, with varying degrees of credibility. Another much written about topic is the content of the Kykeon, the sacrament used during the Eleusinian mysteries in ancient Greece between approximately 1500 BCE and 396 CE. Although not an ethnomycologist as such, philologist John Allegro has made an important contribution suggesting, in a book controversial enough to have his academic career destroyed, that Amanita muscaria was not only consumed as a sacrament but was the main focus of worship in the more esoteric sects of Sumerian religion, Judaism and early Christianity. Clark Heinrich claims that Amanita muscaria use in Europe was not completely wiped out by Orthodox Christianity but continued to be used (either consumed or merely symbolically) by individuals and small groups such as medieval Holy Grail myth makers, alchemists and Renaissance artists.
While Wasson views historical mushroom use primarily as a facilitator for the shamanic or spiritual experiences core to these rites and traditions, McKenna takes this further, positing that the ingestion of psilocybin was perhaps primary in the formation of language and culture and identifying psychedelic mushrooms as the original "Tree of Knowledge". There is indeed some research supporting the theory that psilocybin ingestion temporarily increases neurochemical activity in the language centers of the brain, indicating a need for more research into the uses of psychoactive plants and fungi in human history.
The 1990s saw a surge in the recreational use of psilocybin mushrooms due to a combination of a psychedelic revival in the rave culture, improved and simplified cultivation techniques, and the distribution of both the mushrooms themselves and information about them via the Internet. This "mushrooming of mushroom use" has also caused an increased popularization of ethnomycology itself as there are websites and Internet forums where mushroom references in Christmas and fairy tale symbolism are discussed. It remains open to interpretation what effect this popularization has on ethnomycology in the academic world, where the lack of verifiable evidence has kept its theories with their often far-reaching implications shrouded in controversy.
References
Sources
Oswaldo Fidalgo, The ethnomycology of the Sanama Indians, Mycological Society of America (1976), ASIN B00072T1TC
E. Barrie Kavasch, Alberto C. Meloni, American Indian EarthSense: Herbaria of Ethnobotany and Ethnomycology, Birdstone Press, the Institute for American Indian Studies (1996).
Aaron Michael Lampman, Tzeltal ethnomycology: Naming, classification and use of mushrooms in the highlands of Chiapas, Mexico, Dissertation, ProQuest Information and Learning (2004)
Jagjit Singh (ed.), From Ethnomycology to Fungal Biotechnology: Exploiting Fungi from Natural Resources for Novel Products, Springer (1999).
Keewaydinoquay Peschel. Puhpohwee for the people: A narrative account of some use of fungi among the Ahnishinaubeg (Ethnomycological studies) Botanical Museum of Harvard University (1978),ASIN: B0006E6KTU
External links
"Aboriginal use of fungi", Australian National Botanic Gardens Fungi Web Site.
R.G. Wasson - Harvard University Herbaria (archived): https://web.archive.org/web/20070206142346/http://www.huh.harvard.edu/libraries/wasson.html
Carl A.P. Ruck - Boston University Department of Classical Studies (archived): https://web.archive.org/web/20070320151934/http://www.bu.edu/classics/faculty/profiles/ruck.html
Albert Hofmann Foundation
Terence McKenna - Official site
John W. Allen - Official site
John M. Allegro - Official site
Jan Irvin and Andrew Rutajit - Official site (archived): https://web.archive.org/web/20070127105105/http://www.gnosticmedia.com/main/
Dan Merkur - Official site
Michael Hoffman
Visionary Mushrooms Studies in Ethnomycology with Contributions by Gaston Guzman and Albert Hofmann
Ethnobotany
Branches of mycology | Ethnomycology | Biology | 1,208 |
332,264 | https://en.wikipedia.org/wiki/Computable%20set | In computability theory, a set of natural numbers is called computable, recursive, or decidable if there is an algorithm which takes a number as input, terminates after a finite amount of time (possibly depending on the given number) and correctly decides whether the number belongs to the set or not.
A set which is not computable is called noncomputable or undecidable.
A more general class of sets than the computable ones consists of the computably enumerable (c.e.) sets, also called semidecidable sets. For these sets, it is only required that there is an algorithm that correctly decides when a number is in the set; the algorithm may give no answer (but not the wrong answer) for numbers not in the set.
Formal definition
A subset $S$ of the natural numbers is called computable if there exists a total computable function $f$ such that $f(x) = 1$ if $x \in S$ and $f(x) = 0$ if $x \notin S$. In other words, the set $S$ is computable if and only if the indicator function $\mathbb{1}_S$ is computable.
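As an illustration, a total computable indicator function can be written directly; the sketch below uses the set of prime numbers (one of the computable sets listed in the next section), with trial division standing in for the algorithm:

```python
def indicator_of_primes(x: int) -> int:
    """Total computable function f with f(x) = 1 if x is prime, 0 otherwise.
    It terminates on every natural number, so the set of primes is computable."""
    if x < 2:
        return 0
    d = 2
    while d * d <= x:
        if x % d == 0:
            return 0
        d += 1
    return 1

print([x for x in range(20) if indicator_of_primes(x)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```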
Examples and non-examples
Examples:
Every finite or cofinite subset of the natural numbers is computable. This includes these special cases:
The empty set is computable.
The entire set of natural numbers is computable.
Each natural number (as defined in standard set theory) is computable; that is, the set of natural numbers less than a given natural number is computable.
The subset of prime numbers is computable.
A recursive language is a computable subset of a formal language.
The set of Gödel numbers of arithmetic proofs described in Kurt Gödel's paper "On formally undecidable propositions of Principia Mathematica and related systems I" is computable; see Gödel's incompleteness theorems.
Non-examples:
The set of Turing machines that halt is not computable.
The set of pairs of homeomorphic finite simplicial complexes is not computable.
The set of busy beaver champions is not computable.
Hilbert's tenth problem is not computable.
Properties
If A is a computable set then the complement of A is a computable set. If A and B are computable sets then A ∩ B, A ∪ B and the image of A × B under the Cantor pairing function are computable sets.
A is a computable set if and only if A and the complement of A are both computably enumerable (c.e.). The preimage of a computable set under a total computable function is a computable set. The image of a computable set under a total computable bijection is computable. (In general, the image of a computable set under a computable function is c.e., but possibly not computable).
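These closure properties can be read directly off the indicator functions. A minimal Python sketch (the deciders `is_even` and `is_small` are arbitrary examples, not from this article):

```python
from typing import Callable

Decider = Callable[[int], bool]

def complement(f: Decider) -> Decider:
    """If f decides A, this decides the complement of A."""
    return lambda n: not f(n)

def intersection(f: Decider, g: Decider) -> Decider:
    """If f decides A and g decides B, this decides A ∩ B.
    Both calls terminate on every input, so the combined decider is still total."""
    return lambda n: f(n) and g(n)

is_even: Decider = lambda n: n % 2 == 0
is_small: Decider = lambda n: n < 10
print([n for n in range(20) if intersection(is_even, complement(is_small))(n)])
# [10, 12, 14, 16, 18]
```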
A is a computable set if and only if it is at level $\Delta^0_1$ of the arithmetical hierarchy.
A is a computable set if and only if it is either the range of a nondecreasing total computable function, or the empty set. The image of a computable set under a nondecreasing total computable function is computable.
See also
Decidability (logic)
Recursively enumerable language
Recursive language
Recursion
References
Cutland, N. Computability. Cambridge University Press, Cambridge-New York, 1980.
Rogers, H. The Theory of Recursive Functions and Effective Computability, MIT Press.
Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1987.
External links
Computability theory
Theory of computation | Computable set | Mathematics | 788 |
8,871,506 | https://en.wikipedia.org/wiki/Buformin | Buformin (1-butylbiguanide) is an oral antidiabetic drug of the biguanide class, chemically related to metformin and phenformin. Buformin was marketed by German pharmaceutical company Grünenthal as Silubin.
Chemistry and animal toxicology
Buformin hydrochloride is a fine, white to slightly yellow, crystalline, odorless powder with a weakly acidic, bitter taste. Its melting point is 174 to 177 °C; it is a strong base, freely soluble in water, methanol and ethanol, but insoluble in chloroform and ether. Toxicity: guinea pig LD50 18 mg/kg subcutaneous; mouse LD50 140 mg/kg intraperitoneal and 300 mg/kg oral. The log octanol-water partition coefficient (log P) is −1.20; its water solubility is 7.46×10⁵ mg/L at 25 °C. Vapor pressure is 1.64×10⁻⁴ mm Hg at 25 °C (estimated); the Henry's law constant is 8.14×10⁻¹⁶ atm·m³/mol at 25 °C (estimated). Its atmospheric hydroxyl-radical rate constant is 1.60×10⁻¹⁰ cm³/(molecule·s) at 25 °C.
Mechanism of action
Buformin delays absorption of glucose from the gastrointestinal tract, increases insulin sensitivity and glucose uptake into cells, and inhibits synthesis of glucose by the liver. Buformin and the other biguanides are not hypoglycemic, but rather antihyperglycemic agents. They do not produce hypoglycemia; instead, they reduce basal and postprandial hyperglycemia in diabetics. Biguanides may antagonize the action of glucagon, thus reducing fasting glucose levels.
Pharmacokinetics
After oral administration of 50 mg of buformin to volunteers, almost 90% of the applied quantity was recovered in the urine; the rate constant of elimination was found to be 0.38 per hr. Buformin is a strong base (pKa = 11.3) and not absorbed in the stomach. After intravenous injection of about 1 mg/kg buformin-14-C, the initial serum concentration is 0.2-0.4 μg/mL. Serum level and urinary elimination rate are linearly correlated. In man, after oral administration of 50 mg 14-C-buformin, the maximum serum concentration was 0.26-0.41 μg/mL. The buformin was eliminated with an average half-life of 2 h. About 84% of the dose administered was found excreted unchanged in the urine. Buformin is not metabolized in humans. The bioavailability of oral buformin and other biguanides is 40%-60%. Binding to plasma proteins is absent or very low.
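The reported kinetics amount to simple first-order elimination. A small sketch of the decay curve, taking the upper end of the reported oral peak (0.4 μg/mL) as an illustrative starting value:

```python
import math

def serum_concentration(c0, t_hours, half_life=2.0):
    """First-order elimination: C(t) = C0 * exp(-k t) with k = ln(2) / t_half.
    A 2 h half-life corresponds to k ~ 0.35/h, close to the reported 0.38/h."""
    k = math.log(2) / half_life
    return c0 * math.exp(-k * t_hours)

for t in (0, 2, 4, 6, 8):
    print(f"t = {t} h: {serum_concentration(0.4, t):.3f} ug/mL")
```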
Dosage
The daily dose of buformin is 150–300 mg by mouth. Buformin has also been available in a sustained release preparation, Silubin Retard, which is still sold in Romania.
Side effects and contraindications
The side effects encountered are anorexia, nausea, diarrhea, metallic taste, and weight loss. Its use is contraindicated in diabetic coma, ketoacidosis, severe infection, trauma, other conditions where buformin is unlikely to control the hyperglycemia, renal or hepatic impairment, heart failure, recent myocardial infarct, dehydration, alcoholism, and conditions likely to predispose to lactic acidosis.
Toxicity
Buformin was withdrawn from the market in many countries due to an elevated risk of causing lactic acidosis (although not the US, where it was never sold). Buformin is still available and prescribed in Romania (timed release Silubin Retard is sold by Zentiva), Hungary, Taiwan and Japan (sold by Nichi-Iko Pharmaceutical Co., Ltd as "DIBETOS" tablets, each containing 50 mg buformin hydrochloride). The lactic acidosis occurred only in patients with a buformin plasma level of greater than 0.60 μg/mL and was rare in patients with normal renal function.
In one report, the toxic oral dose was 329 ± 30 mg/day in 24 patients who developed lactic acidosis on buformin. Another group of 24 patients on 258 ± 25 mg/day did not develop lactic acidosis on buformin.
Anticancer properties
Buformin, along with phenformin and metformin, inhibits the growth and development of cancer. The anticancer property of these drugs is due to their ability to disrupt the Warburg effect and revert the cytosolic glycolysis characteristic of cancer cells to normal oxidation of pyruvate by the mitochondria. Metformin reduces liver glucose production in diabetics and disrupts the Warburg effect in cancer by AMPK activation and inhibition of the mTor pathway. Buformin decreased cancer incidence, multiplicity, and burden in chemically induced rat mammary cancer, whereas metformin and phenformin had no statistically significant effect on the carcinogenic process relative to the control group. Buformin also exhibits anti-proliferative and anti-invasive effects in endometrial cancer cells, lung cancer cells and cervical cancer cells.
Antiviral properties
Biguanides were first noted to be active against influenza in the 1940s. Further studies confirmed their antiviral activity in vitro. Buformin, especially, was potently antiviral against vaccinia and influenza. Buformin is a metabolic antiviral that inhibits the mTOR pathway used by influenza and Middle East respiratory syndrome-related coronavirus.
History
Buformin was synthesized as an oral antidiabetic in 1957.
Synthesis
Buformin is obtained by reaction of butylamine and 2-cyanoguanidine.
References
Withdrawn drugs
Biguanides
Butyl compounds | Buformin | Chemistry | 1,278 |
32,596,353 | https://en.wikipedia.org/wiki/Gamendazole | Gamendazole is a drug candidate for male contraception. It is an indazole carboxylic acid derived from lonidamine (LND). It has been shown to reduce fertility in male rats without affecting testosterone levels, but human clinical trials have not been started.
Rat studies
Gamendazole produced 100% antispermatogenic effects at 25 mg/kg i.p. in rats, whereas 200 mg/kg was fatal for 60% of rats tested. Since gamendazole produced 100% efficacy, it was tested orally. At a dose of 6 mg/kg, 100% of rats were infertile 4 weeks after a single administration. Complete infertility was maintained for 2 weeks, followed by complete recovery in 4 of 7 rats; the other 3 never recovered fertility. Upon dosing 6 mg/kg orally for 7 days, it produced similar infertility results, but only 2 of 7 rats recovered fertility. Rates of conception were normal, with no abnormal conceptions, in rats that recovered fertility.
Pathology examinations were conducted on gamendazole-treated rats. At 25 mg/kg i.p., 6 mg/kg oral, and in animals that survived 200 mg/kg i.p., there were no remarkable findings, with no evidence of inflammation, necrosis, tumors, or hemorrhage. There was also a lack of observable behavioral effects at these doses. Gamendazole treatment had no effect on testosterone levels and was reported to affect Sertoli cell function, leading to decreased levels of inhibin B. Low levels of inhibin B were correlated with the infertility of the rats.
References
Contraception for males
Experimental methods of birth control
Indazoles
Carboxylic acids
Trifluoromethyl compounds
Chlorobenzene derivatives | Gamendazole | Chemistry | 402 |
2,044,329 | https://en.wikipedia.org/wiki/Clark%20electrode | The Clark electrode is an electrode that measures ambient oxygen partial pressure in a liquid using a catalytic platinum surface according to the net reaction:
O2 + 4 e− + 4 H+ → 2 H2O
It improves on a bare platinum electrode by use of a membrane to reduce fouling and metal plating onto the platinum.
History
Leland Clark (Professor of Chemistry, Antioch College, Yellow Springs, Ohio, and Fels Research Institute, Yellow Springs, Ohio) had developed the first bubble oxygenator for use in cardiac surgery. However, when he came to publish his results, his article was refused by the editor since the oxygen tension in the blood coming out from the device could not be measured. This motivated Clark to develop the oxygen electrode.
The electrode, when implanted in vivo, reduces oxygen and thus requires stirring in order to maintain an equilibrium with the environment. Severinghaus improved the design by adding a stirred cuvette in a thermostat. A discrepancy in the measured partial pressure of oxygen (pO2) between blood samples and gaseous mixtures of identical pO2 meant that the modified electrode required calibration; consequently a microtonometer was added to the water thermostat.
Mechanism of action
The electrode compartment is isolated from the reaction chamber by a thin Teflon membrane; the membrane is permeable to molecular oxygen and allows this gas to reach the cathode, where it is electrolytically reduced.
The above reaction requires a steady stream of electrons to the cathode, which depends on the rate at which oxygen can reach the electrode surface. Increasing the voltage applied (between the Pt electrode and a second Ag electrode) will increase the rate of electrocatalysis. Clark affixed an oxygen permselective membrane over the Pt electrode. This limits the diffusion rate of oxygen to the Pt electrode.
Above a certain voltage, the current plateaus and increasing the potential any further does not result in a higher rate of electrocatalysis of the reaction. At this point, the reaction is diffusion-limited and depends only on the permeability properties of the membrane (which is ideally well characterized, the electrode being calibrated against known standard solutions) and by the oxygen gas concentration, which is the measured quantity.
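At the plateau, the current can be estimated from Fick's first law: the oxygen flux through the membrane is proportional to the pO2 difference across it, and each O2 molecule reduced delivers four electrons, so the current is directly proportional to the sample's pO2. A minimal sketch of that estimate follows; the function name and every numerical value are illustrative assumptions, not measured constants.

```python
# Minimal sketch: diffusion-limited (plateau) current of a Clark electrode.
# Assumes steady-state Fick's-law transport through the membrane; all
# numerical values are illustrative placeholders, not measured constants.

F = 96485.0  # Faraday constant, C/mol
n = 4        # electrons per O2 molecule (O2 + 4e- + 4H+ -> 2H2O)

def plateau_current(p_o2, permeability, area, thickness):
    """Diffusion-limited current in amperes.

    p_o2         -- oxygen partial pressure in the sample (atm)
    permeability -- membrane permeability coefficient (mol / (m * s * atm))
    area         -- exposed cathode area (m^2)
    thickness    -- membrane thickness (m)

    The oxygen flux is J = permeability * p_o2 / thickness (pO2 ~ 0 at the
    cathode, since all arriving O2 is reduced), and i = n * F * area * J.
    """
    flux = permeability * p_o2 / thickness   # mol / (m^2 * s)
    return n * F * area * flux               # A

# Example with made-up numbers: the current scales linearly with pO2,
# which is why the electrode can be calibrated against known standards.
i = plateau_current(p_o2=0.21, permeability=1e-12, area=1e-6, thickness=25e-6)
print(f"plateau current ~ {i:.3e} A")
```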
Applications
The Clark oxygen electrode laid the basis for the first glucose biosensor (in fact the first biosensor of any type), invented by Clark and Lyons in 1962.
This sensor used a single Clark oxygen electrode coupled with a counter-electrode. As with the Clark electrode, a permselective membrane covers the Pt electrode. Now, however, the membrane is impregnated with immobilized glucose oxidase (GOx). The GOx consumes some of the oxygen as it diffuses towards the Pt electrode, incorporating it into H2O2 and gluconic acid. The reaction current is limited by the diffusion of both glucose and oxygen. This diffusion can be well characterized for the membrane for both oxygen and glucose, leaving as the only variables the oxygen and glucose concentrations on the analyte side of the membrane, the latter being the quantity measured.
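Since the membrane's transport properties are fixed, the drop in oxygen-reduction current tracks the glucose concentration, and the sensor is read out through a calibration curve built from known standards. A minimal sketch of such a readout, using entirely hypothetical calibration data, is shown below.

```python
# Minimal calibration sketch for a first-generation glucose biosensor:
# GOx consumes O2 in proportion to glucose, so the measured O2-reduction
# current drops roughly linearly with glucose over the working range.
# All numbers are illustrative, not real instrument data.

# (glucose concentration in mM, measured current in nA) for standards
standards = [(0.0, 50.0), (2.0, 42.1), (4.0, 33.8), (6.0, 26.2)]

# Least-squares line i = a*c + b, fitted by hand (no libraries needed).
n_pts = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(i for _, i in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * i for c, i in standards)
a = (n_pts * sxy - sx * sy) / (n_pts * sxx - sx * sx)
b = (sy - a * sx) / n_pts

def glucose_from_current(i_measured):
    """Invert the calibration line to report glucose in mM."""
    return (i_measured - b) / a

print(f"slope {a:.2f} nA/mM, intercept {b:.1f} nA")
print(f"unknown sample at 37.5 nA -> {glucose_from_current(37.5):.2f} mM")
```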
References
External links
Clark-type Sensors Explained, The Gas Detector Encyclopedia, Edaphic Scientific Knowledge Base
Biosensors & Bioelectronics: Leland Clark
Clark Oxygen Electrode, precursor to today's modern biosensors - broken link
Electrodes
Gas sensors | Clark electrode | Chemistry | 703 |
21,946,966 | https://en.wikipedia.org/wiki/Restriction%20fragment%20mass%20polymorphism | Restriction Fragment Mass Polymorphism (RFMP) is a technology which digests DNA into oligonucleotide fragments, and detects variation of DNA sequences by molecular weight of the fragments. RFMP is a proprietary technology of GeneMatrix and can be utilized for genotyping viruses and microorganisms, and for human genome research. It is relatively restricted in usage due to the existence of many other genotyping products.
Overview
Restriction fragment mass polymorphism (RFMP) is an application of matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry, used for identifying individual nucleotides in a DNA fragment, most commonly for typing single nucleotide polymorphisms (SNPs). RFMP was developed as a successor to the similar restriction fragment length polymorphism (RFLP) technique, with the intent of resolving a greater number of SNPs. Rather than reading out the lengths of fragments as RFLP does, the individual nucleotides are identified by mass using MALDI-TOF, which distinguishes cut sites that produce fragments of identical length.
Methodology
Like RFLP, the basic mechanism of RFMP is to run the polymerase chain reaction (PCR) on a test sample. Modified PCR primers are used to create known restriction sites for enzymatic digestion. From the known fragment lengths, selection by size can then filter out the DNA of interest. Finally, MALDI-TOF is run on the fragments of interest to produce an m/z (mass-to-charge ratio) spectrum identifying the individual nucleotides.
A specific process, for example, would be Hong's 2008 strategy, outlined as the following:
Primers are modified with a GGATG recognition site and amplified with PCR.
The Fok-I enzyme is used to cut 9 (3’) and 13 (5’) bases upstream of the recognition site, leaving an overhang. BstF5I similarly cuts upstream at distances 2 (3’) and 0 (3’), making an additional overhang.
(This produces two oligonucleotide strands – a 7-mer and a 13-mer.)
Strands of either length are put under MALDI-TOF mass spectrometry to determine the individual nucleotides.
These steps, like any experimental methodology, are case-specific, and can vary between experimental setup's goals and/or constraints.
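The mass readout in the final step rests on each base composition having a distinct mass: a single-base substitution shifts the fragment mass by a characteristic amount that the spectrum can resolve. The following sketch of that arithmetic uses rounded textbook residue masses and hypothetical sequences, not values from Hong's protocol.

```python
# Minimal sketch: predicted mass of a short single-stranded DNA fragment,
# as would be matched against a MALDI-TOF peak. Average residue masses
# (Da) include one phosphate per residue; values are rounded textbook
# figures, for illustration only.
RESIDUE_MASS = {"A": 313.21, "C": 289.18, "G": 329.21, "T": 304.20}
H2O = 18.02  # terminal H and OH, assuming a 5'-phosphate/3'-OH fragment
             # (the ends left by restriction digestion)

def oligo_mass(seq):
    """Average mass (Da) of a linear, 5'-phosphorylated DNA oligo."""
    return sum(RESIDUE_MASS[base] for base in seq.upper()) + H2O

# A single-base substitution shifts the fragment mass by a characteristic
# amount, which is what lets the spectrum call the SNP allele:
wild_type = "GATCTTA"  # hypothetical 7-mer fragment
variant = "GATCTTG"    # same fragment with an A -> G substitution
print(f"{wild_type}: {oligo_mass(wild_type):.2f} Da")
print(f"{variant}: {oligo_mass(variant):.2f} Da")
print(f"mass shift: {oligo_mass(variant) - oligo_mass(wild_type):+.2f} Da")
```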
Application
RFMP is still primarily limited to South Korean medical literature, as it is an array assay that competes with many other specialized detection systems (whereas RFMP serves as a more general functionality).
There has been a focus in recent years on using RFMP for HPV detection, motivated by the fact that its sensitivity is two log10-fold better than the standard of care. However, this still does not make RFMP the clear top choice in the HPV landscape, as other assays such as the Roche Linear Array, Abbott RealTime genotype II, and Sysmex HISCL HCV Gr experimentally outperform RFMP in detection accuracy.
Other limitations that hinder RFMP's spread in the medical world are attributed to its lack of information on SNP mutation rate (e.g. masses have no correspondence to mutagenesis), as well as a general increase in user-handling difficulty compared to its peers.
See also
Restriction Fragment Length Polymorphism
External links
RFMP platform technology
References
DNA sequencing | Restriction fragment mass polymorphism | Chemistry,Biology | 712 |
473,774 | https://en.wikipedia.org/wiki/Transformation%20geometry | In mathematics, transformation geometry (or transformational geometry) is the name of a mathematical and pedagogic take on the study of geometry by focusing on groups of geometric transformations, and properties that are invariant under them. It is opposed to the classical synthetic geometry approach of Euclidean geometry, that focuses on proving theorems.
For example, within transformation geometry, the properties of an isosceles triangle are deduced from the fact that it is mapped to itself by a reflection about a certain line. This contrasts with the classical proofs by the criteria for congruence of triangles.
The first systematic effort to use transformations as the foundation of geometry was made by Felix Klein in the 19th century, under the name Erlangen programme. For nearly a century this approach remained confined to mathematics research circles. In the 20th century efforts were made to exploit it for mathematical education. Andrei Kolmogorov included this approach (together with set theory) as part of a proposal for geometry teaching reform in Russia. These efforts culminated in the 1960s with the general reform of mathematics teaching known as the New Math movement.
Use in mathematics teaching
An exploration of transformation geometry often begins with a study of reflection symmetry as found in daily life. The first real transformation is reflection in a line or reflection against an axis. The composition of two reflections results in a rotation when the lines intersect, or a translation when they are parallel. Thus through transformations students learn about Euclidean plane isometry. For instance, consider reflection in a vertical line and a line inclined at 45° to the horizontal. One can observe that one composition yields a counter-clockwise quarter-turn (90°) while the reverse composition yields a clockwise quarter-turn. Such results show that transformation geometry includes non-commutative processes.
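These two compositions can be verified directly with 2×2 reflection matrices, as in the following quick sketch using NumPy:

```python
import numpy as np

# Reflection across the vertical axis (the y-axis)...
R_vertical = np.array([[-1, 0],
                       [ 0, 1]])
# ...and across the line y = x, inclined at 45 degrees to the horizontal.
R_diagonal = np.array([[0, 1],
                       [1, 0]])

# Acting on column vectors, matrix products compose right-to-left:
# one_order reflects in the vertical line first, then in the 45° line.
one_order = R_diagonal @ R_vertical      # [[0, 1], [-1, 0]]: clockwise 90°
reverse_order = R_vertical @ R_diagonal  # [[0, -1], [1, 0]]: counter-clockwise 90°

print(one_order)
print(reverse_order)
# The products differ, so the composition of reflections is non-commutative:
print(np.array_equal(one_order, reverse_order))  # False
```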
An entertaining application of reflection in a line occurs in a proof of the one-seventh area triangle found in any triangle.
Another transformation introduced to young students is the dilation. However, the reflection in a circle transformation seems inappropriate for lower grades. Thus inversive geometry, a larger study than grade school transformation geometry, is usually reserved for college students.
Experiments with concrete symmetry groups make way for abstract group theory. Other concrete activities use computations with complex numbers, hypercomplex numbers, or matrices to express transformation geometry.
Such transformation geometry lessons present an alternate view that contrasts with classical synthetic geometry. When students then encounter analytic geometry, the ideas of coordinate rotations and reflections follow easily. All these concepts prepare for linear algebra where the reflection concept is expanded.
Educators have shown some interest and described projects and experiences with transformation geometry for children from kindergarten to high school. For very young children, in order to avoid introducing new terminology and to make links with students' everyday experience of concrete objects, it has sometimes been recommended to use words they are familiar with, such as "flips" for line reflections, "slides" for translations, and "turns" for rotations, although these terms are not precise mathematical language. In some proposals, students start by manipulating concrete objects before performing the abstract transformations via their definitions as mappings of each point of the figure.
In an attempt to restructure the courses of geometry in Russia, Kolmogorov suggested presenting it under the point of view of transformations, so the geometry courses were structured based on set theory. This led to the appearance of the term "congruent" in schools, for figures that were before called "equal": since a figure was seen as a set of points, it could only be equal to itself, and two triangles that could be overlapped by isometries were said to be congruent.
One author expressed the importance of group theory to transformation geometry as follows:
I have gone to some trouble to develop from first principles all the group theory that I need, with the intention that my book can serve as a first introduction to transformation groups, and the notions of abstract group theory if you have never seen these.
See also
Chirality (mathematics)
Geometric transformation
Euler's rotation theorem
Motion (geometry)
Transformation matrix
References
Further reading
Heinrich Guggenheimer (1967) Plane Geometry and Its Groups, Holden-Day.
Roger Evans Howe & William Barker (2007) Continuous Symmetry: From Euclid to Klein, American Mathematical Society, .
Robin Hartshorne (2011) Review of Continuous Symmetry, American Mathematical Monthly 118:565–8.
Roger Lyndon (1985) Groups and Geometry, #101 London Mathematical Society Lecture Note Series, Cambridge University Press .
P.S. Modenov and A.S. Parkhomenko (1965) Geometric Transformations, translated by Michael B.P. Slater, Academic Press.
George E. Martin (1982) Transformation Geometry: An Introduction to Symmetry, Springer Verlag.
Isaak Yaglom (1962) Geometric Transformations, Random House (translated from the Russian).
Max Jeger (1966) Transformation Geometry (translated from the German).
Transformations teaching notes from Gatsby Charitable Foundation
Nathalie Sinclair (2008) The History of the Geometry Curriculum in the United States, pps. 63–66.
Zalman P. Usiskin and Arthur F. Coxford. A Transformation Approach to Tenth Grade Geometry, The Mathematics Teacher, Vol. 65, No. 1 (January 1972), pp. 21-30.
Zalman P. Usiskin. The Effects of Teaching Euclidean Geometry via Transformations on Student Achievement and Attitudes in Tenth-Grade Geometry, Journal for Research in Mathematics Education, Vol. 3, No. 4 (Nov., 1972), pp. 249-259.
A. N. Kolmogorov. Геометрические преобразования в школьном курсе геометрии, Математика в школе, 1965, Nº 2, pp. 24–29. (Geometric transformations in a school geometry course) (in Russian)
Fields of geometry
Symmetry
Geometry education | Transformation geometry | Physics,Mathematics | 1,228 |
51,048,565 | https://en.wikipedia.org/wiki/Eremurus%20%C3%97%20isabellinus | Eremurus × isabellinus is a hybrid of garden origin, derived from the crossing of E. stenophyllus with E. olgae. The first crossing was made by Sir Michael Foster at Great Shelford, England, at the end of the 19th century, and replicated in France by the Vilmorin nursery at Verrières-le-Buisson in 1902. The name of the hybrid is derived from the isabelline colour of the original F1 hybrid flowers. The genus is often known by the common names foxtail lily and desert candle.
Description
Eremurus × isabellinus produces stems 1.5 m high terminating in bottlebrush-like inflorescences in June and July. The narrow strap-like leaves form a mound of bluish green foliage at the base, which begins to die off as the plant flowers.
Cultivation
Eremurus × isabellinus is fully hardy, and best grown in full sun in a fertile, sandy, well-drained soil; it tolerates alkaline conditions.
The hybrid has given rise to around 20 cultivars, assembled into groups defined by origin: Shelfordi and Ruiter are the oldest; more recently, Amand, Highdown, and Erfo.
Selected cultivars
References
Asphodeloideae
Hybrid plants | Eremurus × isabellinus | Biology | 265 |
156,428 | https://en.wikipedia.org/wiki/Microtechnology | Microtechnology is technology whose features have dimensions of the order of one micrometre (one millionth of a metre, or 10−6 metre, or 1μm). It focuses on physical and chemical processes as well as the production or manipulation of structures with one-micrometre magnitude.
Development
Around 1970, scientists learned that by arraying large numbers of microscopic transistors on a single chip, microelectronic circuits could be built that dramatically improved performance, functionality, and reliability, all while reducing cost and increasing volume. This development led to the Information Revolution.
More recently, scientists have learned that not only electrical devices, but also mechanical devices, may be miniaturized and batch-fabricated, promising the same benefits to the mechanical world as integrated circuit technology has given to the electrical world. While electronics now provide the ‘brains’ for today's advanced systems and products, micro-mechanical devices can provide the sensors and actuators — the eyes and ears, hands and feet — which interface to the outside world.
Today, micromechanical devices are the key components in a wide range of products such as automobile airbags, ink-jet printers, blood pressure monitors, and projection display systems. It seems clear that in the not-too-distant future these devices will be as pervasive as electronics. Fabrication processes have also become more precise, driving the dimensions of the technology down to the sub-micrometre range, as demonstrated by advanced microelectronic circuits that have reached below 20 nm.
Micro electromechanical systems
The term MEMS, for Micro Electro Mechanical Systems, was coined in the 1980s to describe new, sophisticated mechanical systems on a chip, such as micro electric motors, resonators, gears, and so on. Today, the term MEMS in practice is used to refer to any microscopic device with a mechanical function, which can be fabricated in a batch process (for example, an array of microscopic gears fabricated on a microchip would be considered a MEMS device but a tiny laser-machined stent or watch component would not). In Europe, the term MST for Micro System Technology is preferred, and in Japan MEMS are simply referred to as "micromachines". The distinctions in these terms are relatively minor and are often used interchangeably.
Though MEMS processes are generally classified into a number of categories – such as surface machining, bulk machining, LIGA, and EFAB – there are indeed thousands of different MEMS processes. Some produce fairly simple geometries, while others offer more complex 3-D geometries and more versatility. A company making accelerometers for airbags would need a completely different design and process to produce an accelerometer for inertial navigation. Changing from an accelerometer to another inertial device such as a gyroscope requires an even greater change in design and process, and most likely a completely different fabrication facility and engineering team.
MEMS technology has generated a tremendous amount of excitement, due to the vast range of important applications where MEMS can offer previously unattainable performance and reliability standards. In an age where everything must be smaller, faster, and cheaper, MEMS offers a compelling solution. MEMS have already had a profound impact on certain applications such as automotive sensors and inkjet printers. The emerging MEMS industry is already a multibillion-dollar market. It is expected to grow rapidly and become one of the major industries of the 21st century. Cahners In-Stat Group has projected sales of MEMS to reach $12B by 2005. The European NEXUS group projects even larger revenues, using a more inclusive definition of MEMS.
Microtechnology devices are often fabricated using photolithography. Light is focused through a mask onto a surface coated with a photosensitive chemical film, solidifying the exposed regions. The soft, unexposed parts of the film are washed away, and acid then etches away the material not protected by the remaining film.
Microtechnology's most famous success is the integrated circuit. It has also been used to construct micromachinery. As an offshoot of researchers attempting to further miniaturize microtechnology, nanotechnology emerged in the 1980s, particularly after the invention of new microscopy techniques. These produced materials and structures with dimensions of 1–100 nm.
Items constructed at the microscopic level
The following items have been constructed on a scale of 1 micrometre using photolithography:
Electronics:
wires
resistors
transistors
thermionic valves
diodes
sensors
capacitors
Machinery:
electric motors
gears
levers
bearings
hinges
Fluidics:
valves
channels
pumps
turbines
See also
Microfabrication
References
External links
Institute for Micromachine and Microfabrication Research at Simon Fraser University
Nanotechnology
Semiconductor device fabrication
Technology by type | Microtechnology | Materials_science,Engineering | 978 |
14,975,857 | https://en.wikipedia.org/wiki/BSND | Bartter syndrome, infantile, with sensorineural deafness (Barttin), also known as BSND, is a human gene which is associated with Bartter syndrome.
This gene encodes an essential beta subunit for CLC chloride channels. These heteromeric channels localize to basolateral membranes of renal tubules and of potassium-secreting epithelia of the inner ear. Mutations in this gene have been associated with Bartter syndrome with sensorineural deafness.
References
External links
Further reading | BSND | Chemistry,Biology | 105 |
21,029,751 | https://en.wikipedia.org/wiki/Zone%20rouge | The zone rouge (English: red zone) is a chain of non-contiguous areas throughout northeastern France that the French government isolated after the First World War. The land, which originally covered more than , was deemed too physically and environmentally damaged by conflict for human habitation. Rather than attempt to immediately clean up the former battlefields, the land was allowed to return to nature. Restrictions within the Zone Rouge still exist today, although the control areas have been greatly reduced.
The zone rouge was defined just after the war as "Completely devastated. Damage to properties: 100%. Damage to Agriculture: 100%. Impossible to clean. Human life impossible".
Under French law, activities such as housing, farming, or forestry were temporarily or permanently forbidden in the Zone Rouge, because of the vast amounts of human and animal remains, and millions of items of unexploded ordnance contaminating the land. Some towns and villages were never permitted to be rebuilt after the war.
Main dangers
The areas are saturated with unexploded shells (including many gas shells), grenades, and rusting ammunition. Soils were heavily polluted by lead, mercury, chlorine, arsenic, various dangerous gases, acids, and human and animal remains. The area was also littered with ammunition depots and chemical plants. The land of the Western Front is covered in old trenches and shell holes.
Each year, numerous unexploded shells are recovered from former WWI battlefields in what is known as the iron harvest. According to the Sécurité Civile, the French agency in charge of land management in the Zone Rouge, at the current rate 300 to 700 more years will be needed to clean the area completely. Experiments conducted in 2005–06 discovered up to 300 shells per hectare (120 per acre) in the topsoil of the worst areas.
Areas where 99% of all plants still die remain off limits. For example, there are two small areas of land close to Ypres and the Woëvre where arsenic constitutes up to 176 grams per kilogram (about 18%) of the soil. In the 1920s, chemical warfare shells containing arsenic were destroyed there by thermal treatment.
See also
French villages destroyed in the First World War
Demilitarized zone
Involuntary park
Iron harvest
No man's land
Pripyat
References
Further reading
Smith, Corinna Haven & Hill, Caroline R. Rising Above the Ruins in France: An Account of the Progress Made Since the Armistice in the Devastated Regions in Re-establishing Industrial Activities and the Normal Life of the People. New York: GP Putnam's Sons, 1920: 6.
De Sousa David, La Reconstruction et sa Mémoire dans les villages de la Somme 1918–1932, Editions La vague verte, 2002, 212 pages
Bonnard Jean-Yves, La reconstitution des terres de l'Oise après la Grande Guerre: les bases d'une nouvelle géographie du foncier, in Annales Historiques Compiégnoises 113–114, pp. 25–36, 2009.
Parent G.-H., 2004. Trois études sur la Zone Rouge de Verdun, une zone totalement sinistrée I.L'herpétofaune – II.La diversité floristique – III.Les sites d'intérêt botanique et zoologique à protéger prioritairement. Ferrantia, 288 pages
Bausinger, Tobias; Bonnaire, Eric & Preuß, Johannes. Exposure assessment of a burning ground for chemical ammunition on the Great War battlefields of Verdun, Science of the Total Environment 382:2–3, pp. 259–271, 2007.
External links
Map of the Western Front in 1918
Déminage à Douaumont
Battle of the Somme
Ecotoxicology
Environment of France
Environmental disasters in Europe
France geography articles needing translation from French Wikipedia
Geography of Somme (department)
World War I sites in France
Soil contamination | Zone rouge | Chemistry,Environmental_science | 810 |
1,521,654 | https://en.wikipedia.org/wiki/New%20Frontier%20Hotel%20and%20Casino | The New Frontier (formerly Hotel Last Frontier and The Frontier) was a hotel and casino on the Las Vegas Strip in Paradise, Nevada. The property began as a casino and dance club known as Pair O' Dice, opened in 1931. It was sold in 1941, and incorporated into the Hotel Last Frontier, which began construction at the end of the year. The Hotel Last Frontier opened on October 30, 1942, as the second resort on the Las Vegas Strip. The western-themed property included 105 rooms, as well as the Little Church of the West. The resort was devised by R.E. Griffith and designed by his nephew, William J. Moore. Following Griffith's death in 1943, Moore took over ownership and added a western village in 1948. The village consisted of authentic Old West buildings from a collector and would also feature the newly built Silver Slipper casino, added in 1950.
Resort ownership changed several times between different groups, beginning in 1951. A modernized expansion opened on April 4, 1955, as the New Frontier. It operated concurrently with the Last Frontier. Both were closed in 1965 and demolished a year later to make way for a new resort, which opened as the Frontier on July 29, 1967. Future casino mogul Steve Wynn was among investors in the ownership group, marking his entry into the Las Vegas gaming industry. The ownership group also included several individuals who had difficulty gaining approval from Nevada gaming regulators.
Businessman Howard Hughes bought out the group at the end of 1967. Like his other casino properties, he owned the Frontier through Hughes Tool Company, and later through Summa Corporation. In 1988, Summa sold the Frontier to Margaret Elardi, and her two sons became co-owners a year later. A 16-story hotel tower was added in 1990. The Elardi family declined to renew a contract with the Culinary Workers Union, and 550 workers went on strike on September 21, 1991. It became one of the longest strikes in U.S. history. Businessman Phil Ruffin eventually purchased the Frontier for $167 million. The sale was finalized on February 1, 1998, when Ruffin renamed the property back to the New Frontier. The strike ended on the same day, as Ruffin agreed to a union contract. Ruffin launched a $20 million renovation to update the aging property. His changes included the addition of a new restaurant, Gilley's Saloon.
Over the next decade, Ruffin considered several redevelopment projects for the site, but lack of financing hindered these plans. In May 2007, he agreed to sell the New Frontier to El Ad Properties for more than $1.2 billion. The resort closed on July 16, 2007, and demolition began later that year. The 16-story tower was imploded on November 13, 2007. It was the last of the Hughes-era casinos to be demolished. The 984-room property had been popular as a low-budget alternative to the larger resorts on the Strip. El Ad owned the Plaza Hotel in New York City and planned to replace the New Frontier with a Plaza-branded resort, but the project was canceled due to the Great Recession. Crown Resorts also scrapped plans to build the Alon Las Vegas resort. The site was purchased by Wynn Resorts in 2018, although plans to build the Wynn West resort were also shelved, and the land remains vacant.
The property hosted numerous entertainers throughout its operation, including Wayne Newton and Robert Goulet. It hosted the Las Vegas debuts of Liberace in 1944, and Elvis Presley in 1956, and also hosted the final performance of Diana Ross & The Supremes in 1970.
History
A portion of the property began as a casino and dance club known as Pair O' Dice. It opened on July 4, 1931, and was remodeled and enlarged during its first year. It was originally owned by casino dealer Frank Detra. Businessman Guy McAfee took over club operations in 1939. He remodeled the property and renamed it the 91 Club, after its location on Highway 91, which would later become the Las Vegas Strip. He purchased the club later in 1939, for $10,000.
Hotel Last Frontier (1942–65)
McAfee sold the 91 Club in late 1941, to a group based in Arizona. R.E. “Griff” Griffith, the brother of film director D.W. Griffith, and owner of a movie theater chain in the southwestern U.S., paid $1,000 per acre for the 35-acre site. In addition to theaters, Griffith also owned the El Rancho Hotel & Motel in Gallup, New Mexico, and planned to expand it into a hotel chain. Griffith had originally planned to build his next hotel in Deming, New Mexico, before traveling to Las Vegas and realizing that it presented better opportunities. He intended to construct a western-themed hotel-casino resort on the newly purchased land. However, his initial name for the project was already in use by the El Rancho Vegas, which opened in 1941 as the first resort on the Las Vegas Strip. Instead, Griffith named his property the Hotel Last Frontier, while maintaining the western theme.
Griffith hired architect William J. Moore, his nephew, to design the project, with emphasis on an authentic recreation of the Old West. Construction began on December 8, 1941, taking place around the 91 Club, which was incorporated into the new project as the Leo Carrillo Bar. It was named after Griffith's friend, entertainer Leo Carrillo. Building materials were difficult to acquire, due to a supply shortage caused by World War II. Moore purchased one or two abandoned mines in Pioche, Nevada, and sent crews to strip the sites of any usable materials. Moore also purchased two ranches in Moapa, Nevada, to supply meat and dairy for the resort.
The Hotel Last Frontier opened on October 30, 1942. It was the second hotel-casino resort to open on the Las Vegas Strip. The motel was mostly two stories, with some rooms on a third floor. It included 105 rooms at its opening, and an additional 100 would be added later. To maintain cool temperatures, cold water was carried through pipes in the walls of each room, originating from tunnels beneath the property.
Because Griffith and Moore were inexperienced in the gaming industry, they had the casino built at the rear of the property, not realizing that it should have been presented as the main attraction. The property included the Gay Nineties Bar, which had sat in the Arizona Club in Las Vegas, before being reassembled at the Last Frontier. The Frontier added the Little Church of the West in May 1943. The resort also included the El Corral Arena, used for rodeo events.
Griffith died of a heart attack in November 1943, and Moore took over the property. Moore conceived the idea of adding the western-themed Last Frontier Village. It opened in November 1948, initially with three buildings, with others added later. The village ultimately included restaurants, bars, and shops. The Little Church of the West was also incorporated into the village. Located at the property's northern end, the village included authentic Old West buildings saved by Doby Doc, a collector in Elko, Nevada, who served as curator of the attraction. The village also featured some newly built replicas created by the resort, including a Texaco gas station designed by Zick & Sharp, which offered free showers and restrooms to attract motorists to the resort. The Silver Slipper casino was added to the village in 1950.
The Last Frontier was sold in 1951, to a group led by McAfee. The new ownership included Jake Kozloff and Beldon Katleman, the latter of whom also owned the El Rancho Vegas. By 1954, Kozloff was the primary stockholder, and the ownership group now included Murray Randolph.
New Frontier (1955–65)
In June 1954, construction began on a $2 million expansion known as the New Frontier. The project included more rooms, new restaurants, and additional casino space. The Little Church of the West was relocated elsewhere on the property to make room for the new facilities. Later that year, Katleman sued several resort executives, including Kozloff, his brother William Kozloff, and Randolph. Katleman alleged that the trio had undisclosed partners invested in the resort, going against state law. He also alleged that the men began expansion of the resort without first obtaining a loan to cover the costs. The Nevada Tax Commission launched an investigation into the resort's hidden ownership.
An opening celebration for the New Frontier was held on April 4, 1955. It served as a modernized expansion of the Hotel Last Frontier, which continued to operate under its original name. Singer Mario Lanza was scheduled to perform for the opening, but canceled at the last minute due to laryngitis, forcing the property to refund $20,000 in tickets.
Jake Kozloff resigned as president and general manager a few weeks after the opening. He and Randolph sold their interest to a new investor group, which finalized their purchase in May 1955, after paying more than $1 million to creditors. Katleman had sought to prevent the sale, as the resort was heavily mortgaged under the new group's financial setup. Katleman had also gotten into a fist fight with Maury Friedman, a member of the group who was denied ownership by the tax commission. Friedman was approved for an ownership stake later in 1955, along with seven other new partners in the group. Katleman's 1954 suit against Kozloff and Randolph was settled a few months later.
An expansion project was announced later in 1955. The adjacent Royal Nevada hotel-casino, located north of the Frontier, was taken over by the latter's ownership group in 1956. The Royal Nevada then briefly served as an annex to the New Frontier. Later that year, a new group took over operations and invested $301,000 into the New Frontier, which was struggling financially. The group included Vera Krupp, the estranged wife of Alfried Krupp von Bohlen und Halbach. Krupp oversaw operations with Louis Manchon, a swimming pool contractor. The previous group, including Friedman, returned to take over operations in early March 1957, after Krupp declined to invest any further in the struggling resort. Krupp alleged that stockholders had misled her on the monetary potential of the New Frontier. The property owed approximately $100,000 to creditors, not including back taxes sought by the U.S. government. Federal agents seized more than $1 million in assets from the property, which closed its facilities on March 18, 1957, with the exception of the hotel. The New Frontier later went into bankruptcy. Restaurant and bar operations eventually resumed.
In mid-1958, a new operating group – led by Los Angeles shirt manufacturer Jack Barenfield – proposed a $400,000 investment to reopen the casino and operate it on a limited basis. The Nevada Gaming Control Board was skeptical that the group would have enough funds to keep the casino operational for long.
Warren Bayley, one of the primary owners of the Hacienda resort, reached a deal to take over the New Frontier from Katleman and Friedman. The $6.5 million deal was finalized on October 1, 1958. The property was leased to Bayley, who agreed to pay off its debts. Actor Preston Foster served as vice president for Frontier Properties, Inc. The casino area reopened in April 1959. Two years later, Idaho banker and construction company owner Frank Wester sought to take over the property. Wester was approved by state gaming regulators, but failed to follow through on the deal.
The Frontier (1967–98)
Bayley became the primary owner of the New Frontier Hotel in November 1964. He died a month later, and the casino was closed on New Year's Eve, in preparation for an expansion. The hotel and other facilities closed a few days later, and the property never reopened. Bankers Life purchased Frontier Properties Inc. in August 1965, and leased it to a new company, Vegas Frontier Inc., overseen by Friedman. Six months later, Friedman announced plans to demolish the existing facilities entirely for a larger Frontier resort to be built on the site. The demolition process reached its final stage in May 1966. The western village was included in the demolition, although the Little Church of the West and the Silver Slipper casino were kept.
Groundbreaking for a new Frontier hotel-casino took place on September 26, 1966, with Friedman set to oversee casino operations. The new project had more than a dozen investors, including future casino mogul Steve Wynn, who purchased a three-percent stake. The Frontier marked Wynn's entry into the Las Vegas gaming industry. It was later discovered that the Frontier project was financed with Detroit mob money, from a group led by Anthony Joseph Zerilli.
The $25 million Frontier opened on July 29, 1967, with a four-day celebration. It included 650 hotel rooms, entertainment venues, several restaurants, and convention space. The project was designed by Rissman & Rissman. The Frontier's roadside sign had a height of 184 feet, making it the tallest in Las Vegas. The sign, along with the Frontier's new "F" logo, was designed by Bill Clark of Ad Art. The sign featured 16-foot-tall letters, with the giant "F" logo resting at the top.
Several individuals in the new property, including Friedman, had difficulty gaining approval of state gaming regulators. Businessman Howard Hughes bought out the group in December 1967, paying $23 million for the Frontier. Like his other casino properties, it was originally operated through Hughes Tool Company, until Hughes' Summa Corporation took over in 1973. Hughes died three years later. A $5 million renovation concluded in 1978. Later that year, the Little Church of the West was relocated to the Hacienda resort, making room for the Fashion Show Mall to be built just south of the Frontier.
In December 1987, Summa agreed to sell the Frontier and Silver Slipper – the last of Hughes' Las Vegas gaming properties – to casino owner Margaret Elardi. She took over ownership of the Frontier on June 30, 1988, and acquired the Silver Slipper later that year, demolishing the latter to add a Frontier parking lot. In December 1989, Elardi's two sons, John and Tom, became part-owners with her in the Frontier. The 16-story Atrium Tower, consisting of 400 suites, was opened a month later.
Under the Elardis' ownership, the Frontier focused primarily on a low-budget clientele of slot players. It offered few amenities, at a time when new megaresorts were becoming popular on the Las Vegas Strip.
Strike
The Frontier had a labor agreement with the Culinary Workers Union that expired on July 1, 1989. Upon its expiration, general manager Tom Elardi said that the union presented the Frontier with two contract renewal choices, with no option to negotiate; he said the family would not have purchased the Frontier if they had known this would happen. Citing a reduction in salaries and worker benefits, 550 workers went on strike on September 21, 1991. Politicians such as Jesse Jackson expressed support for the strikers, who represented four unions, including Culinary.
The strike ran continuously on the sidewalk in front of the resort, and striking workers were occasionally violent towards patrons who crossed the picket line. In April 1993, California tourist Sean White and his family were verbally and physically assaulted by the strikers. Seven union workers were charged in the incident, and the union itself settled with the Whites after they filed a lawsuit. Sean White also sued the Frontier, seeking damages for his injuries and alleging inadequate security at the resort. He claimed that the property was aware of the strikers being particularly agitated on the night of the incident, yet did nothing to resolve the situation. The Frontier countered that the Whites provoked the strikers. Furthermore, Tom Elardi said that guests were always warned about possible verbal abuse from the strikers when making hotel reservations. He also said that, according to the National Labor Relations Board (NLRB), it would be illegal to label the strikers as "violent". In addition, Elardi said that Frontier security did not have the authority to help guests on public property, where the incident took place. A jury eventually ruled in the Frontier's favor, finding it not liable for events that take place on public property.
In late 1991, the Frontier ran controversial ads in the Los Angeles Times implying that the entire Strip was being targeted by the strike. The property eventually stopped running the ads after protests from other resorts. Business at the Frontier saw a 40-percent decrease during the first year of the strike. In 1993, Nevada governor Bob Miller appointed a fact finder to help resolve the strike, although these efforts failed after 28 meetings. Miller later called the Frontier an embarrassment to the state for its refusal to end the strike. Margaret Elardi wanted to settle with the union and end the strike, but her sons opposed the idea.
Numerous complaints against the Frontier were filed with the NLRB. In 1995, a federal court ruled that the resort had to pay back work-related benefits that it had cut off to striking workers. The NLRB later ruled in favor of the union, agreeing with the 1995 ruling and calling the dispute an unfair labor practice strike. Negotiations between the Culinary union and the Elardis took place in July 1996, but ended without a resolution, in part because Tom Elardi refused a Culinary mandate to rehire all of the striking workers: "I believe the ones who've been violent or who participated in major picket line misconduct shouldn't come back. The union says that's the only way they will settle, but I absolutely refuse to take them back".
Arthur Goldberg, chairman of Bally Entertainment, announced in July 1996 that there was interest in purchasing the Frontier and ending the strike. At the time, Hilton Hotels Corporation was in the process of acquiring Bally. Goldberg was willing to purchase the Frontier himself if Hilton should pass on it. His plan would potentially include demolishing all or part of the Frontier to make way for a 3,000-room resort. Wynn and casino rival Donald Trump were also rumored to have an interest in buying the Frontier. Trump passed on the property, as he found Elardi's $208 million asking price too high. Hilton and Goldberg also did not proceed with a purchase, and the strike continued.
Allegations
In late 1996, a former Frontier worker alleged that the Elardis ran a technologically advanced spy operation to monitor the strike. It was also used to monitor Frontier security guards, as well as officers of the Las Vegas Metropolitan Police Department whenever they came to view video footage of the strike. The operation allegedly included security cameras and listening devices, operated from a second-floor headquarters known as the 900 Room that was overseen by 15 people. The worker also said that the resort routinely sabotaged the strike, for instance by turning on nearby sprinklers or placing manure bags near a catering truck. Tom Elardi called the worker disgruntled. He said the 900 Room functioned only to monitor and maintain the exterior during the strike, denying that any sabotage had taken place.
Other former workers came forward to confirm the spying allegation, stating that there was a high level of paranoia relating to the strike. Some workers said that the Frontier had tapped its office phones to monitor conversations, allegations which led to an FBI investigation. Concerned that strikers might stay at the hotel to gain information, Frontier officials also had recording devices planted in certain guest rooms which were to be occupied only by confirmed members of the strike, allowing the hotel to spy on them. The spying operation allegedly went beyond the resort, as some workers said they were tasked with following strikers around. Others collected garbage from the Culinary headquarters in hopes of gaining incriminating information.
After the allegations came to light, strikers filed 75 criminal complaints against the Frontier, and the Nevada Gaming Control Board opened an investigation. Meanwhile, the AFL–CIO launched a campaign to raise awareness about the strike, with president John Sweeney calling the Frontier "one of the biggest corporate criminals" in American history. The AFL-CIO also opened a committee investigation into the strike. John Elardi later admitted that the 900 Room was used for spying, stating that he created it in 1992, without first consulting Margaret or Tom Elardi. He also acknowledged using sprinklers on the strikers, after police stopped responding to the resort's calls about trespassing picketers.
Resolution
In October 1997, businessman Phil Ruffin reached an agreement to buy the Frontier from the Elardis for $167 million. He also agreed to sign a contract with the union, putting an end to the strike. Ruffin's application for a gaming license was fast-tracked to expedite the sale and end the strike sooner. Prior to the announcement of Ruffin's purchase, the Nevada Gaming Control Board was prepared to file a complaint revoking the Frontier's gaming license, due to the property's conduct during the strike.
Ruffin completed his purchase on February 1, 1998, ending the 2,325-day strike. It was among the longest strikes in U.S. history, and the Culinary union had spent $26 million on it. Approximately 300 of the 550 striking workers returned to their jobs. Striking employees received a total of nearly $5 million in back-pay and trust fund contributions. On the day of the purchase, a celebration event was held at the resort, and was attended by 3,000 people.
New Frontier (1998–2007)
Upon taking ownership, Ruffin renamed the property back to the New Frontier. It had 986 rooms and a casino, and catered to a middle-class clientele. The resort had become outdated during the strike, and lacked basic features such as fulltime room service and a 24-hour coffee shop. Profits improved following a $20 million renovation project, which included new restaurants and a remodeled sportsbook.
Gilley's Saloon, a country western restaurant, was among the additions. It included a mechanical bull, a dance hall, and live music. The saloon opened in December 1998. Ruffin got the idea for the restaurant after seeing the 1980 film Urban Cowboy, which had featured the Gilley's Club in Texas, along with its mechanical bull. Ruffin subsequently partnered with country singer Mickey Gilley to open the saloon, inspired by the original club. Gilley's later offered bikini bull-riding and mud wrestling.
Ruffin intended to rebrand the hotel as a Radisson, and renovated the guest rooms to bring them up to standard. However, in 1999, he decided against this idea as he now had other plans for the property. In January 2000, Ruffin announced plans to demolish the New Frontier in five or six months to make way for a new casino resort, scheduled to open in 2002. The new project, known as City by the Bay, would include a San Francisco theme and more than 2,500 rooms. Ruffin said the new resort was necessary to stay competitive on the Las Vegas Strip. The project would cost up to $700 million. He put his redevelopment plans on hold in May 2000, because of difficulty raising the necessary funds. Ruffin said the project would eventually proceed. The New Frontier continued operations in the meantime, and remained profitable.
In 2002, Ruffin partnered with Trump to build Trump International Hotel Las Vegas. It was constructed on the Frontier property's southwest corner, taking up part of a rear parking lot. Meanwhile, Ruffin still had difficulty acquiring funds to build City by the Bay, and his plans evolved several times over the years. At one point, Ruffin considered a Trump-branded resort to replace the New Frontier. In 2003, Ruffin was in discussions with several casino operators about a possible joint venture for a new resort on the Frontier site. At the end of 2004, he said he would redevelop the New Frontier site on his own, stating that he had turned down a dozen offers from potential partners. By 2006, Ruffin's unnamed resort project was planned to include a 485-foot Ferris wheel. Later that year, Ruffin announced that the new casino resort would be named Montreux, after the Swiss town of the same name. The $2 billion resort would include 2,750 rooms.
However, by March 2007, Ruffin was in negotiations to sell the New Frontier to El Ad Properties, which owned the Plaza Hotel in New York City. A sale agreement was announced two months later, with El Ad paying approximately $35 million per acre for the 35-acre site. At more than $1.2 billion, it was the most expensive real estate transaction on the Strip. El Ad planned to demolish the New Frontier and build a $5 billion Plaza-branded resort in its place.
The New Frontier closed on July 16, 2007, at 12:01 a.m. The closing was a low-key event. At the time, the New Frontier operated the last remaining bingo room on the Strip, and was one of the few remaining casinos to still use coin-operated slot machines. El Ad completed its purchase three weeks after the closure.
The 984-room New Frontier had remained popular as a low-budget alternative to larger resorts nearby. However, it lacked the same popularity as previous resorts such as the Sands, Stardust, and Desert Inn. In 2006, readers of the Las Vegas Review-Journal voted it "Hotel Most Deserving of Being Imploded". Wynn, who now owned the Wynn Las Vegas resort across the street, called the aging Frontier "the single biggest toilet in Las Vegas".
The New Frontier was the last of the Hughes-era casinos to be demolished. After a five-minute fireworks show, the 16-story Atrium Tower was imploded on November 13, 2007, at 2:37 a.m., before thousands of spectators who turned out to view the demolition. The tower was imploded by Controlled Demolition, Inc., which had worked on other Las Vegas hotel implosions. The interior was stripped down to allow the placement of 1,040 pounds of dynamite, spread across 6,200 points in the tower. The implosion left a four-story pile of concrete, glass, and steel remains. Two low-rise hotel wings were demolished with an excavator, although the discovery of asbestos slowed the process.
The roadside sign was left up until December 2008, when Wynn requested that it be taken down ahead of the opening for Encore Las Vegas, an addition to his Wynn property. The city's Neon Museum sought to save portions of the sign.
Redevelopment proposals
Following the closure of the New Frontier, there had been multiple redevelopment proposals.
The Plaza project failed to materialize, due to financial problems brought on by the Great Recession. Wynn offered to beautify the vacant site with landscaping, and was also approached by El Ad several times to take over the land and develop it. However, he declined as he considered such a project too much of a financial risk. Wynn blamed what he saw as anti-business policies of U.S. president Barack Obama, and a challenging level of debt as a consequence of El Ad having paid what proved too high a price for the property.
In 2014, Crown Resorts purchased the property for $280 million and partnered with Oaktree Capital Management. A year later, they announced plans to build a casino resort known as Alon Las Vegas. However, Crown Resorts pulled out of the project in 2016, and it was eventually canceled.
Wynn Resorts bought the land and four adjacent acres in early 2018, for $336 million. The company announced plans to build Wynn West, a new casino resort to complement the existing Wynn and Encore properties. Steve Wynn, amid sexual assault allegations against him, resigned from his company shortly after the announcement. Matt Maddox took over as CEO, and plans for Wynn West were shelved. In 2024, the county extended permits for the site, giving Wynn until April 2026 to begin construction on an unnamed resort expansion. The project would include additional casino space and a hotel tower with 1,100 rooms.
Entertainment
The Hotel Last Frontier opened with an entertainment venue known as the Ramona Room. Liberace made his Las Vegas debut at the showroom in 1944. The Mary Kaye Trio performed at the Hotel Last Frontier for approximately three years, starting in 1950. The Ramona Room had already been booked by other acts over the next six months, so a stage was added to a bar area for the trio to perform. They became the first lounge act to perform in Las Vegas, popularizing the concept.
The New Frontier addition in 1955 included a restaurant and showroom known as the Venus Room. A new Venus Room, with seating for 800, opened with the rebuilt Frontier in 1967. The new resort also included the 400-seat Post Time Theater. Elvis Presley made his Las Vegas debut at the New Frontier in 1956, but was poorly received. In the late 1950s, the New Frontier offered Holiday in Japan, a variety show featuring 60 performers from Tokyo.
Ronald Reagan entertained at the resort in the 1950s, as did Wayne Newton in the 1960s and 1970s. Other entertainers included Robert Goulet, Jimmy Durante, George Carlin, Ray Anthony, and Phil Harris.
Diana Ross & The Supremes gave their final performance in 1970, at the Frontier. Their performance was recorded for the album Farewell. In the early 1970s, the Frontier hosted the Miss Rodeo America pageant. Siegfried & Roy performed in Beyond Belief, a magic show that opened in 1981. It ran for 3,538 performances over a period of nearly seven years. When the Elardi family took over ownership in the late 1980s, they closed the showroom. After years without live entertainment, Ruffin added a 284-seat venue in 2000.
One new show, Legends of Comedy, featured entertainers who impersonated comedians such as Rodney Dangerfield, Jay Leno, and Roseanne Barr. In 2001, the New Frontier launched Rock 'n' Roll Legends, featuring impersonator singers. Numerous other shows ran at the resort in the 2000s, including a magic act, the Thunder From Down Under male revue, and a Frank Sinatra tribute show. Female impersonator Kenny Kerr also had a musical dance show at the property.
References
External links
Official website, archived via the Wayback Machine
New Frontier Implosion Video—the implosion starts at 1:50
New Frontier photo from November 3, 2007
Las Vegas Casino Demolition: Blowdown Documentary
1942 establishments in Nevada
1967 establishments in Nevada
2007 disestablishments in Nevada
Buildings and structures demolished by controlled implosion
Buildings and structures demolished in 2007
Casino hotels
Casinos completed in 1942
Casinos completed in 1967
Casinos in Paradise, Nevada
Defunct casinos in the Las Vegas Valley
Defunct hotels in the Las Vegas Valley
Demolished hotels in Clark County, Nevada
Hotel buildings completed in 1942
Hotel buildings completed in 1967
Hotels established in 1942
Hotels established in 1967
Landmarks in Nevada
Las Vegas Strip | New Frontier Hotel and Casino | Engineering | 6,322 |
74,341,487 | https://en.wikipedia.org/wiki/Nadel%20vanishing%20theorem | In mathematics, the Nadel vanishing theorem is a global vanishing theorem for multiplier ideals, introduced by A. M. Nadel in 1989. It generalizes the Kodaira vanishing theorem using singular metrics with (strictly) positive curvature, and also it can be seen as an analytical analogue of the Kawamata–Viehweg vanishing theorem.
Statement
The theorem can be stated as follows. Let $X$ be a smooth complex projective variety, $D$ an effective $\mathbb{Q}$-divisor and $L$ a line bundle on $X$, and let $\mathcal{J}(D)$ be the multiplier ideal sheaf of $D$. Assume that $L - D$ is big and nef. Then

$$H^i\bigl(X, \mathcal{O}_X(K_X + L) \otimes \mathcal{J}(D)\bigr) = 0 \quad \text{for all } i > 0.$$

Nadel vanishing theorem in the analytic setting: Let $(X, \omega)$ be a Kähler manifold ($X$ a reduced complex space (complex analytic variety) with a Kähler metric) such that $X$ is weakly pseudoconvex, and let $F$ be a holomorphic line bundle over $X$ equipped with a singular hermitian metric $h$ of weight $\varphi$. Assume that $i\Theta_h(F) \geq \varepsilon \omega$ for some continuous positive function $\varepsilon$ on $X$. Then

$$H^q\bigl(X, \mathcal{O}(K_X + F) \otimes \mathcal{J}(\varphi)\bigr) = 0 \quad \text{for all } q \geq 1.$$

Let $\varphi$ be an arbitrary plurisubharmonic function on an open set $\Omega \subset \mathbb{C}^n$; then the multiplier ideal sheaf $\mathcal{J}(\varphi)$ is coherent on $\Omega$, and therefore its zero variety is an analytic set.
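Taking $D = 0$, so that $\mathcal{J}(D) = \mathcal{O}_X$, shows how the classical statements are recovered: for $L$ big and nef the theorem reduces to the Kawamata–Viehweg vanishing theorem, and since an ample line bundle is in particular big and nef, the Kodaira vanishing theorem

$$H^i\bigl(X, \mathcal{O}_X(K_X + L)\bigr) = 0 \quad \text{for } i > 0$$

follows as a further special case.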
References
Citations
Bibliography
Further reading
Theorems in algebraic geometry
Theorems in complex geometry | Nadel vanishing theorem | Mathematics | 251 |
14,725,514 | https://en.wikipedia.org/wiki/Anaplastic%20lymphoma%20kinase | Anaplastic lymphoma kinase (ALK) also known as ALK tyrosine kinase receptor or CD246 (cluster of differentiation 246) is an enzyme that in humans is encoded by the ALK gene.
Identification
Anaplastic lymphoma kinase (ALK) was originally discovered in 1994 in anaplastic large-cell lymphoma (ALCL) cells. ALCL is caused by a t(2;5)(p23;q35) chromosomal translocation that generates the fusion protein NPM-ALK, in which the kinase domain of ALK is fused to the amino-terminal part of the nucleophosmin (NPM) protein. Dimerization of NPM constitutively activates the ALK kinase domain.
The full-length protein ALK was identified in 1997 by two groups. The deduced amino acid sequences revealed that ALK was a novel receptor tyrosine kinase (RTK), having an extracellular ligand-binding domain, a transmembrane domain, and an intracellular tyrosine kinase domain. While the tyrosine kinase domain of human ALK shares a high degree of similarity with that of the insulin receptor, its extracellular domain is unique among the RTK family in containing two MAM domains (meprin, A5 protein and receptor protein tyrosine phosphatase mu), an LDLa domain (low-density lipoprotein receptor class A) and a glycine-rich region. Based on overall homology, ALK is closely related to the leukocyte receptor tyrosine kinase (LTK) and, together with the insulin receptor, forms a subgroup in the RTK superfamily. The human ALK gene encodes a protein 1,620 amino acids long with a molecular weight of 180 kDa.
Since the original discovery of the receptor in mammals, several orthologs of ALK have been identified: dAlk in the fruit fly (Drosophila melanogaster) in 2001, scd-2 in the nematode (Caenorhabditis elegans) in 2004, and DrAlk in the zebrafish (Danio rerio) in 2013.
The ligands of the human ALK/LTK receptors were identified in 2014: FAM150A (AUGβ) and FAM150B (AUGα), two small secreted peptides that strongly activate ALK signaling. In invertebrates, ALK-activating ligands are Jelly belly (Jeb) in Drosophila, and hesitation behaviour 1 (HEN-1) in C. elegans. No such ligands have been reported yet in zebrafish or other vertebrates.
Mechanism
Following binding of the ligand, the full-length receptor ALK dimerizes, changes conformation, and autoactivates its own kinase domain, which in turn phosphorylates other ALK receptors in trans on specific tyrosine amino acid residues. ALK phosphorylated residues serve as binding sites for the recruitment of several adaptor and other cellular proteins, such as GRB2, IRS1, Shc, Src, FRS2, PTPN11/Shp2, PLCγ, PI3K, and NF1. Other reported downstream ALK targets include FOXO3a, CDKN1B/p27kip, cyclin D2, NIPA, RAC1, CDC42, p130CAS, SHP1, and PIKFYVE.
Phosphorylated ALK activates multiple downstream signal transduction pathways, including MAPK-ERK, PI3K-AKT, PLCγ, CRKL-C3G, and JAK-STAT.
Function
The receptor ALK plays a pivotal role in cellular communication and in the normal development and function of the nervous system. This observation is based on the extensive expression of ALK messenger RNA (mRNA) throughout the nervous system during mouse embryogenesis. In vitro functional studies have demonstrated that ALK activation promotes neuronal differentiation of PC12 or neuroblastoma cell lines.
ALK is critical for embryonic development in Drosophila. Flies lacking the receptor die due to failure of founder cell specification in embryonic visceral muscle. However, while ALK knockout mice exhibit defects in neurogenesis and testosterone production, they remain viable, suggesting that ALK is not critical to their developmental processes.
ALK regulates retinal axon targeting, growth and size, synapse development at the neuromuscular junction, behavioral responses to ethanol, and sleep. It restricts and constrains learning and long-term memory and small-molecule inhibitors of the ALK receptor can improve learning, long-term memory, and extend healthy lifespan. ALK is also a candidate thinness gene, as its genetic deletion leads to resistance to diet- and leptin-mutation-induced obesity.
Pathology
The ALK gene can be oncogenic in three ways – by forming a fusion gene with any of several other genes, by gaining additional gene copies or with mutations of the actual DNA code for the gene itself.
Anaplastic large-cell lymphoma
The t(2;5) chromosomal translocation is associated with approximately 60% of anaplastic large-cell lymphomas (ALCLs) of the ALK-positive type, and with very rare cases of primary cutaneous ALCL. The translocation creates a fusion gene consisting of the ALK (anaplastic lymphoma kinase) gene and the nucleophosmin (NPM) gene: the 3' half of ALK, derived from chromosome 2 and coding for the catalytic domain, is fused to the 5' portion of NPM from chromosome 5. The product of the NPM-ALK fusion gene is oncogenic.
In a smaller fraction of ALCL patients, the 3' half of ALK is fused to the 5' sequence of TPM3 gene, encoding for tropomyosin 3. In rare cases, ALK is fused to other 5' fusion partners, such as TFG, ATIC, CLTC1, TPM4, MSN, ALO17, MYH9.
Adenocarcinoma of the lung
The EML4-ALK fusion gene is responsible for approximately 3-5% of non-small-cell lung cancer (NSCLC). The vast majority of cases are adenocarcinomas. Patients with this ALK rearrangement have the following clinicopathologic characteristics: young age at diagnosis (median 50 years), female gender, nonsmoker/light smoker, adenocarcinoma histology with specific morphologic patterns such as cribriform and solid signet ring, expression of thyroid transcription factor 1, a tendency to metastasize to the pleura or pericardium, frequently more metastases than other molecular types, and predominantly metastases to the central nervous system. The standard test used to detect this gene in tumor samples is fluorescence in situ hybridization (FISH) using a US FDA-approved kit. Recently, Roche Ventana obtained approval in China and European Union countries to test for this rearrangement by immunohistochemistry. Other techniques such as reverse-transcriptase PCR (RT-PCR) can also be used to detect lung cancers with an ALK gene fusion, but they are not recommended. ALK lung cancers are found in patients of all ages, although on average these patients tend to be younger. ALK lung cancers are more common in light cigarette smokers or nonsmokers, but a significant number of patients with this disease are current or former cigarette smokers. The EML4-ALK rearrangement in NSCLC is mutually exclusive with EGFR and KRAS mutations, i.e. it is not found in EGFR- or KRAS-mutated tumors.
Gene rearrangements and overexpression in other tumours
Familial cases of neuroblastoma
Inflammatory myofibroblastic tumor
Adult and pediatric renal cell carcinomas
Esophageal squamous cell carcinoma
Breast cancer, notably the inflammatory subtype
Colonic adenocarcinoma
Glioblastoma multiforme
Anaplastic thyroid cancer
ALK inhibitors
Xalkori (crizotinib), produced by Pfizer, was approved by the FDA for treatment of late stage lung cancer on August 26, 2011. Early results of an initial Phase I trial with 82 patients with ALK induced lung cancer showed an overall response rate of 57%, a disease control rate at 8 weeks of 87% and progression free survival at 6 months of 72%.
In patients affected by relapsed or refractory ALK+ Anaplastic Large Cell Lymphoma, crizotinib produced objective response rates ranging from 65% to 90% and 3 year progression free survival rates of 60-75%. No relapse of the lymphoma was ever observed after the initial 100 days of treatment. Treatment must be continued indefinitely at present.
Ceritinib was approved by the FDA in April 2014 for the treatment of patients with anaplastic lymphoma kinase (ALK)-positive metastatic non-small cell lung cancer (NSCLC) who have progressed on or are intolerant to crizotinib.
Entrectinib (RXDX-101) is a selective tyrosine kinase inhibitor developed by Ignyta, Inc., with specificity at low nanomolar concentrations for all three Trk proteins (encoded by the three NTRK genes) as well as the ROS1 and ALK receptor tyrosine kinases. An open-label, multicenter, global phase 2 clinical trial called STARTRK-2 is currently underway to test the drug in patients with ROS1/NTRK/ALK gene rearrangements.
See also
Cluster of differentiation
Notes and references
Notes
References
Further reading
External links
ALK Correlations, Experiments, Publications and Clinical Trials
GeneReviews/NCBI/NIH/UW entry on ALK-Related Neuroblastoma Susceptibility
OMIM entries on ALK-Related Neuroblastoma Susceptibility
Clusters of differentiation
Tyrosine kinase receptors | Anaplastic lymphoma kinase | Chemistry | 2,165 |
43,017,519 | https://en.wikipedia.org/wiki/Direct%20clustering%20algorithm | Direct clustering algorithm (DCA) is a methodology for identification of cellular manufacturing structure within an existing manufacturing shop. The DCA was introduced in 1982 by H.M. Chan and D.A. Milner. The algorithm restructures the existing machine / component (product) matrix of a shop by switching the rows and columns in such a way that the resulting matrix shows component families (groups) with corresponding machine groups. See Group technology. The algorithm can be executed manually and was already suitable for the computers of its time.
Procedure
The cellular manufacturing structure consists of several machine groups (production cells) in which corresponding product groups (products with similar technology) are exclusively manufactured. To identify a possible cellular manufacturing structure within an existing manufacturing shop, the DCA methodology roughly follows this procedure (a code sketch follows the list):
Set up a matrix where one dimension represents machines, the other products. All intersections where a product requires a machine are filled with "1", all others with "0".
The positions of the columns and the order of the rows are then changed. The algorithm provides rules for switching columns and rows with the aim of concentrating the matrix cells containing "1" into several groups.
The resulting matrix shows groups of products with corresponding machines aligned along the matrix diagonal.
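As a minimal illustration, the sketch below (in Python) performs such a row/column rearrangement on a binary machine/component matrix. The binary-weight ordering rule used here is a simplified stand-in in the spirit of rank order clustering, not the exact switching rules of Chan and Milner; the function name and the toy matrix are illustrative only.

```python
def dca_rearrange(matrix):
    """Rearrange a binary machine/component matrix so that the 1-entries
    cluster along the diagonal. matrix[i][j] == 1 iff product j requires
    machine i."""
    m, n = len(matrix), len(matrix[0])
    rows, cols = list(range(m)), list(range(n))
    for _ in range(100):  # iterate until the ordering stabilises
        # Weight each row by the current column positions of its 1-entries.
        row_key = {i: sum(2 ** (n - 1 - p) for p, j in enumerate(cols)
                          if matrix[i][j]) for i in rows}
        rows = sorted(rows, key=lambda i: -row_key[i])
        # Then weight each column by the new row positions of its 1-entries.
        col_key = {j: sum(2 ** (m - 1 - p) for p, i in enumerate(rows)
                          if matrix[i][j]) for j in cols}
        new_cols = sorted(cols, key=lambda j: -col_key[j])
        if new_cols == cols:  # ordering has stabilised
            break
        cols = new_cols
    return [[matrix[i][j] for j in cols] for i in rows]

# On this toy 4-machine/4-product shop, two product families emerge as
# blocks on the diagonal:
shop = [[1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1]]
for row in dca_rearrange(shop):
    print(row)
```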
The experience
The DCA methodology would give a perfect result in an ideal case where there are no overlapping machines or products between the groups. In most real cases, such overlapping represents a further challenge for users of the methodology. "Formation of Machine Cells/ Part Families in Cellular Manufacturing Systems Using an ART-Modified Single Linkage Clustering Approach – A Comparative Study" by M. Murugan and V. Selladurai compares DCA with some other methodologies serving the same purpose.
References
External links
Saving Time With Quick Response Manufacturing (QRM)
Lean manufacturing | Direct clustering algorithm | Engineering | 371 |
25,269,252 | https://en.wikipedia.org/wiki/Coordinate%20singularity | In mathematics and physics, a coordinate singularity occurs when an apparent singularity or discontinuity occurs in one coordinate frame that can be removed by choosing a different frame.
An example is the apparent (longitudinal) singularity at the 90 degree latitude in spherical coordinates. An object moving due north (for example, along the line 0 degrees longitude) on the surface of a sphere will suddenly experience an instantaneous change in longitude at the pole (i.e., jumping from longitude 0 to longitude 180 degrees). In fact, longitude is not uniquely defined at the poles. This discontinuity, however, is only apparent; it is an artifact of the coordinate system chosen, which is singular at the poles. A different coordinate system would eliminate the apparent discontinuity, e.g. by replacing the latitude/longitude representation with an n-vector representation.
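A minimal sketch (in Python) of the n-vector idea: mapping latitude/longitude to a unit 3-vector removes the polar singularity, since every longitude at the pole maps to the same point.

```python
import math

def latlon_to_nvector(lat_deg, lon_deg):
    """Latitude/longitude -> unit 3-vector; well defined even at the poles."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

# At the North Pole the longitude coordinate is irrelevant: both calls
# return (0, 0, 1) up to floating-point rounding.
print(latlon_to_nvector(90.0, 0.0))
print(latlon_to_nvector(90.0, 137.0))
```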
English theoretical physicist Stephen Hawking aptly summed this up when he once asked the question, "What lies north of the North Pole?".
See also
Chronometric singularity
Imaginary time
Mathematical singularity
No-boundary proposal
Schwarzschild metric#Singularities and black holes
References
Mathematical analysis | Coordinate singularity | Mathematics | 237 |
62,035,892 | https://en.wikipedia.org/wiki/Matthew%20Choptuik | Matthew William Choptuik (born 1961) is a Canadian theoretical physicist specializing in numerical relativity.
Choptuik graduated from University of British Columbia with a master's degree in 1982 and a Ph.D. advised by William Unruh in 1986. He became an associate professor in 1995 at the University of Texas at Austin. In 1999 he became a member of the Institute for Theoretical Physics at the University of California, Santa Barbara and in the same year he became a professor at University of British Columbia.
In 1993, he discovered critical phenomena in gravitational collapse via numerical studies. He showed, under non-generic initial conditions, the possibility of the occurrence of a naked singularity in general relativity with scalar matter. This had previously been the subject of a bet between Stephen Hawking on one side and Kip Thorne and John Preskill on the other. Hawking lost the bet after Choptuik's publication, but renewed it for the case of generic initial conditions.
Choptuik was the 2001 awardee of the Rutherford Memorial Medal. In 2003 he received the CAP-CRM Prize in Theoretical and Mathematical Physics. In 2003 he became a fellow of the American Physical Society. In 2002, he became an honorary doctor of Brandon University.
References
External links
Homepage
1961 births
Living people
Canadian expatriate academics in the United States
Fellows of the American Physical Society
20th-century Canadian physicists
Theoretical physicists
21st-century Canadian physicists
University of British Columbia Faculty of Science alumni
University of Texas at Austin faculty
University of California, Santa Barbara faculty
Academic staff of the University of British Columbia | Matthew Choptuik | Physics | 313 |
12,610,397 | https://en.wikipedia.org/wiki/Digital%20clock%20manager | A digital clock manager (DCM) is an electronic component available on some field-programmable gate arrays (FPGAs) (notably ones produced by Xilinx). A digital clock manager is useful for manipulating clock signals inside the FPGA, and to avoid clock skew which would introduce errors in the circuit.
Uses
Digital clock managers have the following applications (a sketch of the frequency-synthesis arithmetic follows the list):
Multiplying or dividing an incoming clock (which can come from outside the FPGA or from a Digital Frequency Synthesizer [DFS]).
Making sure the clock has a steady duty cycle.
Adding a phase shift with the additional use of a delay-locked loop.
Eliminating clock skew within an FPGA design.
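As an illustration of the frequency-synthesis arithmetic involved, the sketch below (in Python) searches for integer multiply/divide factors approximating a target output frequency. The factor ranges are assumptions modelled on typical Xilinx DCM limits rather than values taken from a specific datasheet.

```python
def best_md(f_in_mhz, f_target_mhz,
            m_range=range(2, 33), d_range=range(1, 33)):
    """Pick integer M (multiply) and D (divide) so that f_in * M / D is
    as close as possible to the target output frequency."""
    m, d = min(((m, d) for m in m_range for d in d_range),
               key=lambda md: abs(f_in_mhz * md[0] / md[1] - f_target_mhz))
    return m, d, f_in_mhz * m / d

print(best_md(100.0, 33.3))  # -> (2, 6, 33.33...), i.e. 100 MHz * 2/6
```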
See also
Phase-locked loop
References
Gate arrays
Electronic oscillators
Integrated circuits
Digital electronics
Electronic design | Digital clock manager | Technology,Engineering | 163 |
10,167,825 | https://en.wikipedia.org/wiki/Large-signal%20model | Large-signal modeling is a common analysis method used in electronic engineering to describe nonlinear devices in terms of the underlying nonlinear equations. In circuits containing nonlinear elements such as transistors, diodes, and vacuum tubes, under "large signal conditions", AC signals have high enough magnitude that nonlinear effects must be considered.
"Large signal" is the opposite of "small signal", which means that the circuit can be reduced to a linearized equivalent circuit around its operating point with sufficient accuracy.
Differences between Small Signal and Large Signal
A small signal model takes a circuit and, based on an operating point (bias), linearizes all the components. Nothing changes, because the assumption is that the signal is so small that the operating point (gain, capacitance, etc.) doesn't change.
A large signal model, on the other hand, takes into account the fact that the large signal actually affects the operating point, that elements are non-linear, and that circuits can be limited by the power supply values. A small signal model ignores these simultaneous variations in the gain and supply values.
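The distinction can be made concrete with a diode, modelled below in a short Python sketch; the saturation current and thermal voltage are assumed example values. The small-signal linearization tracks the full Shockley equation closely for small excursions around the bias point but diverges quickly as the signal grows.

```python
import math

I_S, V_T = 1e-12, 0.02585  # assumed saturation current (A) and thermal voltage (V)

def i_large(v):
    """Large-signal (nonlinear) Shockley diode equation."""
    return I_S * (math.exp(v / V_T) - 1.0)

V_BIAS = 0.6                 # chosen operating point
I_BIAS = i_large(V_BIAS)
g_d = (I_BIAS + I_S) / V_T   # small-signal conductance dI/dV at the bias point

def i_small(v):
    """Linearization of the diode around the bias point."""
    return I_BIAS + g_d * (v - V_BIAS)

for dv in (0.001, 0.01, 0.1):  # growing signal amplitude
    v = V_BIAS + dv
    err = abs(i_small(v) - i_large(v)) / i_large(v)
    print(f"dv = {dv:5.3f} V  relative error of small-signal model: {err:.1%}")
```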
Large Signal Models (LSMs) in Artificial Intelligence
In the domain of artificial (machine) intelligence, Large Signal Models enable human-centric interaction with, and knowledge discovery from, signal data, similar to how prompts allow users to query an LLM built on unstructured text from the web. Users can ask general questions about relationships between the focus dataset and results from a pre-compiled LSTM built on signal datasets across a large range of domains. This is achieved by layering latent pattern detection and knowledge-graph-based (KG-based) explainability into an LSTM inference pipeline.
See also
Diode modelling
Transistor models#Large-signal nonlinear models
References
Electronic device modeling
Electrical circuits | Large-signal model | Physics,Engineering | 369 |
9,969,189 | https://en.wikipedia.org/wiki/Edit%20conflict | An edit conflict is a computer problem that may occur when multiple editors edit the same file and cannot merge without losing part or all of their edit. The conflict occurs when an editor gets a copy of a shared document file, changes the copy and attempts to save the changes to the original file, which has been altered by another editor after the copy was obtained.
Resolution
The simplest way to resolve an edit conflict is to ignore intervening edits and overwrite the current file. This may lead to a substantial loss of information, and alternative methods are often employed to resolve or prevent conflicts (a minimal detection sketch follows the list):
Manual resolution, where the editor determines which version to retain and may manually incorporate edits into the current version of the file.
Store backups or file comparisons of each edit, so that previous versions of the file can still be accessed once the original is overwritten.
File locking, which limits the file to one editor at a time to prevent edit conflicts. Computer writer Gary B. Shelly notes that many wiki systems "will block the contributor who is attempting to edit the page from being able to do so until the contributor currently editing the page saves changes or remains idle on the page for an extended period of time."
Merge, by determining whether the edits are in unrelated parts of the file and combining them without user intervention.
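A minimal sketch (in Python) of how a system can detect such a conflict in the first place, using optimistic version checking; the class and method names are illustrative only.

```python
class Document:
    """Optimistic-concurrency store: a save succeeds only if the document
    has not changed since the editor checked it out."""

    def __init__(self, text=""):
        self.text, self.version = text, 0

    def checkout(self):
        return self.version, self.text

    def save(self, base_version, new_text):
        if base_version != self.version:  # someone else saved in between
            raise RuntimeError("edit conflict: merge or retry required")
        self.text, self.version = new_text, self.version + 1

doc = Document("hello")
v_a, _ = doc.checkout()       # editor A starts from version 0
v_b, _ = doc.checkout()       # editor B starts from the same version
doc.save(v_a, "hello world")  # A saves first; version is now 1
try:
    doc.save(v_b, "hello there")  # B's base version is stale
except RuntimeError as err:
    print(err)
```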
Occurrences
The problem is encountered on heavily edited articles in wikis (frequency higher in articles related to a current event or person), distributed data systems (e.g., Google Sites), and revision control systems not using file locking, as well as other high-traffic pages. If a significant amount of new text is involved, the editor who receives an "edit conflict" error message can cut and paste the new text into a word processor or similar program for further editing, or can paste that text directly into a newer version of the target document. Simple copyediting can be done directly on the newer version, and then saved.
See also
Concurrent Versions System
Apache Subversion
Git (software)
References
Wiki concepts
Version control | Edit conflict | Engineering | 411 |
31,231,313 | https://en.wikipedia.org/wiki/Dilberjin%20Tepe | Dilberjin Tepe, also Dilberjin or Delbarjin (Persian: دلبَرجین), is the modern name for the remains of an ancient town in modern (northern) Afghanistan. The town was perhaps founded in the time of the Achaemenid Empire. Under the Kushan Empire it became a major local centre. After the Kushano-Sassanids the town was abandoned.
Archaeological remains
The town proper was about in size. Dilbarjin had a city wall built under Kushan rule. In the middle of the town there was a round citadel, built at about the same time. In the north-east corner of the town a temple complex was excavated. Here many wall paintings were found, some in a purely Hellenistic style. Originally the temple was perhaps dedicated to the Dioscuri, of whom a mural in Hellenistic style has been recovered. A long inscription in the Kushan language was also discovered, dated to the early great Kushans, around the period of Kanishka I, on paleographic grounds, as it seems slightly younger than the inscription of Surkh Kotal. Outside the city walls there were still substantial buildings. Finds include inscriptions in Bactrian, most of them too damaged to provide any historical information. There were fragments of sculpture and many coins.
Wall paintings
The paintings of Dilberjin Tepe belong to the 5th-6th century CE, or even as early as the 4th century CE according to some authorities, based on numismatic evidence. The paintings have some similarity with those of Balalyk Tepe, and some from Bamiyan. A comparison with the swordsmen at the Kizil Caves would also suggest a date from the 5th century to the early 6th century CE. The same authors consider that the paintings at Balalyk Tepe are about a century later than the paintings at Dilberjin, dating from the end of the 6th century to the early 7th century CE.
These murals are generally thought to represent Hephthalites, with their characteristic tunics with a single lapel folded to the right, cropped hair and ornaments.
A famous mural shows a row of warriors in kaftan, relatively similar to the mural from Kyzyl.
A much later fresco showing an Indian scene, with Shiva and Parvati on the bull Nandi, has been dated to the 8th century CE.
Coinage
Coins of many periods were found at the site, including Hephthalite coins, but coins of the Kushano-Sasanians and the Kidarites were the most numerous from the early Sasanian period to be found on the site. About 72 such coins were found, belonging to Ardashir I, Peroz I and Hormizd I, as well as each type of Varahran I, that is, the coins first struck under Varahran and then those struck on the model of Varahran by the Kidarite rulers Kirada, Peroz and Kidara I. These coins suggest that the murals themselves should be dated to the late 4th century CE or the early 5th century CE at the latest.
Pillaging and damage
In 2023, Iconem reported the detection of massive damage that had occurred to the site.
Paintings
See also
Tavka Kurgan
Penjikent
Dalverzin Tepe
Kara Tepe
Fayaz Tepe
Balalyk tepe
References
Sources
Warwick Ball: Archaeological Gazetteer of Afghanistan : Catalogue des sites archéologiques d'Afghanistan, Paris 1982, p. 91-92
И. T. Кругликова, Дилъбепджин, Москва 1974
И. T. Кругликова, Г.A.Пугаченкова, Дилъбепджин, Москва 1977
External links
DELBARJĪN on Iranicaonline.org
Archaeological sites in Afghanistan
Kushan Empire
Castor and Pollux | Dilberjin Tepe | Astronomy | 823 |
55,265 | https://en.wikipedia.org/wiki/Road%20train | A road train, also known as a land train or long combination vehicle (LCV), is a trucking combination used to move road freight more efficiently than a single-trailer semi-trailer truck. It consists of two or more trailers or semi-trailers connected together and hauled by a prime mover; a combination is typically called a road train when it has at least three trailers. Road trains are often used in areas where other forms of heavy transport (freight train, cargo aircraft, container ship) are not feasible or practical.
History
Early road trains consisted of traction engines pulling multiple wagons. The first identified road trains operated into South Australia's Flinders Ranges from the Port Augusta area in the mid-19th century. They displaced bullock teams for the carriage of minerals to port and were, in turn, superseded by railways.
During the Crimean War, a traction engine was used to pull multiple open trucks. By 1898 steam traction engine trains with up to four wagons were employed in military manoeuvres in England.
In 1900, John Fowler & Co. provided armoured road trains for use by the British Armed Forces in the Second Boer War. Lord Kitchener stated that he had around 45 steam road trains at his disposal.
A road train devised by Captain Charles Renard of the French Engineering Corps was displayed at the 1903 Paris Salon. After his death, Daimler, which had acquired the rights, attempted to market it in the United Kingdom. Four of these vehicles were successfully delivered to Queensland, Australia, before the company ceased production upon the start of World War I.
In the 1930s/40s, the government of Australia operated an AEC Roadtrain to transport freight and supplies into the Northern Territory, replacing the Afghan camel trains that had been trekking through the deserts since the late 19th century. This truck pulled two or three Dyson four-axle self-tracking trailers. At , the AEC was grossly underpowered by today's standards, and drivers and offsiders (a partner or assistant) routinely froze in winter and sweltered in summer due to the truck's open cab design and the position of the engine radiator, with its cooling fan, behind the seats.
Australian Kurt Johannsen, a bush mechanic, is recognised as the inventor of the modern road train. After transporting stud bulls to an outback property, Johannsen was challenged to build a truck to carry 100 head of cattle instead of the original load of 20. Provided with financing of about 2000 pounds and inspired by the tracking abilities of the Government roadtrain, Johannsen began construction. Two years later his first road train was running.
Johannsen's first road train consisted of a United States Army World War II surplus Diamond-T tank carrier, nicknamed "Bertha", and two home-built self-tracking trailers. Both wheel sets on each trailer could steer, and therefore could negotiate the tight and narrow tracks and creek crossings that existed throughout Central Australia in the earlier part of the 20th century. Freighter Trailers in Australia viewed this improved invention and went on to build self-tracking trailers for Kurt and other customers, and went on to become innovators in transport machinery for Australia.
This first example of the modern road train, along with the AEC Government Roadtrain, forms part of the huge collection at the National Road Transport Hall of Fame in Alice Springs, Northern Territory.
In 2023, Janus launched the first BEV triple road train, with a 620 kWh battery; at 170 tonnes gross weight it was also the world's heaviest street-legal BEV truck.
Usage
Australia
The term road train is used in Australia and typically means a prime mover hauling two or more trailers, other than a B-double. In contrast with a more common semi-trailer towing one trailer or semi-trailer, the diesel prime mover of a road train hauls two or more trailers or semi-trailers. Australia has the longest and heaviest road-legal road trains in the world, weighing up to .
Double (two-trailer) road train combinations are allowed on some roads in most states of Australia, including specified approaches to the ports and industrial areas of Adelaide, South Australia and Perth, Western Australia. An A-double road train should not be confused with a B-double, which is allowed access to most of the country and in all major cities.
In South Australia, B-triples up to and two-trailer road trains to were only permitted to travel on a small number of approved routes in the north and west of the state, including access to Adelaide's north-western suburban industrial and export areas such as Port Adelaide, Gillman and Outer Harbour via Salisbury Highway, Port Wakefield Road and Augusta Highway before 2017. A project named Improving Road Transport for the Agriculture Industry added of key routes permitted to operate vehicles over in 2015–2018.
Triple (three-trailer) road trains operate in western New South Wales, western Queensland, South Australia, Western Australia and the Northern Territory, with the last three states also allowing AB-quads (B double with two additional trailers coupled behind). Darwin is the only capital city in the world where triples and quads are allowed to within of the central business district (CBD).
Strict regulations regarding licensing, registration, weights, and experience apply to all operators of road trains throughout Australia.
Road trains are used for transporting all manner of materials: common examples are livestock, fuel, mineral ores, and general freight. Their cost-effective transport has played a significant part in the economic development of remote areas; some communities are totally reliant on regular service.
When road trains get close to populated areas, the multiple dog-trailers are unhooked, the dollies removed and then connected individually to multiple trucks at "assembly" yards.
When the flat-top trailers of a road train need to be transported empty, it is common practice to stack them. This is commonly referred to as "doubled-up" or "doubling-up". Sometimes, if many trailers are required to be moved at one time, they will be triple-stacked, or "tripled-up".
Higher Mass Limits (HML) schemes now exist in all jurisdictions in Australia, allowing trucks to carry additional weight beyond general mass limits. Some roads in some states regularly allow up to 4 trailers at long and . On private property like mines, highway restrictions on trailer length, weight and count may not apply. Some of the heaviest road trains carrying ore are multiple-unit combinations with a diesel engine in each trailer, controlled from the tractor.
Diesel sales in Australia are around 32 billion litres per year, of which some is used by road trains. In order to reduce emissions and running costs, trials are being made with road trains powered by batteries.
United States
In the United States, trucks on public roads are limited to two trailers (two and a dolly to connect; the limit is end to end). Some states allow three trailers, although triples are usually restricted to less populous states such as Idaho, Oregon, and Montana, plus the Ohio Turnpike and Indiana East–West Toll Road. Triples are used for long-distance less-than-truckload freight hauling (in which case the trailers are shorter than a typical single-unit trailer) or resource hauling in the interior west (such as ore or aggregate). Triples are sometimes marked with "LONG LOAD" banners both front and rear. "Turnpike doubles"—tractors towing two full-length trailers—are allowed on the New York Thruway and Massachusetts Turnpike (Interstate 90), Florida's Turnpike, Kansas Turnpike (Kansas City – Wichita route) as well as the Ohio and Indiana toll roads. Colorado allows what are known as "Rocky Mountain Doubles" which is one full length trailer and an additional trailer. The term "road train" is not commonly used in the United States; "turnpike train" has been used, generally in a pejorative sense.
In the western United States LCVs are allowed on many Interstate highways. The only LCVs allowed nationwide are STAA doubles.
On private property like farms, highway restrictions on trailer length and count do not apply. Bales of straw, for example, are sometimes moved in wagon trains of up to 20 trailers an eighth of a mile long (carrying a total of 3,600 bales).
Europe
In Finland, Sweden, Germany, the Netherlands, Denmark, Belgium, and some roads in Norway, trucks with trailers are allowed to be long. In Finland, a length of has been allowed since January 2019. In Sweden, this length has been allowed on several major roads, including all of E4, since August 2023. 34.5 meters allows two 40 foot containers.
Elsewhere in the European Union, the limit is (Norway ). The trucks are of a cab-over-engine design, with a flat front and a high floor, about above ground. The Scandinavian countries are less densely populated than the other EU countries, and distances, especially in Finland and Sweden, are long. Until the late 1960s, vehicle length was unlimited, giving rise to long vehicles to cost effectively handle goods. As traffic increased, truck lengths became more of a concern and they were limited, albeit at a more generous level than in the rest of Europe.
In the United Kingdom in 2009, a two-year desk study of Longer Heavier Vehicles (LHVs), including up to 11-axle, long, combinations, ruled out all road-train-type vehicles for the foreseeable future.
In 2010, Sweden was performing tests on log-hauling trucks, weighing up to and measuring and haulers for two 40 ft containers, measuring in total. In 2015, a pilot began in Finland to test a 104-tonne timber lorry which was and had 13 axles. Testing of the special lorry was limited to a predefined route in northern Finland.
Since 2015, Spain has permitted B-doubles with a length of up to and weighing up to 60 tonnes to travel on certain routes. In July 2024, after 5 years of testing, HCTs have been permitted on Spanish territory, with lengths of up to 32 meters (105 ft) and 70 gross tonnes.
Since 2016, Eoin Gavin Transport of Shannon and Dennison Trailers of Kildare have been trialling B-doubles on the Irish motorways. In February 2024, The Pallet Network announced four B-doubles to operate between Dublin, Cork and Galway.
In 2020, a small number of road trains were operating between Belgium and the Netherlands.
Mexico
In Mexico road trains exist in a limited capacity due to the sizes of roads in its larger cities, and they are only allowed to pull 2 trailers joined with a pup or dolly created for this purpose. Recently the regulations tend to be more severe and strict to avoid overloading and accidents, to adhere to the federal rules of transportation. Truck drivers must obtain a certificate to certify that the driver is capable to manipulate and drive that type of vehicle.
All the tractor vehicles that make road-train-type transport in the country (along with the normal security requirements) need to have visual warnings such as:
"Warning Double Semi-Trailer" () alert located in the frontal fenders of the tractor and in the rear part of each trailer,
yellow turn and warning lights to be more visible to other drivers,
a seal for the entire vehicle approving the use as double semi trailer,
federal license plates in every trailer, dolly, and tractor unit.
Some major cargo enterprises in the country use this form to cut costs of carrying all type of goods in some regions where other forms of transportation are too expensive compared to it due to the difficult geography of the country.
The Mexican road train equivalent form in Australian Standard is the A-Double form, the difference is that the Mexican road trains can be hauled with a long distance tractor truck.
Zimbabwe
In Zimbabwe, road trains are used on only one highway, the Ngezi – Makwiro road, where 42 m long road trains pulling three trailers operate.
Trailer arrangements
A-double
An A-double consists of a prime mover towing a normal lead trailer with a towing hitch such as a Ringfeder coupling affixed to it at the rear. A fifth wheel dolly is then affixed to the hitch allowing another standard trailer to be attached. Eleven-axle coal tipping sets carrying to Port Kembla, Australia are described as A-doubles. The set depicted has a tare weight of and is capable of carrying of coal. Note the shield at the front of the second trailer to direct tipped coal from the first trailer downwards.
Pros include the ability to use standard semi-trailers and the potential for very large loads. Cons mainly include very tricky reversing due to the multiple articulation points across two different types of coupling.
B-double
A B-double consists of a prime mover towing a specialised lead trailer that has a fifth-wheel mounted on the rear towing another semi-trailer, resulting in two articulation points. It may also be known as a B-train, interlink in South Africa, B-double in Australia, tandem tractor-trailer, tandem rig or double in North America. They may typically be up to long. The fifth wheel coupling is located at the rear of the lead (first) trailer and is mounted on a "tail" section commonly located immediately above the lead trailer axles. In North America this area of the lead trailer is often referred to as the "bridge". The twin-trailer assembly is hooked up to a tractor unit via the tractor unit's fifth wheel in the customary manner.
An advantage of the B-train configuration is its inherent stability when compared to most other twin trailer combinations, the turntable mounted on the forward trailer results in the B-train not requiring a converter dolly as with all other road train configurations. It is this feature above all else that has ensured its continued development and global acceptance. Reversing is simpler as all articulation points are on fifth wheel couplings.
B-train trailers are used to transport many types of load and examples include tanks for liquid and dry-bulk, flat-beds and curtain-siders for deck-loads, bulkers for aggregates and wood residuals, refrigerated trailers for chilled and frozen goods, vans for dry goods, logging trailers for forestry work and cattle liners for livestock.
In Australia, standard semi-trailers are permitted on almost any road. B-doubles are more heavily regulated, but routes are made available by state governments for almost anywhere that significant road freight movement is required.
Around container ports in Australia exists what is known as a super B-double; a B-double with an extra axle (total of 4) on the lead trailer and either three or four axle set on the rear trailer. This allows the super B-Double to carry combinations of two 40 foot containers, four 20 foot containers, or a combination of one 40 foot container and two twenty foot containers. However, because of their length and low accessibility into narrow streets, these vehicles are restricted in where they can go and are generally used for terminal-to-terminal work, i.e. wharf to container holding park or wharf-to-wharf. The rear axle on each trailer can also pivot slightly while turning to prevent scrubbing out the edges of the tyres due to the heavy loads placed on them.
B-triple
Same as B-double, but with an additional lead trailer behind the prime mover. The B-train principle has been exploited in Australia, where configurations such as B triples, double-B doubles and 2AB quads are permitted on some routes. These are run in most states of Australia where double road trains are allowed. Australia's National Transport Commission proposed a national framework for B-triple operations that includes basic vehicle specifications and operating conditions that the commission anticipates will replace the current state-by-state approach, which largely discourages the use of B-triples for interstate operation. In South Australia, B-triples up to and two-trailer road trains to are generally only permitted on specified routes, including access to industrial and export areas near Port Adelaide from the north.
B quad
In 2018, B quad was also allowed in states Victoria, New South Wales and Queensland, which enables more economical transport.
AB triple
An AB triple consists of a standard trailer with a B-Double behind it using a converter dolly, with a trailer order of Standard, Dolly, B-Train, Standard. The final trailer may be either a B-Train with no trailer attached to it or a standard trailer. Alternatively, a BA triple sees this configuration reversed, consisting of a B-double with a converter dolly and standard trailer behind it.
A-triple
In South Australia, larger road trains up to (three full trailers) are only permitted on certain routes in the Far North.
BAB quad
A BAB quad consists of two B-double units linked with a converter dolly, with trailer order of Prime Mover, B-Train, Dolly, B-Train.
ABB quad
ABB quad consists of one standard trailer and B-triple units linked with a converter dolly.
AAB quad
AAB quad consists of A-double and B-double units linked with a converter dolly. Alternatively, a BAA quad sees this configuration reversed, first the B-double, then the A-double.
A quad
In some parts of Australia, 'super quad' road trains up to are permitted, consisting of four standard trailers connected via three converter dollies.
C-train
A C-train is a semi-trailer attached to a turn table on a C-dolly. Unlike in an A-Train, the C-dolly is connected to the tractor or another trailer in front of it with two drawbars, thus eliminating the drawbar connection as an articulation point. One of the axles on a C-dolly is self-steerable to prevent tire scrubbing. C-dollies are not permitted in Australia, due to the lack of articulation.
Dog-trailer (dog trailer)
A dog-trailer (also called a pup) is a short trailer with a permanent dolly, with a single A-frame drawbar that fits into the Ringfeder or pintle hook on the rear of the truck or trailer in front, giving the whole unit two or more articulation points and very little roll stiffness. These are commonly used in Australia, particularly for end tipper applications like shown above. They are normally limited to a single dog trailer behind a short bodied (independently load carrying) truck with a standard length limit of 19 metres (20 under design permits). A quad dog trailer in combination with a bodied truck is able to carry more weight than a truck and single semi-trailer of the same length limit and access restrictions, as well as carrying two different materials as separate loads, such as with tipper bodies and fluid tankers.
Interstate road transport registration in Australia
In 1991, at a special Premiers' Conference, Australian heads of government signed an inter-governmental agreement to establish a national heavy vehicle registration, regulation and charging scheme: the Federal Interstate Registration Scheme (FIRS). Its requirements are as follows:
Due to the "eastern" and "western" mass limits in Australia, two different categories of registration were enacted. The second digit of the registration plate showed what mass limit was allowed for that vehicle. If a vehicle had a 'V' as the second letter, its mass limits were in line with the eastern states mass limits, which were:
Steer axle, 1 axle, 2 tyres:
Steer axle, 2 axles, 2 tyres per axle: Non load sharing suspension
Load sharing suspension
Single axle, dual tyres:
Tandem axle, dual tyres:
Tri-axle, dual tyres or 'super single' tyres:
Gross combination mass on a 6-axle vehicle not to exceed
If a vehicle had an X as the second letter, its mass limits were in line with the western states mass limits, which were:
Steer axle, 1 axle, 2 tyres:
Steer axle, 2 axles, 2 tyres per axle
Non load sharing suspension : Load sharing suspension
Single axle, dual tyres:
Tandem axle, dual tyres:
Tri-axle, dual tyres or "super single" tyres:
Gross combination mass on a 6-axle vehicle not to exceed
The second digit of the registration being a T designates a trailer.
One of the main criteria of the registration is that intrastate operation is not permitted. The load has to come from one state or territory and be delivered to another. Many grain carriers were reported and prosecuted for cartage from the paddock to the silos. However, if the load went to a port silo, they were given the benefit of the doubt, as that grain was more than likely to be going overseas.
Signage
Australian road trains have horizontal signs front and back with high black uppercase letters on a reflective yellow background reading "ROAD TRAIN". The sign(s) must have a black border and be at least long and high and be placed between and above the ground on the fore or rearmost surface of the unit.
In the case of B-triples in Western Australia, they are signed front and rear with "ROAD TRAIN" until they cross the WA/SA border where they are then signed with "LONG VEHICLE" in the front and rear.
Converter dollies must have a sign affixed horizontally to the rearmost point, complying to the same conditions, reading "LONG VEHICLE". This is required for when a dolly is towed behind a trailer.
Combination lengths
B-double max. Western Australia, max.
B-triple up to max.
NTC modular B-triple max. (uses 2× conventional B-double lead trailers)
Pocket road train max. (Western Australia only) This configuration is classed as a "Long Vehicle".
Double road train or AB road train max.
Triple and ABB or BAB-quad road trains max.
Operating weights
Operational weights are based on axle group masses, as follows (a worked sketch of the arithmetic follows the examples below):
Single axle (steer tyre)
Single axle (steer axle with 'super single' tyres)
Single axle (dual tyres)
Tandem axle grouping
Tri-axle grouping
Therefore,
A B-double (single axle steering, tandem drive, and two tri-axle groups) would have an operational weight of .
A double road train (single axle steering, tandem drive, tri-axle, tandem, tri-axle) would have an operational weight of .
A triple is .
Quads weigh in at .
Concessional weight limits, which increase allowable weight to accredited operators can see (for example) a quad weighing up to .
If a tri-drive prime mover is utilised, along with tri-axle dollies, weights can reach nearly .
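A small Python sketch of the underlying axle-group arithmetic; the group masses used here are assumptions (commonly cited Australian general mass limits in tonnes), since the exact figures are not reproduced in the list above.

```python
# Assumed axle-group mass limits in tonnes (illustrative values only).
AXLE_GROUP_T = {"steer": 6.0, "tandem": 16.5, "tri": 20.0}

def combination_weight(groups):
    """The operational weight is simply the sum of the axle-group limits."""
    return sum(AXLE_GROUP_T[g] for g in groups)

# B-double: steer + tandem drive + two tri-axle groups
print(combination_weight(["steer", "tandem", "tri", "tri"]))            # 62.5
# Double road train: steer + tandem drive + tri + tandem + tri
print(combination_weight(["steer", "tandem", "tri", "tandem", "tri"]))  # 79.0
```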
Speed limits
The Australian national heavy vehicle speed limit is , except New South Wales and Queensland where the speed limit for any road train is . B triple road trains have a speed limit of 100 km/h (62 mph) in Queensland.
In Canada, there has been no difference between the speed limits for cars and road trains, which range from on two-lane roads and between on three-lane roads.
In Europe, the speed limit for heavy goods trucks is usually . A law requiring speed limiters makes it impossible to drive heavy trucks faster than . These limits are normally the same for road trains. There is no wish to encourage trucks to overtake slightly slower trucks on motorways, because it obstructs the left lane, although this is common anyway, e.g. when heavy road trains lose speed uphill.
World's longest road trains
Below is a list of longest road trains driven in the world. Most of these had no practical use, as they were put together and driven across relatively short distances for the express purpose of record-breaking.
In 1989, a trucker named "Buddo" tugged 12 trailers down the main street of Winton.
In 1993, "Plugger" Bowden took the record with a Mack CLR pulling 16 trailers.
A few months later this effort was surpassed by Darwin driver Malcolm Chisholm with a , 21-trailer rig extending .
In April 1994, Bob Hayward and Andrew Aichison organised another attempt using a 1988 Mack Super-Liner 500 hp V8 belonging to Plugger Bowden, who drove 29 stock trailers measuring 439.169 metres a distance of 4.5 km into Bourke. The record was published in the next Guinness Book of Records.
Then the record went back to Winton with 34 trailers.
On 3 April 1999, the town of Merredin, officially made it into the Guinness Book of Records, when Marleys Transport made a successful attempt on the record for the world's longest road train. The record was created when 45 trailers, driven by Greg Marley, weighing and measuring were pulled by a Kenworth 10×6 K100G for .
On 19 October 2000, Doug Gould set the first of his records in Kalgoorlie, when a roadtrain made up of 79 trailers, measuring and weighing , was pulled by a Kenworth C501T driven by Steven Matthews a distance of .
On 29 March 2003, the record was surpassed near Mungindi, by a road train consisting of 87 trailers and a single prime mover (measuring in length).
The record returned to Kalgoorlie, on 17 October 2004, when Doug Gould assembled 117 trailers for a total length of . The attempt nearly failed, as the first prime mover's main driveshaft broke when taking off. A second truck was quickly made available, and pulled the train a distance of .
In 2004, the record was again broken by a group from Clifton, Queensland which used a standard Mack truck to pull 120 trailers a distance of about .
On 18 February 2006, an Australian built Mack truck with 113 semi-trailers, and long, pulled the load to recapture the record for the longest road train (multiple loaded trailers) ever pulled with a single prime mover. It was on the main road of Clifton, Queensland, that 70-year-old John Atkinson claimed a new record, pulled by a tri-drive Mack Titan.
Outside Australia
On 12 April 2016 in Gothenburg, Sweden, a Volvo FH16 750 pulled 20 trailers with double-stacked containers with a total length of 300 meters (984 ft) and with a total weight of 750 tonnes.
Gallery
See also
Air brake (road vehicle)
Articulated bus
Brake
B-train
Containerization
Container on barge
Container ship
Dolly (trailer)
Federal Bridge Weight Formula
Fifth wheel coupling
Gladhand connector
Intermodal freight transport
Jackknifing
Longer Heavier Vehicle
National Network – highway and interstate system
Overland train
Ringfeder coupling devices
Road transport in Australia
Rolling highway – freight trucks by rail
Semi-trailer truck – large trucks such as road trains and articulated lorries
Shipping container
Top intermodal container companies list
Trackless train
Transport
References
External links
Australian Road Train Association
Australian National Heavy Vehicles Accreditation Scheme.
Northern Territory Road Train road safety TV commercials.
South Australian Roads road train gazette
NSW Roads and Traffic Authority road train operators gazette
NSW Roads and Traffic Authority Restricted Access Vehicles route map index
NSW Roads and Traffic Authority Reflective sign standards
U.S. department of Transportation, Federal Highway Administration, Chapter VII, Safety.
The U.S. Department of Transportation's Western Uniformity Scenario Analysis.
British Columbia Government Licensing Bulletin 6
British Columbia Government Licensing Bulletin 41
Roadmap of technologies able to halve energy use per passenger mile includes the dynamically coupled, heterogeneous type of roadtrain
Road trains and electrification of transport
Combination Vehicles for Commercial Drivers License
Trucks
Trains
Train
Articulated vehicles
Trailers | Road train | Technology | 5,595 |
298,547 | https://en.wikipedia.org/wiki/Immunity%20%28medicine%29 | In biology, immunity is the state of being insusceptible or resistant to a noxious agent or process, especially a pathogen or infectious disease. Immunity may occur naturally or be produced by prior exposure or immunization.
Innate and adaptive
The immune system has innate and adaptive components. Innate immunity is present in all metazoans and mounts nonspecific immune responses: inflammatory responses and phagocytosis. The adaptive component, on the other hand, involves more advanced lymphatic cells that can distinguish between specific "non-self" substances in the presence of "self". The reaction to foreign substances is etymologically described as inflammation, while the non-reaction to self substances is described as immunity. The two components of the immune system create a dynamic biological environment where "health" can be seen as a physical state where the self is immunologically spared, and what is foreign is inflammatorily and immunologically eliminated. "Disease" can arise when what is foreign cannot be eliminated or what is self is not spared.
Innate immunity, also known as native immunity, is a semi-specific and widely distributed form of immunity. It is defined as the first line of defense against pathogens, representing a critical systemic response to prevent infection and maintain homeostasis, contributing to the activation of an adaptive immune response. It does not adapt to specific external stimulus or a prior infection, but relies on genetically encoded recognition of particular patterns.
Adaptive or acquired immunity is the active component of the host immune response, mediated by antigen-specific lymphocytes. Unlike the innate immunity, the acquired immunity is highly specific to a particular pathogen, including the development of immunological memory. Like the innate system, the acquired system includes both humoral immunity components and cell-mediated immunity components.
Adaptive immunity can be acquired either 'naturally' (by infection) or 'artificially' (through deliberate actions such as vaccination). Adaptive immunity can also be classified as 'active' or 'passive'. Active immunity is acquired through the exposure to a pathogen, which triggers the production of antibodies by the immune system. Passive immunity is acquired through the transfer of antibodies or activated T-cells derived from an immune host either artificially or through the placenta; it is short-lived, requiring booster doses for continued immunity.
These divisions of immunity can be summarized as follows: adaptive immunity recognizes more diverse patterns and, unlike innate immunity, is associated with memory of the pathogen.
History of theories
For thousands of years mankind has been intrigued with the causes of disease and the concept of immunity. The prehistoric view was that disease was caused by supernatural forces, and that illness was a form of theurgic punishment for "bad deeds" or "evil thoughts" visited upon the soul by the gods or by one's enemies. In Classical Greek times, Hippocrates, who is regarded as the Father of Medicine, attributed diseases to an alteration or imbalance in one of the four humors (blood, phlegm, yellow bile or black bile). The first written descriptions of the concept of immunity may have been made by the Athenian Thucydides who, in 430 BC, described that when the plague hit Athens: "the sick and the dying were tended by the pitying care of those who had recovered, because they knew the course of the disease and were themselves free from apprehensions. For no one was ever attacked a second time, or not with a fatal result".
Active immunotherapy may have begun with Mithridates VI of Pontus (120-63 BC) who, to induce active immunity for snake venom, recommended using a method similar to modern toxoid serum therapy, by drinking the blood of animals which fed on venomous snakes. He is thought to have assumed that those animals acquired some detoxifying property, so that their blood would contain transformed components of the snake venom that could induce resistance to it instead of exerting a toxic effect. Mithridates reasoned that, by drinking the blood of these animals, he could acquire a similar resistance. Fearing assassination by poison, he took daily sub-lethal doses of venom to build tolerance. He is also said to have sought to create a 'universal antidote' to protect him from all poisons. For nearly 2000 years, poisons were thought to be the proximate cause of disease, and a complicated mixture of ingredients, called Mithridate, was used to cure poisoning during the Renaissance. An updated version of this cure, Theriacum Andromachi, was used well into the 19th century. The term "immunes" is also found in the epic poem "Pharsalia" written around 60 BC by the poet Marcus Annaeus Lucanus to describe a North African tribe's resistance to snake venom.
The first clinical description of immunity which arose from a specific disease-causing organism is probably A Treatise on Smallpox and Measles (Kitab fi al-jadari wa-al-hasbah, translated 1848) written by the Islamic physician Al-Razi in the 9th century. In the treatise, Al-Razi describes the clinical presentation of smallpox and measles and goes on to indicate that exposure to these specific agents confers lasting immunity (although he does not use this term).
Until the 19th century, the miasma theory was also widely accepted. The theory viewed diseases such as cholera or the Black Plague as being caused by a miasma, a noxious form of "bad air". If someone was exposed to the miasma in a swamp, in evening air, or breathing air in a sickroom or hospital ward, they could catch a disease. Since the 19th century, communicable diseases came to be viewed as being caused by germs/microbes.
The modern word "immunity" derives from the Latin immunis, meaning exemption from military service, tax payments or other public services.
The first scientist who developed a full theory of immunity was Ilya Mechnikov who revealed phagocytosis in 1882. With Louis Pasteur's germ theory of disease, the fledgling science of immunology began to explain how bacteria caused disease, and how, following infection, the human body gained the ability to resist further infections.
In 1888 Emile Roux and Alexandre Yersin isolated diphtheria toxin, and following the 1890 discovery by Behring and Kitasato of antitoxin based immunity to diphtheria and tetanus, the antitoxin became the first major success of modern therapeutic immunology.
In Europe, the induction of active immunity emerged in an attempt to contain smallpox. Immunization has existed in various forms for at least a thousand years, without the terminology. The earliest use of immunization is unknown, but, about 1000 AD, the Chinese began practicing a form of immunization by drying and inhaling powders derived from the crusts of smallpox lesions. Around the 15th century in India, the Ottoman Empire, and east Africa, the practice of inoculation (poking the skin with powdered material derived from smallpox crusts) was quite common. This practice was first introduced into the west in 1721 by Lady Mary Wortley Montagu. In 1798, Edward Jenner introduced the far safer method of deliberate infection with cowpox virus, (smallpox vaccine), which caused a mild infection that also induced immunity to smallpox. By 1800, the procedure was referred to as vaccination. To avoid confusion, smallpox inoculation was increasingly referred to as variolation, and it became common practice to use this term without regard for chronology. The success and general acceptance of Jenner's procedure would later drive the general nature of vaccination developed by Pasteur and others towards the end of the 19th century. In 1891, Pasteur widened the definition of vaccine in honour of Jenner, and it then became essential to qualify the term by referring to polio vaccine, measles vaccine etc.
Passive immunity
Passive immunity is the immunity acquired by the transfer of ready-made antibodies from one individual to another. Passive immunity can occur naturally, such as when maternal antibodies are transferred to the foetus through the placenta, and can also be induced artificially, when high levels of human (or horse) antibodies specific for a pathogen or toxin are transferred to non-immune individuals. Passive immunization is used when there is a high risk of infection and insufficient time for the body to develop its own immune response, or to reduce the symptoms of ongoing or immunosuppressive diseases. Passive immunity provides immediate protection, but the body does not develop memory, therefore the patient is at risk of being infected by the same pathogen later.
Naturally acquired passive immunity
A fetus naturally acquires passive immunity from its mother during pregnancy. Maternal passive immunity is antibody-mediated immunity. The mother's antibodies (MatAb) are passed through the placenta to the fetus by an FcRn receptor on placental cells. This occurs around the third month of gestation. IgG is the only antibody isotype that can pass through the placenta.
Passive immunity is also provided through the transfer of IgA antibodies found in breast milk that are transferred to the gut of a nursing infant, protecting against bacterial infections until the newborn can synthesize its own antibodies. Colostrum present in mother's milk is an example of passive immunity.
Artificially acquired passive immunity
Artificially acquired passive immunity is a short-term immunization induced by the transfer of antibodies, which can be administered in several forms; as human or animal blood plasma, as pooled human immunoglobulin for intravenous (IVIG) or intramuscular (IG) use, and in the form of monoclonal antibodies (MAb). Passive transfer is used prophylactically in the case of immunodeficiency diseases, such as hypogammaglobulinemia. It is also used in the treatment of several types of acute infection, and to treat poisoning. Immunity derived from passive immunization lasts for only a short period of time, and there is also a potential risk for hypersensitivity reactions, and serum sickness, especially from gamma globulin of non-human origin.
The artificial induction of passive immunity has been used for over a century to treat infectious disease, and before the advent of antibiotics, was often the only specific treatment for certain infections. Immunoglobulin therapy continued to be a first line therapy in the treatment of severe respiratory diseases until the 1930s, even after sulfonamide antibiotics were introduced.
Transfer of activated T-cells
Passive or "adoptive transfer" of cell-mediated immunity, is conferred by the transfer of "sensitized" or activated T-cells from one individual into another. It is rarely used in humans because it requires histocompatible (matched) donors, which are often difficult to find. In unmatched donors this type of transfer carries severe risks of graft versus host disease. It has, however, been used to treat certain diseases including some types of cancer and immunodeficiency. This type of transfer differs from a bone marrow transplant, in which (undifferentiated) hematopoietic stem cells are transferred.
Active immunity
When B cells and T cells are activated by a pathogen, memory B-cells and T-cells develop, and the primary immune response results. Throughout the lifetime of an animal, these memory cells will "remember" each specific pathogen encountered, and can mount a strong secondary response if the pathogen is detected again. The primary and secondary responses were first described in 1921 by English immunologist Alexander Glenny although the mechanism involved was not discovered until later. This type of immunity is both active and adaptive because the body's immune system prepares itself for future challenges. Active immunity often involves both the cell-mediated and humoral aspects of immunity as well as input from the innate immune system.
Naturally acquired
Naturally acquired active immunity occurs as the result of surviving an infection. When a person is exposed to a live pathogen and develops a primary immune response, this leads to immunological memory. Many disorders of immune system function can affect the formation of active immunity, such as immunodeficiency (both acquired and congenital forms) and immunosuppression.
Artificially acquired
Artificially acquired active immunity can be induced by a vaccine, a substance that contains antigen. A vaccine stimulates a primary response against the antigen without causing symptoms of the disease. The term vaccination was coined by Richard Dunning, a colleague of Edward Jenner, and adapted by Louis Pasteur for his pioneering work in vaccination. The method Pasteur used entailed treating the infectious agents for those diseases, so they lost the ability to cause serious disease. Pasteur adopted the name vaccine as a generic term in honor of Jenner's discovery, which Pasteur's work built upon.
In 1807, Bavaria became the first state to require its military recruits to be vaccinated against smallpox, as the spread of smallpox was linked to combat. Subsequently, the practice of vaccination would increase with the spread of war.
There are four types of traditional vaccines:
Inactivated vaccines are composed of micro-organisms that have been killed with chemicals and/or heat and are no longer infectious. Examples are vaccines against flu, cholera, plague, and hepatitis A. Most vaccines of this type are likely to require booster shots.
Live, attenuated vaccines are composed of micro-organisms that have been cultivated under conditions which disable their ability to induce disease. The immune responses they provoke are more durable; however, they may require booster shots. Examples include yellow fever, measles, rubella, and mumps.
Toxoids are inactivated toxic compounds from micro-organisms in cases where these (rather than the micro-organism itself) cause illness, used prior to an encounter with the toxin of the micro-organism. Examples of toxoid-based vaccines include tetanus and diphtheria.
Subunit, recombinant, polysaccharide, and conjugate vaccines are composed of small fragments or pieces from a pathogenic (disease-causing) organism. A characteristic example is the subunit vaccine against Hepatitis B virus.
In addition, there are some newer types of vaccines in use:
Outer Membrane Vesicle (OMV) vaccines contain the outer membrane of a bacterium without any of its internal components or genetic material. Thus, ideally, they stimulate an immune response effective against the original bacteria without the risk of an infection.
Genetic vaccines deliver nucleic acid that codes for an antigen into host cells, which then produce that antigen, stimulating an immune response. This category of vaccine includes DNA vaccines, RNA vaccines, and viral vector vaccines, which differ in the chemical form of nucleic acid and how it is delivered into host cells.
A variety of vaccine types are under development; see Experimental Vaccine Types.
Most vaccines are given by hypodermic or intramuscular injection as they are not absorbed reliably through the gut. Live attenuated polio and some typhoid and cholera vaccines are given orally in order to produce immunity based in the bowel.
Hybrid immunity
Hybrid immunity is the combination of natural immunity and artificial immunity. Studies of hybrid-immune people found that their blood was better able to neutralize the Beta and other variants of SARS-CoV-2 than never-infected, vaccinated people. Moreover, on 29 October 2021, the Centers for Disease Control and Prevention (CDC) concluded that "Multiple studies in different settings have consistently shown that infection with SARS-CoV-2 and vaccination each result in a low risk of subsequent infection with antigenically similar variants for at least 6 months. Numerous immunologic studies and a growing number of epidemiologic studies have shown that vaccinating previously infected individuals significantly enhances their immune response and effectively reduces the risk of subsequent infection, including in the setting of increased circulation of more infectious variants. ..."
Genetics
Immunity is determined genetically. The genomes of humans and animals encode the antibodies and numerous other immune response genes. While many of these genes are generally required for active and passive immune responses (see sections above), there are also many genes that appear to be required for very specific immune responses. For instance, Tumor Necrosis Factor (TNF) is required for defense against tuberculosis in humans. Individuals with genetic defects in TNF may get recurrent and life-threatening infections with tuberculosis bacteria (Mycobacterium tuberculosis) but are otherwise healthy. They also seem to respond to other infections more or less normally. The condition is therefore called Mendelian susceptibility to mycobacterial disease (MSMD) and variants of it can be caused by other genes related to interferon production or signaling (e.g. by mutations in the genes IFNG, IL12B, IL12RB1, IL12RB2, IL23R, ISG15, MCTS1, RORC, TBX21, TYK2, CYBB, JAK1, IFNGR1, IFNGR2, STAT1, USP18, IRF1, IRF8, NEMO, SPPL2A).
See also
Antiserum
Antivenin
Cell-mediated immunity
Herd immunity
Heterosubtypic immunity
Hoskins effect
Humoral immunity
Immunology
Inoculation
Premunity
Vaccine-naive
Virgin soil epidemic
References
External links
The Center for Modeling Immunity to Enteric Pathogens (MIEP)
Immunology | Immunity (medicine) | Biology | 3,619 |
5,712,191 | https://en.wikipedia.org/wiki/Fructose%206-phosphate | Fructose 6-phosphate (sometimes called the Neuberg ester) is a derivative of fructose, which has been phosphorylated at the 6-hydroxy group. It is one of several possible fructosephosphates. The β-D-form of this compound is very common in cells. The great majority of glucose is converted to fructose 6-phosphate upon entering a cell. Fructose is predominantly converted to fructose 1-phosphate by fructokinase following cellular import.
History
The name Neuberg ester comes from the German biochemist Carl Neuberg. In 1918, he found that the compound (later identified as fructose 6-phosphate) was produced by mild acid hydrolysis of fructose 1,6-bisphosphate.
In glycolysis
Fructose 6-phosphate lies within the glycolysis metabolic pathway and is produced by isomerisation of glucose 6-phosphate. It is in turn further phosphorylated to fructose-1,6-bisphosphate.
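To situate these two reactions, the local stretch of glycolysis can be written as a scheme; the enzyme names given are the standard glycolytic enzymes, supplied here for orientation rather than taken from the text above:

$$\text{glucose 6-phosphate} \;\underset{\text{phosphoglucose isomerase}}{\rightleftharpoons}\; \text{fructose 6-phosphate} \;\xrightarrow{\ \text{phosphofructokinase-1,\ ATP}\ }\; \text{fructose 1,6-bisphosphate}$$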
See also
Mannose phosphate isomerase
References
Monosaccharide derivatives
Organophosphates
Pentose phosphate pathway
Phosphate esters
Glycolysis | Fructose 6-phosphate | Chemistry | 259 |
2,135,962 | https://en.wikipedia.org/wiki/Snapshot%20%28computer%20storage%29 | In computer systems, a snapshot is the state of a system at a particular point in time. The term was coined as an analogy to that in photography.
Rationale
A full backup of a large data set may take a long time to complete. On multi-tasking or multi-user systems, there may be writes to that data while it is being backed up. This prevents the backup from being atomic and introduces a version skew that may result in data corruption. For example, if a user moves a file into a directory that has already been backed up, then that file would be completely missing on the backup media, since the backup operation had already taken place before the addition of the file. Version skew may also cause corruption with files which change their size or contents underfoot while being read.
One approach to safely backing up live data is to temporarily disable write access to data during the backup, either by stopping the accessing applications or by using the locking API provided by the operating system to enforce exclusive read access. This is tolerable for low-availability systems (on desktop computers and small workgroup servers, on which regular downtime is acceptable). High-availability 24/7 systems, however, cannot bear service stoppages.
To avoid downtime, high-availability systems may instead perform the backup on a snapshot—a read-only copy of the data set frozen at a point in time—and allow applications to continue writing to their data. Most snapshot implementations are efficient and can create snapshots in O(1). In other words, the time and I/O needed to create the snapshot does not increase with the size of the data set; by contrast, the time and I/O required for a direct backup is proportional to the size of the data set. In some systems once the initial snapshot is taken of a data set, subsequent snapshots copy the changed data only, and use a system of pointers to reference the initial snapshot. This method of pointer-based snapshots consumes less disk capacity than if the data set was repeatedly cloned.
Implementations
Volume managers
Some Unix systems have snapshot-capable logical volume managers. These implement copy-on-write on entire block devices by copying changed blocks, just before they are to be overwritten within "parent" volumes, to other storage, thus preserving a self-consistent past image of the block device. Filesystems on such snapshot images can later be mounted as if they were on read-only media.
Some volume managers also allow creation of writable snapshots, extending the copy-on-write approach by disassociating any blocks modified within the snapshot from their "parent" blocks in the original volume. Such a scheme could be also described as performing additional copy-on-write operations triggered by the writes to snapshots.
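As a concrete illustration of the copy-on-write scheme described above, here is a minimal sketch in Python of a toy in-memory block store with read-only snapshots; the class and its methods are hypothetical illustrations, not the API of any real volume manager:

```python
class CowVolume:
    """Toy block store whose snapshots share unchanged blocks with the parent."""

    def __init__(self, num_blocks):
        self.blocks = {i: b"\x00" for i in range(num_blocks)}
        self.snapshots = []  # each snapshot maps block index -> preserved old data

    def snapshot(self):
        # O(1) in the data size: nothing is copied when the snapshot is taken.
        snap = {}
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        # Copy-on-write: before overwriting, preserve the old block for every
        # snapshot that has not already saved its own copy of this block.
        for snap in self.snapshots:
            snap.setdefault(index, self.blocks[index])
        self.blocks[index] = data

    def read_snapshot(self, snap, index):
        # A snapshot serves its preserved blocks; all other blocks are still
        # shared, via this lookup, with the live parent volume.
        return snap.get(index, self.blocks[index])


vol = CowVolume(num_blocks=4)
vol.write(0, b"old")
snap = vol.snapshot()      # instantaneous, regardless of volume size
vol.write(0, b"new")       # triggers exactly one copy-on-write for block 0
assert vol.read_snapshot(snap, 0) == b"old"   # frozen, self-consistent past image
assert vol.read_snapshot(snap, 1) == b"\x00"  # unchanged blocks are shared
```

A writable snapshot, as described above, would extend this sketch by additionally letting writes land in the snapshot's own private block map rather than in the parent volume.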
On Linux, Logical Volume Manager (LVM) allows creation of both read-only and read-write snapshots. Writable snapshots were introduced with the LVM version 2 (LVM2).
File systems
Some file systems, such as WAFL, fossil for Plan 9 from Bell Labs, and ODS-5, internally track old versions of files and make snapshots available through a special namespace. Others, like UFS2, provide an operating system API for accessing file histories. In NTFS, access to snapshots is provided by the Volume Shadow-copying Service (VSS) in Windows XP and Windows Server 2003 and Shadow Copy in Windows Vista. Melio FS provides snapshots via the same VSS interface for shared storage. Snapshots have also been available in the NSS (Novell Storage Services) file system on NetWare since version 4.11, and more recently on Linux platforms in the Open Enterprise Server product.
EMC's Isilon OneFS clustered storage platform implements a single scalable file system that supports read-only snapshots at the file or directory level. Any file or directory within the file system can be snapshotted and the system will implement a copy-on-write or point-in-time snapshot dynamically based on which method is determined to be optimal for the system.
On Linux, the Btrfs and OCFS2 file systems support creating snapshots (cloning) of individual files. Additionally, Btrfs also supports the creation of snapshots of subvolumes. On AIX, JFS2 also support snapshots.
See also
Application checkpointing
Persistence (computer science)
Sandbox (computer security)
Storage Hypervisor
System image
Virtual machine
Notes
References
External links
Backup
Fault-tolerant computer systems
Persistence | Snapshot (computer storage) | Technology,Engineering | 962 |
55,515,710 | https://en.wikipedia.org/wiki/NGC%20483 | NGC 483 is a spiral galaxy in the constellation Pisces. It is located approximately 192 million light-years from Earth and was discovered on November 11, 1827 by astronomer John Herschel.
See also
Spiral galaxy
List of NGC objects (1–1000)
References
External links
SEDS
Spiral galaxies
Pisces (constellation)
0483
4961
Astronomical objects discovered in 1827 | NGC 483 | Astronomy | 78 |
2,903,598 | https://en.wikipedia.org/wiki/Upsilon%20Bo%C3%B6tis | Upsilon Boötis (υ Boötis) is a single, orange-hued star in the northern constellation of Boötes. It is a fourth magnitude star that is visible to the naked eye. Based upon an annual parallax shift of 12.38 mas as seen from the Earth, it is located about 263 light years from the Sun. The star is moving closer to the Sun with a radial velocity of −6 km/s.
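The quoted distance follows from the standard parallax relation; as a worked check (not a statement from the source), with the parallax expressed in arcseconds:

$$d = \frac{1}{\pi}\,\text{pc} = \frac{1}{0.01238}\,\text{pc} \approx 80.8\ \text{pc} \approx 80.8 \times 3.26\ \text{ly/pc} \approx 263\ \text{ly}$$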
This is an evolved K-type giant star with a stellar classification of K5.5 III. Asteroseismology was used to obtain a mass estimate of 1.11 times the mass of the Sun, while interferometric measurements give a size of about 38 times the Sun's radius. It is radiating about 332 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 3,920 K.
References
External links
K-type giants
Boötes
Bootis, Upsilon
Durchmusterung objects
Bootis, 05
120477
067459
5200 | Upsilon Boötis | Astronomy | 215 |
63,219,935 | https://en.wikipedia.org/wiki/Levitation%20based%20inertial%20sensing | Levitation based inertial sensing is a new and rapidly growing technique for measuring linear acceleration, rotation and orientation of a body. Based on this technique, inertial sensors such as accelerometers and gyroscopes, enables ultra-sensitive inertial sensing. For example, the world's best accelerometer used in the LISA Pathfinder in-flight experiment is based on a levitation system which reaches a sensitivity of and noise of .
History
The pioneering work related to the microparticle levitation was performed by Artur Ashkin in 1970. He demonstrated optical trapping of dielectric microspheres for the first time, forming an optical levitation system, by using a focused laser beam in air and liquid. This new technology was later named "optical tweezer" and applied in biochemistry and biophysics. Later, significant scientific progress on optically levitated systems was made, for example the cooling of the center of mass motion of a micro- or nanoparticle in the millikelvin regime. Very recently a research group published a paper showing motional quantum ground state cooling of a levitated nanoparticle. In addition, levitation based on electrostatic and magnetic approaches have also been proposed and realized.
Levitation systems have shown high force sensitivities in the range. For example, an optically levitated dielectric particle has been shown to exhibit force sensitivities beyond ~ . Thus, levitation systems show promise for ultra-sensitive force sensing, such as detection of short-range interactions. By levitating micro- or mesoparticles with a relatively large mass, this system can be employed as a high-performance inertial sensor, demonstrating nano-g sensitivity.
Method
One possible working principle behind a levitation based inertial sensing system is the following. By levitating a micro-object in vacuum and after a cool-down process, the center of mass motion of the micro-object can be controlled and coupled to the kinematic states of the system. Once the system's kinematic state changes (in other words, the system undergoes linear or rotational acceleration), the center of mass motion of the levitated micro-object is affected and yields a signal. This signal is related to the changes of the system's kinematic states and can be read out.
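A common way to make this working principle concrete is to treat the levitated particle as a harmonic oscillator in the trap frame. The sketch below does this in Python with purely illustrative parameter values; the masses, frequencies, and function names are assumptions, not properties of any real device:

```python
import numpy as np

# Model: the trap provides a restoring force -k*x on the levitated particle.
# A constant frame acceleration a displaces the equilibrium by x = m*a/k,
# so measuring the centre-of-mass displacement x reads out a.

m = 1e-18                         # particle mass in kg (nanoparticle scale, assumed)
f0 = 1e5                          # trap resonance frequency in Hz (assumed)
k = m * (2 * np.pi * f0) ** 2     # trap stiffness, k = m * omega0^2

def displacement(a):
    """Steady-state centre-of-mass displacement for a constant acceleration a [m/s^2]."""
    return m * a / k              # equivalently a / omega0^2

g = 9.81
x = displacement(1e-9 * g)        # response to a nano-g acceleration
print(f"displacement for 1 nano-g: {x:.3e} m")
```

Note that x = a / ω0², so softer traps (lower resonance frequency) give a larger displacement per unit acceleration, one reason levitated systems can reach very small acceleration noise floors.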
Regarding levitation techniques, there are generally three different approaches: optical, electrostatic and magnetic.
Applications
The sub-attonewton force sensitivity of levitation based systems could show promise for applications in many different fields, such as Casimir force sensing, gravitational wave detection and inertial sensing. For inertial sensing, levitation based systems could be used to make high-performance accelerometers and gyroscopes employed in inertial measurement units (IMUs) and inertial navigation systems (INSs). These are used in such applications as drone navigation in tunnels and mines, guidance of unmanned aerial vehicles (UAVs), or stabilization of micro-satellites. Levitation based inertial sensors that have sufficient sensitivity and low noise () for measurements in the seismic band ( to ) can be used in the field of seismometry, in which current inertial sensors cannot meet the requirements.
There are already some commercial products on the market. One example is the iOSG Superconducting gravity sensor, which is based on magnetic levitation and shows a noise of .
Advantages
The future trends in inertial sensing require that inertial sensors have lower cost, higher performance, and smaller size. Levitation based inertial sensing systems have already shown high performance. For example, the accelerometer used in the LISA Pathfinder in-flight experiment has a sensitivity of and noise of .
References
Levitation
Sensors | Levitation based inertial sensing | Physics,Technology,Engineering | 784 |
18,952,376 | https://en.wikipedia.org/wiki/Center%20for%20Transportation%20and%20Logistics%20Neuer%20Adler | Center for Transportation and Logistics Neuer Adler (CNA) is a German association for industries active in the transport and logistics sectors. The name "Neuer Adler" alludes to Adler, the first railway locomotive in Germany.
References
External links
Logistics in Germany
Professional associations based in Germany
Transport associations in Germany
Organisations based in Bavaria | Center for Transportation and Logistics Neuer Adler | Physics | 66 |
46,275,114 | https://en.wikipedia.org/wiki/Radionomy | Radionomy was an online platform that provided tools for operating online radio stations. It was part of Radionomy Group, a company which later acquired the online streaming platform SHOUTcast from Nullsoft, and eventually consolidated Radionomy into its SHOUTcast service.
Concept
Radionomy, a portmanteau of "radio" and "autonomy," is a platform that facilitates user-driven creation and consumption of online radio content. Through the Radionomy Musical Platform (RMO), users possess the autonomy to curate and program their online radio stations, incorporating elements such as music, commentary, and radio jingles. The platform empowers users to contribute original audio content, including musical compositions and jingles, and offers the capability for live broadcasts.
To ensure compliance with copyright regulations, Radionomy secures licensing from SABAM, enabling the legal use of music content. The platform sustains its operations and fulfills royalty obligations by incorporating advertising into broadcasts, limited to a maximum of four minutes per hour. This advertising model serves as a primary revenue source, supporting the platform's commitment to facilitating user-created online radio experiences while adhering to legal and financial considerations.
History
Radionomy was founded in September 2007 by four Belgian entrepreneurs: Alexandre Saboundjian, Gilles Bindels, Cedric van Kan and Yves Baudechon.
2008
17 January: Radionomy held a press conference at the Eiffel Tower in Paris and announced the public launch of the planned business for 17 April 2008.
Late February: the alpha version of the Radio Manager is tested by a community of beta testers selected on the basis of their radio projects. This is the beginning of the beta test.
17 April: the Radionomy site opens to the Belgian and French public, allowing visitors to listen to Internet radio stations created on the platform.
17 June: Radionomy released its beta.
2010
Date unknown: after several beta waves, the live function is incorporated into all web radios, regardless of their creation date and number of listeners.
2011
February 15: opening of the "Play the radio" feature, allowing all radio producers to have a pre-designed website.
March: launch of the Adionomy advertising platform, which allows advertisers to broadcast their advertising on the web radios while targeting listeners.
30 May: Radionomy invests in Hotmixradio.
2012
June 28: Radionomy announces the signing of an agreement with the US digital advertising platform Targetspot.
August 29: the Adionomy board was launched, a new governance body in the world of digital radio in France.
5 September: Radionomy announces the opening of its US headquarters in San Francisco.
18 September: Radionomy launched G2, the new version of the platform. This includes updates to the site radionomy.com, and the release of the Radio Manager Online platform, which replaced the older Radio Manager desktop application. Facebook, iPhone, and iPad applications were also released.
Late October: Alexandre Saboundjian, CEO of Radionomy, became manager of Hotmixradio, replacing its founder Olivier Riou.
2013
18 September: Radionomy won the "International Excellence in Online Audio" award at the RAIN Summit in Orlando, Florida.
December 16: Radionomy acquired the U.S.-based advertising company Targetspot.
2014
17 January: Radionomy formalizes the acquisition of Winamp and SHOUTcast from AOL. However, TechCrunch reported that the sale of Winamp and Shoutcast was worth between $5 and $10 million, with AOL taking a 12% stake (a financial, not strategic, investment) in Radionomy in the process.
2015
On 17 December 2015, Vivendi acquired a 64.4% majority stake in Radionomy. Its other shareholders, including its employees and the U.S.-based investment company Union Square Ventures, retained their stakes in the company.
2016
26 February: In a lawsuit filed in a California federal court, a group of Sony brands – including Arista Records, LaFace Records and Sony Music Entertainment – accused Radionomy of copyright infringement. The case was settled out of court shortly thereafter.
2017
AudioValley acquired a majority stake in Radionomy.
"In August 2017, AudioValley acquired the 64.4% stake held by Vivendi in Radionomy Group BV. AudioValley now owns 98.53% of the company's capital."
2020
On 1 January, Radionomy shut down its streaming service and migrated towards the Shoutcast platform. This move was part of the group's wish to offer all digital radio producers new professional-quality tools to better meet their needs.
2022
5 July: AudioValley renames itself to Targetspot, with its focus shifting to its digital audio monetisation business.
22 November: Azerion Group N.V. acquires Radionomy Group B.V. and all of its subsidiaries (Targetspot and Shoutcast) from Targetspot SA.
2023
February: Targetspot SA rebrands to Llama Group, based on its remaining Winamp subsidiary consisting of Bridger, Jamendo and Winamp.
List of properties formerly owned by Radionomy
In addition to its own online radio aggregation service, Radionomy owned audio and radio-related digital properties:
Hotmixradio
Jamendo
SHOUTcast
Targetspot
Winamp
References
External links
Belgian companies established in 2008
Internet radio | Radionomy | Technology | 1,084 |
45,223,069 | https://en.wikipedia.org/wiki/CASS4 | Cas scaffolding protein family member 4 is a protein that in humans is encoded by the CASS4 gene.
History and discovery
CASS4 (Crk associated substrate 4) is the fourth and last described member of the CAS protein family. CASS4 was detected by Singh et al. in 2008 following in silico screening of databases describing expressed sequence tags from an evolutionarily diverse group of organisms, using the CAS-related proteins (p130Cas, NEDD9/HEF1 and EFS) mRNAs as templates. Singh et al. subsequently cloned and characterized the CASS4 gene, originally assigning the name HEPL (HEF1-EFS-p130Cas-like) for similarity to the other three defined CAS genes. The official name was subsequently changed to CASS4 by the Human Genome Organization (HUGO) Gene Nomenclature Committee (HGNC).
Gene
The chromosomal location of the CASS4 gene is 20q13.31, with genomic coordinates of 20: 56411548-56459340 on the forward strand in GRCh38.p2. While its HGNC-approved symbol is CASS4, this gene has multiple synonyms, including "HEF-like protein", "HEF1-Efs-p130Cas-like", HEFL, HEPL and C20orf32 ("chromosome 20 open reading frame 32"). Official IDs assigned to this gene include 15878 (HGNC), 57091 (Entrez Gene) and ENSG00000087589 (Ensembl). In humans four transcript variants are known. The first and second each contain 7 exons and encode the same full-length protein isoform a (786 amino acids, considered the major isoform), the third contains 6 exons and encodes a shorter isoform b (732 amino acids) and the fourth contains 5 exons and encodes the shortest isoform c (349 amino acids). Cumulatively, the CASS4 transcripts are most highly expressed in spleen and lung among normal tissues, and are highly expressed in ovarian and leukemia cell lines.
To date, little effort has been applied to the direct study of transcriptional regulation of CASS4. The SABiosciences' DECODE database, based on the UCSC Bioinformatics Genome Browser, proposes several transcriptional regulators for CASS4 based on its promoter region sequence: NF-κβ, p53, LCR-F1 (NFE2-L1, nuclear factor, erythroid 2-like1), MAX1, C/EBPα, CHOP-10 (C/EBP homologous protein 10), POU3F1 (POU domain, class 3, transcription factor 1, aka Oct-6), Areb6 (ZEB1, Zinc finger E-box binding homeobox 1). These are compatible with regulation relevant to lymphocytes and deregulation in cancer.
Protein family
In vertebrates, the CAS protein family contains four members: p130Cas/BCAR1, NEDD9/HEF1, EFS and CASS4. There are no paralogous genes for this family in acoelomates, pseudocoelomates, and nematodes, while a single ancestral member is found in Drosophila. Evolutionary divergence of the CAS proteins family members is discussed by Singh et al. in detail.
Structure
All CAS protein family members have common structural characteristics. CAS proteins have an amino terminal SH3 domain enabling interaction with poly-proline motif-containing proteins such as FAK. Carboxy-terminal to this, they possess an unstructured domain containing multiple SH2 binding site motifs, which when tyrosine-phosphorylated allow interaction with SH2 domain containing proteins. Further to the carboxy-terminus, they have a four-helix bundle rich in serine residues, and a second highly conserved four-helix bundle that has been recognized as functionally and structurally similar to a focal adhesion targeting [FAT] domain. For the better studied members of the CAS family (BCAR1 and NEDD9), all of these domains have been defined as crucial for recognition and binding by other proteins, reflecting the primary role of CAS family proteins as cell signaling cascades mediators.
Isoform "a" of human CASS4 is considered the predominant species, and at 786 amino acids is the longest one. Amino acid sequence homology of this isoform of human CASS4 with other family members is 26% overall identity and 42% similarity. Using a yeast two-hybrid approach, the CASS4 protein SH3 domain was shown to interact with the FAK C-terminus, despite the lowest overall similarity to other SH3 domains in the CAS group. In addition, human CASS4 has a limited number of candidate SH2-binding sites, estimated at 10, which is similar to EFS (estimated at 9) and in contrast to p130Cas/BCAR1 and NEDD9, which have 20 and 18 respectively. The CASS4 C-terminus has a short region of CAS family homology, but lacks obvious similarity at the level of primary amino acid sequence. It also lacks a YDYVHL sequence at the N-terminal end of the FAT-like carboxy-terminal domain, even though this motif is conserved among the other three CAS family proteins and is an important binding site for the Src SH2 domain. Although this lack of sequence similarity may mean a reduced functionality of the CASS4 protein, molecular modeling analysis performed by Singh and colleagues using p130CAS/BCAR1 structures as templates suggested an almost identical fold between CASS4 and p130CAS/BCAR1 within their SH3 domains, and substantial similarity within 432-591 residues of CASS4 and 449-610 residues of p130Cas/BCAR1 at the level of secondary and tertiary structures. Also, the similar periodicity of α-helices and β-sheets in both CASS4 and p130Cas/BCAR1 provides another confirmation for the idea of well-conserved structures within the family members.
Function
The exact function of CASS4 and its role in development and human pathologies have been subject to little investigation compared to other family members. The primary study exploring CASS4 function was the initial report by Singh et al., who showed the direct interaction between CASS4 and FAK, and CASS4 regulation of FAK activation, affecting cellular adhesion, migration and motility. Unusually, CASS4 depletion had a bimodal effect, causing some cells to have lower velocity and others to have higher velocity than control cells, suggesting a potential role in maintaining homeostasis. This work also suggested the function of CASS4 may be cell-type specific and dependent upon the presence or absence of expression of other CAS family members. Direct binding has also been identified between CASS4 and CRKL, an SH2- and SH3 domain-containing adaptor protein that has also been shown to interact with another CAS family member, p130Cas/BCAR1, in regulation of cellular motility and migration. Because of the high degree of homology in interaction domains and some identified common partners, CASS4 is likely to share some functions with other CAS family members. These include association with FAK and Src family kinases at focal adhesions to transmit integrin-initiated signals to downstream effectors, which results in cytoskeleton reorganization and changes in motility and invasion.
Disease association
Altered expression or modification of CASS4 has been proposed as relevant to several human pathologies, typically based on detection of changes in CASS4 in high-throughput screening, although the role of CASS4 in the pathology of these conditions has not yet been studied directly. Some examples are provided below.
Cancer
Many CAS family proteins have altered activity and functional roles in cancer progression and metastasis, with functional roles in influencing cellular adhesion, migration and drug resistance. Changes in CASS4 may also be associated with human malignancies. CASS4 function was linked to non-small cell lung cancer (NSCLC) in a study by Miao et al. that correlated elevated CASS4 expression with lymph node metastasis and high TNM stage. In addition, this study detected a significant difference in cytoplasmic accumulation of CASS4 protein between high (H1299 and BE1) and low (LTE and A549) metastatic potential lung cancer cell lines. These may suggest CASS4 as a possible prognostic marker in clinical management of NSCLC.
Alzheimer's disease
CASS4 and the corresponding SNP rs7274581 T/C have been identified in a large meta-analysis as a locus for lower susceptibility to Alzheimer's disease (AD). However, this SNP was not found to be predictive in a follow-up study.
In a genome-wide association screen (GWAS), CASS4 showed a significant correlation with clinical pathological features of AD such as neurofibrillary tangles and neuritic plaques. Two additional CASS4 SNPs were reported to be associated with AD susceptibility: rs6024870 and rs16979934 T/G. Given the likely conserved CAS-family cytoskeletal function of CASS4, it has been speculated that it may have a role in axonal transport and influence the expression of the amyloid precursor protein (APP) and tau, which are pathologically affected in AD. Several possible mechanisms for CASS4 action in AD have been proposed.
Immunopathological conditions
An association of CASS4 with atopic asthma has been shown. CASS4 has also been reported to be an eosinophil-associated gene, with expression in sputum cells increased more than 1.5-fold after whole lung allergen challenge. Moreover, the CASS4 mRNA was upregulated in cells collected by bronchoalveolar lavage after segmental broncho-provocation with an allergen. Reciprocally, the CASS4 mRNA was downregulated when this procedure was performed following administration of mepolizumab (a humanized monoclonal anti-IL-5 antibodies which reduces excessive eosinophilia). This suggests CASS4 activity may be associated with immune response in the context of atopic asthma development.
Cystic fibrosis
CASS4 has been reported to play a modifying role in cystic fibrosis severity, progression and comorbid conditions. The CAS family member NEDD9 has also been shown to interact directly with AURKA (encoding Aurora-A kinase) to regulate cell cycle and ciliary resorption; it is possible that CASS4 may similarly interact with aurora-A kinase.
Thrombosis
CASS4 signaling may contribute to platelet activation and aggregation. A PKA/PKG phosphorylation site has been identified in CASS4 on residue S305 in the unstructured domain containing SH2-binding motifs; the functional significance of this phosphorylation is currently unknown. Significantly increased phosphorylation on S249 of CASS4, also in the unstructured domain, has been observed after platelet stimulation with the oxidized phospholipid KODA-PC (9-keto-12-oxo-10-dodecenoic acid ester of 2-lyso-phosphocholine, a CD36 receptor agonist) versus thrombin treatment, which may implicate CASS4-mediated signaling in platelet hyperreactivity.
Clinical significance
There are currently no therapeutic approaches targeting CASS4, and in the absence of a catalytic domain and no extracellular moieties, it may be challenging to generate such an agent. However, CASS4 may ultimately be relevant in clinical practice as a possible marker to assess prognosis and outcome in cases of NSCLC (and possibly other types of cancer). At present, its greatest clinical value is likely to be as a predictive variant for severity and onset of Alzheimer's disease and cystic fibrosis.
Notes
References
External links
Proteins | CASS4 | Chemistry | 2,543 |
12,728,109 | https://en.wikipedia.org/wiki/Soil%20respiration | Soil respiration refers to the production of carbon dioxide when soil organisms respire. This includes respiration of plant roots, the rhizosphere, microbes and fauna.
Soil respiration is a key ecosystem process that releases carbon from the soil in the form of CO2. CO2 is acquired by plants from the atmosphere and converted into organic compounds in the process of photosynthesis. Plants use these organic compounds to build structural components or respire them to release energy. When plant respiration occurs below-ground in the roots, it adds to soil respiration. Over time, plant structural components are consumed by heterotrophs. This heterotrophic consumption releases CO2 and when this CO2 is released by below-ground organisms, it is considered soil respiration.
The amount of soil respiration that occurs in an ecosystem is controlled by several factors. The temperature, moisture, nutrient content and level of oxygen in the soil can produce extremely disparate rates of respiration. These rates of respiration can be measured in a variety of methods. Other methods can be used to separate the source components, in this case the type of photosynthetic pathway (C3/C4), of the respired plant structures.
Soil respiration rates can be largely affected by human activity. This is because humans have the ability to and have been changing the various controlling factors of soil respiration for numerous years. Global climate change is composed of numerous changing factors including rising atmospheric CO2, increasing temperature and shifting precipitation patterns. All of these factors can affect the rate of global soil respiration. Increased nitrogen fertilization by humans also has the potential to affect rates over the entire planet.
Soil respiration and its rate across ecosystems is extremely important to understand. This is because soil respiration plays a large role in global carbon cycling as well as other nutrient cycles. The respiration of plant structures releases not only CO2 but also other nutrients in those structures, such as nitrogen. Soil respiration is also associated with positive feedback on global climate change. Positive feedback is when a change in a system produces a response in the same direction as the change. Therefore, soil respiration rates can be affected by climate change and then respond by enhancing climate change.
Sources of carbon dioxide in soil
All cellular respiration releases energy, water and CO2 from organic compounds. Any respiration that occurs below-ground is considered soil respiration. Respiration by plant roots, bacteria, fungi and soil animals all release CO2 in soils, as described below.
Tricarboxylic acid (TCA) cycle
The tricarboxylic acid (TCA) cycle – or citric acid cycle – is an important step in cellular respiration. In cellular respiration, a six-carbon sugar such as glucose is oxidized; the TCA cycle completes this oxidation, releasing the sugar's carbon as CO2 and producing H2O. Plants, fungi, animals and bacteria all use this cycle to convert organic compounds to energy. This is how the majority of soil respiration occurs at its most basic level. Since the process relies on oxygen to occur, this is referred to as aerobic respiration.
Fermentation
Fermentation is another process in which cells gain energy from organic compounds. In this metabolic pathway, energy is derived from the carbon compound without the use of oxygen. The products of this reaction are carbon dioxide and usually either ethyl alcohol or lactic acid. Due to the lack of oxygen, this pathway is described as anaerobic respiration. This is an important source of CO2 in soil respiration in waterlogged ecosystems where oxygen is scarce, as in peat bogs and wetlands. However, most CO2 released from the soil occurs via respiration and one of the most important aspects of below-ground respiration occurs in the plant roots.
Root respiration
Plants respire some of the carbon compounds which were generated by photosynthesis. When this respiration occurs in roots, it adds to soil respiration. Root respiration accounts for approximately half of all soil respiration. However, these values can range from 10 to 90% depending on the dominant plant types in an ecosystem and conditions under which the plants are subjected. Thus, the amount of CO2 produced through root respiration is determined by the root biomass and specific root respiration rates. Directly next to the root is the area known as the rhizosphere, which also plays an important role in soil respiration.
Rhizosphere respiration
The rhizosphere is a zone immediately next to the root surface with its neighboring soil. In this zone there is a close interaction between the plant and microorganisms. Roots continuously release substances, or exudates, into the soil. These exudates include sugars, amino acids, vitamins, long chain carbohydrates, enzymes and lysates which are released when root cells break. The amount of carbon lost as exudates varies considerably between plant species. It has been demonstrated that up to 20% of carbon acquired by photosynthesis is released into the soil as root exudates. These exudates are decomposed primarily by bacteria. These bacteria will respire the carbon compounds through the TCA cycle; however, fermentation is also present. This is due to the lack of oxygen caused by greater oxygen consumption by the root as compared to the bulk soil, the soil at a greater distance from the root. Other important organisms in the rhizosphere are root-infecting fungi, or mycorrhizae. These fungi increase the surface area of the plant root and allow the root to encounter and acquire a greater amount of soil nutrients necessary for plant growth. In return for this benefit, the plant will transfer sugars to the fungi. The fungi will respire these sugars for energy, thereby increasing soil respiration. Fungi, along with bacteria and soil animals, also play a large role in the decomposition of litter and soil organic matter.
Soil animals
Soil animals graze on populations of bacteria and fungi as well as ingest and break up litter to increase soil respiration. Microfauna are made up of the smallest soil animals. These include nematodes and mites. This group specializes on soil bacteria and fungi. By ingesting these organisms, carbon that was initially in plant organic compounds and was incorporated into bacterial and fungal structures will now be respired by the soil animal. Mesofauna are soil animals from in length and will ingest soil litter. The fecal material will hold a greater amount of moisture and have a greater surface area. This will allow for new attack by microorganisms and a greater amount of soil respiration. Macrofauna are organisms from , such as earthworms and termites. Most macrofauna fragment litter, thereby exposing a greater amount of area to microbial attack. Other macrofauna burrow or ingest litter, reducing soil bulk density, breaking up soil aggregates and increasing soil aeration and the infiltration of water.
Regulation of soil respiration
Regulation of CO2 production in soil is due to various abiotic, or non-living, factors. Temperature, soil moisture and nitrogen all contribute to the rate of respiration in soil.
Temperature
Temperature affects almost all aspects of respiration processes. Temperature will increase respiration exponentially to a maximum, at which point respiration will decrease to zero when enzymatic activity is interrupted. Root respiration increases exponentially with temperature in its low range when the respiration rate is limited mostly by the TCA cycle. At higher temperatures the transport of sugars and the products of metabolism become the limiting factor. At temperatures over , root respiration begins to shut down completely. Microorganisms are divided into three temperature groups; cryophiles, mesophiles and thermophiles. Cryophiles function optimally at temperatures below , mesophiles function best at temperatures between 20 and and thermophiles function optimally at over . In natural soils many different cohorts, or groups of microorganisms exist. These cohorts will all function best at different conditions, so respiration may occur over a very broad range. Temperature increases lead to greater rates of soil respiration until high values retard microbial function, this is the same pattern that is seen with soil moisture levels.
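Within the exponential range described above, the temperature response of respiration is commonly summarized with a Q10 model; this parameterization is a standard convention in the soil literature, supplied here as an illustration rather than drawn from the text:

$$R(T) = R(T_{\mathrm{ref}})\; Q_{10}^{\,(T - T_{\mathrm{ref}})/10}$$

A Q10 of 2, for example, means the respiration rate doubles for every 10 °C of warming, until high temperatures begin to retard enzymatic function.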
Soil moisture
Soil moisture is another important factor influencing soil respiration. Soil respiration is low in dry conditions and increases to a maximum at intermediate moisture levels until it begins to decrease when moisture content excludes oxygen. This allows anaerobic conditions to prevail and depress aerobic microbial activity. Studies have shown that soil moisture only limits respiration at the lowest and highest conditions with a large plateau existing at intermediate soil moisture levels for most ecosystems. Many microorganisms possess strategies for growth and survival under low soil moisture conditions. Under high soil moisture conditions, many bacteria take in too much water causing their cell membrane to lyse, or break. This can decrease the rate of soil respiration temporarily, but the lysis of bacteria causes for a spike in resources for many other bacteria. This rapid increase in available labile substrates causes short-term enhanced soil respiration. Root respiration will increase with increasing soil moisture, especially in dry ecosystems; however, individual species' root respiration response to soil moisture will vary widely from species to species depending on life history traits. Upper levels of soil moisture will depress root respiration by restricting access to atmospheric oxygen. With the exception of wetland plants, which have developed specific mechanisms for root aeration, most plants are not adapted to wetland soil environments with low oxygen. The respiration dampening effect of elevated soil moisture is amplified when soil respiration also lowers soil redox through bioelectrogenesis. Soil-based microbial fuel cells are becoming popular educational tools for science classrooms.
Nitrogen
Nitrogen directly affects soil respiration in several ways. Nitrogen must be taken in by roots to promote plant growth and life. Most available nitrogen is in the form of NO3−, which costs 0.4 units of CO2 to enter the root because energy must be used to move it up a concentration gradient. Once inside the root the NO3− must be reduced to NH3. This step requires more energy, which equals 2 units of CO2 per molecule reduced. In plants with bacterial symbionts, which fix atmospheric nitrogen, the energetic cost to the plant to acquire one molecule of NH3 from atmospheric N2 is 2.36 CO2. It is essential that plants uptake nitrogen from the soil or rely on symbionts to fix it from the atmosphere to assure growth, reproduction and long-term survival.
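Tallying the figures above gives the per-molecule carbon cost of the two nitrogen routes (simple arithmetic on the quoted numbers):

$$\underbrace{0.4}_{\text{NO}_3^-\ \text{uptake}} + \underbrace{2}_{\text{reduction to NH}_3} = 2.4\ \text{CO}_2\ \text{per NH}_3 \quad\text{vs.}\quad 2.36\ \text{CO}_2\ \text{per NH}_3\ \text{via symbiotic N}_2\ \text{fixation}$$

so, by these figures, the two routes are nearly equivalent in carbon cost.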
Another way nitrogen affects soil respiration is through litter decomposition. High nitrogen litter is considered high quality and is more readily decomposed by microorganisms than low quality litter. Degradation of cellulose, a tough plant structural compound, is also a nitrogen limited process and will increase with the addition of nitrogen to litter.
Methods of measurement
Different methods exist for the measurement of soil respiration rate and the determination of its sources. Methods can be divided into field- and laboratory-based methods. The most common field methods include the use of long-term stand-alone soil flux systems for measurement at one location at different times, and survey soil respiration systems for measurement at different locations and at different times. Stable isotope ratios can be used in both laboratory and field measurements.
Soil respiration can be measured alone or with added nutrients and (carbon) substrates that supply food sources to the microorganisms. Soil respiration without any additions of nutrients and substrates is called the basal soil respiration (BR). With the addition of nutrients (often nitrogen and phosphorus) and substrates (e.g. sugars), it is called the substrate-induced soil respiration (SIR). In both BR and SIR measurements, the moisture content can be adjusted with water.
Field methods
Long-term stand-alone soil flux systems for measurement at one location over time
These systems measure at one location over long periods of time. Since they only measure at one location, it is common to use multiple stations to reduce measuring error caused by soil variability over small distances. Soil variability may be tested with survey soil respiration instruments.
The long-term instruments are designed to expose the measuring site to ambient conditions as much as is possible between measurements.
Types of long-term stand-alone instruments
Closed, non-steady state systems
Closed systems take short-term measurements (typically over few minutes only) in a chamber sealed over the soil. The rate of soil CO2 efflux is calculated on the basis of CO2 increased inside the chamber. As it is within the nature of closed chambers that CO2 continues to accumulate, measurement periods are reduced to a minimum to achieve a detectable, linear concentration increase, avoiding an excessive build-up of CO2 inside the chamber over time.
Both individual assay information and diurnal CO2 respiration measuring information is accessible. It is also common for such systems to measure soil temperature, soil moisture and PAR (photosynthetically active radiation). These variables are normally recorded in the measuring file along with CO2 values.
For determination of soil respiration and the slope of CO2 increase, researchers have used linear regression analysis, the Pedersen (2001) algorithm, and exponential regression. There are more published references for linear regression analysis; however, the Pedersen algorithm and exponential regression analysis methods also have their following. Some systems offer a choice of mathematical methods.
When using linear regression, multiple data points are graphed and the points can be fitted with a linear regression equation, which will provide a slope. This slope can provide the rate of soil respiration with the equation F = bV/A, where F is the rate of soil respiration, b is the slope, V is the volume of the chamber and A is the surface area of the soil covered by the chamber. It is important that the measurement is not allowed to run over a longer period of time, as the increase in CO2 concentration in the chamber will also increase the concentration of CO2 in the porous top layer of the soil profile. This increase in concentration will cause an underestimation of soil respiration rate due to the additional CO2 being stored within the soil.
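A minimal sketch of this calculation in Python, using a hypothetical chamber time series; all values are illustrative, and a real system would additionally convert ppm to a molar concentration using air temperature and pressure:

```python
import numpy as np

# Closed-chamber soil respiration: fit a line to CO2 vs. time while the
# chamber is sealed, then scale the slope by chamber volume over soil
# area, F = b * V / A. Example data below are made up for illustration.

t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])          # s since chamber closed
co2 = np.array([400.0, 406.1, 412.3, 418.2, 424.4])   # ppm CO2 in the chamber

b, intercept = np.polyfit(t, co2, 1)   # slope b in ppm/s (linear regression)

V = 0.004    # chamber volume in m^3 (assumed)
A = 0.03     # soil surface area covered by the chamber in m^2 (assumed)

F = b * V / A
print(f"slope b = {b:.4f} ppm/s; flux F = {F:.4f} ppm*m/s")
```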
Open, steady-state systems
Open mode systems are designed to find soil flux rates once equilibrium has been reached in the measuring chamber. Air flows through the chamber before the chamber is closed and sealed. This purges any non-ambient CO2 levels from the chamber before measurement. After the chamber is closed, fresh air is pumped into the chamber at a controlled and programmable flow rate. This mixes with the CO2 from the soil, and after a time, equilibrium is reached. The researcher specifies the equilibrium point as a threshold difference in CO2 measurements between successive readings over an elapsed time. During the assay, the rate of change slowly reduces until it meets the researcher's rate-of-change criteria, or the maximum selected time for the assay. Soil flux or rate of change is then determined once equilibrium conditions are reached within the chamber. Chamber flow rates and times are programmable, accurately measured, and used in calculations. These systems have vents that are designed to prevent a possible unacceptable buildup of partial CO2 pressure, discussed under closed mode systems. Since the air movement inside the chamber might cause increased chamber pressure, or external winds may produce reduced chamber pressure, a vent is provided that is designed to be as wind proof as possible.
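At equilibrium, the flux in an open system follows from a steady-state mass balance over the chamber; the form below is the standard mass-balance expression, stated here as an assumption since the text does not give the equation:

$$F = \frac{u_0\,\left(c_{\text{out}} - c_{\text{in}}\right)}{A}$$

where u0 is the air flow rate through the chamber, c_in and c_out are the CO2 concentrations of the incoming and outgoing air, and A is the enclosed soil surface area.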
Open systems are also not as sensitive to soil structure variation, or to boundary layer resistance issues at the soil surface. Air flow in the chamber at the soil surface is designed to minimize boundary layer resistance phenomena.
Hybrid Mode Systems
A hybrid system also exists. It has a vent that is designed to be as wind proof as possible, and prevent possible unacceptable partial CO2 pressure buildup, but is designed to operate like a closed mode design system in other regards.
Survey soil respiration systems – for testing the variation of CO2 respiration at different locations and at different times
These are either open or closed mode instruments that are portable or semi-portable. They measure CO2 soil respiration variability at different locations and at different times. With this type of instrument, soil collars that can be connected to the survey measuring instrument are inserted into the ground and the soil is allowed to stabilize for a period of time. The insertion of the soil collar temporarily disturbs the soil, creating measuring artifacts. For this reason, it is common to have several soil collars inserted at different locations. Soil collars are inserted far enough to limit lateral diffusion of CO2. After soil stabilization, the researcher then moves from one collar to another according to experimental design to measure soil respiration.
Survey soil respiration systems can also be used to determine the number of long-term stand-alone temporal instruments that are required to achieve an acceptable level of error. Different locations may require different numbers of long-term stand-alone units due to greater or lesser soil respiration variability.
Isotope methods
Plants acquire CO2 and produce organic compounds with the use of one of three photosynthetic pathways. The two most prevalent pathways are the C3 and C4 processes. C3 plants are best adapted to cool and wet conditions while C4 plants do well in hot and dry ecosystems. Due to the different photosynthetic enzymes between the two pathways, different carbon isotopes are acquired preferentially. Isotopes are forms of the same element that differ in the number of neutrons, thereby making one isotope heavier than the other. The two stable carbon isotopes are 12C and 13C. The C3 pathway will discriminate against the heavier isotope more than the C4 pathway. This will make the plant structures produced from C4 plants more enriched in the heavier isotope, and therefore root exudates and litter from these plants will also be more enriched. When the carbon in these structures is respired, the CO2 will show a similar ratio of the two isotopes. Researchers will grow a C4 plant on soil that was previously occupied by a C3 plant or vice versa. By taking soil respiration measurements and analyzing the isotopic ratios of the CO2, it can be determined whether the soil respiration is mostly old versus recently formed carbon. For example, maize, a C4 plant, was grown on soil where spring wheat, a C3 plant, was previously grown. The results showed respiration of C3 soil organic matter (SOM) in the first 40 days, with a gradual linear increase in heavy isotope enrichment until day 70. The days after 70 showed a slowing enrichment to a peak at day 100. By analyzing stable carbon isotope data it is possible to determine the source components of respired SOM that was produced by different photosynthetic pathways.
Substrate-induced respiration in the field using stable isotopes
One problem in the measurement of soil respiration in the field is that respiration of microorganisms cannot be distinguished from respiration by plant roots and soil animals. This can be overcome using stable isotope techniques. Cane sugar is a C4 sugar which can act as an isotopic tracer. Cane sugar has a slightly higher abundance of 13C (δ13C ≈ −10‰) than the endogenous (natural) carbon in a C3 ecosystem (δ13C = −25 to −28‰). Cane sugar can be sprayed on the soil in a solution and will infiltrate the upper soil. Only microorganisms will respire the added sugar, because roots exclusively respire carbon products that are assimilated by the plant via photosynthesis. By analysis of the δ13C of the CO2 evolving from the soil with or without added cane sugar, the fraction of C3 (root and microbial) and C4 (microbial) respiration can be calculated.
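The partitioning rests on a two-source isotopic mixing model. Using the end-members quoted above, the fraction of respired CO2 derived from the added C4 sugar is:

$$f_{C4} = \frac{\delta^{13}C_{\text{measured}} - \delta^{13}C_{C3}}{\delta^{13}C_{C4} - \delta^{13}C_{C3}}, \qquad f_{C3} = 1 - f_{C4}$$

For example (illustrative numbers within the quoted ranges), a measured δ13C of −20‰ with end-members of −27‰ (C3) and −10‰ (C4 sugar) gives f_C4 = 7/17 ≈ 0.41, i.e. about 41% of the efflux derives from microbial respiration of the added sugar.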
Field respiration using stable isotopes can be used as a tool to measure microbial respiration in-situ without disturbing the microbial communities by mixing soil nutrients, oxygen, and soil contaminants that may be present.
Responses to human disturbance
Throughout the past 160 years, humans have changed land use and industrial practices, which have altered the climate and global biogeochemical cycles. These changes have affected the rate of soil respiration around the planet. In addition, increasingly frequent extreme climatic events such as heat waves (involving high temperature disturbances and associated intense droughts), followed by intense rainfall, impact on microbial communities and soil physico-chemistry and may induce changes in soil respiration.
Elevated carbon dioxide
Since the Industrial Revolution, humans have emitted vast amounts of CO2 into the atmosphere. These emissions have increased greatly over time and have increased global atmospheric CO2 levels to their highest in over 750,000 years. Soil respiration increases when ecosystems are exposed to elevated levels of CO2. Numerous free air CO2 enrichment (FACE) studies have been conducted to test soil respiration under predicted future elevated CO2 conditions. Recent FACE studies have shown large increases in soil respiration due to increased root biomass and microbial activity. Soil respiration has been found to increase up to 40.6% in a sweetgum forest in Tennessee and poplar forests in Wisconsin under elevated CO2 conditions. It is extremely likely that CO2 levels will exceed those used in these FACE experiments by the middle of this century due to increased human use of fossil fuels and land use practices.
Climate warming
Rising atmospheric CO2 levels raise the mean temperature of the Earth, and warmer soils respire faster. Human activities such as forest clearing, soil denuding, and development destroy autotrophic processes: with the loss of photosynthetic plants covering and cooling the soil surface, more radiant energy penetrates the soil, heating it and causing a rise in heterotrophic bacteria. These heterotrophs quickly degrade the organic matter, the soil structure crumbles, and material dissolves into streams and rivers and out to the sea. Much of the organic matter swept away in floods caused by forest clearing ends up in estuaries, wetlands, and eventually the open ocean. The increased turbidity of surface waters raises biological oxygen demand, and more autotrophic organisms die. In short, carbon dioxide levels rise with the increased respiration of soil bacteria that follows warming caused by the loss of soil cover.
As mentioned earlier, temperature greatly affects the rate of soil respiration. This may have the most drastic influence in the Arctic. Large stores of carbon are locked in the frozen permafrost. With an increase in temperature, this permafrost is melting and aerobic conditions are beginning to prevail, thereby greatly increasing the rate of respiration in that ecosystem.
Changes in precipitation
Due to shifting patterns of temperature and changing oceanic conditions, precipitation patterns are expected to change in location, frequency and intensity. Larger and more frequent storms are expected when oceans can transfer more energy to forming storm systems. This may have the greatest impact on xeric, or arid, ecosystems. It has been shown that soil respiration in arid ecosystems changes dynamically within a wetting–drying cycle: the rate of respiration in dry soil typically bursts to a very high level after rainfall and then gradually decreases as the soil dries. With an increase in rainfall frequency and intensity over areas without previous extensive rainfall, a dramatic increase in soil respiration can therefore be expected.
Nitrogen fertilization
Since the onset of the Green Revolution in the middle of the last century, vast amounts of nitrogen fertilizers have been produced and introduced to almost all agricultural systems. This has increased plant-available nitrogen in ecosystems around the world due to agricultural runoff and wind-driven fertilization. As discussed earlier, nitrogen can have a significant positive effect on the level and rate of soil respiration. Increases in soil nitrogen have been found to increase plant dark respiration, stimulate specific rates of root respiration, and increase total root biomass, because high nitrogen supply is associated with high plant growth rates. With this increase in productivity, an increase in soil activity, and therefore in respiration, can be expected.
Importance
Soil respiration plays a significant role in the global carbon and nutrient cycles as well as being a driver for changes in climate. These roles are important to our understanding of the natural world and human preservation.
Global carbon cycling
Soil respiration plays a critical role in the regulation of carbon cycling at the ecosystem level and at global scales. Each year approximately 120 petagrams (Pg) of carbon are taken up by land plants, and a similar amount is released to the atmosphere through ecosystem respiration. Global soils contain up to 3150 Pg of carbon, of which 450 Pg exist in wetlands and 400 Pg in permanently frozen soils. Soils thus contain more than four times as much carbon as the atmosphere. Researchers have estimated that soil respiration accounts for 77 Pg of carbon released to the atmosphere each year, roughly an order of magnitude more than the carbon released by anthropogenic sources (about 6 Pg per year) such as fossil fuel burning. Thus, a small change in soil respiration can seriously alter the balance between atmospheric CO2 concentration and soil carbon stores. Just as soil respiration plays a significant role in the global carbon cycle, it also regulates global nutrient cycling.
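A back-of-envelope check of these figures (the ~750 Pg atmospheric carbon stock is a commonly cited round number and an assumption here, as the text does not state it):

```python
soil_c, atmosphere_c = 3150.0, 750.0   # carbon stocks, Pg C
soil_resp, anthropogenic = 77.0, 6.0   # annual fluxes, Pg C per year

print(f"soil holds {soil_c / atmosphere_c:.1f}x the atmospheric carbon")              # ~4.2x
print(f"soil respiration is {soil_resp / anthropogenic:.0f}x the anthropogenic flux")  # ~13x
```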
Nutrient cycling
A major component of soil respiration is from the decomposition of litter which releases CO2 to the environment while simultaneously immobilizing or mineralizing nutrients. During decomposition, nutrients such as nitrogen are immobilized by microbes for their own growth. As these microbes are ingested or die, nitrogen is added to the soil. Nitrogen is also mineralized from the degradation of proteins and nucleic acids in litter. This mineralized nitrogen is also added to the soil. Due to these processes, the rate of nitrogen added to the soil is coupled with rates of microbial respiration. Studies have shown that rates of soil respiration were associated with rates of microbial turnover and nitrogen mineralization. Alterations of the global cycles can further act to change the climate of the planet.
Climate change
As stated earlier, the CO2 released by soil respiration is a greenhouse gas that will continue to trap energy and increase the global mean temperature if concentrations continue to rise. As global temperature rises, so will the rate of soil respiration across the globe, leading to a higher concentration of CO2 in the atmosphere and, in turn, to higher global temperatures: an example of a positive feedback loop. It is estimated that a rise in temperature of 2 °C will lead to an additional release of 10 Pg of carbon per year to the atmosphere from soil respiration, which is more than current annual anthropogenic carbon emissions. There is also a possibility that this increase in temperature will release carbon stored in permanently frozen soils, which are now melting. Climate models suggest that this positive feedback between soil respiration and temperature will lead to a decrease in soil-stored carbon by the middle of the 21st century.
Summary
Soil respiration is a key ecosystem process that releases carbon from the soil in the form of carbon dioxide. Carbon is stored in the soil as organic matter and is respired by plants, bacteria, fungi and animals. When this respiration occurs below ground, it is considered soil respiration. Temperature, soil moisture and nitrogen all regulate the rate of this conversion from carbon in soil organic compounds to CO2. Many methods are used to measure soil respiration; however, the closed dynamic chamber and use of stable isotope ratios are two of the most prevalent techniques. Humans have altered atmospheric CO2 levels, precipitation patterns and fertilization rates, all of which have had a significant role on soil respiration rates. The changes in these rates can alter the global carbon and nutrient cycles as well as play a significant role in climate change.
References
External links
Belowground respiration, Duke University
Soil biology | Soil respiration | Biology | 5,756 |
660,960 | https://en.wikipedia.org/wiki/Hashimoto%27s%20thyroiditis | Hashimoto's thyroiditis, also known as chronic lymphocytic thyroiditis, Hashimoto's disease and autoimmune thyroiditis, is an autoimmune disease in which the thyroid gland is gradually destroyed.
Early on, symptoms may not be noticed. Over time, the thyroid may enlarge, forming a painless goiter. Most people eventually develop hypothyroidism with accompanying weight gain, fatigue, constipation, hair loss, and general pains. After many years the thyroid typically shrinks in size. Potential complications include thyroid lymphoma. Further complications of hypothyroidism can include high cholesterol, heart disease, heart failure, high blood pressure, myxedema, and potential problems in pregnancy.
Hashimoto's thyroiditis is thought to be due to a combination of genetic and environmental factors. Risk factors include a family history of the condition and having another autoimmune disease. Diagnosis is confirmed with blood tests for TSH, thyroxine (T4), and antithyroid autoantibodies, together with thyroid ultrasound. Other conditions that can produce similar symptoms include Graves' disease and nontoxic nodular goiter.
Hashimoto's is typically not treated unless there is hypothyroidism, or the presence of a goiter, when it may be treated with levothyroxine. Those affected should avoid eating large amounts of iodine; however, sufficient iodine is required especially during pregnancy. Surgery is rarely required to treat the goiter.
Hashimoto's thyroiditis has a global prevalence of 7.5%, varying greatly by region: the highest rate is in Africa, and the lowest in Asia. In the US, white people are affected more often than black people. It is more common in low- to middle-income groups. Females are more susceptible, with a prevalence of 17.5% compared with 6% in males. It is the most common cause of hypothyroidism in developed countries. It typically begins between the ages of 30 and 50, and rates of the disease have increased over time. It was first described by the Japanese physician Hakaru Hashimoto in 1912, and studies in 1956 established that it is an autoimmune disorder.
Signs and symptoms
Signs
Early stages of autoimmune thyroiditis may present with a normal physical exam, with or without a goiter. A goiter is a diffuse, often symmetric swelling of the thyroid gland that may become visible in the anterior neck. The thyroid gland may become firm, large, and lobulated in Hashimoto's thyroiditis, but changes in the thyroid can also be non-palpable. Enlargement of the thyroid is due to lymphocytic infiltration and fibrosis.
While their role in the initial destruction of the follicles is unclear, antibodies against thyroid peroxidase or thyroglobulin are relevant, as they serve as biomarkers for detecting the disease and its severity. They are thought to be the secondary products of the T cell-mediated destruction of the gland.
As lymphocytic infiltration progresses, patients may exhibit signs of hypothyroidism in multiple bodily systems, including, but not limited to, a larger goiter, weight gain, cold intolerance, fatigue, myxedema, constipation, menstrual disturbances, pale or dry skin, dry and brittle hair, depression, and ataxia. Extended thyroid hormone deficiency may lead to muscle fibre changes, with fast-twitch type II fibres being replaced by slow-twitch type I fibres, resulting in muscle weakness, muscle pain, stiffness, and, rarely, pseudohypertrophy.
While rare, more serious complications of the hypothyroidism resulting from autoimmune thyroiditis are pericardial effusion, pleural effusion, both of which require further medical attention, and myxedema coma, which is an endocrine emergency.
Patients with goiters who have had autoimmune thyroiditis for many years might see their goiter shrink in the later stages of the disease due to destruction of the thyroid.
Graves' disease may occur before or after the development of autoimmune thyroiditis.
Symptoms
Many symptoms are attributed to the development of Hashimoto's thyroiditis. Symptoms can include: fatigue, weight gain, pale or puffy face, feeling cold, joint and muscle pain, constipation, dry and thinning hair, heavy menstrual flow or irregular periods, depression, a slowed heart rate, problems getting pregnant, miscarriages, and myopathy.
Some patients in the early stage of the disease may experience symptoms of hyperthyroidism due to the release of thyroid hormones from intermittent thyroid destruction (aka destructive thyrotoxicosis).
While most symptoms are attributed to hypothyroidism, similar symptoms are observed in Hashimoto's patients with normal thyroid hormone levels. According to one study, these symptoms may include lower quality of life, and issues of the "digestive system (abdominal distension, constipation and diarrhea), endocrine system (chilliness, gain weight and facial edema), neuropsychiatric system (forgetfulness, anxiety, depressed, fatigue, insomnia, irritability, and indifferent [sic]) and mucocutaneous system (dry skin, pruritus, and hair loss)."
In non-medical settings, the term "flare" is used to refer to a sudden exacerbation of symptoms, whether hyper or hypo.
Causes
The causes of Hashimoto's thyroiditis are complex. Around 80% of the risk of developing an autoimmune thyroid disorder is due to genetic factors, while the remaining 20% is related to environmental factors (such as iodine, drugs, infection, stress, radiation).
Genetics
Thyroid autoimmunity can be familial, and many patients report a family history of autoimmune thyroiditis or Graves' disease. The strong genetic component is borne out in studies on monozygotic twins, which show a concordance of 38–55% for clinical disease and an even higher concordance (up to 80%) for circulating thyroid antibodies regardless of clinical presentation. Neither result was seen to a similar degree in dizygotic twins, strongly favouring a largely genetic etiology.
The genes implicated vary in different ethnic groups and the impact of these genes on the disease differs significantly among people from different ethnic groups. A gene that has a large effect in one ethnic group's risk of developing Hashimoto's thyroiditis might have a much smaller effect in another ethnic group.
The incidence of autoimmune thyroid disorders is increased in people with chromosomal disorders, including Turner, Down, and Klinefelter syndromes.
HLA genes
The first gene locus associated with autoimmune thyroid disease was the major histocompatibility complex (MHC) region on chromosome 6p21. It encodes human leukocyte antigens (HLAs). Specific HLA alleles have a higher affinity to auto-antigenic thyroidal peptides and can contribute to autoimmune thyroid disease development. Specifically, in Hashimoto's disease, aberrant expression of HLA II on thyrocytes has been demonstrated. They can present thyroid autoantigens and initiate autoimmune thyroid disease. Susceptibility alleles are not consistent in Hashimoto's disease. In Caucasians, various alleles are reported to be associated with the disease, including DR3, DR5, and DQ7.
CTLA-4 genes
CTLA-4 is the second major immune-regulatory gene related to autoimmune thyroid disease. CTLA-4 gene polymorphisms may contribute to the reduced inhibition of T-cell proliferation and increase susceptibility to autoimmune response. CTLA-4 is a major thyroid autoantibody susceptibility gene. A linkage of the CTLA-4 region to the presence of thyroid autoantibodies was demonstrated by a whole-genome linkage analysis. CTLA-4 was confirmed as the main locus for thyroid autoantibodies.
PTPN22 gene
PTPN22 is the most recently identified immune-regulatory gene associated with autoimmune thyroid disease. It is located on chromosome 1p13 and expressed in lymphocytes. It acts as a negative regulator of T-cell activation. Mutation in this gene is a risk factor for many autoimmune diseases. Weaker T-cell signaling may lead to impaired thymic deletion of autoreactive T cells, and increased PTPN22 function may result in inhibition of regulatory T cells, which protect against autoimmunity.
Immune-related genes
IFN-γ promotes cell-mediated cytotoxicity against the thyroid; mutations causing increased production of IFN-γ have been associated with the severity of hypothyroidism. Severe hypothyroidism is also associated with mutations leading to lower production of IL-4 (a Th2 cytokine suppressing cell-mediated autoimmunity), lower secretion of TGF-β (an inhibitor of cytokine production), and mutations of FOXP3, an essential regulatory factor for the development of regulatory T cells (Tregs). Development of Hashimoto's disease has been associated with a mutation of the gene for TNF-α (a stimulator of IFN-γ production), causing its higher concentration.
Existential (aka endogenous environmental)
Sex
Study of healthy Danish twins divided to three groups (monozygotic and dizygotic same sex, and opposite sex twin pairs) estimated that genetic contribution to thyroid peroxidase antibodies susceptibility was 61% in males and 72% in females, and contribution to thyroglobulin antibodies susceptibility was 39% in males and 75% in females.
The high female predominance in thyroid autoimmunity may be associated with the X chromosome. It contains sex and immune-related genes responsible for immune tolerance.
A higher incidence of thyroid autoimmunity was reported in patients with a higher rate of X-chromosome monosomy in peripheral white blood cells.
X-chromosome inactivation
Another potential mechanism might be skewed X-chromosome inactivation, leading to the escape of X-linked self-antigens from presentation in the thymus and loss of T-cell tolerance.
Pregnancy
In one population study, two or more births were a risk factor for developing autoimmune hypothyroidism in pre-menopausal women.
Environmental
Medications
Certain medications or drugs have been associated with altering and interfering with thyroid function. There are two main mechanisms of interference:
Altering thyroid hormone serum transfer proteins. Estrogen, tamoxifen, heroin, methadone, clofibrate, 5-fluorouracil, mitotane, and perphenazine all increase thyroid-binding globulin (TBG) concentration. Androgens, anabolic steroids such as danazol, glucocorticoids, and slow-release nicotinic acid all decrease TBG concentrations. Furosemide, fenclofenac, mefenamic acid, salicylates, phenytoin, diazepam, sulphonylureas, free fatty acids, and heparin all interfere with thyroid hormone binding to TBG and/or transthyretin.
Altering extra-thyroidal metabolism of thyroid hormone. Propylthiouracil, glucocorticoids, propranolol, iodinated contrast agents, amiodarone, and clomipramine all inhibit conversion of T4 to T3. Phenobarbital, rifampin, phenytoin, and carbamazepine all increase hepatic metabolism. Finally, cholestyramine, colestipol, aluminium hydroxide, ferrous sulphate, and sucralfate are all drugs that decrease T4 absorption or enhance its excretion.
Iodine
Excessive iodine intake is a well-established environmental factor for triggering thyroid autoimmunity. Thyroid autoantibodies are more prevalent in geographical areas with higher dietary iodine levels. Several mechanisms by which iodine may promote thyroid autoimmunity have been proposed:
Via thyroglobulin iodination: Iodine exposure leads to higher iodination of thyroglobulin, increasing its immunogenicity by creating new iodine-containing epitopes or exposing cryptic epitopes. It may facilitate presentation of thyroglobulin by antigen-presenting cells, and enhance the binding affinity of the T-cell receptor. "Sufficiently Iodinated" thyroglobulin may activate Tg-specific T-cells.
Via thyrocyte damage: Iodine exposure has been shown to increase the level of reactive oxygen species. They enhance the expression of the intracellular adhesion molecule-1 on the thyrocytes, which could attract the immunocompetent cells into the thyroid gland. Additionally, such oxidative elements may bind to "proteins, nucleic acids and membrane lipids" forming compounds which damage cell integrity. Oxidative stress may cause cell necrosis and the release of autoantigens. Iodine also promotes thyrocyte apoptosis.
Via immune cell behaviour: Iodine has an influence on immune cells (stimulation of macrophages, augmented maturation of dendritic cells, increased number of T cells, stimulated B-cell immunoglobulin production).
Data from the Danish Investigation of Iodine Intake and Thyroid Disease shows that within two cohorts (males, females) with moderate and mild iodine deficiency, the levels of both thyroid peroxidase and thyroglobulin antibodies are higher in females, and prevalence rates of both antibodies increase with age.
Comorbidities
Comorbid autoimmune diseases are a risk factor for developing Hashimoto's thyroiditis, and the opposite is also true. Another thyroid disease closely associated with Hashimoto's thyroiditis is Graves' disease. Autoimmune diseases affecting other organs most commonly associated with Hashimoto's thyroiditis include celiac disease, type 1 diabetes, vitiligo, alopecia, Addison disease, Sjogren's syndrome, and rheumatoid arthritis. Autoimmune thyroiditis has also been seen in patients with autoimmune polyendocrine syndromes types 1 and 2.
Other
Other environmental factors include selenium deficiency, infectious diseases such as hepatitis C, rubella, and possibly Covid-19, toxins, dietary factors, radiation exposure, and gut dysbiosis.
Mechanism
The pathophysiology of autoimmune thyroiditis is not well understood. However, once the disease is established, its core processes have been observed:
Hashimoto's thyroiditis is a T-lymphocyte-mediated attack on the thyroid gland. T helper 1 cells trigger macrophages and cytotoxic lymphocytes to destroy thyroid follicular cells, while T helper 2 cells stimulate the excessive production of B cells and plasma cells, which generate antibodies against thyroid antigens, leading to thyroiditis. The three major antibodies are thyroid peroxidase antibodies (TPOAb), thyroglobulin antibodies (TgAb), and thyroid-stimulating hormone receptor antibodies (TRAb), with TPOAb and TgAb being most commonly implicated in Hashimoto's. They are hypothesized to develop as a result of thyroid damage, with T-lymphocytes sensitized to residual thyroid peroxidase and thyroglobulin, rather than as the initial cause of thyroid damage. However, they may exacerbate further thyroid destruction by binding the complement system and triggering apoptosis of thyroid cells. TPO antibody levels may correlate with the degree of lymphocyte infiltration of the thyroid.
Gross morphological changes within the thyroid are seen in the general enlargement, which is far more locally nodular and irregular than more diffuse patterns (such as that of hyperthyroidism). While the capsule is intact and the gland itself is still distinct from surrounding tissue, microscopic examination can provide a more revealing indication of the level of damage.
Hypothyroidism is caused by replacement of follicular cells with parenchymatous tissue.
Partial regeneration of the thyroid tissue can occur, but this has not been observed to normalise hormonal levels.
Pathology
Gross pathology of a thyroid with autoimmune thyroiditis may show a symmetrically enlarged thyroid. It is often paler in color than normal thyroid tissue, which is reddish-brown.
Microscopic examination (histology) will show diffuse parenchymal infiltration by lymphocytes, including plasma B-cells. The lymphocytes are predominantly T-lymphocytes, with a representation of both CD4-positive and CD8-positive cells. The plasma cells are polyclonal, with germinal centers present that resemble the structure of a lymph node (i.e., secondary lymphoid follicles, not to be confused with the normally present colloid-filled follicles that constitute the thyroid).
Atrophic colloid follicles are lined by Hürthle cells (cells with intensely eosinophilic and granular cytoplasm, which are a metaplasia of the normal cuboidal cells that line the thyroid follicles).
Fibrous tissue may be found throughout the affected thyroid as well. In late stages of the disease, the thyroid may be atrophic. Severe thyroid atrophy presents often with denser fibrotic bands of collagen that remains within the confines of the thyroid capsule.
Generally, pathological findings of the thyroid are related to the amount of existing thyroid function - the more infiltration and fibrosis, the less likely a patient will have normal thyroid function.
A rare but serious complication is thyroid lymphoma, generally the B-cell type, non-Hodgkin lymphoma.
Diagnosis
Tests
Some or all of the following tests may be performed, in any order:
Physical exam
Physicians will often start by assessing reported symptoms and performing a thorough physical exam, including a neck exam. On gross examination, a hard goiter that is not painful to the touch often presents; other symptoms seen with hypothyroidism, such as periorbital myxedema, depend on the current state of progression of the response, especially given the usually gradual development of clinically relevant hypothyroidism.
Antithyroid antibodies tests
Tests for antibodies against thyroid peroxidase, thyroglobulin, and thyrotropin receptors can detect autoimmune processes against the thyroid. However, seronegative (without circulating autoantibodies) thyroiditis is also possible. Circulating antibodies may be present before the onset of any symptoms.
Ultrasound
An ultrasound may be useful in detecting Hashimoto thyroiditis, especially in those with seronegative thyroiditis, or when patients have normal laboratory values but symptoms of autoimmune thyroiditis. Key features detected in the ultrasound of a person with Hashimoto's thyroiditis include "echogenicity, heterogeneity, hypervascularity, and presence of small cysts." Images obtained with ultrasound can evaluate the size of the thyroid, reveal the presence of nodules, or provide clues to the diagnosis of other thyroid conditions.
Nuclear medicine
Nuclear imaging showing thyroid uptake can also be helpful in diagnosing thyroid function, particularly differential diagnosis.
TSH plasma serum concentration test
To detect if the pituitary is stimulating an underperforming thyroid to produce more thyroid hormone. Thyroid-stimulating hormone (TSH) secretion from the anterior pituitary increases in response to decreased serum thyroid hormones. If elevated, it signifies hypothyroidism. The elevation is usually a marked increase over the normal range. TSH is the preferred initial test of thyroid function as it has a higher sensitivity to changes in thyroid status than free T4.
Biotin can cause this test to read "falsely low". Time of day can affect the results of this test; TSH peaks early in the morning and slumps in the late afternoon to early evening, with "a variation in TSH by a mean of between 0.95 mIU/mL to 2.0 mIU/mL". Hypothyroidism is diagnosed more often in samples taken soon after waking.
T3 or T4 levels test
To detect a lack of thyroid hormones (hypothyroidism), or excess of thyroid hormones (hyperthyroidism). The two thyroid hormones are Thyroxine (T4) and Tri-iodothyronine (T3). T4 and T3 can be measured by their total amount, or free amount. As the free amount reflects the amount available to body tissues, the most treatment-relevant measures for thyroid disorders are Free T3 and Free T4. Typically, Free T4 is the preferred test for hypothyroidism, as Free T3 immunoassay tests are less reliable at detecting low levels of thyroid hormone, and they are more susceptible to interference. Free T4 levels will usually be lowered, but sometimes might be normal.
Immunoassay tests of Free T4 and Free T3 may overestimate concentrations, particularly at low thyroid hormone levels, which is why results are typically read in conjunction with TSH, a more sensitive measure. LC-MSMS assays are rarer, but they are "highly specific, sensitive, precise, and can detect hormones found in low concentrations."
Muscle Biopsy
Muscle biopsy is not necessary for diagnosis of myopathy due to hypothyroid muscle fibre changes, however it may reveal confirmatory features.
Treatment
There is no cure for Hashimoto's Thyroiditis. There is currently no known way to stop auto-immune lymphocytes infiltrating the thyroid or to stimulate regeneration of thyroid tissue. However, the condition can be managed.
Managing hormone levels
Hypothyroidism caused by Hashimoto's thyroiditis is treated with thyroid hormone replacement agents such as levothyroxine (LT4), liothyronine (LT3), or desiccated thyroid extract (T4+T3). A tablet or liquid taken once a day generally keeps the thyroid hormone levels normal. In most cases, the treatment needs to be taken for the rest of the person's life.
The standard of care is levothyroxine (LT4) therapy, which is an oral medication identical in molecular structure to endogenous thyroxine (T4). Levothyroxine sodium has a sodium salt added to increase the gastrointestinal absorption of levothyroxine. Levothyroxine has the benefits of a long half-life leading to stable thyroid hormone levels, ease of monitoring, excellent safety and efficacy record, and usefulness in pregnancy as it can cross the fetal blood-brain barrier.
Levothyroxine dosing to normalise TSH is based on the amount of residual endogenous thyroid function and the patient's weight, particularly lean body mass. The dose can be adjusted for each patient; for example, the dose may be lowered for elderly patients or patients with certain cardiac conditions, but should be increased in pregnant patients. It should be administered on a consistent schedule. Levothyroxine may be dosed daily or weekly; however, weekly dosing may be associated with higher TSH levels, elevated thyroid hormone levels, and transient "echocardiographic changes in some patients following 2-4 h of thyroxine intake".
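As a minimal sketch of the weight-based part of this calculation (the 1.6 µg/kg/day full-replacement figure is a commonly cited rule of thumb and an assumption here, not a formula given in this article; real dosing is individualised and titrated to TSH, and this is not clinical guidance):

```python
# Illustrative weight-based starting-dose estimate for levothyroxine.
# 1.6 ug/kg/day is a commonly cited full-replacement rule of thumb (assumed,
# not stated in this article); doses are then individualised and titrated.

def starting_dose_ug(weight_kg, full_replacement=True):
    per_kg = 1.6 if full_replacement else 1.0  # lower if residual thyroid function
    raw = weight_kg * per_kg
    return round(raw / 12.5) * 12.5  # snap to 12.5 ug steps (a simplification)

print(starting_dose_ug(70.0))  # e.g. 112.5 ug/day for a 70 kg adult
```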
Some patients elect combination therapy with both levothyroxine and liothyronine, which is identical in molecular structure to tri-iodothyronine (T3); however, studies of combination therapy are limited, and five meta-analyses/reviews "suggested no clear advantage of the combination therapy." However, subgroup analysis found that patients who remain the most symptomatic while taking levothyroxine may benefit from therapy containing liothyronine.
There is a lack of evidence around the benefits, long-term effects and side effects of desiccated thyroid extract. It is no longer recommended for the treatment of hypothyroidism.
Side Effects
Side effects of thyroid replacement therapy are associated with "inadequate or excessive doses." Symptoms to watch for include, but are not limited to, anxiety, tremor, weight loss, heat sensitivity, diarrhea, and shortness of breath. More worrisome symptoms include atrial fibrillation and bone density loss.
Long term over-treatment is associated with increased mortality and dementia.
Monitoring
Thyroid-stimulating hormone (TSH) is the laboratory value of choice for monitoring response to treatment with levothyroxine. When treatment is first initiated, TSH levels may be monitored as often as every 6–8 weeks, and at that frequency each time the dose is adjusted, until the correct dose is determined. Once titrated to a proper dose, TSH levels are monitored yearly. The target level for TSH is the subject of debate, with factors like age, sex, individual needs, and special circumstances such as pregnancy being considered. Recent studies suggest that adjusting therapy based on thyroid hormone levels (T4 and/or T3) may also be important.
Monitoring liothyronine treatment or combination treatment can be challenging. Liothyronine can suppress TSH to a greater extent than levothyroxine, and its short half-life can result in large fluctuations of free T3 over the course of 24 hours.
Patients may have to adjust their dosage several times over the course of the disease. Endogenous thyroid hormone levels may fluctuate, particularly early in the disease. Patients may sometimes develop hyperthyroidism, even after long-term treatment. This can be due to a number of factors including acute attacks of destructive thyrotoxicosis (autoimmune attacks on the thyroid resulting in rises in thyroid hormone levels as thyroid hormones leak out of the damaged tissues). This is usually followed by hypothyroidism.
Reverse T3
Measuring reverse tri-iodothyronine (rT3) is often mentioned in the lay (non-medical) press as a possible marker to inform T4 or T3 therapy, "however, there is currently no evidence to support this application" as of 2023. Although cited in the lay press as a possible competitor to T3, it is unlikely that rT3 causes hypothyroid symptoms by out-competing T3 for thyroid hormone receptors, as it has a binding affinity 200 times weaker. It is also unlikely that rT3 causes poor T4 to T3 conversion; despite being demonstrated in vivo to have the potential to inhibit DIO-mediated T4 to T3 conversion, this is considered improbable at normal body hormone concentrations.
Persistent Symptoms
Multiple studies have demonstrated persistent symptoms in Hashimoto's patients with normal thyroid hormone levels (euthyroid), and an estimated 10–15% of patients treated with levothyroxine monotherapy are dissatisfied due to persistent symptoms of hypothyroidism. Several hypothesised causes are discussed in the medical literature:
Low tissue tri-iodothyronine (T3) hypothesis
Peripheral tissue T4 to T3 conversion may be inadequate. Patients on LT4 monotherapy may have blood T3 levels low or below the normal range, and/or may have local T3 deficiency in some tissues.
Although both molecules can have biological effects, thyroxine (T4) is considered the "storage form" of thyroid hormone, while tri-iodothyronine (T3) is considered the active form used by body tissues. The body must convert thyroxine into tri-iodothyronine in order to have biological effects. Tri-iodothyronine is produced primarily by conversion in the liver, kidney, skeletal muscle and pituitary gland. Possible reasons for poor conversion include:
Insufficient nutrients. Sufficient levels of the micronutrients zinc, selenium, iron, and possibly vitamin A are important for adequate conversion.
Conversion rates may decline with age.
Possible contribution of gene polymorphisms. Since deiodinase type 2 is necessary for T4 to T3 conversion in some peripheral tissues, "patients with DIO2 gene polymorphisms may have variable peripheral T3 availability", leading to localised hypothyroidism in some tissues. For these patients, levothyroxine monotherapy may not be sufficient and patients may have improvement on combination therapy. Thr92Ala DIO2 polymorphism is present in 12–36% of the population.
Patients with impaired conversion may be recommended combination therapy of both levothyroxine and liothyronine. As standard immunoassay tests can overestimate blood T4 and T3 levels, Ultrafiltration LC-MSMS T4 and T3 tests may help to identify patients who would benefit from additional T3.
Inadequate markers hypothesis
There is ongoing debate about how to define euthyroidism and whether TSH is its best indicator. TSH may be useful to detect poor thyroid output and may reflect the state of thyroid hormones in the hypothalamic-pituitary-thyroid axis, but not the presence of hormones in other body tissues. As a result, LT4 monotherapy may not result in a "truly biochemically euthyroid state." Patients may express a preference for "low normal or below normal TSH values" and/or T4 and T3 monitoring. The monitoring of other biomarkers that reflect the action of thyroid hormone on tissues has also been proposed.
As immunoassay Free T3 and Free T4 tests can overestimate levels, particularly at low thyroid hormone levels, hypothyroidism may be undertreated. LC-MSMS tests may provide more reliable measures.
Extra-thyroidal effects of autoimmunity hypothesis
It is hypothesised that autoimmunity may play some role in euthyroid symptoms. Hypothesised mechanisms include the proposal that TPO-antibody-producing lymphocytes may travel out of the thyroid to other tissue, creating symptoms and inflammation due to cross-reaction, or "the inflammatory nature of [...] persistently increased circulating cytokine levels." Multiple studies find that antibodies coincide with symptoms even in euthyroid patients, and that higher levels are associated with increased symptoms; however, "the found association does not prove a causality". No treatment currently exists for Hashimoto's autoimmunity, although observed wellbeing improvements after surgical thyroid removal are hypothesised to be due to removal of the autoimmune stimulus.
Physical and psychosocial co-morbidities hypothesis
It is hypothesised that symptoms may not be due to Hashimotos or hypothyroidism, but some other "physical and psychosocial co-morbidities".
Other influences on thyroid hormone levels
Zinc may increase free T3 levels. A small pilot study found Ashwagandha Root may increase T3 and T4 levels, however, there's a lack of strong evidence of this benefit and Ashwagandha has a potential to cause adrenal insufficiency.
Improving wellbeing
Some patients may perceive improved wellbeing while in thyrotoxicosis; however, overtreatment has risks (known risks for levothyroxine and unknown risks for liothyronine).
One study demonstrated that surgical thyroid removal may substantially improve fatigue and wellbeing; see Surgery considerations.
Reducing antibodies
It is not established that reducing antithyroid antibodies in Hashimoto's has benefits. A systematic review and meta-analysis of selenium trials found that while selenium reduces TPO antibodies, there was a lack of evidence of effects on "disease remission, progression, lowered levothyroxine dose or improved quality of life".
Selenium, vitamin D, and metformin can reduce thyroid peroxidase antibodies. There is preliminary evidence that levothyroxine, aloe vera juice, and black cumin seed may reduce thyroid peroxidase antibodies. Metformin can reduce thyroglobulin antibodies.
It is not established that a gluten-free diet can reduce antibodies when there is no comorbid coeliac disease. Gluten free diets have been shown in several studies to reduce antibodies, and in other studies to have no effect, however there were significant confounding issues in these studies, including not ruling out comorbid coeliac disease.
One study found that surgical thyroid removal can substantially reduce anti-thyroid antibody levels; see Surgery considerations.
Surgery considerations
Surgery is not the initial treatment of choice for autoimmune disease, and uncomplicated Hashimoto's thyroiditis is not an indication for thyroidectomy. Patients generally may discuss surgery with their doctor if they are experiencing significant pressure symptoms, or cosmetic concerns, or have nodules present on ultrasound. One well-conducted study of patients with troublesome general symptoms and with anti-thyroperoxidase (anti-TPO) levels greater than 1000 IU/ml (normal <100 IU/ml) showed that total thyroidectomy caused the symptoms to resolve and median anti-thyroid peroxidase levels to reduce from 2232 to 152 IU/mL, but post-operative complications were higher than expected: infection (4.1%), permanent hypoparathyroidism (4.1%) and recurrent laryngeal nerve injury (5.5%).
Other
As of 2022, there has been only one study of low-dose naltrexone in Hashimoto's, which did not demonstrate efficacy, so nothing currently supports its use. Removing dairy products in those without lactose intolerance is likewise unsupported. While soy isoflavones could theoretically affect T3 and T4 production, studies in those with sufficient iodine find no effect.
Prognosis
Overt, symptomatic thyroid dysfunction is the most common complication, with about 5% of people with subclinical hypothyroidism and chronic autoimmune thyroiditis progressing to thyroid failure every year. Transient periods of thyrotoxicosis (over-activity of the thyroid) sometimes occur, and rarely the illness may progress to full hyperthyroid Graves' disease with active orbitopathy (bulging, inflamed eyes).
Rare cases of fibrous autoimmune thyroiditis present with severe shortness of breath and difficulty swallowing, resembling aggressive thyroid tumors, but such symptoms always improve with surgery or corticosteroid therapy. Although primary thyroid B-cell lymphoma affects fewer than one in 1000 persons, it is more likely to affect those with long-standing autoimmune thyroiditis, as there is a 67- to 80-fold increased risk of developing primary thyroid lymphoma in patients with Hashimoto's thyroiditis.
Myopathy as a result of muscle fibre changes due to thyroid hormone deficiency may take months or years of thyroid hormone treatment to resolve.
Anti-thyroid antibodies
Thyroid peroxidase antibodies typically (but not always) decline in patients treated with levothyroxine, with decreases varying between 10% and 90% after a follow-up of 6 to 24 months. One study of patients treated with levothyroxine observed that 35 out of 38 patients (92%) had declines in thyroid peroxidase antibody levels over five years, lowering by 70% on average. 6 of the 38 patients (16%) had thyroid peroxidase antibody levels return to normal.
Children
Many children diagnosed with Hashimoto's disease will experience the same progressive course of the disease that adults do. However, of children who develop anti-thyroid antibodies and hypothyroidism, up to 50% are later observed to have normal antibodies and thyroid hormone levels.
One case of true remission has been observed in a 12-year-old girl. Her thyroid was observed via ultrasound to progress from early inflammation to severe end-stage Hashimoto's thyroiditis with hypothyroidism, and then return to "almost normal with only minimal features of inflammation" and euthyroidism.
Epidemiology
Hashimoto's Disease is estimated to affect 2% of the world's population. About 1.0 to 1.5 in 1000 people have this disease at any time.
Sex
Anyone may develop this disease, but it occurs between 8 and 15 times more often in women than in men. Some research suggests a connection to the role of the placenta as an explanation for the sex difference; the difference in prevalence between the sexes is thought to be due to the effects of sex hormones.
High iodine consumption
Autoimmune thyroiditis has a higher prevalence in societies with a higher intake of iodine in their diet, such as the United States and Japan, and among people who are genetically susceptible. It is the most common cause of hypothyroidism in areas of sufficient iodine. The rate of lymphocytic infiltration has also increased in areas where iodine intake was once low but rose due to iodine supplementation.
Iodine deficiency disorder is combated by increasing iodine in a person's diet. When a dramatic change occurs in a person's diet, they become more at risk of developing hypothyroidism and other thyroid disorders. Treating iodine deficiency with iodine-supplemented salt should therefore be done carefully and cautiously, as the risk for Hashimoto's may increase.
Geographic influence of dietary trends
Geography plays a large role in which regions have access to diets with low or high iodine. Iodine levels in both water and salt should be heavily monitored in order to protect at-risk populations from developing hypothyroidism.
Geographic trends of hypothyroidism vary across the world as different places have different ways of defining disease and reporting cases. Populations that are spread out or defined poorly may skew data in unexpected ways.
North America
Hashimoto's thyroiditis may affect up to 5% of the United States' population. Hashimoto's thyroiditis disorder is thought to be the most common cause of primary hypothyroidism in North America.
Age
It has been shown that the prevalence of positive tests for thyroid antibodies increases with age, "with a frequency as high as 33 percent in women 70 years old or older."
Hashimoto's thyroiditis can occur at any age, including children, but more commonly appears in middle age, particularly for men. Incidence peaks in the fifth decade of life, but patients are usually diagnosed between age 30–50. The highest prevalence from one study was found in the elderly members of the community.
Race
The prevalence of Hashimoto's varies geographically. The highest rate is in Africa, and the lowest in Asia. In the US, the African-American population experiences it less commonly but has greater associated mortality.
Autoimmune diseases
Those that already have an autoimmune disease are at greater risk of developing Hashimoto's as the diseases generally coexist with each other. See Causes > Comorbidities, above.
Secular trends
The secular trends of hypothyroidism reveal how the disease has changed over time given changes in technology and treatment options. Even though ultrasound technology and treatment options have improved, the incidence of hypothyroidism has increased according to data from the US and Europe. Between 1993 and 2001, incidence varied between 3.9 and 4.89 cases per 1000 women; between 1994 and 2001, incidence in men rose from 0.65 to 1.01 cases per 1000.
Changes in the definition of hypothyroidism and treatment options modify the incidence and prevalence of the disease overall. Treatment using levothyroxine is individualized, and therefore allows the disease to be more manageable with time but does not work as a cure for the disease.
History
Also known as Hashimoto's disease, Hashimoto's thyroiditis is named after the Japanese physician Hakaru Hashimoto (1881−1934) of the medical school at Kyushu University, who first described the symptoms of persons with struma lymphomatosa, an intense infiltration of lymphocytes within the thyroid, in 1912 in the German journal Archiv für klinische Chirurgie. The paper ran to 30 pages with 5 illustrations, all describing the histological changes in the thyroid tissue. All results in this first study were collected from four women, and they described the pathological characteristics observed in these women, especially the infiltration of lymphocytes and plasma cells, as well as the formation of lymphoid follicles with germinal centers, fibrosis, degenerated thyroid epithelial cells, and leukocytes in the lumen. He described these traits as histologically similar to those of Mikulicz's disease. Once he had identified these traits in this new disease, he named it struma lymphomatosa. The disease emphasized the lymphocyte infiltration and the formation of lymphoid follicles with germinal centers, neither of which had previously been reported.
Despite Dr. Hashimoto's discovery and publication, the disease was not recognized as distinct from Riedel's thyroiditis, which was a common diagnosis at that time in Europe. Although many other articles were reported and published by other researchers, Hashimoto's struma lymphomatosa was recognized only as an early phase of Riedel's thyroiditis in the early 1900s. It was not until 1931 that the disease was recognized as a disease in its own right, when researchers Allen Graham et al. from Cleveland reported its symptoms and presentation in the same detailed manner as Hashimoto.
In 1956, Drs. Rose and Witebsky were able to demonstrate how immunization of certain rodents with extracts of other rodents' thyroid resembled the disease that Hashimoto and other researchers had described. These doctors were also able to describe anti-thyroglobulin antibodies in blood serum samples from the same animals.
Later on in the same year, researchers from the Middlesex Hospital in London were able to perform human experiments on patients who presented with similar symptoms. They purified anti-thyroglobulin antibody from their serum and were able to conclude that these sick patients had an immunological reaction to human thyroglobulin. From this data, it was proposed that Hashimoto's struma could be an autoimmune disease of the thyroid gland.
"Following these discoveries, the concept of organ-specific autoimmune disease was established and HT recognized as one such disease."
Following this recognition, the same researchers from Middlesex Hospital published an article in 1962 in The Lancet that included a portrait of Hakaru Hashimoto. The disease became more well known from that moment, and Hashimoto's disease started to appear more frequently in textbooks.
Pregnancy
Conception
It is recommended that hypothyroidism be treated with levothyroxine before conception, to prevent adverse effects on the course of the pregnancy and on the development of the child. In IVF, embryo transfer outcomes are improved when hypothyroidism is treated.
Pregnancy
The Endocrine Society recommends screening in pregnant women who are considered high-risk for thyroid autoimmune disease. Universal screening for thyroid diseases during pregnancy is controversial, however, one study "supports the potential benefit of universal screening".
Pregnant women may have antithyroid antibodies (5%–14% of pregnancies), poor thyroid function resulting in hypothyroidism, or both. Each is associated with risks.
Anti-thyroid antibodies in pregnancy
The presence of Thyroid peroxidase antibodies at the outset of pregnancy are associated with a greater risk to the mother of hypothyroidism and thyroid impairment in the first year after delivery.
The presence of antibodies is also associated with "a 2 to 4-fold increase in the risk of recurrent miscarriages, and 2 to 3-fold increased risk of preterm birth", however the reason why is unclear. Thyroid peroxidase antibodies are speculated to indicate other autoimmune processes against the placental-fetal unit.
Levothyroxine treatment in euthyroid women with thyroid autoimmunity does not significantly impact the relative risk of miscarriage and preterm delivery, or outcomes with live birth. "Therefore, no strong recommendations regarding the therapy in such scenarios could be made, but consideration on a case-by-case basis might be implemented."
Hypothyroidism in pregnancy
Women who have low thyroid function that has not been stabilized are at greater risk of complications for both parent and child. Risks to the mother include gestational hypertension including preeclampsia and eclampsia, gestational diabetes, placental abruption, and postpartum hemorrhage. Risks to the infant include miscarriage, preterm delivery, low birth weight, neonatal respiratory distress, hydrocephalus, hypospadias, fetal death, infant intensive care unit admission, and neurodevelopmental delays (lower child IQ, language delay or global developmental delay).
Successful pregnancy outcomes are improved when hypothyroidism is treated. Levothyroxine treatment may be considered at lower TSH levels in pregnancy than in standard treatment. Liothyronine does not cross the fetal blood-brain barrier, so liothyronine (T3) only or liothyronine + levothyroxine (T3 + T4) therapy is not indicated in pregnancy.
Close cooperation between the endocrinologist and obstetrician benefits the woman and the infant.
Immune changes during pregnancy
Hormonal changes and trophoblast expression of key immunomodulatory molecules lead to immunosuppression and fetal tolerance. The main players in regulation of the immune response are Tregs. Both cell-mediated and humoral immune responses are attenuated, resulting in immune tolerance and suppression of autoimmunity. It has been reported that during pregnancy, levels of thyroid peroxidase and thyroglobulin antibodies decrease.
Postpartum
Thyroid peroxidase antibodies testing is recommended for women who have ever been pregnant regardless of pregnancy outcome. "[P]revious pregnancy plays a major role in development of autoimmune overt hypothyroidism in premenopausal women, and the number of previous pregnancies should be taken into account when evaluating the risk of hypothyroidism in a young women [sic]."
Postpartum thyroiditis can occur in women with Hashimoto's. In healthy women, Postpartum thyroiditis can occur up to 1 year after delivery and should be differentiated from Hashimoto's thyroiditis as it is treated differently.
After giving birth, Tregs rapidly decrease and immune responses are re-established. It may lead to the occurrence or aggravation of autoimmune thyroid disease. In up to 50% of females with thyroid peroxidase antibodies in the early pregnancy, thyroid autoimmunity in the postpartum period exacerbates in the form of postpartum thyroiditis. Higher secretion of IFN-γ and IL-4, and lower plasma cortisol concentration during pregnancy has been reported in females with postpartum thyroiditis than in healthy females. It indicates that weaker immunosuppression during pregnancy could contribute to the postpartum thyroid dysfunction.
Fetal microchimerism
Several years after the delivery, the chimeric male cells can be detected in the maternal peripheral blood, thyroid, lung, skin, or lymph nodes. The fetal immune cells in the maternal thyroid gland may become activated and act as a trigger that may initiate or exaggerate the autoimmune thyroid disease. In Hashimoto's disease patients, fetal microchimeric cells were detected in thyroid in significantly higher numbers than in healthy females.
Other animals
Hashimoto's disease is known to occur in chickens, rats, mice, dogs, and marmosets, but Graves' disease does not.
Pseudoscience
Pseudoscientific claims and "rogue practitioners" pose increasing risks to patients. "We have seen practitioners who proclaim themselves to be experts in hormonal therapy without any formal training and who often promote hormonal treatments without adequate endocrine evaluations. We have seen practitioners who make astonishing promises regarding the benefits of herbal, supplemental, and other unproven therapies that they themselves sell in their offices and/or online. And we have seen what we know to be frankly harmful and even dangerous products that contain animal whole organ (most commonly thyroid and/or adrenal) extracts or hormonal injections that produce highly elevated levels of sex hormones (especially testosterone) without any concern for short-term patient safety or longterm outcomes. And we have heard anecdotal stories from patients who visited these practitioners and had no beneficial results or frankly concerning on-treatment results at a surprisingly high financial cost, even though they had been promised symptom improvement, safety, and full insurance coverage."
See also
Hashimoto's encephalopathy
Myxedematous psychosis
Hashitoxicosis
Hoffmann Syndrome
References
Aging-associated diseases
Autoimmune diseases
Endocrine diseases
Wikipedia medicine articles ready to translate
Wikipedia neurology articles ready to translate
Thyroid disease
Diseases named after discoverers | Hashimoto's thyroiditis | Biology | 10,399 |
15,893,225 | https://en.wikipedia.org/wiki/Competitions%20and%20prizes%20in%20artificial%20intelligence | There are a number of competitions and prizes to promote research in artificial intelligence.
General machine intelligence
The David E. Rumelhart Prize is an annual award for making a "significant contemporary contribution to the theoretical foundations of human cognition". The prize is $100,000.
The Human-Competitive Award is an annual challenge started in 2004 to reward results "competitive with the work of creative and inventive humans". The prize is $10,000. Entries are required to use evolutionary computing.
The Intel AI Global Impact Festival is an international annual artificial-intelligence competition held by Intel Corporation for school and college students, with prizes upwards of $15,000. The competition has two age brackets: 13–18, and 18 and above.
The IJCAI Award for Research Excellence is a biannual award given at the International Joint Conference on Artificial Intelligence (IJCAI) to researchers in artificial intelligence as a recognition of excellence of their career.
The 2011 Federal Virtual World Challenge, advertised by The White House and sponsored by the U.S. Army Research Laboratory's Simulation and Training Technology Center, held a competition offering a total of US$52,000 in cash prize awards for general artificial intelligence applications, including "adaptive learning systems, intelligent conversational bots, adaptive behavior (objects or processes)" and more.
The Machine Intelligence Prize is awarded annually by the British Computer Society for progress towards machine intelligence.
Kaggle hosts competitions where "the world's largest community of data scientists compete to solve most valuable problems".
Conversational behaviour
The Loebner Prize is an annual competition to determine the best Turing test competitors. The winner is the computer system that, in the judges' opinions, demonstrates the "most human" conversational behaviour; an additional prize is reserved for a system that, in their opinion, passes a Turing test. This second prize has not yet been awarded.
Automatic control
Pilotless aircraft
The International Aerial Robotics Competition is a long-running event begun in 1991 to advance the state of the art in fully autonomous air vehicles. This competition is restricted to university teams (although industry and governmental sponsorship of teams is allowed). Key to this event is the creation of flying robots which must complete complex missions without any human intervention. Successful entries are able to interpret their environment and make real-time decisions based only on a high-level mission directive (e.g., "find a particular target inside a building having certain characteristics which is among a group of buildings 3 kilometers from the aerial robot launch point"). In 2000, a $30,000 prize was awarded during the 3rd Mission (search and rescue), and in 2008, $80,000 in prize money was awarded at the conclusion of the 4th Mission (urban reconnaissance).
Driverless cars
The DARPA Grand Challenge is a series of competitions to promote driverless car technology, aimed at a congressional mandate stating that by 2015 one-third of the operational ground combat vehicles of the US Armed Forces should be unmanned. While the first race had no winner, the second awarded a $2 million prize for the autonomous navigation of a hundred-mile trail, using GPS, computers and a sophisticated array of sensors. In November 2007, DARPA introduced the DARPA Urban Challenge, a sixty-mile urban area race requiring vehicles to navigate through traffic. In November 2010 the US Armed Forces extended the competition with the $1.6 million prize Multi Autonomous Ground-robotic International Challenge to consider cooperation between multiple vehicles in a simulated-combat situation.
Roborace will be a global motorsport championship with autonomously driving, electric vehicles. The series will be run as a support series during the Formula E championship for electric vehicles. This will be the first global championship for driverless cars.
Data-mining and prediction
The Netflix Prize was a competition for the best collaborative filtering algorithm that predicts user ratings for films, based on previous ratings. The competition was held by Netflix, an online DVD-rental service. The prize was $1,000,000.
The Pittsburgh Brain Activity Interpretation Competition will reward analysis of fMRI data "to predict what individuals perceive and how they act and feel in a novel Virtual Reality world involving searching for and collecting objects, interpreting changing instructions, and avoiding a threatening dog." The prize in 2007 was $22,000.
The Face Recognition Grand Challenge (May 2004 to March 2006) aimed to promote and advance face recognition technology.
The American Meteorological Society's artificial intelligence competition involves learning a classifier to characterise precipitation based on meteorological analyses of environmental conditions and polarimetric radar data.
Cooperation and coordination
Robot football
The RoboCup and Federation of International Robot-soccer Association (FIRA) are annual international robot soccer competitions. The International RoboCup Federation's stated challenge is that by 2050, "a team of fully autonomous humanoid robot soccer players shall win the soccer game, comply with the official rule of the FIFA, against the winner of the most recent World Cup."
Logic, reasoning and knowledge representation
The Herbrand Award is a prize given by Conference on Automated Deduction (CADE) Inc. to honour persons or groups for important contributions to the field of automated deduction. The prize is $1000.
The CADE ATP System Competition (CASC) is a yearly competition of fully automated theorem provers for classical first order logic associated with the Conference on Automated Deduction (CADE) and International Joint Conference on Automated Reasoning (IJCAR). The competition was part of the Alan Turing Centenary Conference in 2012, with total prizes of 9000 GBP given by Google.
The SUMO prize is an annual prize for the best open source ontology extension of the Suggested Upper Merged Ontology (SUMO), a formal theory of terms and logical definitions describing the world. The prize is $3000.
The Hutter Prize for lossless compression of human knowledge is a cash prize which rewards compression improvements on a specific 100 MB English text file. The prize awards 500 euros for each one percent improvement, up to €50,000. The organizers believe that text compression and AI are equivalent problems; three prizes have been awarded so far, each of around €2,000.
The Cyc TPTP Challenge is a competition to develop reasoning methods for the Cyc comprehensive ontology and database of everyday common sense knowledge. The prize is 100 euros for "each winner of two related challenges".
The Eternity II challenge was a constraint satisfaction problem very similar to the Tetravex game. The objective is to lay 256 tiles on a 16x16 grid while satisfying a number of constraints. The problem is known to be NP-complete. The prize was US$2,000,000. The competition ended in December 2010.
Games
The World Computer Chess Championship has been held since 1970. The International Computer Games Association continues to hold an annual Computer Olympiad which includes this event plus computer competitions for many other games.
The Ing Prize was a substantial money prize attached to the World Computer Go Congress, starting from 1985 and expiring in 2000. It was a graduated set of handicap challenges against young professional players with increasing prizes as the handicap was lowered. At the time it expired in 2000, the unclaimed prize was 400,000 NT dollars for winning a 9-stone handicap match.
The AAAI General Game Playing Competition is a competition to develop programs that are effective at general game playing. Given a definition of a game, the program must play it effectively without human intervention. Since the game is not known in advance the competitors cannot especially adapt their programs to a particular scenario. The prize in 2006 and 2007 was $10,000.
The General Video Game AI Competition (GVGAI) poses the problem of creating artificial intelligence that can play a wide, and in principle unlimited, range of games. Concretely, it tackles the problem of devising an algorithm that is able to play any game it is given, even if the game is not known a priori. Additionally, the contest poses the challenge of creating level and rule generators for any game it is given. This area of study can be seen as an approximation of general artificial intelligence, with very little room for game-dependent heuristics. The competition runs yearly in different tracks: single-player planning, two-player planning, single-player learning, and level and rule generation; each track offers prizes ranging from 200 to 500 US dollars for winners and runners-up.
The 2007 Ultimate Computer Chess Challenge was a competition organised by the World Chess Federation that pitted Deep Fritz against Deep Junior. The prize was $100,000.
The annual Arimaa Challenge offered a $10,000 prize until the year 2020 to develop a program that plays the board game Arimaa and defeats a group of selected human opponents. In 2015, David Wu's bot bot_sharp beat the humans, losing only 2 games out of 9. As a result, the Arimaa Challenge was declared over and David Wu received the prize of $12,000 ($2,000 of it offered by third parties for 2015's championship).
2K Australia is offering a prize worth A$10,000 to develop a game-playing bot that plays a first-person shooter video game which can convince a panel of judges that it is a human player. The competition started in 2008 and was won in 2012. A new competition is planned for 2014.
The Google AI Challenge was a bi-annual online contest organized by the University of Waterloo Computer Science Club and sponsored by Google that ran from 2009 to 2011. Each year a game was chosen and contestants submitted specialized automated bots to play against other competing bots.
Cloudball had its first round in Spring 2012 and finished on June 15. It is an international artificial intelligence programming contest, where users continuously submit the actions their soccer teams will take in each time step, in simple high level C# code.
The International Olympiad in Artificial Intelligence for high-school students was established in 2024 and consists of two rounds: in the scientific round, participants solve problems in different subfields of AI, and in the practical round, participants use existing AI tools to produce a visual result.
See also
Artificial intelligence
Progress in artificial intelligence
Glossary of artificial intelligence
References
Artificial intelligence competitions
Computer science competitions
Science and technology awards | Competitions and prizes in artificial intelligence | Technology | 2,058 |
12,572 | https://en.wikipedia.org/wiki/Grus%20%28constellation%29 | Grus is a constellation in the southern sky. Its name is Latin for the crane, a type of bird. It is one of twelve constellations conceived by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. Grus first appeared on a celestial globe published in 1598 in Amsterdam by Plancius and Jodocus Hondius and was depicted in Johann Bayer's star atlas Uranometria of 1603. French explorer and astronomer Nicolas-Louis de Lacaille gave Bayer designations to its stars in 1756, some of which had been previously considered part of the neighbouring constellation Piscis Austrinus. The constellations Grus, Pavo, Phoenix and Tucana are collectively known as the "Southern Birds".
The constellation's brightest star, Alpha Gruis, is also known as Alnair and appears as a 1.7-magnitude blue-white star. Beta Gruis is a red giant variable star with a minimum magnitude of 2.3 and a maximum magnitude of 2.0. Six star systems have been found to have planets: the red dwarf Gliese 832 is one of the closest stars to Earth to have a planetary system. Another—WASP-95—has a planet that orbits every two days. Deep-sky objects found in Grus include the planetary nebula IC 5148, also known as the Spare Tyre Nebula, and a group of four interacting galaxies known as the Grus Quartet.
History
The stars that form Grus were originally considered part of the neighbouring constellation Piscis Austrinus (the southern fish), with Gamma Gruis seen as part of the fish's tail. The stars were first defined as a separate constellation by the astronomer Petrus Plancius, who created twelve new constellations based on the observations of the southern sky by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. Grus first appeared on a 35-centimetre-diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. Its first depiction in a celestial atlas was in the German cartographer Johann Bayer's Uranometria of 1603. De Houtman included it in his southern star catalogue the same year under the Dutch name Den Reygher, "The Heron", but Bayer followed Plancius and Hondius in using Grus.
An alternative name for the constellation, Phoenicopterus (Latin "flamingo"), was used briefly during the early 17th century, seen in the 1605 work Cosmographiae Generalis by Paul Merula of Leiden University and a c. 1625 globe by Dutch globe maker Pieter van den Keere. Astronomer Ian Ridpath has reported the symbolism likely came from Plancius originally, who had worked with both of these people. Grus and the nearby constellations Phoenix, Tucana and Pavo are collectively called the "Southern Birds".
The stars that correspond to Grus were generally too far south to be seen from China. In Chinese astronomy, Gamma and Lambda Gruis may have been included in the tub-shaped asterism Bàijiù, along with stars from Piscis Austrinus. In Central Australia, the Arrernte and Luritja people living on a mission in Hermannsburg viewed the sky as divided between them, east of the Milky Way representing Arrernte camps and west denoting Luritja camps. Alpha and Beta Gruis, along with Fomalhaut, Alpha Pavonis and the stars of Musca, were all claimed by the Arrernte.
Characteristics
Grus is bordered by Piscis Austrinus to the north, Sculptor to the northeast, Phoenix to the east, Tucana to the south, Indus to the southwest, and Microscopium to the west. Bayer straightened the tail of Piscis Austrinus to make way for Grus in his Uranometria. Covering 366 square degrees, it ranks 45th of the 88 modern constellations in size and covers 0.887% of the night sky. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Gru". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined as a polygon of 6 segments; in the equatorial coordinate system, the declination coordinates of these borders lie between −36.31° and −56.39°. Grus is located too far south to be seen by observers in the British Isles and the northern United States, though it can easily be seen from Florida or San Diego; the whole constellation is visible to observers south of latitude 33°N.
Features
Stars
Keyser and de Houtman assigned twelve stars to the constellation. Bayer depicted Grus on his chart, but did not assign its stars Bayer designations. French explorer and astronomer Nicolas-Louis de Lacaille labelled them Alpha to Phi in 1756 with some omissions. In 1879, American astronomer Benjamin Gould added Kappa, Nu, Omicron and Xi, which had all been catalogued by Lacaille but not given Bayer designations. Lacaille considered them too faint, while Gould thought otherwise. Xi Gruis had originally been placed in Microscopium. Conversely, Gould dropped Lacaille's Sigma as he thought it was too dim.
Grus has several bright stars. Marking the left wing is Alpha Gruis, a blue-white star of spectral type B6V and apparent magnitude 1.7, around 101 light-years from Earth. Its traditional name, Alnair, means "the bright one" and refers to its status as the brightest star in Grus (although the Arabians saw it as the brightest star in the Fish's tail, as Grus was then depicted). Alnair is around 380 times as luminous and has over 3 times the diameter of the Sun. Lying 5 degrees west of Alnair, denoting the Crane's heart, is Beta Gruis (the proper name is Tiaki), a red giant of spectral type M5III. It has a diameter of 0.8 astronomical units (AU) (if placed in the Solar System it would extend to the orbit of Venus) and is located around 170 light-years from Earth. It is a variable star with a minimum magnitude of 2.3 and a maximum magnitude of 2.0. An imaginary line drawn from the Great Square of Pegasus through Fomalhaut will lead to Alnair and Beta Gruis.
Lying in the northwest corner of the constellation and marking the crane's eye is Gamma Gruis, a blue-white subgiant of spectral type B8III and magnitude 3.0 lying around 211 light-years from Earth. Also known as Al Dhanab, it has finished fusing its core hydrogen and has begun cooling and expanding, which will see it transform into a red giant.
There are several double stars visible to the naked eye in Grus. Forming a triangle with Alnair and Beta, Delta Gruis is an optical double whose components—Delta1 and Delta2—are separated by 45 arcseconds. Delta1 is a yellow giant of spectral type G7III and magnitude 4.0, 309 light-years from Earth, and may have its own magnitude 12 orange dwarf companion. Delta2 is a red giant of spectral type M4.5III and semiregular variable that ranges between magnitudes 3.99 and 4.2, located 325 light-years from Earth. It has around 3 times the mass and 135 times the diameter of the Sun. Mu Gruis, composed of Mu1 and Mu2, is also an optical double—both stars are yellow giants of spectral type G8III around 2.5 times as massive as the Sun with surface temperatures of around 4900 K. Mu1 is the brighter of the two at magnitude 4.8 located around 275 light-years from Earth, while Mu2 the dimmer at magnitude 5.11 lies 265 light-years distant from Earth. Pi Gruis, an optical double with a variable component, is composed of Pi1 Gruis and Pi2. Pi1 is a semi-regular red giant of spectral type S5, ranging from magnitude 5.31 to 7.01 over a period of 191 days, and is around 532 light-years from Earth. One of the brightest S-class stars to Earth viewers, it has a companion star of apparent magnitude 10.9 with sunlike properties, being a yellow main sequence star of spectral type G0V. The pair make up a likely binary system. Pi2 is a giant star of spectral type F3III-IV located around 130 light-years from Earth, and is often brighter than its companion at magnitude 5.6. Marking the right wing is Theta Gruis, yet another double star, lying 5 degrees east of Delta1 and Delta2.
RZ Gruis is a binary system of apparent magnitude 12.3 with occasional dimming to 13.4, whose components—a white dwarf and main sequence star—are thought to orbit each other roughly every 8.5 to 10 hours. It belongs to the UX Ursae Majoris subgroup of cataclysmic variable star systems, where material from the donor star is drawn to the white dwarf where it forms an accretion disc that remains bright and outshines the two component stars. The system is poorly understood, though the donor star has been calculated to be of spectral type F5V. These stars have spectra very similar to novae that have returned to quiescence after outbursts, yet they have not been observed to have erupted themselves. The American Association of Variable Star Observers recommends watching them for future events. CE Gruis (also known as Grus V-1) is a faint (magnitude 18–21) star system also composed of a white dwarf and donor star; in this case the two are so close they are tidally locked. Known as polars, material from the donor star does not form an accretion disc around the white dwarf, but rather streams directly onto it.
Six star systems are thought to have planetary systems. Tau1 Gruis is a yellow star of magnitude 6.0 located around 106 light-years away. It may be a main sequence star or be just beginning to depart from the sequence as it expands and cools. In 2002 the star was found to have a planetary companion. HD 215456, HD 213240 and WASP-95 are yellow sunlike stars discovered to have two planets, a planet and a remote red dwarf, and a hot Jupiter, respectively; this last—WASP-95b—completes an orbit round its sun in a mere two days. Gliese 832 is a red dwarf of spectral type M1.5V and apparent magnitude 8.66 located only 16.1 light-years distant; hence it is one of the nearest stars to the Solar System. A Jupiter-like planet—Gliese 832 b—orbiting the red dwarf over a period of 9.4±0.4 years was discovered in 2008. WISE 2220−3628 is a brown dwarf of spectral type Y, and hence one of the coolest star-like objects known. It has been calculated as being around 26 light-years distant from Earth.
In July 2019, astronomers reported finding a star, S5-HVS1, traveling faster than any other star detected so far. The star is in the Grus constellation in the southern sky, and about 29,000 light-years from Earth, and may have been propelled out of the Milky Way galaxy after interacting with Sagittarius A*, the supermassive black hole at the center of the galaxy.
Deep-sky objects
Nicknamed the spare-tyre nebula, IC 5148 is a planetary nebula located around 1 degree west of Lambda Gruis. Around 3000 light-years distant, it is expanding at 50 kilometres a second, one of the fastest rates of expansion of all planetary nebulae.
Northeast of Theta Gruis are four interacting galaxies known as the Grus Quartet. These galaxies are NGC 7552, NGC 7590, NGC 7599, and NGC 7582. The latter three galaxies occupy an area of sky only 10 arcminutes across and are sometimes referred to as the "Grus Triplet," although all four are part of a larger loose group of galaxies called the IC 1459 Grus Group. NGC 7552 and 7582 are exhibiting high starburst activity; this is thought to have arisen because of the tidal forces from interacting. Located on the border of Grus with Piscis Austrinus, IC 1459 is a peculiar E3 giant elliptical galaxy. It has a fast counterrotating stellar core, and shells and ripples in its outer region. The galaxy has an apparent magnitude of 11.9 and is around 80 million light-years distant.
NGC 7424 is a barred spiral galaxy with an apparent magnitude of 10.4, located around 4 degrees west of the Grus Triplet. Approximately 37.5 million light-years distant, it is about 100,000 light-years in diameter, has well defined spiral arms and is thought to resemble the Milky Way. Two ultraluminous X-ray sources and one supernova have been observed in NGC 7424. SN 2001ig was discovered in 2001 and classified as a Type IIb supernova, one that initially showed a weak hydrogen line in its spectrum, but this emission later became undetectable and was replaced by lines of oxygen, magnesium and calcium, as well as other features that resembled the spectrum of a Type Ib supernova. A massive star of spectral type F, A or B is thought to be the surviving binary companion to SN 2001ig, which was believed to have been a Wolf–Rayet star.
Located near Alnair is NGC 7213, a face-on type 1 Seyfert galaxy located approximately 71.7 million light-years from Earth. It has an apparent magnitude of 12.1. Appearing undisturbed in visible light, it shows signs of having undergone a collision or merger when viewed at longer wavelengths, with disturbed patterns of ionized hydrogen including a filament of gas around 64,000 light-years long. It is part of a group of ten galaxies.
NGC 7410 is a spiral galaxy discovered by British astronomer John Herschel during observations at the Cape of Good Hope in October 1834. The galaxy has a visual magnitude of 11.7 and is approximately 122 million light-years distant from Earth.
See also
Grus in Chinese astronomy
List of star names in Grus
Notes
References
Cited text
External links
The Deep Photographic Guide to the Constellations: Grus
The clickable Grus
Starry Night Photography – Grus Constellation
Southern constellations
Constellations listed by Petrus Plancius | Grus (constellation) | Astronomy | 3,084 |
43,294,813 | https://en.wikipedia.org/wiki/Desulfonatronum%20thiosulfatophilum | Desulfonatronum thiosulfatophilum is a species of haloalkaliphilic sulfate-reducing bacteria. It is able to grow lithotrophically by dismutation of thiosulfate and sulfite.
References
Further reading
External links
LPSN
Type strain of Desulfonatronum thiosulfatophilum at BacDive - the Bacterial Diversity Metadatabase
Bacteria described in 2011
Desulfovibrionales | Desulfonatronum thiosulfatophilum | Biology | 93 |
34,528,242 | https://en.wikipedia.org/wiki/Bis%28chloroethyl%29%20ether | Bis(chloroethyl) ether is an organic compound with the formula O(CH2CH2Cl)2. It is an ether with two 2-chloroethyl substituents. It is a colorless liquid with the odor of a chlorinated solvent.
Reactions and applications
Bis(chloroethyl) ether is less reactive than the corresponding sulfur mustard S(CH2CH2Cl)2. In the presence of base, it reacts with catechol to form dibenzo-18-crown-6.
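The reaction scheme that originally illustrated this step was stripped in extraction; a balanced overall equation is sketched below (an assumption: potassium hydroxide as the base, written in the same style as the divinyl ether equation further down), where C20H24O6 is dibenzo-18-crown-6:

2 C6H4(OH)2 + 2 O(CH2CH2Cl)2 + 4 KOH → C20H24O6 + 4 KCl + 4 H2O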
Bis(chloroethyl) ether can be used in the synthesis of the cough suppressant fedrilate. It is combined with benzyl cyanide and two molar equivalents of sodamide in a ring-forming reaction. When treated with strong base, it gives divinyl ether, an anesthetic:
O(CH2CH2Cl)2 + 2 KOH → O(CH=CH2)2 + 2 KCl + 2 H2O
Toxicity
The LD50 is 74 mg/kg (oral, rat). Bis(chloroethyl) ether is considered a potential carcinogen.
See also
Bis(chloromethyl) ether
Sulfur mustard
References
Ethers
Organochlorides
Alkylating agents
IARC Group 3 carcinogens
Chloroethyl compounds | Bis(chloroethyl) ether | Chemistry | 282 |
212,690 | https://en.wikipedia.org/wiki/Pwd | In Unix-like and some other operating systems, the pwd command (print working directory) writes the full pathname of the current working directory to the standard output.
Implementations
Multics had a pwd command (which was a short name of the print_wdir command) from which the Unix pwd command originated. The command is a shell builtin in most Unix shells such as Bourne shell, ash, bash, ksh, and zsh. It can be implemented easily with the POSIX C functions getcwd() or getwd().
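A minimal illustrative C sketch (not taken from any particular implementation) of a pwd-like program built on the POSIX getcwd() function might look like this:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[4096];  /* large enough for most paths; PATH_MAX-sized in practice */

    /* getcwd() fills buf with the absolute pathname of the current
     * working directory and returns NULL on failure. */
    if (getcwd(buf, sizeof buf) == NULL) {
        perror("pwd");
        return 1;
    }
    puts(buf);
    return 0;
}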
It is also available in the operating systems SpartaDOS X, PANOS, and KolibriOS. The equivalent on DOS (COMMAND.COM) and Microsoft Windows (cmd.exe) is the cd command with no arguments. Windows PowerShell provides the equivalent Get-Location cmdlet with the standard aliases gl and pwd.
On Windows CE 5.0, the cmd.exe Command Processor Shell includes the pwd command.
pwd as found on Unix systems has been part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX.1 and the Single Unix Specification. It appeared in Version 5 Unix. The version of pwd bundled in GNU coreutils was written by Jim Meyering.
The numerical computing environments MATLAB and GNU Octave include a pwd function with similar functionality. The OpenVMS equivalent is show default.
*nix examples
Note: POSIX requires that the default behavior be as if the -L switch were provided.
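The example that originally appeared here can be illustrated with a small C sketch (an assumption for illustration, not the article's original example). It contrasts the logical result (-L behaviour, taken from the PWD environment variable that the shell maintains) with the physical result (-P, with symbolic links resolved by getcwd()); a real implementation would additionally verify that PWD actually names the current directory before trusting it:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    const char *logical = getenv("PWD");  /* maintained by the shell's cd */
    char physical[4096];

    if (logical != NULL)
        printf("pwd -L: %s\n", logical);   /* logical path, symlinks kept */
    if (getcwd(physical, sizeof physical) != NULL)
        printf("pwd -P: %s\n", physical);  /* physical path, symlinks resolved */
    return 0;
}

When the working directory was entered through a symbolic link, the two printed paths differ.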
Working directory shell variables
POSIX shells set the following environment variables while using the cd command:
OLDPWD The previous working directory (as set by the cd command).
PWD The current working directory (as set by the cd command).
See also
Breadcrumb (navigation), an alternative way of displaying the work directory
List of GNU Core Utilities commands
List of Unix commands
pushd and popd
References
Further reading
External links
Multics commands
Unix SUS2008 utilities
Plan 9 commands
Inferno (operating system) commands
IBM i Qshell commands
File system directories | Pwd | Technology | 441 |
11,569,259 | https://en.wikipedia.org/wiki/Sclerotinia%20ricini | Sclerotinia ricini is a plant pathogen infecting poinsettias.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Ornamental plant pathogens and diseases
Sclerotiniaceae
Fungi described in 1919
Fungus species | Sclerotinia ricini | Biology | 55 |
11,654,677 | https://en.wikipedia.org/wiki/Fab%20Tree%20Hab | The Fab Tree Hab is a hypothetical ecological home design developed at MIT in the early 2000s by Mitchell Joachim, Javier Arbona and Lara Greden, with the idea of easing the burden humanity places on the environment with conventional housing by growing "living, breathing" tree homes.
It would be built by allowing native trees to grow over a computer-designed (CNC) removable plywood scaffold. Once the plants are interconnected and stable, the plywood would be removed and reused. MIT is experimenting with trees that grow quickly and develop an interwoven root structure that's soft enough to "train" over the scaffold, but then hardens into a more durable structure. The inside walls would be conventional clay and plaster.
An old methodology new to buildings is introduced in this design: pleaching. Pleaching is a method of weaving together tree branches to form living archways, lattices, or screens. The technique is also named "aeroponic culture". The load-bearing part of the structure uses trees that self-graft or inosculate such as live oak, elm and dogwood. The lattice frame for the walls and roof are created with the branches of the trees. Vines create a dense protective layer woven along the exterior, interspersed with soil pockets and growing plants.
This building could be very sustainable, as it can use bio-waste as manure for the trees and reuse the home's grey water for the trees and garden; there are also plans to harvest rainwater. These buildings would improve the quality of life by giving back to nature instead of just exploiting it. Throughout its whole life cycle the home remains part of the ecology, feeding different organisms at different times of its life, and its expected life span is greater than that of standard brick-and-concrete structures. Both the whole community and the individual would benefit from this lifestyle.
The Fab Tree Hab is an experiment that would develop over time. Extra operating costs required over the life-time of the home include pest management with organic pesticides and maintenance of the living machine's water treatment system. Technical demonstration and innovation is still needed for certain components, primarily the bioplastic windows that accept growth of the structure and the management of flows across the wall section to assure that the interior remains dry and animal-free. All in all, the elapsed time to reach livability is greater than with traditional construction, but so should be the health and longevity of the home and family. Above all, building this home could be achieved at a minimal price. Depending on the surrounding climate the house is to be grown in, the team expects it will take a minimum of five years to complete its structure. Realization of these homes will begin as an experiment, and it is envisioned that thereafter, the concept of renewal will take on a new architectural form, one of inter-dependency between nature and people.
As of May 2007 Mitchell Joachim stated that there is a "50 per cent" organic project in California, combining natural elements and traditional construction.
Trees
The main trees suggested are elms, dogwoods and oaks. The team hopes the homes can be grown using mainly native trees.
See also
References
Further reading
James Nestor, "Branching Out," Dwell, Vol. 7 No. 3, pp. 96–98, Feb. 2007.
Gregory Mone, "Grow your second home," Popular Science, pp. 38–9, Nov. 2006.
Carolyn Johnson, "MIT plants seeds of a new kind of house", The Boston Globe, p. C1, Sept. 25th, 2006.
Tracy Staedter, “House and Garden - Architects design a living home," Technology Review, pp. m2-m9, VOL. 109/ NO.3, July/ August, 2006.
Gail Hennessey, "Living in the Trees, " Scholastic News, Mar. 9, 2006.
Linda Stern, "Beware of Squirrels," Newsweek, p. E2, May 28, 2007.
Mitchell Joachim, Javier Arbona, and Lara Greden. "Fab Tree Hab," 306090 08: Autonomous Urbanism, Monson & Duval, ed., Princeton Architectural Press, 2005.
Richard Reames, Arborsculpture- Solutions for a Small Planet, Arborsmith Studios, 2005 .
David J. Brown, Ed., The HOME House Project: The Future of Affordable Housing, MIT Press, 2004.
Mitchell Joachim, J. Arbona, L. Greden, "Fab Tree Hab," Thresholds Journal #26 DENATURED, MIT, 2003.
External links
Newsweek: Terreform - Building Houses Out of Living Trees
MIT Architecture: Computation WORKS
Arborsmith Studios- Shaped trees, history, books, tools
5 minute Video talk Mitchell gave at TED
Sustainable technologies
Architectural theory
Landscape
2,903,019 | https://en.wikipedia.org/wiki/14%20Aquilae | 14 Aquilae is a probable spectroscopic binary star system in the equatorial constellation of Aquila. 14 Aquilae is the Flamsteed designation, though it also bears the Bayer designation g Aquilae. It is visible to the naked eye as a dim, white-hued star with an apparent visual magnitude of 5.42. The star is moving closer to the Earth, and may make its closest approach to the Sun in around 3.5 million years.
The visible component is an A-type main sequence star with a stellar classification of A1 V. It has 3.25 times the mass of the Sun and about twice the Sun's radius. The projected rotational velocity is relatively low at 23 km/s. The star is radiating 214 times the luminosity of the Sun from its photosphere at an effective temperature of 9,908 K.
References
External links
Image 14 Aquilae
HR 7209
CCDM 19029-0342
A-type main-sequence stars
Aquila (constellation)
Aquilae, g
BD-03 4460
Aquilae, 14
176984
093526
7209 | 14 Aquilae | Astronomy | 252 |
838,150 | https://en.wikipedia.org/wiki/Pastebin | A pastebin or text storage site is a type of online content-hosting service where users can store plain text (e.g. source code snippets for code review via Internet Relay Chat (IRC)). The most famous pastebin is the eponymous pastebin.com. Other sites with the same functionality have appeared, and several open source pastebin scripts are available. Pastebins may allow commenting where readers can post feedback directly on the page. GitHub Gists are a type of pastebin with version control.
History
Pastebin was developed in the late 1990s to facilitate IRC chatrooms devoted to computing, where users naturally need to share large blocks of computer input or output in a line-oriented medium. In such chatrooms, sending messages containing large blocks of computer data can disrupt conversations, which can be closely interleaved. When users send such messages, they are often warned to instead use pastebins or risk being banned from the service. Contrarily, a reference to a pastebin entry is a one-line hyperlink.
A new class of IRC bot has evolved alongside pastebins. In a chatroom that is largely oriented around a few pastebins, a user needs to do nothing more after posting at one of them: the receiving parties simply await a bot announcing the expected posting by the known user.
After the use of the pastebin.pl pastebin for a data breach, Pastebin started monitoring the site for illegally pasted data and information, leading to a backlash from Anonymous. Hacktivists teamed up with an organization calling itself the People's Liberation Front, launching an alternative called AnonPaste.
See also
Doxbin (clearnet)
Netiquette
PrivateBin
Snippet (programming)
Text file
Wiki
References
File sharing
File sharing services
Web applications
Web hosting
Text | Pastebin | Technology | 371 |
33,158,087 | https://en.wikipedia.org/wiki/Sarah%20Otto | Sarah Perin Otto (born October 23, 1967) is a theoretical biologist, Canada Research Chair in Theoretical and Experimental Evolution, and is currently a Killam Professor at the University of British Columbia. From 2008-2016, she was the director of the Biodiversity Research Centre at the University of British Columbia. Otto was named a 2011 MacArthur Fellow. In 2015 the American Society of Naturalists gave her the Sewall Wright Award for fundamental contributions to the unification of biology. In 2021, she was awarded the Darwin–Wallace Medal for contributing major advances to the mathematical theory of evolution.
Education
Otto received her Bachelor of Science degree in 1988, followed by her PhD in 1992, both from Stanford University.
Research and career
She did post-doctoral research with Nick Barton at the University of Edinburgh. Otto's research takes a multi-pronged approach, combining population-genetic mathematical models and statistical tools to understand how evolutionary processes generate diverse biological features. The core of her research revolves around analyzing mathematical models and exploring the insights they yield about how biological systems evolve. Otto is also the author of the book "A Biologist's Guide to Mathematical Modeling in Ecology and Evolution". Through the analysis and development of stochastic models, Otto and her colleagues have shown how genes are transmitted across generations, the context in which genes are expressed, and how evolutionary constraints influence life trait evolution. The second major component of her research involves the development of statistical tools, such as likelihood-based approaches, that allow researchers to infer how particular traits influence speciation and extinction. These tools allow us to answer questions such as: Do pollinators promote speciation of colorful flowers? Does genome size influence diversification? According to Otto, her research uses "mathematical models to clarify how features of an organism affect its potential for and rate of adaptation. She also steps back to address why such features vary in the first place. Why is it that some species produce offspring primarily by cloning themselves, whereas others never do? Why do some species have large genomes with many chromosomes, while others are streamlined?" Otto's recent work has investigated the genomic changes that underlie adaptation by yeast to harsh environmental conditions.
Science communication
Since 2013 Otto has been the director of the Liber Ero Fellowship program, a post-doctoral fellowship program that supports early-career scientists to conduct and communicate research that informs conservation and management issues. In 2006 she co-founded the Canadian Society for Ecology and Evolution. She has also served as the Vice-President and President for The Society for the Study of Evolution, The American Society of Naturalists and The European Society of Evolutionary Biology as well as a council member of The Society for the Study of Evolution and the American Genetic Association.
Awards and honours
Elected a Fellow of the Royal Society (FRS) in 2024
Society for the Study of Evolution Lifetime Achievement Award (2023)
Killam Prize (2023)
Darwin–Wallace Medal (2021)
Canadian Society for Ecology & Evolution President's Award (2017)
Sewall Wright Award (2015)
Elected member of (US) National Academy of Sciences (2013)
Guggenheim Fellowship in Natural Sciences For Research (2011)
MacArthur Fellowship For Research (2011)
Steacie Prize (2007)
Royal Society of Canada Fellow (2006)
McDowell Award for Excellence in Research (2003)
NSERC E.W.R. Steacie Memorial Fellowship (2001)
American Society of Naturalists Jasper Loftus-Hills Young Investigator Award (1995)
References
Mathematical ecologists
Living people
Stanford University alumni
Academic staff of the University of British Columbia
Academics of the University of Edinburgh
MacArthur Fellows
Fellows of the Royal Society
Canadian women academics
Women evolutionary biologists
Fellows of the Royal Society of Canada
Canada Research Chairs
1967 births
Theoretical biologists | Sarah Otto | Biology | 754 |
5,284,206 | https://en.wikipedia.org/wiki/Bertrand%27s%20theorem | In classical mechanics, Bertrand's theorem states that among central-force potentials with bound orbits, there are only two types of central-force (radial) scalar potentials with the property that all bound orbits are also closed orbits.
The first such potential is an inverse-square central force such as the gravitational or electrostatic potential: V(r) = -k/r, with force F(r) = -dV/dr = -k/r^2.
The second is the radial harmonic oscillator potential: V(r) = k r^2 / 2, with force F(r) = -k r.
The theorem is named after its discoverer, Joseph Bertrand.
Derivation
All attractive central forces can produce circular orbits, which are naturally closed orbits. The only requirement is that the central force exactly equals the centripetal force, which determines the required angular velocity for a given circular radius. Non-central forces (i.e., those that depend on the angular variables as well as the radius) are ignored here, since they do not produce circular orbits in general.
The equation of motion for the radius r of a particle of mass m moving in a central potential V(r) is given by the motion equation m d^2r/dt^2 - m r ω^2 = F(r), where ω ≡ dθ/dt and the angular momentum L = m r^2 ω is conserved. For illustration, the first term on the left is zero for circular orbits, and the applied inwards force F(r) equals the centripetal force requirement -m r ω^2, as expected.
The definition of angular momentum allows a change of independent variable from t to θ: d/dt = (L / (m r^2)) d/dθ, giving the new equation of motion that is independent of time: (L / r^2) d/dθ((L / (m r^2)) dr/dθ) - L^2 / (m r^3) = F(r).
This equation becomes quasilinear on making the change of variables u ≡ 1/r and multiplying both sides by m r^2 / L^2 (see also Binet equation): d^2u/dθ^2 + u = -(m / (L^2 u^2)) F(1/u).
As noted above, all central forces can produce circular orbits given an appropriate initial velocity. However, if some radial velocity is introduced, these orbits need not be stable (i.e., remain in orbit indefinitely) nor closed (repeatedly returning to exactly the same path). Here we show that a necessary condition for stable, exactly closed non-circular orbits is an inverse-square force or radial harmonic oscillator potential. In the following sections, we show that those two force laws produce stable, exactly closed orbits.
Define J(u) as J(u) ≡ -(m / (L^2 u^2)) F(1/u), where F represents the radial force. The criterion for perfectly circular motion at a radius r0 is that the first term on the left be zero: u0 = J(u0), where u0 ≡ 1/r0.
The next step is to consider the equation for u under small perturbations η ≡ u - u0 from perfectly circular orbits. On the right, the function J can be expanded in a standard Taylor series:
J(u) ≈ J(u0) + η J'(u0) + (η^2 / 2) J''(u0) + (η^3 / 6) J'''(u0) + ...
Substituting this expansion into the equation for u and subtracting the constant terms yields
d^2η/dθ^2 + η = η J'(u0) + (η^2 / 2) J''(u0) + (η^3 / 6) J'''(u0),
which can be written as
d^2η/dθ^2 + β^2 η = (η^2 / 2) J''(u0) + (η^3 / 6) J'''(u0),
where β^2 ≡ 1 - J'(u0) is a constant. β^2 must be non-negative; otherwise, the radius of the orbit would vary exponentially away from its initial radius. (The solution β = 0 corresponds to a perfectly circular orbit.) If the right side may be neglected (i.e., for small perturbations), the solutions are
η(θ) = h1 cos(βθ),
where the amplitude h1 is a constant of integration. For the orbits to be closed, β must be a rational number. What's more, it must be the same rational number for all radii, since β cannot change continuously; the rational numbers are totally disconnected from one another. Using the definition of J along with the equation for β^2, one finds β^2 = 3 + (r/F)(dF/dr), evaluated at the circular radius r0. Since this must hold for any value of r0, (r/F)(dF/dr) = β^2 - 3 everywhere, which implies that the force must follow a power law F(r) = -k r^(β^2 - 3). Hence, J must have the general form J(u) = (m k / L^2) u^(1 - β^2).
For more general deviations from circularity (i.e., when we cannot neglect the higher-order terms in the Taylor expansion of J), η may be expanded in a Fourier series, e.g., η(θ) = h0 + h1 cos(βθ) + h2 cos(2βθ) + h3 cos(3βθ) + ... We substitute this into the perturbed equation and equate the coefficients belonging to the same frequency, keeping only the lowest-order terms. As we show below, h0 and h2 are smaller than h1, being of order h1^2; h3, and all further coefficients, are at least of order h1^3. This makes sense, since h0, h2, h3, ... must all vanish faster than h1 as a circular orbit is approached.
From the cos(βθ) term, we get a consistency condition that fixes the allowed values of β, where in the last step the values of h0 and h2 obtained from the lower-frequency terms are substituted in. Using the power-law form of J, the second and third derivatives of J evaluated at u0 are
J''(u0) = -β^2 (1 - β^2) / u0,
J'''(u0) = β^2 (1 - β^2)(1 + β^2) / u0^2.
Substituting these values into the consistency condition yields the main result of Bertrand's theorem:
β^2 (1 - β^2)(4 - β^2) = 0.
Hence, the only potentials that can produce stable closed non-circular orbits are the inverse-square force law (β^2 = 1) and the radial harmonic-oscillator potential (β^2 = 4). The solution β = 0 corresponds to perfectly circular orbits, as noted above.
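As an illustrative numerical cross-check (not part of the original derivation), the apsidal angle of a nearly circular orbit under a power-law force F(r) = -k r^n is π/sqrt(3 + n), which is a rational fraction of π only for n = -2 and n = 1. A minimal C sketch, assuming units k = m = 1 and a mildly eccentric launch (all names and parameters here are illustrative):

/* Numerical sketch of Bertrand's theorem (illustrative only; assumes
 * units k = m = 1): integrate a planar orbit under F(r) = -r^n with
 * velocity-Verlet steps and measure the apsidal angle swept between
 * the starting perihelion and the next aphelion.  Theory predicts
 * pi/sqrt(3 + n); only n = -2 (angle pi) and n = 1 (angle pi/2) give
 * angles commensurate with pi, hence closed orbits.
 * Compile with: cc bertrand.c -lm */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static void accel(double n, double x, double y, double *ax, double *ay) {
    double r = sqrt(x * x + y * y);
    double f = -pow(r, n);            /* attractive radial force, k = 1 */
    *ax = f * x / r;
    *ay = f * y / r;
}

static double apsidal_angle(double n) {
    /* Launch at perihelion r = 1, slightly faster than the circular
     * speed (which is 1 for every n when k = m = 1). */
    double x = 1.0, y = 0.0, vx = 0.0, vy = 1.2;
    double dt = 1e-5, ax, ay, rprev = 1.0;

    accel(n, x, y, &ax, &ay);
    for (long i = 0; i < 2000000; i++) {
        /* one velocity-Verlet (leapfrog) step */
        vx += 0.5 * dt * ax;  vy += 0.5 * dt * ay;
        x  += dt * vx;        y  += dt * vy;
        accel(n, x, y, &ax, &ay);
        vx += 0.5 * dt * ax;  vy += 0.5 * dt * ay;

        double r = sqrt(x * x + y * y);
        if (r < rprev)                 /* r turned over: aphelion reached */
            return fabs(atan2(y, x)); /* polar angle swept from perihelion */
        rprev = r;
    }
    return NAN;                        /* no aphelion found within the step budget */
}

int main(void) {
    const double ns[] = { -2.0, 1.0, 0.0, 2.0 };
    for (int i = 0; i < 4; i++) {
        printf("n = %+.0f: measured %.4f rad, predicted pi/sqrt(3+n) = %.4f rad\n",
               ns[i], apsidal_angle(ns[i]), M_PI / sqrt(3.0 + ns[i]));
    }
    return 0;
}

For n = -2 and n = 1 the measured angles reproduce π and π/2; for the other exponents the apsidal angle is an irrational multiple of π, so the orbit precesses instead of closing.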
Classical field potentials
For an inverse-square force law such as the gravitational or electrostatic potential, the potential can be written V(r) = -k/r. The orbit u(θ) can be derived from the general equation d^2u/dθ^2 + u = -(m / (L^2 u^2)) F(1/u) = k m / L^2, whose solution is the constant k m / L^2 plus a simple sinusoid: u ≡ 1/r = (k m / L^2)[1 + e cos(θ - θ0)], where e (the eccentricity) and θ0 (the phase offset) are constants of integration.
This is the general formula for a conic section that has one focus at the origin; e = 0 corresponds to a circle, 0 < e < 1 corresponds to an ellipse, e = 1 corresponds to a parabola, and e > 1 corresponds to a hyperbola. The eccentricity e is related to the total energy E (see Laplace–Runge–Lenz vector): e = sqrt(1 + 2 E L^2 / (m k^2)).
Comparing these formulae shows that E < 0 corresponds to an ellipse, E = 0 corresponds to a parabola, and E > 0 corresponds to a hyperbola. In particular, E = -m k^2 / (2 L^2) for perfectly circular orbits.
Harmonic oscillator
To solve for the orbit under a radial harmonic-oscillator potential, it's easier to work in components r = (x, y, z). The potential can be written as V(r) = k r^2 / 2 = k (x^2 + y^2 + z^2) / 2.
The equation of motion for a particle of mass m is given by three independent Euler equations: m d^2x/dt^2 = -k x, m d^2y/dt^2 = -k y, m d^2z/dt^2 = -k z, where the constant k must be positive (i.e., k > 0) to ensure bounded, closed orbits; otherwise, the particle will fly off to infinity. The solutions of these simple harmonic oscillator equations are all similar: x = Ax cos(ω0 t + φx), y = Ay cos(ω0 t + φy), z = Az cos(ω0 t + φz),
where the positive constants Ax, Ay and Az represent the amplitudes of the oscillations, the angles φx, φy and φz represent their phases, and ω0 ≡ sqrt(k/m) is the common angular frequency. The resulting orbit r(t) = [x(t), y(t), z(t)] is closed because it repeats exactly after one period T ≡ 2π/ω0 = 2π sqrt(m/k).
The system is also stable because small perturbations in the amplitudes and phases cause correspondingly small changes in the overall orbit.
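A standard worked remark (not stated in the source): because all coordinates share the same angular frequency, the projection of the orbit onto any plane is an ellipse centred on the origin, unlike the Kepler ellipse, which has a focus at the origin. In LaTeX form, with δ the phase difference:

% Equal-frequency Lissajous relation: eliminating t from
% x = A_x cos(\omega_0 t + \varphi_x) and y = A_y cos(\omega_0 t + \varphi_y)
\frac{x^2}{A_x^2} - \frac{2xy\cos\delta}{A_x A_y} + \frac{y^2}{A_y^2} = \sin^2\delta,
\qquad \delta \equiv \varphi_x - \varphi_y,
% an ellipse centred at the origin (degenerating to a line when \delta = 0).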
References
Further reading
Classical mechanics
Eponymous theorems of physics
Orbits | Bertrand's theorem | Physics | 1,279 |
17,946,498 | https://en.wikipedia.org/wiki/Trina%20Solar | Trina Solar Co., Ltd. () is a Chinese photovoltaics company founded in 1997.
History
In 2018, Trina Solar launched its Energy IoT brand, established the Trina Energy IoT Industrial Development Alliance together with leading companies and research institutes in China and elsewhere, and founded the New Energy IoT Industrial Innovation Center.
In June 2020, Trina Solar listed on the STAR Market of Shanghai Stock Exchange.
A 2023 report by Sheffield Hallam University stated that Trina Solar had very high exposure to production in Xinjiang involving forced Uyghur labor. In August 2023, the U.S. Department of Commerce ruled that Trina Solar circumvented tariffs on Chinese-made goods.
In November 2024, Trina Solar sold its plant in Texas a week after it opened amid government scrutiny of Chinese companies that benefited from the Inflation Reduction Act.
References
External links
Companies formerly listed on the New York Stock Exchange
Photovoltaics manufacturers
Manufacturing companies of China
Companies based in Changzhou
Companies established in 1997
Chinese brands
Companies in the CSI 100 Index
1997 in Changzhou | Trina Solar | Engineering | 220 |
1,564,830 | https://en.wikipedia.org/wiki/Paul%20Walden | Paul Walden (26 July 1863 – 22 January 1957) was a Russian, Latvian and German chemist known for his work in stereochemistry and the history of chemistry. In particular, he discovered the Walden rule, invented the stereochemical reaction known as Walden inversion, and synthesized the first room-temperature ionic liquid, ethylammonium nitrate.
Early life and education
Walden was born in Rozulas in the Russian Empire (now Stalbe parish, Pārgauja municipality, Latvia) in a large Latvian peasant family. At the age of four, he lost his father and later his mother. Thanks to financial support from his two older brothers who lived in Riga (one was a merchant and another served as a lieutenant in the military) Walden managed to complete his education – first graduated with honors from the district school in the town of Cēsis (1876), and then from the Riga Technical High School (1882).
In December 1882, he enrolled in the Riga Technical University and became seriously interested in chemistry. In 1886, he published his first scientific study, on the color evaluation of the reactions of nitric and nitrous acids with various reagents, establishing the limits of sensitivity of the color method for the detection of nitric acid.
In April 1887, Walden became an active member of the Russian Physico-chemical Society. During this time, Walden started his collaboration with Wilhelm Ostwald (Nobel Prize in Chemistry 1909) which greatly influenced his development as a scientist. Their first work together was published in 1887 and was devoted to the dependence of the electrical conductivity of aqueous solutions of salts on their molecular weight.
Work in chemistry
In 1888, Walden graduated from the university with a degree in chemical engineering and continued working at the Chemistry Department as an assistant to professor C. Bischof.
Under his guidance, Walden began compiling the "Handbook of Stereochemistry", which was published in 1894. In preparing this handbook, Walden had to perform numerous chemical syntheses and characterizations, which resulted in 57 journal papers on stereochemistry alone, published between 1889 and 1900 in Russian and foreign journals. He also continued his research in the field of physical chemistry, establishing in 1889 that the ionizing power of a non-aqueous solvent is directly proportional to its dielectric constant.
During the summer vacations of 1890 and 1891, Walden was visiting Ostwald at the University of Leipzig and in September 1891 defended there a master thesis on the affinity values of certain organic acids. Ostwald suggested that he stay in Leipzig as a private lecturer, but Walden declined, hoping for a better career in Riga.
In the summer of 1892 he was appointed assistant professor of physical chemistry. A year later he defended his doctorate on osmotic phenomena in sedimentary layers and in September 1894 became professor of analytical and physical chemistry at the Riga Technical University. He worked there until 1911 and during 1902–1905 was rector of the university. In 1895, Walden made his most remarkable discovery which was later named Walden inversion, namely that various stereoisomers can be obtained from the same compound via certain exchange reactions involving hydrogen. This topic became the basis for his habilitation thesis defended in March 1899 at St. Petersburg University.
After that, Walden became interested in electrochemistry of nonaqueous solutions. In 1902, he proposed a theory of autodissociation of inorganic and organic solvents. In 1905, he found a relationship between the maximum molecular conductivity and viscosity of the medium and in 1906, coined the term "solvation". Together with his work on stereochemistry, these results brought him to prominence; in particular, he was considered a candidate for the Nobel Prize in Chemistry in 1913 and 1914.
Walden was also credited as a talented chemistry lecturer. In his memoirs, he wrote: "My audience usually was crowded and the feedback of sympathetic listeners gave me strength ... my lectures I was giving spontaneously, to bring freshness to the subject ... I never considered teaching as a burden".
1896 brought reforms to the Riga Technical University. Whereas previously, all teaching was conducted in German and Walden was the only professor giving some courses in Russian, from then on, Russian became the official language. This change allowed receiving subsidies from the Russian government and helped the alumni in obtaining positions in Russia. These reforms resulted in another and rather unusual collaboration of Walden with Ostwald: Walden was rebuilding the Chemistry Department and Ostwald sent him the blueprints of the chemical laboratories in Leipzig as an example. In May 1910, Walden was elected a member of the St. Petersburg Academy of Sciences and in 1911 was invited to Saint Petersburg to lead the Chemical Laboratories of the academy, founded in 1748 by Mikhail Lomonosov. He remained in that position till 1919. As an exception, he was allowed to stay in Riga where he had better research possibilities, but he was traveling, almost every week, by train, to St. Petersburg for the academy meetings and guidance of research. In the period 1911–1915, Walden published 14 articles in the "Proceedings of the Academy of Sciences" on electrochemistry of nonaqueous solutions. In particular, in 1914 he synthesized the first room-temperature ionic liquid, namely ethylammonium nitrate, (C2H5NH3)(NO3), with a melting point of 12 °C.
After 1915, due to the difficulties caused by World War I, political unrest in Russia and then the October Revolution, Walden reduced his research activity and focused on teaching and administrative work, taking numerous leading positions in science. Due to the political unrest in Latvia, Walden emigrated to Germany. He was appointed professor of inorganic chemistry at the University of Rostock, where he worked until his retirement in 1934. In 1924 he was invited back to Riga, where he gave a series of lectures. He was offered leading positions in chemistry in Riga and in St. Petersburg, but declined. Despite his emigration, Walden retained his popularity in Russia, and in 1927 he was appointed as a foreign member of the Russian Academy of Sciences. Later, he also became a member of the Swedish (1928) and Finnish (1932) Academies.
Personal life
Walden's daughter, Antonina Anna Walden (1899–1983), was a music teacher who married Finnish translator and essayist Juho August Hollo. Their son was the Finnish poet and translator Anselm Hollo.
Late years
In his last years, Walden focused on history of chemistry and collected a unique library of over 10,000 volumes. The library and his house were destroyed when the British bombed Rostock in 1942. Walden moved to Berlin and then to Frankfurt am Main, where he became a visiting professor of the history of chemistry at the local university. He met the end of World War II in the French Occupation Zone, cut off from Rostock University, located in the Soviet Zone, and thus left without any source of income.
Walden survived on a modest pension arranged by German chemists, giving occasional lectures in Tübingen and writing memoirs. In 1949, he published his best-known book, History of Chemistry. He died in Gammertingen in 1957, at the age of 93. His memoirs were published only in 1974.
References
Further reading
1863 births
1957 deaths
People from Cēsis Municipality
People from Valmiera county
Baltic-German people from the Russian Empire
Latvian scientists
Chemists from the Russian Empire
Latvian chemists
20th-century German chemists
Inventors from the Russian Empire
20th-century German inventors
Stereochemists
19th-century Latvian people
Saint Petersburg State University alumni
Leipzig University alumni
Riga Technical University alumni
Academic staff of Riga Technical University
Academic staff of the University of Latvia
Full members of the Saint Petersburg Academy of Sciences
Full Members of the Russian Academy of Sciences (1917–1925)
Full Members of the USSR Academy of Sciences
Honorary members of the USSR Academy of Sciences
Latvian emigrants to Germany | Paul Walden | Chemistry | 1,585 |
54,833,566 | https://en.wikipedia.org/wiki/NGC%204461 | NGC 4461 (also known as NGC 4443) is a lenticular galaxy located about 50 million light-years away in the constellation of Virgo. It was discovered by astronomer William Herschel on April 12, 1784. NGC 4461 is a member of Markarian's Chain which is part of the Virgo Cluster.
Interaction with NGC 4458
NGC 4461 is in a pair with the nearby galaxy NGC 4458. It has undergone a tidal interaction with NGC 4458.
See also
List of NGC objects (4001–5000)
M86
References
External links
Lenticular galaxies
Virgo (constellation)
4461
Virgo Cluster
41111
7613
Astronomical objects discovered in 1784 | NGC 4461 | Astronomy | 142 |
68,190,854 | https://en.wikipedia.org/wiki/Beam%20Therapeutics | Beam Therapeutics Inc. is an American biotechnology company conducting research in the field of gene therapies and genome editing. The company is headquartered in Cambridge, Massachusetts. In the development of therapies, the company relies on CRISPR-derived base editing, whereby single nucleotides in a DNA sequence can be modified without cutting the DNA, theoretically reducing the likelihood of off-target effects compared to previous CRISPR-based methods.
History
Founded in 2017, the company traces its origins to the Broad Institute of the Massachusetts Institute of Technology and Harvard University. Co-founders include David R. Liu and Feng Zhang. Prior to its IPO, the company raised nearly $1 billion in venture capital from investors. In a February 2020 IPO, the company raised $180 million.
In January 2022, Pfizer and Beam Therapeutics announced a collaboration to develop therapies for rare diseases using CRISPR.
See also
CRISPR Therapeutics
Intellia Therapeutics
Editas Medicine
References
External links
American companies established in 2017
Biotechnology companies of the United States
Gene therapy
Health care companies based in Massachusetts
Pharmaceutical companies of the United States
Life sciences industry
2020 initial public offerings
Companies listed on the Nasdaq | Beam Therapeutics | Engineering,Biology | 240 |
2,415,128 | https://en.wikipedia.org/wiki/Monopole%20%28mathematics%29 | In mathematics, a monopole is a connection on a principal G-bundle together with a section of the associated adjoint bundle.
Physical interpretation
Physically, the section can be interpreted as a Higgs field, where the connection and Higgs field should satisfy the Bogomolny equations and be of finite action.
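For reference, a standard form of the Bogomolny equation (conventional notation, assumed rather than quoted from this article) can be sketched in LaTeX:

% Bogomolny equation on R^3: F_A is the curvature of the connection A,
% \Phi the Higgs field, d_A the covariant derivative, \star the Hodge star.
F_A = \star\, \mathrm{d}_A \Phi

Finite action is then imposed as a boundary condition on the pair (A, Φ) at infinity.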
See also
Nahm equations
Instanton
Magnetic monopole
Yang–Mills theory
References
Differential geometry
Mathematical physics | Monopole (mathematics) | Physics,Mathematics | 82 |
25,728,207 | https://en.wikipedia.org/wiki/Paul%20J.%20Lioy | Paul James Lioy (May 27, 1947 – July 8, 2015) was a United States environmental health scientist born in Passaic, New Jersey, working in the field of exposure science. He was one of the world's leading experts in personal exposure to toxins. He published in the areas of air pollution, airborne and deposited particles, Homeland Security, and Hazardous Wastes. Lioy was a professor and division director at the Department of Environmental and Occupational Health, Rutgers University - School of Public Health. Until 30 June 2015 he was a professor and vice chair of the Department of Environmental and Occupational Medicine, Rutgers University - Robert Wood Johnson Medical School. He was deputy director of government relations and director of exposure science at the Rutgers Environmental and Occupational Health Sciences Institute in Piscataway, New Jersey.
Lioy has been recognized for his research and contributions to the development of environmental policy by the International Society of Exposure Analysis (now the International Society of Exposure Science) and by the Air & Waste Management Association, both with Lifetime Achievement Awards. Since 2002 he had been one of the Institute for Scientific Information's Most Highly Cited Scientists in the category of Environment and Ecology, and he was one of the founders of the International Society of Exposure (Analysis) Science (1989).
Early life and education
Lioy graduated from Passaic High School in 1965, and from Montclair State College (today University), NJ, in 1969 (magna cum laude). In 1971, he received a master's degree in physics from Auburn University, AL, and in 1975 an M.S. and Ph.D. in environmental science from Rutgers University.
Career
University appointments
2015: Professor, Department of Environmental and Occupational Health, School of Public Health, Rutgers University, Piscataway, NJ
1989–2015: Professor, Department of Environmental and Occupational Medicine, Rutgers - Robert Wood Johnson Medical School (RWJMS), Piscataway, NJ (formerly UMDNJ)
2000–2015: Professor, Rutgers - School of Public Health, Piscataway, NJ (formerly UMDNJ)
1986–2015: Professor, Graduate Faculty of Rutgers University: Department of Environmental Science, Public Health Program, and Toxicology Program, New Brunswick, NJ
1985-1989: Associate Professor, Department of Environmental and Community Medicine, UMDNJ-Robert Wood Johnson Medical School, Piscataway, NJ
1982-1985: Associate Professor, Institute of Environmental Medicine, New York University Medical Center, New York City, NY
1978-1982: Assistant Professor, Institute of Environmental Medicine, New York University Medical Center, New York City, NY
1976- 1978: Lecturer, Department of Civil Environmental Engineering, Polytechnic Institute of New York, New York City, NY
2015: Division Director, School of Public Health, Rutgers
2004–2015: Vice Chair, Department of Environmental and Occupational Medicine, Rutgers-RWJMS
2003–2015: Deputy Director Government Relations, Rutgers Environmental and Occupational Health Sciences Institute (formerly sponsored by UMDNJ and Rutgers University)
2001-2003: Acting Associate Director, Environmental and Occupational Health Sciences Institute, UMDNJ-RWJMS and Rutgers University
1999–2015: Co-Director, Center for Exposure and Risk Modeling, EOHSI
1995-2001: Deputy Director, Environmental and Occupational Health Sciences Institute, UMDNJ-RWJMS and Rutgers University
1994-1995: Acting Deputy Director, Environmental and Occupational Health Sciences Institute, UMDNJ-RWJMS and Rutgers University
1992–2015: Director, Controlled Exposure Facility, EOHSI
1990-2002: Faculty Administrator, EOHSI Analytical Laboratories
1986–2015: Chief, Exposure Measurement and Assessment Division, DECM of Rutgers-RWJMS
1986–2015: Director, Exposure Science Division, Rutgers Environmental and Occupational Health Sciences Institute, (EOHSI) (formerly sponsored by UMDNJ and Rutgers University)
1984-1985: Associate Director, Laboratory of Aerosol and Inhalation Research, Institute of Environmental Medicine, NYU Medical Center
1975-1978: Senior Air Pollution Engineer, Interstate Sanitation Commission, New York City, NY
1973-1975: Physical Scientist (part-time) U.S. EPA, Region II, Surveillance and Analysis Division, NJ
Adjunct positions
2006–2009 and 2012–2015: Adjunct Professor (volunteer), Department of Environmental and Occupational Health, University of Pittsburgh Graduate School of Public Health
1996: Visiting Professor, Department of Biometry and Biostatistics, Medical University of South Carolina, Charleston, SC
1990: Visiting Scientist, RIVM, Bilthoven, The Netherlands
Awards and advisory committees
Recipient of Cranford Chamber of Commerce Meritorious Service Award, acknowledged by resolution from the State Legislature, Union County, and Township of Cranford, 2012
Recipient of the Daughters of the American Revolution Founders Award, The Ellen Hardin Walworth Medal for Patriotism, 2009. A Resolution also approved by the New Jersey State Legislature.
Recipient of the National Medal for Conservation from The Daughters of the American Revolution; 2009: Chapter and State of New Jersey Medalist
Recipient of the 2009-2011 Distinguished Lecturer Award from the International Society of Exposure Science, Pasadena, CA, 2008.
Recipient of the 2008 Distinguished Alumnus Award from Physical Sciences, Mathematics and Engineering, Rutgers University Graduate School
Recipient of the 2006 R. Walter Schesinger Basic Science Mentoring Award, UMDNJ - Robert Wood Johnson Medical School
Recipient of Frank A. Chambers Award for outstanding achievement in the science and art of air pollution control from the Air & Waste Management Association, 2003
Institute for Scientific Information – Highly Cited Scientist – Environment and Ecology, 2002–2015
Fellow, International Academy of Indoor Air Sciences, (Elected) 1999–2015
Fellow, Collegium Ramazzini, Environmental & Occupational Medicine and Health, Carpi, Italy (Elected) 1999–2015
Extraordinary Citizen of the Week, Union County, Star Ledger, September 1999
Resolution for selection as a fellow by the Collegium provided by Union County, Board of Freeholders
Recipient of Jerome Wesolowski Award for Lifetime Excellence in Exposure Assessment Research, International Society of Exposure Analysis, 1998
Robert Wood Johnson Medical School Nominee for the UMDNJ Excellence Award, Biomedical Researcher 1992
Fellow of New York Academy of Sciences, Elected 1979
Member of Sigma XI, 1980–2007
University Fellow, Rutgers University, 1973–1975
Russell Scholar, Rutgers University, 1973–1974
United States Environmental Protection Agency Air Pollution Fellow, Rutgers University, 1971–1973
First Year Physics Graduate Student Award for Academics, Auburn University, 1970
National Defense Education Act, Title IV Fellow, Auburn University, 1969–1971
Science Advisor, Health Environmental Science Institute (HESI) of International Life Sciences Institute (ILSI), Washington, DC, 2015
Member, State of New Jersey Department of Environmental Protection Science Advisory Board, 2010–2015
Executive Committee, University Center for Disaster Preparedness and Emergency Response, 2007 – 2015
Member, research advisory board, Office of the Vice President for Research, Auburn University, 2009–2015
Executive committee, New Jersey Office of Homeland Security and Preparedness College, 2007–2009
Co-chair, New Jersey Universities Consortium on Homeland Security Research 2006–2012
Member, The College of Science and Mathematics Advisory Council, Montclair State University, 2005–2015
Member, executive committee, Rutgers University Homeland Security Initiative, 2003–2011
Member, Citizens Advisory Committee New York City DEP Brooklyn-Queens Aquifer Feasibility Study, 2002–2006
Member, Douglass College, Rutgers University Academic Councilors, 1998–2015
Member, Council of Academic Policy Advisors to the New Jersey Legislature, 1998–2004
Chair, United States Environmental Protection Agency (EPA) Science Advisory Board, Committee on Health and Ecological Effects Valuation, Advisory Council on Clean Air Compliance Analysis, 1997–2002 (see EPA-SAB bio)
Member, Science Advisory Board, European - EXPOLIS (Air Pollution Exposure Distribution of Adult Population in Europe) 1997-2004
Member, Technical Advisory Committee on Aggregate Exposure and Risk, Hampshire Research Institute, 1999–2000
Member, Dean's Advisory Council of the College of Science and Mathematics, Auburn University, 1996–1999
Member, EPA Science Advisory Board, 1992–2002, 2005–2015
Member, International Joint Commission: Board on Air Quality, 1992–2006
Past president, International Society of Exposure Analysis, 1994–1995
President, International Society of Exposure Analysis, 1993–1994
Chair, Science Advisory Board, Pelham Bay Landfill, NY Remediation, 1990–1997
Member, Board of Environmental Studies and Toxicology, National Academy of Sciences, 1989–1992
Treasurer, International Society of Exposure Analysis, 1989–1991 (Co-Founder of Organization)
Counselor, International Society for Environmental Epidemiology, 1988-1990 (Founding), Board of Directors
Board Member, Mid-Atlantic States Section Air Pollution Control Association, 1978–1982
Advisor, New Jersey Italian and Italian American Heritage Commission, Rutgers University
Member, College of Science and Mathematics Advisory Council, Montclair State University
Major committee assignments - international, national, and regional
Member, Icahn School of Medicine at Mount Sinai, External Advisory Committee, NIEHS Center, 2013–2015
Member, Harvard School of Public Health, Superfund External Advisory Committee, 2010 – 2014
Vice Chair, National Research Council Committee on Exposure Science, 2010 – 2012
Member, Committee on Human and Environmental Exposure Science in the 21st Century, National Academies, 2010–2012
Member, EPA Federal Insecticide, Fungicide, and Rodenticide Act Panel on Exposure Assessment Protocols, 2009 – 2011
Member, U.S. Consumer Product Safety Commission (CPSC) Chronic Hazard Advisory Panel (CHAP) on the effects on children's health of phthalates and phthalate alternatives as used in children's toys and child care articles, 2010–2015
Senior Technical Advisor, Pediatric Environmental Medicine Center, University of Pittsburgh Medical Center, Pittsburgh, PA, 2009 – 2012
Member, EPA Science Advisory Board panel on asbestos, 2008–2015
Member, Advisory Board of University of Pittsburgh Academic Consortium for Excellence (UPACE) in Environmental Public Health Tracking (EPHT) (in collaboration with Drexel University), 2006 – 2009
Member, EPA Science Advisory Board, Council on Homeland Security, 2005–2014.
Member, Homeland Security Policy Committee, NJ 2005-2006
Member, Executive Leadership Group of the New Jersey Chemical-Biological-Radiological-Nuclear-Explosive Center for Training and Research at UMDNJ, 2005–2006
Member, University Committee for Environmental Affairs, Rutgers, 2005–2008
Vice-Chair, US EPA, World Trade Center Expert Technical Panel – Indoor Clean-up Issues, 2004–2005
Member, New Jersey Department of Health and Senior Services, Cancer Cluster Task Force, 2003–2005
Member, Healthcare Issues Advisory Task Force of NJ, 2002–2004
Member, Harvard University Particulate Matter Center Advisory Committee, 2000–2004
Member, New Jersey Department of Health and Senior Services, Trenton/Hamilton Processing Center Environmental Clearance Committee (anthrax), 2002–2004
Member, Advisory Committee on NJ Southdown Quarry Exposure/Risk Characterization, 2000–2001
Member, National Academy of Sciences, National Research Council Committee on Research Priorities for Airborne Particulate Matter, 1998–2005
Member, EPA Science Advisory Board Committee on the Particulate Matter Centers Research Program, Review Panel, 2001
Temporary Councilor, World Health Organization, 1997
Member, Air Pollution Guidelines Committee for Europe, 1993–1994
Member, National Academy of Sciences, National Research Council Committee on Risk Management in Department of Energy's Environmental Restoration Program, 1993–1994
Chair, Particle Total Exposure Assessment Methodology Review Panel, EPA, Science Advisory Board, 1989–1994
Member, National Academy of Sciences, National Research Council Committee on Tropospheric Ozone Formation and Measurement, 1989–1991
Member, Scientific Advisory Committee, Center for Environmental Epidemiology, University of Pittsburgh, School of Public Health, 1988–1992
Chairman, NAS, National Research Council Committee on Exposure Assessment, 1987–1990
Member, Scientific Advisory Committee on Harvard Multi-City Acid Health Study, Harvard University, 1987–1993
Member, National Academy of Sciences, Workshop Panel, Health Risks from Exposure to Common Indoor Household Products in Allergic or Diseased Persons, 1987
Member, Canadian Royal Academy of Sciences Committee on Acid Aerosol Health Research, 1987
Member or Consultant, Science Advisory Board Subcommittees, 1984–2001, U.S. EPA: 1. Risk Assessment; 2. Integrated Air Cancer; 3. Integrated Environmental Management Project; 4. Total Exposure Assessment; 5. Clean Air Science Advisory Committee
Member, USEPA, Health Effects, Grant's Peer Review Committee, 1989–1992
Chairman, Peer Review Panel, U.S. EPA Indoor Air Pollution Program, 1984
Member, National Academy of Sciences Committee on Air Pollution Epidemiology, 1983–1985
Chairman, New Jersey Clean Air Council, 1983–1985
Member, New Jersey Clean Air Council, 1981–1994
Member, Interstate Hazardous Spill Response Committee, NJ, 1977
Personal life and death
In 1971, he married the former Jean Yonone and had one son, Jason.
Lioy died on July 8, 2015, at age 68, of undetermined causes, after collapsing at Newark Liberty International Airport. His survivors included his mother, also named Jean Lioy, a sister, Mary Jean Giannini, and two grandchildren.
Books
Kneip TJ, Lioy PJ (eds)., Air Pollution Control Association. 1980. Aerosols, anthropogenic and natural, sources and transport. New York, NY: New York Academy of Sciences.
Lioy PJ, Lioy MJY (eds). 1983. Air sampling instruments for evaluation of atmospheric contaminants. 6th ed. Cincinnati, OH: American Conference of Governmental Industrial Hygienists.
Lioy PJ, Daisey JM (eds). 1987. Toxic air pollution : a comprehensive study of non-criteria air pollutants. Chelsea, MI.: Lewis Publishers.
Lioy PJ. 2010. DUST: The Inside Story of Its Role in the September 11th Aftermath (Foreword By Tom Kean). Lanham, MD: Rowman and Littlefield. (Paperback and E-Book, 2011)
Lioy PJ, Weisel C. 2014. Exposure Science: Basic Principles and Applications. Oxford, UK, Academic Press, Elsevier
Legacy in exposure science
Lioy's reputation evolved primarily from his role in developing scientific principles and refining the approaches that define the field of exposure science. This discipline is associated with the environmental and occupational health sciences, which include epidemiology, risk assessment, and prevention. In a 1990 article published in Environmental Science and Technology he was the first to properly locate exposure science as the bridge between the traditional environmental sciences and the understanding of human health outcomes. Building upon occupational hygiene and the work of Wayne Ott, Lioy showed that the most important aspect of total human exposure is whether or not an individual comes into contact with a toxin, as discussed in a 2010 review article on exposure science and in his book on the subject. In the latter he clearly linked external and internal markers of exposure; prevention is thus a key part of the application of this field of science. He re-analyzed the work of the "father of occupational medicine", Bernardino Ramazzini, who provided the initial reasons for examining contact with an agent in order to define ways to control occupational illness. This historical analysis can be used to improve the way exposure science evolves in the future. Lioy was also a Fellow of the Collegium Ramazzini.
Lioy was a central figure in understanding exposure to the air pollutant tropospheric ozone, chloroform and other toxicant exposures from shower water, hexavalent chromium wastes, and most recently the exposures derived from the dust and smoke released in the aftermath of the September 11 attacks on the World Trade Center in 2001. He was also a major figure in defining some of the basic data requirements (and providing exposure indices) for examining human exposures within the National Children's Study. He was a co-principal investigator within the portion of the National Human Exposure Assessment Survey (NHEXAS) conducted in five mid-western states, led by Edo Pellizzari of Research Triangle Institute. His later research addressed human exposure to engineered nanotechnology consumer products and the exposure of athletes to artificial turf used on athletic fields. In addition, from 1987 to 1991 he was the chairman of the first National Research Council (NRC) committee that directly addressed human exposure issues, which published Human Exposure to Air Pollutants: Advances and Opportunities, also called the "White Book". He was vice chair of the NRC Committee on Exposure Science that produced the report Exposure Science in the 21st Century: A Vision and A Strategy, and vice chair of the WTC Technical Panel formed to address the issues of residential cleanup during the WTC aftermath.
Ozone
During the early 1980s Lioy recognized that the public health metric for defining exposure of the general population to ground-level ozone (smog) was incorrect, and that the one-hour standard for peak ozone levels should be replaced by an eight-hour standard. Independently, Peter Rombout of RIVM, Netherlands, reached the same conclusion. In 1986 they collaborated and published an article on the need for an eight-hour ozone standard. Lioy's group also conducted research on the relationship between ozone exposure and visits to emergency rooms during the summertime. In 2002, the United States Environmental Protection Agency (EPA) published an eight-hour NAAQS ozone standard, based upon scientific exposure–response evidence from multiple laboratories that eight-hour exposures to ozone above 0.08 ppm harm asthmatics and others. This standard for the protection of public health was later tightened to 0.075 ppm but remains an eight-hour limit, and at the time of his death a further tightening of the eight-hour standard was in final review.
Semivolatile chemical exposures in the home
In the 1990s Lioy's laboratory became increasingly focused on dust in the home as a potential metric of exposure to metals and organic compounds, including the concurrent scientific issue of the semi-volatility of the materials associated with dust particles. This led to studies demonstrating that semi-volatile pesticides should not be considered just residues after application, but toxins that can spread throughout the home by processes of evaporation, absorption and adsorption. This process was described in an article published in 1998 that focused on the accumulation of pesticides in children's toys; ways to protect toys were summarized in popular magazines and web sites. The work was used in revisions of the EPA standards for use of the pesticide chlorpyrifos indoors. The complex issues of dust and semi-volatile toxins in homes were covered in 2002 and 2006 review articles. He additionally expanded this work to encompass releases and deposition of many chemicals in carpets and other plush surfaces.
Chromium wastes
During the late 1980s the state of New Jersey discovered that wastes from the refining and production of chrome-plated products had been used as apparent clean fill in various residential settings, and had also contaminated a number of other industrial locations. Lioy conducted a comprehensive study of chromium wastes in Jersey City, including residential exposures and the bioavailability and size distribution of the wastes. The work found that, similar to current lead problems, the chromium exposures indoors were highly related to the levels found in house dust rather than in ambient air. In addition, the use of dust-laden chromium as a marker of exposure was extremely valuable in conclusively establishing that the removal of the wastes in the residential neighborhoods brought the levels of chromium down to background by the end of 2000. The efforts continue in Jersey City and now use analytical methods perfected at EOHSI to measure the levels of hexavalent chromium (the carcinogenic form) in human blood and in the areas around remaining industrial sites that are beginning to receive final remediation. A comprehensive review paper on this work was published by Stern, Gochfeld and Lioy.
World Trade Center dust
In the wake of the September 11 attacks on the World Trade Center (WTC), Lioy was able to see the dust plumes from his home in Cranford, New Jersey. The major environmental and occupational health issue during the aftermath of the building collapses was the size range and composition of the dust and smoke released during the first hours to days after the collapse of the twin towers, and subsequently the dust that had deposited indoors and required cleanup. In collaboration with multiple laboratories, Lioy examined the composition and size distribution of the WTC dust in detail for inorganic, organic and ionic species. The results were published in a 2002 article entitled "Characterization of the dust/smoke aerosol that settled east of the WTC in lower Manhattan after the collapse of the WTC September 11, 2001", and have been used to understand the cause of the WTC cough and other health outcomes. In other work published through 2009, Lioy and colleagues described the timeline of exposure of the local population and workers from the moments after the collapse through December 2001, and pointed out the many lessons that can be learned from the WTC in order to respond effectively to other disasters. At the time of his death, he was working with Dr. Philip J. Landrigan et al. of Mount Sinai School of Medicine on the long-term health effects experienced by WTC workers. During the aftermath Lioy was interviewed many times by the media on WTC dust-related issues, from October 2001 through 2011. The work of Lioy and his colleagues is mentioned in a book by Anthony DePalma entitled City of Dust: Illness, Arrogance, and 9/11. Lioy published a book on the WTC dust and his experiences, Dust: The Inside Story of Its Role in the September 11th Aftermath, in 2010. In 2009 he received an Ellen Hardin Walworth National Patriotism Medal from the Daughters of the American Revolution for his work on the World Trade Center aftermath.
Nanoparticles
Lioy's research also expanded to cover human exposure to nanoparticles released by consumer products.
References
External links
Environmental and Occupational Health Sciences Institute (EOHSI) rutgers.edu
NIEHS Center for Environmental Exposures and Disease (CEED) at EOHSI rutgers.edu
UMDNJ - Rutgers University CounterACT Research Center of Excellence rutgers.edu
University Center For Disaster Preparedness and Emergency Response (UCDPER)
Dust: The Inside Story of its Role in the September 11th Aftermath Barnes and Noble
Rutgers University faculty
University of Medicine and Dentistry of New Jersey faculty
New York University faculty
University of Pittsburgh staff
Auburn University alumni
Environmental scientists
2015 deaths
Rutgers University alumni
Montclair State University alumni
Passaic High School alumni
People from Cranford, New Jersey
People from Passaic, New Jersey
1947 births
American scientists | Paul J. Lioy | Environmental_science | 4,659 |
44,316,500 | https://en.wikipedia.org/wiki/Inosperma%20bongardii | Inosperma bongardii is an agaric fungus in the family Inocybaceae. It was originally described as a species of Agaricus by German botanist Johann Anton Weinmann in 1836. Lucien Quélet transferred it to the genus Inocybe in 1872. A 2019 multigene phylogenetic study by Matheny and colleagues found that I. bongardii and its relatives in the subgenus Inosperma were only distantly related to the other members of the genus Inocybe. Inosperma was raised to genus rank and the species became Inosperma bongardii.
It is a common species with a widespread distribution. Fruit bodies grow on the ground, often in clay soils, and typically with broadleaf trees. The fruit bodies are suspected to be toxic, as they contain muscarine.
See also
List of Inocybe species
References
bongardii
Poisonous fungi
Fungi described in 1836
Fungi of Europe
Fungi of North America
Fungus species | Inosperma bongardii | Biology,Environmental_science | 202 |
36,684,546 | https://en.wikipedia.org/wiki/Bad%20Pharma | Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients is a book by the British physician and academic Ben Goldacre about the pharmaceutical industry, its relationship with the medical profession, and the extent to which it controls academic research into its own products. It was published in the UK in September 2012 by the Fourth Estate imprint of HarperCollins, and in the United States in February 2013 by Faber and Faber.
Goldacre argues in the book that "the whole edifice of medicine is broken", because the evidence on which it is based is systematically distorted by the pharmaceutical industry. He writes that the industry finances most of the clinical trials into its own products and much of doctors' continuing education, that clinical trials are often conducted on small groups of unrepresentative subjects and negative data is routinely withheld, and that apparently independent academic papers may be planned and even ghostwritten by pharmaceutical companies or their contractors, without disclosure. Describing the situation as a "murderous disaster", he makes suggestions for action by patients' groups, physicians, academics and the industry itself.
Responding to the book's publication, the Association of the British Pharmaceutical Industry issued a statement in 2012 arguing that the examples the book offers were historical, that the concerns had been addressed, that the industry is among the most regulated in the world, and that it discloses all data in accordance with international standards.
In January 2013 Goldacre joined the Cochrane Collaboration, British Medical Journal and others in setting up AllTrials, a campaign calling for the results of all past and current clinical trials to be reported. The British House of Commons Public Accounts Committee expressed concern in January 2014 that drug companies were still only publishing around 50 percent of clinical-trial results.
Synopsis
Introduction
Goldacre writes in the introduction of Bad Pharma that he aims to defend the following:
Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques which are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer. When trials throw up results that companies don't like, they are perfectly entitled to hide them from doctors and patients, so we only ever see a distorted picture of any drug's true effects. Regulators see most of the trial data, but only from early on in a drug's life, and even then they don't give this data to doctors or patients, or even to other parts of government. This distorted evidence is then communicated and applied in a distorted fashion.
In their forty years of practice after leaving medical school, doctors hear about what works through ad hoc oral traditions, from sales reps, colleagues or journals. But those colleagues can be in the pay of drug companies – often undisclosed – and the journals are too. And so are the patient groups. And finally, academic papers, which everyone thinks of as objective, are often covertly planned and written by people who work directly for the companies, without disclosure. Sometimes whole academic journals are even owned outright by one drug company. Aside from all this, for several of the most important and enduring problems in medicine, we have no idea what the best treatment is, because it's not in anyone's financial interest to conduct any trials at all.
Chapter 1: "Missing Data"
In "Missing Data," Goldacre argues that the clinical trials undertaken by drug companies routinely reach conclusions favourable to the company. For example, in a 2007 journal article published in PLOS Medicine, researchers studied every published trial on statins, drugs prescribed to reduce cholesterol levels. In the 192 trials they looked at, industry-funded trials were 20 times more likely to produce results that favoured the drug.
He writes that these positive results are achieved in a number of ways. Sometimes the industry-sponsored studies are flawed by design (for example by comparing the new drug to an existing drug at an inadequate dose), and sometimes patients are selected to make a positive result more likely. In addition, the data are analysed as the trial progresses. If the trial seems to be producing negative data it is stopped prematurely and the results are not published, or if it is producing positive data it may be stopped early so that longer-term effects are not examined. He writes that this publication bias, where negative results remain unpublished, is endemic within medicine and academia. As a consequence, he argues, doctors may have no idea what the effects are of the drugs they prescribe.
An example he gives of the difficulty of obtaining missing data from drug companies is that of oseltamivir (Tamiflu), manufactured by Roche to reduce the complications of bird flu. Governments spent billions of pounds stockpiling this, based in large part on a meta-analysis that was funded by the industry. Bad Pharma charts the efforts of independent researchers, particularly Tom Jefferson of the Cochrane Collaboration Respiratory Group, to gain access to information about the drug.
Chapter 2: "Where Do New Drugs Come From?"
In the second chapter, the book describes the process as new drugs move from animal testing through phase 1 (first-in-man study), phase 2, and phase 3 clinical trials. Phase 1 participants are referred to as volunteers, but in the US are paid $200–$400 per day, and because studies can last several weeks and subjects may volunteer several times a year, earning potential becomes the main reason for participation. Participants are usually taken from the poorest groups in society, and outsourcing increasingly means that trials may be conducted in countries with low wages by contract research organizations (CROs). The rate of growth for clinical trials in India is 20 percent a year, in Argentina 27 percent, and in China 47 percent, while trials in the UK have fallen by 10 percent a year and in the US by six percent.
The shift to outsourcing raises issues about data integrity, regulatory oversight, language difficulties, the meaning of informed consent among a much poorer population, the standards of clinical care, the extent to which corruption may be regarded as routine in certain countries, and the ethical problem of raising a population's expectations for drugs that most of that population cannot afford. It also raises the question of whether the results of clinical trials using one population can invariably be applied elsewhere. There are both social and physical differences: Goldacre asks whether patients diagnosed with depression in China are really the same as patients diagnosed with depression in California, and notes that people of Asian descent metabolize drugs differently from Westerners.
There have also been cases of available treatment being withheld during clinical trials. In 1996 in Kano, Nigeria, the drug company Pfizer compared a new antibiotic during a meningitis outbreak to a competing antibiotic that was known to be effective at a higher dose than was used during the trial. Goldacre writes that 11 children died, divided almost equally between the two groups. The families taking part in the trial were apparently not told that the competing antibiotic at the effective dose was available from Médecins Sans Frontières in the next-door building.
Chapter 3: "Bad Regulators"
Chapter three describes the concept of "regulatory capture," whereby a regulator – such as the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK, or the Food and Drug Administration (FDA) in the United States – ends up advancing the interests of the drug companies rather than the interests of the public. Goldacre writes that this happens for a number of reasons, including the revolving door of employees between the regulator and the companies, and the fact that friendships develop between regulator and company employees simply because they have knowledge and interests in common. The chapter also discusses surrogate outcomes and accelerated approval, and the difficulty of having ineffective drugs removed from the market once they have been approved. He argues that regulators do not require that new drugs offer an improvement over what is already available, or even that they be particularly effective.
Chapter 4: "Bad Trials"
"Bad Trials" examines the ways in which clinical trials can be flawed. Goldacre writes that this happens by design and by analysis, and that it has the effect of maximizing a drug's benefits and minimizing harm.
There have been instances of fraud, though he says these are rare. More common are what he calls the "wily tricks, close calls, and elegant mischief at the margins of acceptability."
These include testing drugs on unrepresentative, "freakishly ideal" patients; comparing new drugs to something known to be ineffective, or effective at a different dose or if used differently; conducting trials that are too short or too small; and stopping trials early or late. It also includes measuring uninformative outcomes; packaging the data so that it is misleading; ignoring patients who drop out (i.e. using per-protocol analysis, where only patients who complete the trial are counted in the final results, rather than intention-to-treat analysis, where everyone who starts the trial is counted); changing the main outcome of the trial once it has finished; producing subgroup analyses that show apparently positive outcomes for certain tightly defined groups (such as Chinese men between the ages of 56 and 71), thereby hiding an overall negative outcome; and conducting "seeding trials," where the objective is to persuade physicians to use the drug.
Another criticism is that outcomes are presented in terms of relative risk reduction to exaggerate the apparent benefits of the treatment. For example, he writes, if four people out of 1,000 will have a heart attack within the year, but on statins only two will, that is a 50 percent reduction if expressed as relative risk reduction. But if expressed as absolute risk reduction, it is a reduction of just 0.2 percent.
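The arithmetic behind the two framings can be made concrete with a short calculation (a sketch using the figures quoted above):

```python
# Relative vs absolute risk reduction, using the statin figures quoted
# above: 4 in 1,000 untreated people have a heart attack within a year,
# versus 2 in 1,000 on the drug.
def risk_reductions(control_events, treated_events, n):
    control_risk = control_events / n
    treated_risk = treated_events / n
    arr = control_risk - treated_risk      # absolute risk reduction
    rrr = arr / control_risk               # relative risk reduction
    return arr, rrr

arr, rrr = risk_reductions(control_events=4, treated_events=2, n=1000)
print(f"Absolute risk reduction: {arr:.1%}")   # 0.2%
print(f"Relative risk reduction: {rrr:.0%}")   # 50%
```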
Chapter 5: "Bigger, Simpler Trials"
In chapter five Goldacre suggests using the General Practice Research Database in the UK, which contains the anonymized records of several million patients, to conduct randomized trials to determine the most effective of competing treatments. For example, to compare two statins, atorvastatin and simvastatin, doctors would randomly assign patients to one or the other. The patients would be followed up by having data about their cholesterol levels, heart attacks, strokes and deaths taken from their computerized medical records. The trials would not be blind – patients would know which statin they had been prescribed – but Goldacre writes that they would be unlikely to hold such firm beliefs about which one is preferable to the extent that it could affect their health.
Chapter 6: "Marketing"
In the final chapter, Goldacre looks at how doctors are persuaded to prescribe "me-too drugs," brand-name drugs that are no more effective than significantly cheaper off-patent ones. He cites as examples the statins atorvastatin (Lipitor, made by Pfizer) and simvastatin (Zocor), which he writes seem to be equally effective, or at least there is no evidence to suggest otherwise. Simvastatin came off patent several years ago, yet there are still three million prescriptions a year in the UK for atorvastatin, costing the National Health Service (NHS) an annual £165 million extra.
He addresses the issue of medicalization of certain conditions (or, as he argues, of personhood), whereby pharmaceutical companies "widen the boundaries of diagnosis" before offering solutions. Female sexual dysfunction was highlighted in 1999 by a study published in the Journal of the American Medical Association, which alleged that 43 percent of women were suffering from it. After the article appeared, the New York Times wrote that two of its three authors had worked as consultants for Pfizer, which at the time was preparing to launch UK-414,495, known as female Viagra. The journal's editor said that the failure to disclose the relationship with Pfizer was the journal's mistake.
The chapter also examines celebrity endorsement of certain drugs, the extent to which claims in advertisements aimed at doctors are appropriately sourced, and whether direct-to-consumer advertising (currently permitted in the US and New Zealand) ought to be allowed. It discusses how PR firms promote stories from patients who complain in the media that certain drugs are not made available by the funder, which in the UK is the NHS and the National Institute for Health and Clinical Excellence (NICE). Two breast-cancer patients who campaigned in the UK in 2006 for trastuzumab (Herceptin) to be available on the NHS were being handled by a law firm working for Roche, the drug's manufacturer. The historian Lisa Jardine, who was suffering from breast cancer, told the Guardian that she had been approached by a PR firm working for the company.
The chapter also covers the influence of drug reps, how ghostwriters are employed by the drug companies to write papers for academics to publish, how independent the academic journals really are, how the drug companies finance doctors' continuing education, and how patients' groups are often funded by industry.
Afterword: "Better Data"
In the afterword and throughout the book, Goldacre makes suggestions for action by doctors, medical students, patients, patient groups and the industry. He advises doctors, nurses and managers to stop seeing drug reps, to ban them from clinics, hospitals and medical schools, to declare online and in waiting rooms all gifts and hospitality received from the industry, and to remove all drug company promotional material from offices and waiting rooms. (He praises the website of the American Medical Student Association – www.amsascorecard.org – which ranks institutions according to their conflict-of-interest policies, writing that it makes him "feel weepy.") He also suggests that regulations be introduced to prevent pharmacists from sharing doctors' prescribing records with drug reps.
He asks academics to lobby their universities and academic societies to forbid academics from being involved in ghostwriting, and to lobby for "film credit" contributions at the end of every academic paper, listing everyone involved, including who initiated the idea of publishing the paper. He also asks for full disclosure of all past clinical trial results, and a list of academic papers that were, as he puts it, "rigged" by industry, so that they can be retracted or annotated. He asks drug company employees to become whistleblowers, either by writing an anonymous blog, or by contacting him.
He advises patients to ask their doctors whether they accept drug-company hospitality or sponsorship, and if so to post details in their waiting rooms, and to make clear whether it is acceptable to the patient for the doctor to discuss his or her medical history with drug reps. Patients who are invited to take part in a trial are advised to ask, among other things, for a written guarantee that the trial has been publicly registered, and that the main outcome of the trial will be published within a year of its completion. He advises patient groups to write to drug companies with the following: "We are living with this disease; is there anything at all that you're withholding? If so, tell us today."
Reception
The book was generally well received. The Economist described it as "slightly technical, eminently readable, consistently shocking, occasionally hectoring and unapologetically polemical". Helen Lewis in the New Statesman called it an important book, while Luisa Dillner, writing in the Guardian, described it as a "thorough piece of investigative medical journalism".
Andrew Jack wrote in the Financial Times that Goldacre is "at his best in methodically dissecting poor clinical trials. ... He is less strong in explaining the complex background reality, such as the general constraints and individual slips of regulators and pharma companies' employees." Jack also argued that the book failed to reflect how many lives have been improved by the current system, for example with new treatments for HIV, rheumatoid arthritis and cancer.
Max Pemberton, a psychiatrist, wrote in the Daily Telegraph that "this is a book to make you enraged ... because it's about how big business puts profits over patient welfare, allows people to die because they don't want to disclose damning research evidence, and the tricks they play to make sure doctors do not have all the evidence when it comes to appraising whether a drug really works or not."
The Association of the British Pharmaceutical Industry (ABPI) replied in the New Statesman that Goldacre was "stuck in a bygone era where pharmaceutical companies wine and dine doctors in exchange for signing on the dotted line". The ABPI issued a press release, writing that the pharmaceutical industry is responsible for the discovery of 90 percent of all medicines, and that it takes an average of 10–12 years and £1.1bn to introduce a medicine to the market, with just one in 5,000 new compounds receiving regulatory approval. This makes research and development an expensive and risky business. They wrote that the industry is one of the most heavily regulated in the world, and is committed to ensuring full transparency in the research and development of new medicines. They also maintained that the examples Goldacre offered were "long documented and historical, and the companies concerned have long addressed these issues". Goldacre argues in the book that "the most dangerous tactic of all is the industry's enduring claim that these problems are all in the past".
Humphrey Rang of the British Pharmacological Society wrote that Goldacre had chosen his target well and had produced some shocking examples of secrecy and dishonesty, particularly the nondisclosure of data on the antidepressant reboxetine (chapter one), in which only one trial out of seven was published (the published study showed positive results, while the unpublished trials suggested otherwise). He argued that Goldacre had gone "over the top" in devoting a whole chapter (chapter five) to recommending large clinical trials using electronic patient data from general practitioners, without fully pointing out how problematic these can be; such trials raise issues, for example, about informed consent and regulatory oversight. Rang also criticized Goldacre's style, describing the book as too long, repetitive, hyperbolic, and in places too conversational. He particularly objected to the line, "medicine is broken", calling it a "foolish remark".
AllTrials
Following the book's publication, Goldacre co-founded AllTrials with David Tovey, editor-in-chief of the Cochrane Library, together with the British Medical Journal, the Centre for Evidence-Based Medicine, and others in the UK, and Dartmouth College's Geisel School of Medicine and the Dartmouth Institute for Health Policy and Clinical Practice in the US. Set up in January 2013, the group campaigns for all past and current clinical trials to be registered and reported, for all treatments in use.
The British House of Commons Public Accounts Committee produced a report in January 2014, after hearing evidence from Goldacre, Fiona Godlee, editor-in-chief of the British Medical Journal, and others, about the stockpiling of Tamiflu and the withholding of data about the drug by its manufacturer, Roche. The committee said it was "surprised and concerned" to learn that information from clinical trials is routinely withheld from doctors, and recommended that the Department of Health take steps to ensure that all clinical-trial data be made available for currently prescribed treatments.
Publication details
Bad Pharma: How drug companies mislead doctors and harm patients, Fourth Estate, 2012 (UK).
Faber and Faber, 2013 (US).
Signal, 2013 (Canada).
As of December 2012 foreign rights had been sold for Brazil, the Czech Republic, Netherlands, Germany, Israel, Italy, Korea, Norway, Poland, Portugal, Spain and Turkey.
See also
Books
Anatomy of an Epidemic (2010) by Robert Whitaker
Big Pharma (2006) by Jacky Law
Deadly Medicines and Organised Crime (2013) by Peter C. Gøtzsche
Pharmageddon (2012) by David Healy (psychiatrist)
Side Effects (2008) by Alison Bass
Lists
Lists about the pharmaceutical industry
List of books about the politics of science
List of pharmaceutical companies
List of largest pharmaceutical settlements
Miscellaneous
Ethics in pharmaceutical sales
Pharmaceutical fraud
Pharmaceutical industry in the UK
GlaxoSmithKline#2012 criminal and civil settlement
Rosiglitazone
Study 329
TGN1412
Notes
References
External links
Bad Pharma, publisher's website.
badscience.net, Ben Goldacre's website.
"Bad Science", Goldacre's column for The Guardian.
"Why doctors don't know what they're prescribing", extract from Bad Pharma.
Articles and radio
BBC Radio 4. "Pharmaceutical regulators have been 'unethical'", Today programme, 25 September 2012 (radio interview with Ben Goldacre and Stephen Whitehead of the Association of the British Pharmaceutical Industry).
Brice, Makini. "Pharmaceutical Companies Cherry Pick Data for Drug Approval, Sweep Bad Results Under the Rug", Medical Daily, 28 September 2012.
Burke, Maria. "GSK pledge on trials transparency", Chemistry World, 17 October 2012.
Carlat, Daniel. "Dr. Drug Rep", New York Times magazine, 25 November 2007.
Goldacre, Ben. "Is the conflict of interest unacceptable when drug companies conduct trials on their own drugs? Yes", British Medical Journal, 29 November 2009.
Goldacre, Ben. "Drug companies must publish all trial results", The Times, 23 October 2012; "Calls to end ‘national scandal’ of stifled clinical trial results", The Times health editor, 23 October 2012.
Haynes, Laura; Service, Owain; Goldacre, Ben; Torgerson, David. "Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials", Cabinet Office Behavioural Insights Team (UK), June 2012.
Hennessy, Mark. "Putting the drug companies' research to the test", The Irish Times, 29 September 2012.
McClenaghan, Maeve. "Why Big Pharma is bad for your health", Bureau of Investigative Journalism, 28 September 2012.
Rehman, Jalees. "Can the Source of Funding for Medical Research Affect the Results?", Scientific American, 23 September 2012.
Rutherford, Adam. "Podcast Extra: Ben Goldacre", Nature, 28 September 2012.
Szalavitz, Maia. "A Doctor’s Dilemma: When Crucial New-Drug Data Is Hidden", Time magazine, 24 September 2012.
Tucker, Ian. "Ben Goldacre: 'It's appalling ... like phone hacking or MPs' expenses'", The Observer, 7 October 2012.
2012 non-fiction books
Books about the politics of science
Books by Ben Goldacre
British books
Fourth Estate books
Medical books
Pharmaceutical industry
Science books
Faber & Faber books
Books about drugs
Works about corruption | Bad Pharma | Chemistry,Biology | 4,725 |
2,641,435 | https://en.wikipedia.org/wiki/Quantum%20noise | Quantum noise is noise arising from the indeterminate state of matter in accordance with fundamental principles of quantum mechanics, specifically the uncertainty principle and via zero-point energy fluctuations. Quantum noise is due to the apparently discrete nature of the small quantum constituents such as electrons, as well as the discrete nature of quantum effects, such as photocurrents.
The quantized treatment of noise parallels classical noise theory, but the spectral density it yields is not always symmetric in frequency.
Shot noise, as the term is used by J. Verdeyen, is a form of quantum noise related to the statistics of photon counting, the discrete nature of electrons, and intrinsic noise generation in electronics. In contrast to shot noise, the quantum mechanical uncertainty principle sets a lower limit to a measurement: the uncertainty principle requires any amplifier or detector to have noise.
Macroscopic manifestations of quantum phenomena are easily disturbed, so quantum noise is mainly observed in systems where conventional sources of noise are suppressed. In general, noise is uncontrolled random variation from an expected value and is typically unwanted. General causes are thermal fluctuations, mechanical vibrations, industrial noise, fluctuations of voltage from a power supply, thermal noise due to Brownian motion, instrumentation noise, a laser's output mode deviating from the desired mode of operation, etc. If present, and unless carefully controlled, these other noise sources typically dominate and mask quantum noise.
In astronomy, a device which pushes against the limits of quantum noise is the LIGO gravitational wave observatory.
A Heisenberg microscope
Quantum noise can be illustrated by considering a Heisenberg microscope, where an atom's position is measured from the scattering of photons. The uncertainty principle is given as

$\Delta x\, \Delta p \ \ge\ \frac{\hbar}{2},$

where $\Delta x$ is the uncertainty in the atom's position and $\Delta p$ is the uncertainty in its momentum, sometimes called the backaction (momentum transferred to the atom) when near the quantum limit. The precision of the position measurement can be increased at the expense of knowing the atom's momentum. When the position is known precisely enough, backaction begins to affect the measurement in two ways: first, it imparts momentum back onto the measuring device in extreme cases; second, we have decreasing knowledge of the atom's future position. Precise and sensitive instrumentation will approach the uncertainty principle in sufficiently controlled environments.
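As a worked example with illustrative numbers (not from the source): localizing a rubidium atom ($m \approx 1.4\times10^{-25}\,\mathrm{kg}$) to $\Delta x = 1\,\mathrm{nm}$ requires a momentum backaction of at least

$\Delta p \ \ge\ \frac{\hbar}{2\Delta x} = \frac{1.05\times10^{-34}\,\mathrm{J\,s}}{2\times10^{-9}\,\mathrm{m}} \approx 5\times10^{-26}\,\mathrm{kg\,m/s},$

corresponding to a velocity kick $\Delta v = \Delta p / m \approx 0.4\,\mathrm{mm/s}$; tiny, but a fundamental floor that no instrument can evade.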
Basics of noise theory
Noise is of practical concern for precision engineering and engineered systems approaching the standard quantum limit. Typical engineering considerations of quantum noise arise for quantum nondemolition measurement and quantum point contacts, so quantifying noise is useful.
A signal's noise is quantified as the Fourier transform of its autocorrelation.
The autocorrelation of a signal is given as

$G_{VV}(\tau) = \left\langle V(t)\, V(t+\tau) \right\rangle,$

which measures whether the signal is positively, negatively or not correlated at times $t$ and $t+\tau$.
The time average, $\langle V(t)\rangle$, is zero, and $V(t)$ is a voltage signal. Because we measure the voltage over a finite time window, its Fourier transform is

$V_T(\omega) = \frac{1}{\sqrt{T}} \int_0^T dt\, e^{i\omega t}\, V(t).$

The Wiener–Khinchin theorem states that a noise's power spectrum is given as the Fourier transform of the signal's autocorrelation, i.e.,

$S_{VV}(\omega) = \int_{-\infty}^{\infty} d\tau\, e^{i\omega\tau}\, G_{VV}(\tau) = \lim_{T\to\infty} \left\langle \left| V_T(\omega) \right|^2 \right\rangle.$

The above relation is sometimes called the power spectrum or spectral density.
In the above outline, we assumed that:
Our noise is stationary, i.e., the statistics do not change over time; only the time difference matters.
Noise is due to a very large number of fluctuating charges, so that the central limit theorem applies, i.e., the noise is Gaussian or normally distributed.
$G_{VV}(\tau)$ decays to zero rapidly over some correlation time $\tau_c$.
We sample over a sufficiently long time, $T \gg \tau_c$, so that the integral scales as a random walk, $\sqrt{T}$, and $S_{VV}(\omega)$ is independent of the measurement time.
One can show that an ideal "top-hat" signal, which may correspond to a finite measurement of a voltage over some time, will produce noise across its entire spectrum as a sinc function. Even in the classical case, noise is produced.
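A short numerical sketch of this (all parameter values are illustrative): the squared magnitude of the Fourier transform of a finite top-hat window follows a squared sinc envelope, so even a noiseless constant signal measured over a finite window produces power across the whole spectrum.

```python
import numpy as np

# A finite "top-hat" measurement window: unit voltage for 0 <= t < T,
# zero afterwards. Its power spectrum follows a squared sinc envelope.
T, fs = 1.0, 1000.0                     # window length (s), sample rate (Hz)
t = np.arange(0.0, 10 * T, 1.0 / fs)    # record longer than the window
v = np.where(t < T, 1.0, 0.0)           # top-hat signal

V = np.fft.rfft(v) / fs                 # approximates the continuous FT
f = np.fft.rfftfreq(len(v), 1.0 / fs)   # frequencies in Hz
power = np.abs(V) ** 2

expected = (T * np.sinc(f * T)) ** 2    # np.sinc(x) = sin(pi x)/(pi x)
print(np.allclose(power[:50], expected[:50], atol=1e-2))   # True
```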
Classical to quantum noise
To study quantum noise, one replaces the corresponding classical measurements with quantum operators, e.g.,

$G_{xx}(\tau) = \left\langle \hat{x}(\tau)\,\hat{x}(0) \right\rangle = \operatorname{Tr}\!\left[ \hat{\rho}\,\hat{x}(\tau)\,\hat{x}(0) \right],$

where $\langle \cdots \rangle$ denotes the quantum statistical average using the density matrix $\hat{\rho}$ in the Heisenberg picture.
Quantum noise and the uncertainty principle
The Heisenberg uncertainty principle implies the existence of noise. An operator $\hat{A}$ with a Hermitian conjugate $\hat{A}^\dagger$ obeys $\langle \hat{A}^\dagger \hat{A} \rangle \ge 0$. Define $\hat{A} = \hat{x} + i\lambda\hat{y}$, where $\lambda$ is real and $\hat{x}$ and $\hat{y}$ are quantum operators. We can show the following:

$\Delta x^2\, \Delta y^2 \ \ge\ \frac{1}{4}\left|\left\langle \left[\hat{x},\hat{y}\right] \right\rangle\right|^2 + \frac{1}{4}\left|\left\langle \left\{\hat{x},\hat{y}\right\} \right\rangle - 2\langle\hat{x}\rangle\langle\hat{y}\rangle\right|^2,$

where the averages are taken over the wavefunction and other statistical properties. The terms on the left are the uncertainties in $\hat{x}$ and $\hat{y}$; the second term on the right is the covariance, which arises from coupling to an external source or from quantum effects; the first term on the right corresponds to the commutator and would cancel out if $\hat{x}$ and $\hat{y}$ commuted. That commutator term is the origin of our quantum noise.
It is demonstrative to let $\hat{x}$ and $\hat{y}$ correspond to position and momentum, which satisfy the well-known commutator relation $[\hat{x},\hat{p}] = i\hbar$. Then our new expression is

$\Delta x^2\, \Delta p^2 \ \ge\ \frac{\hbar^2}{4} + \left( \frac{\left\langle \{\hat{x},\hat{p}\} \right\rangle}{2} - \langle\hat{x}\rangle\langle\hat{p}\rangle \right)^2,$

where the second term on the right is the correlation (covariance) of position and momentum. If that term vanishes, we recover the Heisenberg uncertainty principle.
Harmonic motion and weakly coupled heat bath
Consider the motion of a simple harmonic oscillator with mass $M$ and frequency $\Omega$, coupled to some heat bath which keeps the system in equilibrium. The equations of motion are given as

$\hat{x}(t) = \hat{x}(0)\cos(\Omega t) + \frac{\hat{p}(0)}{M\Omega}\sin(\Omega t).$
The quantum autocorrelation is then

$G_{xx}(t) = \left\langle \hat{x}(t)\,\hat{x}(0) \right\rangle = \left\langle \hat{x}(0)^2 \right\rangle \cos(\Omega t) + \frac{1}{M\Omega}\left\langle \hat{p}(0)\,\hat{x}(0) \right\rangle \sin(\Omega t).$

Classically, there is no correlation between position and momentum, but the uncertainty principle requires the second term to be nonzero; it goes to $\langle \hat{p}(0)\hat{x}(0) \rangle = -i\hbar/2$.
We can use the equipartition theorem, i.e., the fact that in thermal equilibrium the energy is equally shared among a molecule's or atom's degrees of freedom:

$\frac{1}{2} M \Omega^2 \left\langle x^2 \right\rangle = \frac{1}{2} k_B T.$

In the classical autocorrelation, we have

$G_{xx}^{cl}(t) = \frac{k_B T}{M\Omega^2}\cos(\Omega t),$

while in the quantum autocorrelation we have

$G_{xx}(t) = x_{ZPF}^2 \left[ n_B(\hbar\Omega)\, e^{i\Omega t} + \left( n_B(\hbar\Omega) + 1 \right) e^{-i\Omega t} \right], \qquad x_{ZPF}^2 = \frac{\hbar}{2M\Omega},$

where $x_{ZPF}^2$ quantifies the zero-point fluctuations and $n_B(\hbar\Omega) = 1/\left(e^{\hbar\Omega/k_B T} - 1\right)$ is the Bose–Einstein population distribution. Notice that the quantum autocorrelation is asymmetric, due to its imaginary part. Increasing the temperature corresponds to taking the limit $k_B T \gg \hbar\Omega$, and one can show that in this limit the quantum $G_{xx}(t)$ approaches the classical $G_{xx}^{cl}(t)$.
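A quick numerical check of the high-temperature limit (a sketch with illustrative oscillator parameters, not from the source): the real part of the quantum autocorrelation approaches the classical result once $k_B T \gg \hbar\Omega$.

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23    # SI constants
M, Omega = 1e-12, 2 * np.pi * 1e3           # illustrative: ~ng oscillator at 1 kHz

def G_quantum(t, T):
    """Quantum autocorrelation G_xx(t) of an oscillator in equilibrium."""
    x_zpf2 = hbar / (2 * M * Omega)                 # zero-point fluctuation
    n_B = 1.0 / np.expm1(hbar * Omega / (kB * T))   # Bose-Einstein occupancy
    return x_zpf2 * (n_B * np.exp(1j * Omega * t)
                     + (n_B + 1.0) * np.exp(-1j * Omega * t))

def G_classical(t, T):
    """Classical equipartition result k_B T/(M Omega^2) cos(Omega t)."""
    return kB * T / (M * Omega**2) * np.cos(Omega * t)

t = np.array([0.0, 0.1e-3, 0.2e-3, 0.3e-3])   # avoid zeros of cos(Omega t)
ratio = G_quantum(t, 300.0).real / G_classical(t, 300.0)
print(np.round(ratio, 6))   # ~[1. 1. 1. 1.]: classical limit, k_B T >> hbar*Omega
```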
Physical interpretation of spectral density
Typically, the positive-frequency part of the spectral density corresponds to the flow of energy into the oscillator (for example, from the quantized photon field), while the negative-frequency part corresponds to the emission of energy from the oscillator. Physically, an asymmetric spectral density therefore corresponds to a net flow of energy either into or out of our oscillator model.
Linear gain and quantum uncertainty
Most optical communications use amplitude modulation where the quantum noise is predominantly the shot noise. A laser's quantum noise, when not considering shot noise, is the uncertainty of its electric field's amplitude and phase. That uncertainty becomes observable when a quantum amplifier preserves phase. The phase noise becomes important when the energy of the frequency modulation or phase modulation is comparable to the energy of the signal (frequency modulation is more robust than amplitude modulation due to the additive noise intrinsic to amplitude modulation).
Linear amplification
An ideal noiseless gain cannot exist. Consider the amplification of a stream of photons, an ideal linear noiseless gain, and the energy–time uncertainty relation $\Delta E\,\Delta t \ge \hbar/2$.

Ignoring the uncertainty in frequency, the photons have an uncertainty in their overall phase and number; assuming a known frequency, $\Delta E = \hbar\omega\,\Delta n$ and $\Delta t = \Delta\phi/\omega$. We can substitute these relations into the energy–time uncertainty relation to find the number–phase uncertainty relation, the uncertainty in the phase and photon number:

$\Delta n\, \Delta\phi \ \ge\ \frac{1}{2}.$
Let an ideal linear noiseless gain $G$ act on the photon stream. We also assume unity quantum efficiency, i.e., every photon is converted to photocurrent. The output follows, with no noise added:

$n_{out} = G\, n_{in}, \qquad \Delta n_{out} = G\, \Delta n_{in}.$

The phase will be modified too:

$\phi_{out} = \phi_{in} + \theta, \qquad \Delta\phi_{out} = \Delta\phi_{in},$

where $\theta$ is the overall phase accumulated as the photons travel through the gain medium.

Substituting our output gain and phase uncertainties gives

$\Delta n_{out}\, \Delta\phi_{out} = G\, \Delta n_{in}\, \Delta\phi_{in}.$

A minimum-uncertainty output, $\Delta n_{out}\,\Delta\phi_{out} = 1/2$, would then require $\Delta n_{in}\,\Delta\phi_{in} = 1/(2G) < 1/2$ for gain $G > 1$, which contradicts our uncertainty principle. So a linear noiseless amplifier cannot increase its signal without noise.
A deeper analysis done by H. Heffner showed that the minimum noise power output required to meet the Heisenberg uncertainty principle is given as

$P_N \ \ge\ \frac{h\nu\,\Delta\nu}{2},$

where $\Delta\nu$ is half of the full width at half maximum, $\nu$ is the frequency of the photons, and $h$ is the Planck constant. The term $h\nu/2$ is sometimes called quantum noise.
Shot noise and instrumentation
In precision optics with highly stabilized lasers and efficient detectors, quantum noise refers to the fluctuations of signal.
The random error of interferometric measurements of position, due to the discrete character of photon detection, is another quantum noise. The uncertainty in the position of a probe in probe microscopy may also be attributable to quantum noise, though it is not the dominant mechanism governing resolution.
In an electric circuit, the random fluctuations of a signal due to the discrete character of electrons can be called quantum noise.
An experiment by S. Saraf et al. demonstrated shot-noise-limited measurements as a demonstration of quantum noise measurement. Generally speaking, they amplified a Nd:YAG free-space laser with minimal added noise as it transitioned from linear to nonlinear amplification. The experiment required a Fabry–Pérot cavity for filtering laser mode noise and selecting frequencies, two separate but identical probe and saturating beams to ensure uncorrelated beams, a zigzag slab gain medium, and a balanced detector for measuring quantum noise or shot-noise-limited noise.
Shot Noise Power
The theory behind noise analysis of photon statistics (sometimes called the forward Kolmogorov equation) starts from the master equation of Shimoda et al.:

$\frac{dP_n}{dt} = a\left[ n\,P_{n-1} - (n+1)\,P_n \right] + b\left[ (n+1)\,P_{n+1} - n\,P_n \right],$
where $a$ corresponds to the product of the emission cross section and upper population number, $N_2\sigma_e$, and $b$ to the product of the absorption cross section and lower population number, $N_1\sigma_a$. The above relation describes the probability of finding $n$ photons in the radiation mode. The dynamics couple only the neighboring photon numbers $n-1$ and $n+1$ as the photons travel through a medium of excited- and ground-state atoms from position $z$ to $z+dz$. This gives a total of four photon transitions associated with one photon energy level: two in which a photon is added to the field as it leaves an atom, $n-1 \to n$ and $n \to n+1$, and two in which a photon leaves the field for an atom, $n+1 \to n$ and $n \to n-1$. The resulting noise power is given as

$P_N = P_{shot}\left[\, 1 + 2\,\eta\, n_{sp}\,(G-1) \,\right],$
the power at the detector;
the shot-noise-limited power;
the unsaturated gain (the same expression holds for the saturated gain);
the efficiency factor, i.e. the product of the transmission efficiency of the window in front of the photodetector and the quantum efficiency;
the spontaneous emission factor, which typically corresponds to the relative strength of spontaneous to stimulated emission; a value of unity means that all doped ions are in the excited state.
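The exact expression used by Saraf et al. is not reproduced above. A minimal numerical sketch can still convey the behaviour; the snippet below is an assumption, using the standard optical-amplifier noise-figure formula NF = 2·n_sp·(G − 1)/G + 1/G rather than the paper's expression, to show how far above the shot-noise limit an amplified measurement sits as a function of gain:

```python
import numpy as np

def noise_figure(gain, n_sp):
    """Standard optical-amplifier noise figure (linear units).

    n_sp = 1 corresponds to full inversion (all ions excited);
    NF tends to 2*n_sp (3 dB for n_sp = 1) at high gain.
    """
    return 2.0 * n_sp * (gain - 1.0) / gain + 1.0 / gain

for gain in (1.0, 2.0, 10.0, 100.0):
    nf = noise_figure(gain, n_sp=1.0)
    print(f"G = {gain:6.1f}  NF = {nf:.3f}  ({10*np.log10(nf):.2f} dB)")
```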
Saraf et al. demonstrated quantum noise, or shot-noise-limited, measurements over a wide range of power gain, in agreement with theory.
Zero-point fluctuations
The existence of zero-point energy fluctuations is well-established in the theory of the quantised electromagnetic field.
Generally speaking, at the lowest energy excitation of a quantized field that permeates all space (i.e., with the field mode in the vacuum state), the root-mean-square fluctuation of the field strength is non-zero. These vacuum fluctuations permeate all space.
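This non-zero root-mean-square fluctuation can be checked numerically in a truncated Fock basis. The following sketch is a minimal illustration, assuming the quadrature convention x = (a + a†)/2, under which the vacuum quadrature variance is 1/4; it builds the ladder operators as matrices and evaluates the vacuum expectation values:

```python
import numpy as np

N = 20                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
x = (a + a.conj().T) / 2.0                    # field quadrature

vac = np.zeros(N); vac[0] = 1.0               # vacuum state |0>

mean_x  = vac @ x @ vac                       # <0|x|0>   = 0
mean_x2 = vac @ x @ x @ vac                   # <0|x^2|0> = 1/4

print(mean_x, mean_x2)                        # -> 0.0 0.25
```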
Such vacuum fluctuations, or quantum noise, also affect classical systems. They can manifest as quantum decoherence in an entangled system, which is normally attributed to thermal differences in the conditions surrounding each entangled particle. Because entanglement is studied intensely in, for example, simple pairs of entangled photons, the decoherence observed in experiments could well be synonymous with "quantum noise" as regards the source of the decoherence. If vacuum fluctuation is a possible cause for a quantum of energy to appear spontaneously in a given field or region of spacetime, then thermal differences must be associated with this event; hence it would cause decoherence in an entangled system in the vicinity of the event.
Coherent states and noise of a quantum amplifier
A laser is described by the coherent state of light, a superposition of harmonic-oscillator eigenstates. Erwin Schrödinger first derived the coherent state in 1926, while seeking a solution of the Schrödinger equation that satisfies the correspondence principle.
The laser is a quantum mechanical phenomenon (see the Maxwell–Bloch equations, the rotating wave approximation, and the semi-classical model of a two-level atom). The Einstein coefficients and the laser rate equations are adequate if one is interested in the population levels and does not need to account for quantum coherences between populations (the off-diagonal terms in a density matrix). A photon number of the order of 10⁸ corresponds to a moderate energy; the relative error of a measurement of the intensity due to quantum noise is then of the order of 10⁻⁵. This is considered good precision for most applications.
Quantum amplifier
A quantum amplifier is an amplifier that operates close to the quantum limit. Quantum noise becomes important when a small signal is amplified, since the signal's quantum uncertainties in its quadratures are amplified as well; this sets a lower limit on the noise of the amplifier. Generally, a laser is amplified across a spread of wavelengths around a central wavelength, some distribution of modes, and a spread of polarizations, but one can consider single-mode amplification and generalize to many different modes. A phase-invariant amplifier preserves the phase of the input without drastic changes to the output phase mode.
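A minimal sketch of this lower limit follows. It is an illustration under stated assumptions, not a result from the sources here: it uses the quadrature convention with vacuum variance 1/4, under which a phase-insensitive amplifier of gain G must add noise of variance at least (G − 1)/4 per quadrature, and the input amplitude is made up.

```python
import numpy as np

rng = np.random.default_rng(1)
G, n_samples = 10.0, 200_000
alpha = 3.0                 # coherent-state amplitude (assumed input)

# Input quadrature samples: coherent state, vacuum variance 1/4.
x_in = alpha + rng.normal(0.0, np.sqrt(0.25), n_samples)

# Phase-insensitive amplification: amplitude gain sqrt(G) plus the
# minimum added noise required by quantum mechanics, variance (G-1)/4.
x_out = np.sqrt(G) * x_in + rng.normal(0.0, np.sqrt((G - 1) / 4), n_samples)

print("input  variance:", x_in.var())    # ~0.25
print("output variance:", x_out.var())   # ~G*0.25 + (G-1)/4 = (2G-1)/4
```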
Quantum amplification can be represented by a unitary operator, as stated in D. Kouznetsov's 1995 paper.
See also
Quantum error correction
Quantum optics
Quantum limit
Shot noise
Quantum harmonic oscillator
References
Further reading
Clerk, Aashish A. Quantum Noise and quantum measurement. Oxford University Press.
Clerk, Aashish A., et al. Introduction to Quantum Noise, measurement, and amplification,Reviews of Modern Physics 82, 1155-1208.
Gardiner, C. W. and Zoller, P. Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics, Springer, 2004, 978-3540223016
Sources
C. W. Gardiner and Peter Zoller, Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics, Springer-Verlag (1991, 2000, 2004).
Quantum optics
Laser science | Quantum noise | Physics | 2,887 |
57,114,522 | https://en.wikipedia.org/wiki/Mac%20OS%20Ogham | Mac OS Ogham is a character encoding for representing Ogham text on Apple Macintosh computers. It is a superset of the Irish Standard I.S. 434:1999 character encoding for Ogham (which is registered as ISO-IR-208), adding some punctuation characters from Mac OS Roman. It is not an official Mac OS Codepage.
Layout
Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as ASCII.
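Since the lower half of the encoding coincides with ASCII, a decoder only needs a lookup table for bytes 128–255. The sketch below is a skeleton only: the two sample entries are hypothetical placeholders, and the real mapping must be filled in from the published code table.

```python
# Skeleton decoder for a single-byte encoding whose lower half is ASCII.
# HIGH_TABLE must be populated from the published Mac OS Ogham code
# table; the two entries below are hypothetical placeholders only.
HIGH_TABLE = {
    0x80: "\u1681",   # placeholder: OGHAM LETTER BEITH (assumed slot)
    0x81: "\u1682",   # placeholder: OGHAM LETTER LUIS  (assumed slot)
}

def decode_mac_ogham(data: bytes) -> str:
    out = []
    for b in data:
        if b < 0x80:
            out.append(chr(b))                       # lower half is ASCII
        else:
            out.append(HIGH_TABLE.get(b, "\ufffd"))  # U+FFFD if unmapped
    return "".join(out)

print(decode_mac_ogham(b"ogham: \x80\x81"))
```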
References
Character sets
Ogham
Ogham | Mac OS Ogham | Technology | 129 |
2,679,924 | https://en.wikipedia.org/wiki/List%20of%20compounds%20with%20carbon%20number%2011 | This is a partial list of molecules that contain 11 carbon atoms.
See also
Carbon number
List of compounds with carbon number 10
List of compounds with carbon number 12
C11 | List of compounds with carbon number 11 | Chemistry | 35 |
15,018 | https://en.wikipedia.org/wiki/Infusoria | Infusoria is a word used to describe various freshwater microorganisms, including ciliates, copepods, euglenoids, planktonic crustaceans, protozoa, unicellular algae and small invertebrates. Some authors (e.g., Bütschli) have used the term as a synonym for Ciliophora. In modern, formal classifications, the term is considered obsolete; the microorganisms previously and colloquially referred to as Infusoria are mostly assigned to the kingdom Protista.
In other contexts, the term is used to define various aquatic microorganisms found in decomposing matter.
Aquarium use
Certain microorganisms, including cyclops and daphnia (among others), are sold as a supplemental fish food. Some fish stores or pet shops may have these infusoria available for live purchase, but typically they are sold in frozen cubes—for example, by the Japan-based fish food brand Hikari. Still, some advanced aquarists, with especially large collections of fish, will breed and cultivate their own supplies of the microorganisms.
Infusoria are especially used by aquarists and fish breeders to feed fish fry; because of their small sizes, infusoria can be used to rear newly-hatched offspring of many common (and also less common) aquarium species. Many average home aquaria are unable to naturally supply sufficient infusoria for fish-rearing, so hobbyists may create and maintain their own cultures, either through utilizing their own existing aquarium water or by using one of the many commercial cultures available.
Infusoria can be cultured at-home by soaking any decomposing vegetative matter, such as papaya or cucumber peels, in a jar of aged (i.e., chlorine-free) water, preferably from an existing aquarium setup. The culture starts to proliferate in two to three days, depending on temperature and light received. The water first turns cloudy because of a rise in levels of bacteria, but clears up once the infusoria consume them. At this point, the infusoria are usually visible to the naked eye as small, white motile specks. They can be easily fed to fish with the use of a large turkey-baster or by gently scooping with a very fine net. Additionally, the water in which the infusoria are kept in can be changed periodically, even one to two times per week, by draining and replacing up to 50% of the volume of water (for hygienic and maintenance purposes).
See also
Animalcules
References
Bibliography
Ratcliff, Marc J. (2009). "The Emergence of the Systematics of Infusoria". In: The Quest for the Invisible: Microscopy in the Enlightenment. Aldershot: Ashgate. (Infusoria were first identified in the 18th century, in 1773, by the zoologist O. F. Müller.)
External links
Types of Protozoans and video
Pond Life Identification Kit
Fishkeeping
Obsolete eukaryote taxa | Infusoria | Biology | 631 |
7,181,677 | https://en.wikipedia.org/wiki/AACE%20International | AACE International (Association for the Advancement of Cost Engineering) was founded in 1956 by 59 cost estimators and cost engineers during the organizational meeting of the American Association of Cost Engineering at the University of New Hampshire in Durham, New Hampshire. AACE International Headquarters is located in Morgantown, West Virginia, USA. AACE is a 501(c)(3) non-profit professional association. AACE International is a member of the Board of the Council of Engineering and Scientific Specialty Boards (CESB).
Activities
AACE is a non-profit organization with about 15 employees at its headquarters in Morgantown, WV. A variety of other organizations in the United States provide similar certifications, often specialized for particular industries, such as power, manufacturing, gas and oil.
AACE is the publisher of Cost Engineering, a bi-monthly technical journal, Skills and Knowledge of Cost Engineering (currently in its 6th edition), Source magazine (a bi-monthly magazine), 20 different AACE International Professional Practice Guides, approximately 120 Recommended Practices, and its most comprehensive publication, the Total Cost Management Framework: An Integrated Approach to Portfolio, Program and Project Management.
Certification programs
AACE currently manages eight certification programs, as listed below. All require agreeing to adhere to canons of ethics, and passing an examination. Most require prior industry experience, and also involve recertification by continuing education or reexamination.
Certified Cost Technician (formerly known as Interim Cost Consultant), an entry-level certification and is not eligible for renewal
Certified Scheduling Technician, an entry-level certification
Certified Cost Professional (formerly Certified Cost Consultant / Certified Cost Engineer), which additionally requires a technical paper submission
Certified Estimating Professional
Certified Forensic Claims Consultant, which has additional requirements, including submission of a publication
Decision & Risk Management Professional
Earned Value Professional
Planning & Scheduling Professional
Since becoming a charter member of the Council of Engineering and Scientific Specialty Boards in 1990, six of its certification programs (CCP, CCT, CEP, CST, EVP and PSP) have been accredited by the CESB.
Membership
As of 2012, AACE reported over 8,000 members. To network in local areas, there are over 80 local sections located in 80 countries. There are also 11 technical subcommittees and 17 special interest groups.
References
Further reading
"Total Cost Management Framework: An Integrated Approach to Portfolio, Program and Project Management," 2nd Edition, AACE International, Morgantown, West Virginia, 2016
"Skills and Knowledge of Cost Engineering," 6th Edition, AACE International, Morgantown, West Virginia, 2016.
External links
AACE International
What is cost engineering? - a white paper
The Total Cost Management Framework: An Integrated Approach to Portfolio, Program and Project Management
Professional associations based in the United States
Cost engineering
Engineering societies based in the United States | AACE International | Engineering | 567 |
71,750,338 | https://en.wikipedia.org/wiki/Sh%202-113 | Sh 2-113 (Sharpless 113) also known as the Flying Dragon Nebula or LBN 333, is a small planetary nebula that resembles a supernova remnant (SNR) but with no evidence to support it being an SNR. Sh 2-113 is located in the northern hemisphere constellation of Cygnus south of the star Deneb. Nearby are other planetary nebulae named K 2-81, Sh 2-114, Kn 26 and LBN 346.
References
Planetary nebulae
Cygnus (constellation)
113 | Sh 2-113 | Astronomy | 108 |
74,992,340 | https://en.wikipedia.org/wiki/GMY%20Lighting%20Technology | GMY Lighting Technology Co., LTD (, doing business as GMY), is a large manufacturer of light source components and products, located in Heshan, Guangdong, China.
History and recognition
GMY was founded in 1998 by Yannan "Edward" Hong, and commercially registered in September 2002. By 2010, GMY was the world's largest manufacturer of halogen bulbs.
GMY opened the largest comprehensive plant factory in South China in 2015. In 2017, GMY was recognized as a "Guangdong Provincial Enterprise Technology Center." GMY won the "Ai Rui Cup" in 2018 as one of the China Automotive Industry's top five national brands. It received recognition by the Guangdong Provincial Department of Industry and Information Technology in its list of "2022 Guangdong Province Specialized, Special and New Small and Medium-sized Enterprises."
GMY's 222nm ultraviolet light modules won the Zhongzhao China Lighting Award for Science and Technology Innovation, issued by the Chinese Lighting Society, in March 2022. The award noted that "The 222nm excimer lamp emits accurate 222nm wavelength ultraviolet light, which is safer to use than traditional 185nm and 254nm ultraviolet light. It does not produce mercury and is harmless to the environment."
Products
GMY's manufacturing facility covers an area of nearly 80,000 square meters, and has an annual output of hundreds of millions of light source products, which are sold to more than 100 countries. GMY's product line includes general lighting, automotive lighting, and specialized light sources, especially ultraviolet lights for disinfection, UV lighting for manufacturing processes, IPL lights for health and beauty applications, and artificial light vertical planting solutions.
GMY has obtained more than 300 patents, including more than 200 ultraviolet-related patents, ranking among the top five in China. As of 2021, GMY ranked first in China in patents awarded for 253.7 nm and 185.0 nm germicidal ultraviolet lighting.
References
Manufacturing companies of China
Manufacturing
Lighting brands
Sterilization (microbiology)
Ultraviolet radiation
Waste treatment technology
Radiobiology | GMY Lighting Technology | Physics,Chemistry,Engineering,Biology | 430 |
76,682,944 | https://en.wikipedia.org/wiki/NGC%203125 | NGC 3125 is a large starburst galaxy in the constellation Antlia. It is located approximately 50 million light-years away from Earth. Starburst galaxies are galaxies in which unusually high numbers of new stars are forming, springing to life within intensely hot clouds of gas.
Morphology
NGC 3125 is notable as it displays large and violent bursts of star formation. Some of these stars are notable; one of the most extreme Wolf–Rayet star clusters in the local Universe, NGC 3125-A1, resides within NGC 3125.
Nearby galaxies
NGC 3125 is a member of the LGG 189 Group, which also includes the galaxies NGC 3113, NGC 3137, and NGC 3175.
See also
List of NGC objects (3001–4000)
Star formation
References
Antlia
Starburst galaxies
3125
29366
435-G041
-05-24-022 | NGC 3125 | Astronomy | 180 |
5,598,795 | https://en.wikipedia.org/wiki/Tekin%20Dereli | Tekin Dereli (November 30, 1949) is a Turkish theoretical physicist.
Life and academic career
He studied at Ankara Science High School and the Middle East Technical University.
He was an associate professor and a Professor of Physics at Middle East Technical University (1984–1987, 1993–2001); professor at Faculty of Science at Ankara University (1987–1993), Leverhulme Visiting Professor at Lancaster University UK (2000–2001) and since 2001, he is a professor at the department of physics at Koç University.
TÜBİTAK honored him with the TÜBİTAK Junior Science Prize in 1982 and the TÜBİTAK Science Prize in 1996. He was also awarded prestigious Turkish prizes for science by the Sedat Simavi Trust in 1989 and the METU Mustafa Parlar Foundation Science Award in 1993.
He has been a member of the Turkish Academy of Sciences (TAS) since 1993.
He is married with two children.
Research interests
His research interests are Yang-Mills gauge theories, supersymmetry, supergravity, quaternion and octonion algebras, spin structures, generalised theories of gravity, cosmological solutions, integrable systems and phase space quantisation.
References
Biography at Koç University
External links
Koç University: Tekin Dereli
1949 births
Living people
People from Ankara
Middle East Technical University alumni
Academic staff of Middle East Technical University
Academic staff of Ankara University
Academic staff of Koç University
Turkish physicists
Theoretical physicists
Recipients of TÜBİTAK Science Award
METU Mustafa Parlar Foundation Science Award winners
20th-century physicists
21st-century physicists | Tekin Dereli | Physics | 320 |
17,909,180 | https://en.wikipedia.org/wiki/Comparison%20of%20web%20conferencing%20software | This list is a comparison of web conferencing software available for Linux, macOS, and Windows platforms. Many of the applications support the use of videoconferencing.
Comparison chart
Terminology
In the table above, the following terminology is intended to be used to describe some important features:
Audio Support: the remote control software transfers audio signals across the network and plays the audio through the speakers attached to the local computer. For example, music playback software normally sends audio signals to the locally attached speakers, via some sound controller hardware. If the remote control software package supports audio transfer, the playback software can run on the remote computer, while the music can be heard from the local computer, as though the software were running locally.
Co-Browsing: the navigation of the Web by several people accessing the same web pages at the same time. When the session leader clicks on a link, all other users are transferred to the new page. Co-browsers should support multiple frames and embedded multimedia (e.g., if a page contains a video player, the session leader may commence synchronized playback for all users). Passing URLs via other tools such as a chat or phone, with each user entering them into their own browser, is not considered co-browsing.
File Transfer: the software allows the user to transfer files between the local and remote computers, from within the client program's user interface.
Unified Communications (UC) is a marketing buzzword describing the integration of real-time, enterprise, communication services such as instant messaging (chat), presence information, voice (including IP telephony), mobility features (including extension mobility and single number reach), audio, web & video conferencing, fixed-mobile convergence (FMC), desktop sharing, data sharing (including web connected electronic interactive whiteboards), call control and speech recognition with non-real-time communication services such as unified messaging (integrated voicemail, e-mail, SMS and fax). UC is not necessarily a single product, but a set of products that provides a consistent unified user-interface and user-experience across multiple devices and media-types.
Notes
References
Web conferencing software
Conferencing
Network software comparisons | Comparison of web conferencing software | Technology | 451 |
22,658,094 | https://en.wikipedia.org/wiki/B%20recognition%20element | The B recognition element (BRE) is a DNA sequence found in the promoter region of most genes in eukaryotes and Archaea. The BRE is a cis-regulatory element that is found immediately near TATA box, and consists of 7 nucleotides. There are two sets of BREs: one (BREu) found immediately upstream of the TATA box, with the consensus SSRCGCC; the other (BREd) found around 7 nucleotides downstream, with the consensus RTDKKKK.
The BREu was discovered in 1998 by Richard Ebright and co-workers. The BREd was named in 2005 by Deng and Roberts; such a downstream recognition was reported earlier in 2000 in Tsai and Sigler's crystal structure.
Binding
The transcription factor II B (TFIIB) recognizes either BRE and binds to it. Both BREs work in conjunction with the TATA box (and TATA box binding protein), and have various effects on levels of transcription.
TFIIB uses the cyclin-like repeats to recognize DNA. The C-terminal alpha helices of TFIIB intercalate with the major groove of the DNA at the BREu. The N-terminal helices bind to the minor groove at BREd. TFIIB is one part of the preinitiation complex that helps RNA polymerase II bind to the DNA.
In addition to the human TFIIB-BRE structure, structures from many other organisms have been solved. Among those are transcription factor B (TFB) from the archaeon Pyrococcus woesei which presents an inverted orientation and a TFIIB from the parasite Trypanosoma brucei which despite some specific insertions show a similar fold.
See also
CAAT box
Enhancer (genetics)
Initiator element
Insulator (genetics)
Promoter (biology)
Transcription start site
Notes
References
Regulatory sequences | B recognition element | Chemistry | 385 |
289,592 | https://en.wikipedia.org/wiki/Surface-water%20hydrology | Surface-water hydrology is the sub-field of hydrology concerned with above-earth water (surface water), in contrast to groundwater hydrology that deals with water below the surface of the Earth. Its applications include rainfall and runoff, the routes that surface water takes (for example through rivers or reservoirs), and the occurrence of floods and droughts. Surface-water hydrology is used to predict the effects of water constructions such as dams and canals. It considers the layout of the watershed, geology, soils, vegetation, nutrients, energy and wildlife. Modelled aspects include precipitation, the interception of rain water by vegetation or artificial structures, evaporation, the runoff function and the soil-surface system itself.
When surface water seeps into the ground above bedrock, it is categorized as groundwater, and the rate at which this occurs determines baseflow needs for instream flow, as well as subsurface water levels in wells. While groundwater is not part of surface-water hydrology, it must be taken into account for a full understanding of the behaviour of surface water.
Glacial hydrology is a part of surface-water hydrology; some of the runoff from glaciers and snow also involves groundwater hydrology concepts.
See also
Hydrological transport model
Moisture recycling
References
Hydrology
Hydraulic engineering | Surface-water hydrology | Physics,Chemistry,Engineering,Environmental_science | 259 |
15,046,429 | https://en.wikipedia.org/wiki/Philip%20Rabinowitz%20%28mathematician%29 | Philip Rabinowitz (August 14, 1926 – July 21, 2006) was an American and Israeli applied mathematician. He was best known for his work in numerical analysis, including his books A First Course in Numerical Analysis with Anthony Ralston and Methods of Numerical Integration with Philip J. Davis. He was the author of numerous articles on numerical computation.
He earned his Ph.D. in 1951 under Walter Gottschalk at the University of Pennsylvania. He worked for the American National Bureau of Standards and taught at the Weizmann Institute of Science in Israel.
References
External links
Personal web page at the Weizmann Institute of Science
20th-century American mathematicians
21st-century American mathematicians
1926 births
2006 deaths
Academic staff of Weizmann Institute of Science | Philip Rabinowitz (mathematician) | Mathematics | 151 |
960,235 | https://en.wikipedia.org/wiki/Claudia%20Severa | Claudia Severa (born 11 September in first century, fl. 97–105) was a literate Roman woman, the wife of Aelius Brocchus, commander of an unidentified fort near Vindolanda fort in northern England. She is known for a birthday invitation she sent around 100 AD to Sulpicia Lepidina, wife of Flavius Cerialis, commander at Vindolanda. This invitation, written in ink on a thin wooden tablet, was discovered in the 1970s and is probably the best-known item of the Vindolanda Tablets.
The first part of the letter was written in formal style in a professional hand evidently by a scribe; the last four lines are added in a different handwriting, thought to be Claudia's own.
The translation is as follows:
Claudia Severa to her Lepidina greetings.
On 11 September, sister, for the day of the celebration of my birthday, I give you a warm invitation to make sure that you come to us, to make the day more enjoyable for me by your arrival, if you are present. Give my greetings to your Cerialis. My Aelius and my little son send him their greetings.
(2nd hand) I shall expect you, sister. Farewell, sister, my dearest soul, as I hope to prosper, and hail.
(Back, 1st hand) To Sulpicia Lepidina, (wife) of Cerialis, from Cl. Severa."
The Latin reads as follows:
Cl. Severá Lepidinae [suae] [sa]l[u]tem
iii Idus Septembres soror ad diem
sollemnem natalem meum rogó
libenter faciás ut venias
ad nos iucundiorem mihi
[diem] interventú tuo facturá si
aderis
Cerial[em t]uum salutá Aelius meus [...]
et filiolus salutant
sperabo te soror
vale soror anima
mea ita valeam
karissima et have
The Vindolanda Tablets also contain a fragment from another letter in Claudia's hand. These two letters are thought to be the oldest extant writing by a woman in Latin found in Britain, or perhaps anywhere. The letters show that correspondence between the two women was frequent and routine, and that they were in the habit of visiting one another, although it is not known at which fort Severa lived.
There are several aspects of Severa's letters that should be regarded as literary, even though they were not written for a wide readership. In particular, they share several thematic and stylistic features with other surviving writings in Latin by women from Greek and Roman antiquity. Although Severa's name reveals that she is unlikely to be related to Sulpicia Lepidina, she refers frequently to Lepidina as her sister, and uses the word iucundus to evoke a strong and sensual sense of the pleasure Lepidina's presence would bring, creating a sense of affection through her choice of language. In the post-script written in her own hand, she appears to draw on another Latin, literary model, from the fourth book of the Aeneid, in which at 4.8 Vergil characterises Anna as Dido's unanimam sororem, "sister sharing a soul", and at 4.31, she is "cherished more than life" (luce magis dilecta sorori). Although this is not proof that Severa and Lepidina were familiar with Virgil's writing, another letter in the archive, written between two men, directly quotes a line from the Aeneid, suggesting that the sentiments and language Sulpicia used do indeed draw on a Virgilian influence.
The Latin word that was chosen to describe the birthday festivities, sollemnis, is also noteworthy, as it means "ceremonial, solemn, performed in accordance with the forms of religion", and suggests that Severa has invited Lepidina to what was an important annual religious occasion.
Display of letter
The invitation was acquired in 1986 by the British Museum, where it holds registration number 1986,1001.64. The museum has a selection of the Vindolanda Tablets on display, and loans some to the museum at Vindolanda.
References
External links
Vindolanda Tablets Online: Correspondence of Lepidina: tablets 291–294
1st-century births
1st-century Roman women
1st-century Romans
2nd-century Roman women
1st-century women writers
1st-century writers in Latin
2nd-century women writers
2nd-century writers in Latin
Ancient Romans in Britain
Hadrian's Wall
Letter writers in Latin
Ancient Roman women writers
Date of death unknown
Year of birth unknown
Year of death unknown
Claudii
Silver Age Latin writers | Claudia Severa | Engineering | 989 |
7,399,717 | https://en.wikipedia.org/wiki/Chiral%20resolution | Chiral resolution, or enantiomeric resolution, is a process in stereochemistry for the separation of racemic mixture into their enantiomers. It is an important tool in the production of optically active compounds, including drugs. Another term with the same meaning is optical resolution.
The use of chiral resolution to obtain enantiomerically pure compounds has the disadvantage of necessarily discarding at least half of the starting racemic mixture. Asymmetric synthesis of one of the enantiomers is one means of avoiding this waste.
Crystallization of diastereomeric salts
The most common method for chiral resolution involves the conversion of the racemic mixture to a pair of diastereomeric derivatives by reacting it with chiral derivatizing agents, also known as chiral resolving agents. The derivatives are then separated by conventional crystallization and converted back to the enantiomers by removal of the resolving agent. The process can be laborious and depends on the divergent solubilities of the diastereomers, which are difficult to predict. Often the less soluble diastereomer is targeted and the other is discarded or racemized for reuse. It is common to test several resolving agents. Typical derivatization involves salt formation between an amine and a carboxylic acid; simple deprotonation then yields back the pure enantiomer. Examples of chiral derivatizing agents are tartaric acid and the amine brucine. The method was introduced (again) by Louis Pasteur in 1853, who resolved racemic tartaric acid with optically active (+)-cinchotoxine.
Case study
One modern-day method of chiral resolution is used in the organic synthesis of the drug duloxetine:
In one of its steps the racemic alcohol 1 is dissolved in a mixture of toluene and methanol to which solution is added optically active (S)-mandelic acid 3. The alcohol (S)-enantiomer forms an insoluble diastereomeric salt with the mandelic acid and can be filtered from the solution. Simple deprotonation with sodium hydroxide liberates free (S)-alcohol. In the meanwhile the (R)-alcohol remains in solution unaffected and is recycled back to the racemic mixture by epimerization with hydrochloric acid in toluene. This process is known as RRR synthesis in which the R's stand for Resolution-Racemization-Recycle.
Common resolving agents
Antimony potassium tartrate, an anion, that forms diastereomeric salts with chiral cations.
Camphorsulfonic acid, an acid that forms diastereomeric salts with chiral amines
1-Phenylethylamine, a base that forms diastereomeric salts with chiral acids. Many related chiral amines have been demonstrated.
The chiral pool consists of many widely available resolving agents.
Spontaneous resolution and related specialized techniques
Via the process known as spontaneous resolution, 5-10% of all racemates crystallize as mixtures of enantiopure crystals. This phenomenon allowed Louis Pasteur to separate left-handed and right-handed sodium ammonium tartrate crystals. These experiments underpinned his discovery of optical activity. In 1882 he went on to demonstrate that by seeding a supersaturated solution of sodium ammonium tartrate with a d-crystal on one side of the reactor and a l-crystal on the opposite side, crystals of opposite handedness will form on the opposite sides of the reactor.
Spontaneous resolution has also been demonstrated with racemic methadone. In a typical setup 50 grams dl-methadone is dissolved in petroleum ether and concentrated. Two millimeter-sized d- and l-crystals are added and after stirring for 125 hours at 40 °C two large d- and l-crystals are recovered in 50% yield.
Another form of direct crystallization is preferential crystallization also called resolution by entrainment of one of the enantiomers. For example, seed crystals of (−)- induce crystallization of this enantiomer from an ethanol solution of (±)-.
Chiral column chromatography
In chiral column chromatography the stationary phase is made chiral using resolving agents similar to those described above.
Further reading
References
Stereochemistry | Chiral resolution | Physics,Chemistry | 913 |
63,713,707 | https://en.wikipedia.org/wiki/Environment-wide%20association%20study | An environment-wide association study, also known as an environmental-wide association study (abbreviated EWAS), is a type of epidemiological study analogous to the genome-wide association study, or GWAS. The EWAS systematically examines the association between a complex disease and multiple individual environmental factors, controlling for multiple hypothesis testing.
References
Epidemiology | Environment-wide association study | Environmental_science | 74 |
33,687,518 | https://en.wikipedia.org/wiki/Convexity%20%28finance%29 | In mathematical finance, convexity refers to non-linearities in a financial model. In other words, if the price of an underlying variable changes, the price of an output does not change linearly, but depends on the second derivative (or, loosely speaking, higher-order terms) of the modeling function. Geometrically, the model is no longer flat but curved, and the degree of curvature is called the convexity.
Terminology
Strictly speaking, convexity refers to the second derivative of output price with respect to an input price. In derivative pricing, this is referred to as Gamma (Γ), one of the Greeks. In practice the most significant of these is bond convexity, the second derivative of bond price with respect to interest rates.
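A second derivative of price with respect to an input is easy to estimate with a central finite difference. The sketch below is illustrative only: the Black–Scholes call formula is the standard one, while the parameter values are made up.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S, K, T, r, sigma, h = 100.0, 100.0, 1.0, 0.02, 0.2, 0.01
# Central finite difference for the second derivative (Gamma).
gamma = (bs_call(S + h, K, T, r, sigma) - 2 * bs_call(S, K, T, r, sigma)
         + bs_call(S - h, K, T, r, sigma)) / h**2
print(f"Gamma = {gamma:.5f}")
```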
As the second derivative is the first non-linear term, and thus often the most significant, "convexity" is also used loosely to refer to non-linearities generally, including higher-order terms. Refining a model to account for non-linearities is referred to as a convexity correction.
Mathematics
Formally, the convexity adjustment arises from the Jensen inequality in probability theory: the expected value of a convex function is greater than or equal to the function of the expected value, E[f(X)] ≥ f(E[X]).
Geometrically, if the model price curves up on both sides of the present value (the payoff function is convex up, and is above a tangent line at that point), then if the price of the underlying changes, the price of the output is greater than is modeled using only the first derivative. Conversely, if the model price curves down (the convexity is negative, the payoff function is below the tangent line), the price of the output is lower than is modeled using only the first derivative.
The precise convexity adjustment depends on the model of future price movements of the underlying (the probability distribution) and on the model of the price, though it is linear in the convexity (second derivative of the price function).
Interpretation
The convexity can be used to interpret derivative pricing: mathematically, convexity is optionality – the price of an option (the value of optionality) corresponds to the convexity of the underlying payout.
In Black–Scholes pricing of options, omitting interest rates and the first derivative, the Black–Scholes equation reduces to "(infinitesimally) the time value is the convexity". That is, the value of an option is due to the convexity of the ultimate payout: one has the option to buy an asset or not (in a call; for a put it is an option to sell), and the ultimate payout function (a hockey stick shape) is convex – "optionality" corresponds to convexity in the payout. Thus, if one purchases a call option, the expected value of the option is higher than simply taking the expected future value of the underlying and inputting it into the option payout function: the expected value of a convex function is higher than the function of the expected value (Jensen inequality). The price of the option – the value of the optionality – thus reflects the convexity of the payoff function.
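This can be seen in a short Monte Carlo experiment. The sketch is illustrative only: lognormal terminal prices, zero rates, and the parameter values are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
S0, K, sigma, T = 100.0, 100.0, 0.2, 1.0   # assumed parameters, r = 0

# Lognormal terminal prices with E[S_T] = S0 (zero rates).
Z = rng.normal(size=1_000_000)
ST = S0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * Z)

payoff = np.maximum(ST - K, 0.0)             # convex call payoff
print("E[f(S_T)] =", payoff.mean())          # option value, ~7.97
print("f(E[S_T]) =", max(ST.mean() - K, 0))  # payoff at expected price, ~0
```

The gap between the two printed numbers is exactly the value of optionality described above: Jensen's inequality guarantees E[f(S_T)] ≥ f(E[S_T]) for the convex payoff f.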
This value is isolated via a straddle – purchasing an at-the-money straddle (whose value increases if the price of the underlying increases or decreases) has (initially) no delta: one is simply purchasing convexity (optionality), without taking a position on the underlying asset – one benefits from the degree of movement, not the direction.
From the point of view of risk management, being long convexity (having positive Gamma and hence (ignoring interest rates and Delta) negative Theta) means that one benefits from volatility (positive Gamma), but loses money over time (negative Theta) – one net profits if prices move more than expected, and net loses if prices move less than expected.
Convexity adjustments
From a modeling perspective, convexity adjustments arise every time the underlying financial variables modeled are not a martingale under the pricing measure.
Applying Girsanov's theorem allows expressing the dynamics of the modeled financial variables under the pricing measure and therefore estimating this convexity adjustment.
Typical examples of convexity adjustments include:
Quanto options: the underlying is denominated in a currency different from the payment currency. If the discounted underlying is a martingale under its domestic risk-neutral measure, it is no longer one under the payment-currency risk-neutral measure
Constant maturity swap (CMS) instruments (swaps, caps/floors)
Option-adjusted spread (OAS) analysis for mortgage-backed securities or other callable bonds
IBOR forward rate calculation from Eurodollar futures
IBOR forwards under LIBOR market model (LMM)
References
Benhamou, Eric, Global derivatives: products, theory and practices, pp. 111–120, 5.4 Convexity Adjustment (esp. 5.4.1 Convexity correction)
Mathematical finance | Convexity (finance) | Mathematics | 1,001 |
52,481,840 | https://en.wikipedia.org/wiki/Sleepy%20Bears | Sleepy Bears is a 1999 children's picture book by Mem Fox. It is about a bear preparing her family of six baby bears for hibernation.
Reception
In a review of Sleepy Bears, Booklist wrote: "As in Koala Lou (1988), Fox depicts the comfort and security of family without ever resorting to the syrup of many "I love you" books for preschoolers". School Library Journal called it a cleverly written bedtime book, while Kirkus Reviews found it "a bewitching collection of sleepy time rhymes".
Sleepy Bears has also been reviewed by Publishers Weekly.
See also
Time for Bed - another bedtime book by Mem Fox
References
External links
Library holdings of Sleepy Bears
1999 children's books
Australian children's books
Picture books by Mem Fox
Children's books about bears
Sleep in fiction
Pan Books books | Sleepy Bears | Biology | 176 |
23,287,111 | https://en.wikipedia.org/wiki/Yhyakh | Yhyаkh (, ) is the festival that celebrates the rebirth of nature after a hard winter, the triumph of life, the beginning of a new year in the Sakha Republic. Historic celebration is observed on the 21st June, the day of the summer solstice.
Celebration
Sakha people celebrate the New Year twice a year: in winter, with the rest of the citizens of Russia, and in summer, according to ancient traditions. Yakutia is the largest region of Russia. Winter temperatures sometimes reach −60 °C, while the summer is very short, lasting only three months. The holiday is celebrated in the period between 10 and 25 June.
The Yhyakh festival (literally meaning "abundance") is related to the cult of a solar deity and to a fertility cult. The ancient Sakha celebrated the New Year at the Yhyakh festival. Its traditions include women and children decorating trees and tethering posts with "salama" (nine bunches of horse hair hung on horse-hair ropes). The oldest man, wearing white, opens the holiday. He starts the ritual by sprinkling kymys on the ground and feeding the fire. He prays to the Ai-ii spirits for the well-being of the people who depend on them and asks the spirits to bless all the people gathered.
Afterwards, people sing and dance Ohuakhai, play national games, eat national dishes, and drink kymys.
During the years of stagnation, the traditional ceremony was almost forgotten. Nevertheless, the late 20th and early 21st centuries saw a revival of Sakha culture, including Yhyakh. Until 1990, when the first Yhyakh was held in Yakutsk, traditionally accurate celebrations were held in only a few regions of the republic.
Ohuokhai Dance
The Ohuokhai (Оhуохай) dance has its roots in the period when the Sakha people lived further south and were cattle-breeders, termed "sun worshippers". It is a native dance that combines three forms of art: dancing, singing and poetry. The Sakha word for "dance", Üñküü (Yҥкүү) comes from the verb üñ (Үҥ, "to worship").
The Ohuokhai is a simultaneous round dance and song. Dancers form a circle and dance, arm in arm, hand in hand, with the left foot put forward, while making rhythmical, graceful movements with their bodies, legs, feet and arms. A lead singer improvises the lyrics and the other dancers repeat them. This Ohuokhai leader has a special talent not only for singing but also, more importantly, for poetic improvisation. Song leaders compete at the national Yhyakh festival for the best poetic expression, the best song and the biggest circle.
Poetic improvisation of the Ohuokhai represents one of the richest and oldest genres of Sakha folklore.
The melody of the Ohuokhai is put to many types of music, from marching tunes to operas. Kylyhakh is the special singing technique of vocal cord vibration. This technique gives a unique national Sakha colouring highly appreciated by experts in "throat singing". The Ohuokhai plays an important role in the development of the musical and choreographic arts.
A famous folk singer, poet and composer, Sergey Zverev from the Suntarsky region added many new elements in the expressiveness of the movements.
Celebrations by the Sakha Diaspora
On June 23, 2024, the Sakha American Cultural Association in Washington State organized and celebrated Yhyakh in Lynnwood, WA.
See also
Sun Dance, a sacred ceremony of the Plains Indians carried out around Midsummer
References
Bibliography
Дидактический материал "Национально-региональный компонент на уроках английского языка" – Шамaева М.И., Семенова В.Д., Ситникова Н.В., Якутск 1995. (Didactic material "the National-regional component at lessons of English language" – Shamaeva M. I, Semenova V. D, Sitnikova N.V., Yakutsk 1995.)
Sakha Republic
Turkic mythology
New Year celebrations
Cultural festivals in Russia
Summer traditions
June observances
Observances on non-Gregorian calendars
Summer holidays (Northern Hemisphere)
Indigenous peoples days
Asian shamanism
Shamanistic holidays
Shamanistic festivals
Summer solstice | Yhyakh | Astronomy | 964 |
17,257,634 | https://en.wikipedia.org/wiki/Londonderry%20Lithia | Londonderry Lithia was a brand of bottled lithia water sold in the northeastern United States during the late 19th and early 20th centuries. The source of the water was in Londonderry, New Hampshire, and the company headquarters of the Londonderry Lithia Spring Water Company was in Nashua, New Hampshire.
As a marketing promotion, Annie Kopchovsky, the first woman to bicycle around the world, changed her name in 1895 to Annie Londonderry and carried the company's placard on her journey.
Composition
According to the company, the water had been analyzed by Prof. H. Halvorson and found to contain among various other minerals 8.620 grains of lithium bicarbonate per Imperial gallon. However, following the prohibition of adulterated and misbranded drugs, a government chemist determined that the water contained only a spectroscopic trace of lithium, less than 1/1200 grain per gallon, and that sodium chloride and sodium bicarbonate had been added to some samples. This resulted in action condemning and forfeiting the product. The company ceased production by 1920.
References
External links
Lithia Springs chapter of the History of Londonderry
David Rumsey Map Collection engraving
1895 New York Times article
Bottled water brands
Companies based in Nashua, New Hampshire
Soft drinks
Patent medicines
Lithia water
Defunct manufacturing companies based in New Hampshire | Londonderry Lithia | Chemistry | 274 |
7,075,186 | https://en.wikipedia.org/wiki/Mere%20%28lake%29 | A mere is a shallow lake, pond, or wetland, particularly in Great Britain and other parts of western Europe.
Derivation of the word
Etymology
The word mere is recorded in Old English as mere ″sea, lake″, corresponding to
Old Saxon meri,
Old Low Franconian *meri (Dutch meer ″lake, pool″, Picard mer ″pool, lake″, Northern French toponymic element -mer),
Old High German mari / meri (German Meer ″sea″, but also Maar ″circular lake″),
Goth. mari-, marei,
Old Norse marr ″sea″ (Norwegian mar ″sea″, Shetland Norn mar ″mer, deep water fishing area″, Faroese marrur ″mud, sludge″, Swedish place name element mar-, French mare ″pool, pond″).
They derive from reconstructed Proto-Germanic *mari, itself from Indo-European *mori, the same root as marsh and moor. The Indo-European root *mori also gave rise to similar words in other European languages: Latin mare, ″sea″ (Italian mare, Spanish mar, French mer); Old Celtic *mori, ″sea″ (Gaulish mori-, more, Irish muir, Welsh môr, Breton mor); and Old Slavic morje.
Signification
The word once included the sea or an arm of the sea in its range of meaning, but this marine usage is now obsolete (OED). It is a poetical or dialect word meaning a sheet of standing water, a lake or a pond (OED). The OED's fourth definition ("A marsh, a fen.") includes wetland such as fen amongst the usages of the word, which is reflected in the lexicographers' recording of it. In a quotation from the year 598, mere is contrasted against moss (bog) and field against fen. The OED quotation from 1609 does not say what a mere is, except that it looks black. In 1629 mere and marsh were becoming interchangeable, but in 1876 mere was "heard, at times, applied to ground permanently under water": in other words, a very shallow lake.
The online edition of the OED quoted examples relate to:
the sea: Old English to 1530: 7 quotations
standing water: Old English to 1998: 22 quotations
arm of the sea: 1573 to 1676: 4 quotations
marsh or fen: 1609 to 1995: 7 quotations
Characteristics
Where land similar to that of Martin Mere, gently undulating glacial till, becomes flooded and develops fen and bog, the remnants of the original mere remain until the whole is filled with peat. This can be delayed where the mere is fed by lime-rich water from chalk or limestone upland and a significant proportion of the outflow from the mere takes the form of evaporation. In these circumstances, the lime (typically calcium carbonate) is deposited on the peaty bed and inhibits plant growth, and therefore peat formation. A typical feature of these meres is that they lie alongside a river rather than having the river flowing through them. In this way, the mere is replenished by seepage from the bed of the lime-rich river, through the river's natural levée, or by winter floods. The water of the mere is then static through the summer, when the concentration of the calcium carbonate rises until it is precipitated on the bed of the mere.
Even quite shallow lake water can develop a thermocline in the short term but where there is a moderately windy climate, the circulation caused by wind drift is sufficient to break this up. (The surface is blown down-wind in a seiche and a return current passes either near the bottom or just above the thermocline if that is present at a sufficient depth.) This means that the bed of the shallow mere is aerated and bottom-feeding fish and wildfowl can survive, providing a livelihood for people around. Expressed more technically, the mere consists entirely of the epilimnion. This is quite unlike Windermere where in summer, there is a sharp thermocline at a depth of 9 to 15 metres, well above the maximum depth of 60 metres or so. (M&W p36)
At first sight, the defining feature of a mere is its breadth in relation to its shallow depth. This means that it has a large surface in proportion to the volume of water it contains. However, there is a limiting depth beyond which a lake does not behave as a mere since the sun does not warm the deeper water and the wind does not mix it. Here, a thermocline develops but where the limiting dimensions lie is influenced by the sunniness and windiness of the site and the murkiness of the water. This last usually depends on how eutrophic (rich in plant nutrients) the water is. Nonetheless, in general, with the enlargement of the extent of a mere, the depth has to become proportionately less if it is to behave as a mere.
English meres
Aqualate Mere, Staffordshire
Cop Mere, Staffordshire
Bomere Pool, Shropshire
Buttermere, Cumbria (Lake District)
Diss Mere, Norfolk
Brooke Mere, Norfolk
Fowlmere, Cambridgeshire
Grasmere, Cumbria (Lake District)
Hornsea Mere, East Riding of Yorkshire
Horsey Mere, Norfolk
Martin Mere, Lancashire
The Meres, south and east of Ellesmere, Shropshire (see below)
Orton Mere, Cambridgeshire
Quidenham Mere, Norfolk
Raby Mere, Merseyside
Scarborough Mere, North Yorkshire
Scoulton Mere, Norfolk
Sea Mere, Norfolk
Thirlmere, Cumbria (Lake District)
Thorpeness Meare (Suffolk)
Windermere, Cumbria (Lake District)
Marton Mere, Blackpool (Lancashire)
There are many examples in Cheshire, including:
Alsager Mere
Budworth Mere
Comber Mere
Hatch Mere
Mere
Oak Mere
Pick Mere
Radnor Mere
Redes Mere
Rostherne Mere
Shakerley Mere
Tatton Mere
Many examples also occur in north Shropshire, especially around the town of Ellesmere, which is sometimes known as 'the Shropshire lake district', such as:
Blakemere
Colemere
Crosemere
Ellesmere (The Mere)
Kettlemere
Newtonmere
Sweatmere
Whitemere
Fenland
The Fens of eastern England, as well as fen, lowland moor (bog) and other habitats, included a number of meres. As at Martin Mere in Lancashire, when the fens were being drained to convert the land to pasture and arable agriculture, the meres went too but some are easily traced owing to the characteristic soil. For the reasons given above, it is rich in both calcium carbonate and humus. On the ground, its paleness stands out against the surrounding black, humic soils and on the soil map, the former meres show as patches of the Willingham soil association, code number 372 (Soil Map).
Apart from those drained in the medieval period, they are shown in Saxton's map of the counties (as they were in his time) of Cambridgeshire and Huntingdonshire. The following is a list of known meres of the eastern English Fenland with their grid references.
Saxton's meres are named as:
Trundle Mere TL2091
Whittlesey Mere TL2291
Stretham Mere TL5272
Soham Mere. TL5773
Ug Mere TL2487
Ramsey Mere TL3189
In Jonas Moor's "map of the Great Levell of the Fenns" of 1720, though Trundle Mere is not named, all but one of the above are named, with the addition of:
Benwick Mere TL3489
In the interval, Stretham Mere had gone and the main features of the modern drainage pattern had appeared.
Ugg, Ramsey and Benwick meres do not show in the soil map. Others which do but which appear to have been drained before Saxton's mapping in 1576 are at:
TL630875
TL6884
TL5375
TL5898
The last appears to be the "mare 'Wide' vocatum" of Robert of Swaffham's version of the Hereward story (Chapter XXVI). If it is, it will have been in existence in the 1070s, when the events of the story took place.
Meres in Wales
Hanmer Mere, Clwyd
Marloes Mere, Pembrokeshire
Meres in the Netherlands
Meres similar to those of the English Fens but more numerous and extensive used to exist in the Netherlands, particularly in Holland. See Haarlemmermeer, for example. However, the Dutch word meer is used more generally than the English mere. It means "lake", as also seen in the names of lakes containing meer in Northern Germany, e.g. Steinhuder Meer. When the Zuiderzee was enclosed by a dam and its saltwater became fresh, it changed its status from a sea (zee) to being known as the IJsselmeer, the lake into which the River IJssel flows.
Australian meres
Beachmere, Queensland
Austinmer, New South Wales
Citations
General sources
Crossley-Holland, K. (1987). The Poetry of Legend: Classics of the Medieval World Beowulf. (C-H)
Macan, T. T. and Worthington, E. B. (1972). Life in Lakes and Rivers Fontana. (M&W)
Moor, J. (c1980s). A Map of the Great Levell of the Fenns Extending into ye Countyes of Norfolk, Suffolke, Northampton, Lincoln, Cambridge, Huntingdon and the Isle of Ely facsimile edition by Cambridgeshire Library Service
Ordnance Survey 1:50,000 Sheets 142 & 143
Oxford English Dictionary (OED)
Saxton, C. (1992)[1576]. Christopher Saxton's 16th Century Maps. The counties of England & Wales. With Introduction by William Ravenhill. . Cambridgeshire map.
Soils of England and Wales, Sheet 4 Eastern England. Soil Survey of England and Wales (1983). (Soil Map)
Swaffham, R. (1895-7)[c. 1260]. Gesta Herwardi. Transcribed by S. H. Miller and translated by W. D. Sweeting.
External links
Lakes
Limnology | Mere (lake) | Environmental_science | 2,148 |